Article

A Novel Simple Particle Swarm Optimization Algorithm for Global Optimization

School of Electrical Engineering & Automation, Jiangsu Normal University, Xuzhou 221116, China
*
Author to whom correspondence should be addressed.
Mathematics 2018, 6(12), 287; https://doi.org/10.3390/math6120287
Received: 28 October 2018 / Revised: 17 November 2018 / Accepted: 19 November 2018 / Published: 27 November 2018
(This article belongs to the Special Issue Evolutionary Computation)

Abstract

To overcome several shortcomings of Particle Swarm Optimization (PSO), e.g., premature convergence, low accuracy, and poor global search ability, a novel Simple Particle Swarm Optimization based on Random weight and Confidence term (SPSORC) is proposed in this paper. The first two improvements of the algorithm are called Simple Particle Swarm Optimization (SPSO) and Simple Particle Swarm Optimization with Confidence term (SPSOC), respectively. The former has a simpler structure and faster convergence speed, and the latter increases particle diversity. SPSORC combines the advantages of both and further enhances the exploitation capability of the algorithm. Twenty-two benchmark functions and four state-of-the-art improvement strategies are introduced to allow a fairer comparison. In addition, a t-test is used to analyze the differences across large amounts of data. The stability and search efficiency of the algorithms are evaluated by comparing the success rates and average iteration counts obtained on 50-dimensional benchmark functions. The results show that SPSO and its improved variants perform well compared with several kinds of improved PSO algorithms in terms of both search time and computing accuracy. SPSORC, in particular, is more competent for the optimization of complex problems. Overall, it offers more desirable convergence, stronger stability, and higher accuracy.
Keywords: particle swarm optimization; confidence term; random weight; benchmark functions; t-test; success rates; average iteration times

1. Introduction

Since the 1950s, heuristic algorithms based on evolutionary algorithms (EAs) [1] have sprung up and been widely applied to the field of optimization and control, such as the moth search (MS) algorithm [2,3], the genetic algorithm (GA) [4], ant colony optimization (ACO) [5], the differential evolution (DE) algorithm [6], the simulated annealing (SA) algorithm [7], the krill herd (KH) algorithm, etc. [8,9,10,11,12]. Compared with traditional optimization methods such as the golden section method [13], Newton's method [14,15], and the gradient method [16], heuristic algorithms are more biologically inspired and more efficient. Heuristic algorithms have been shown to perform well in several advanced fields, e.g., grid computing [17], the superfluid management of 5G networks [18], TCP/IP Mobile Cloud [19], IIR system identification [20], etc.
The Particle Swarm Optimization (PSO) algorithm, proposed by Kennedy and Eberhart [21,22] in 1995, is also a member of the heuristic family. Unlike other EAs, PSO does not require steps such as crossover, mutation, and selection, and it has fewer parameters. Its optimization process relies entirely on formula iteration, so its computational burden is low and its efficiency is high, especially for continuous unimodal function optimization. Due to these advantages, it has been widely used in various theoretical and practical problems such as function optimization [23], Non-Deterministic Polynomial (NP) problems [24], and multi-objective optimization [25].
PSO is a typical algorithm that relies on swarm intelligence [26,27,28,29,30,31] to optimize complex problems, and it is inspired by the foraging behavior of birds. Imagine a group of gold prospectors searching for gold in a region. Each carries an instrument that can detect gold mines under the stratum and can communicate with the nearest prospectors, so everyone knows whether a neighbor has found gold. At the beginning, in order to explore the area more comprehensively, they randomly select locations and keep a certain distance from one another. Once the search starts, if someone finds some gold, the neighboring prospectors can choose whether to change their positions based on their own experience and on how much they trust the finder. This continual search makes it easier to find more gold than working alone. In this example, the group of prospectors and the gold correspond, respectively, to the particles of PSO and the optima to be searched for.
In actual operation, it is observed that PSO is very prone to premature convergence and falls into local optima when faced with multimodal functions, especially those with traps or discontinuities. Based on this observation, a huge number of particle swarm optimization variants have been proposed to deal with these issues. From the literature, most existing PSO algorithms can be roughly divided into six categories: principle study, parameter setting, topology improvement, updating formula improvement, hybrid mechanism, and practical application.
  • Principle study: The inertia weight factor, which adjusts the balance between the local and global search ability of PSO, was introduced by Shi and Eberhart [32], effectively helping PSO avoid local optima and providing a line of thinking for future improvements. In 2001, Parsopoulos and Vrahatis [33] showed that basic PSO can work effectively and stably in noisy environments, and in many cases the presence of noise can even help PSO avoid falling into local optima. The basic PSO was introduced for continuous nonlinear functions [21,22]. However, because basic PSO easily falls into local optima, local PSO (LPSO) [34] was introduced in 2002. Clerc and Kennedy [35] proposed a constriction factor to control the explosion and improve stability and convergence in a multidimensional complex space. Xu and Yu [36] used the super-martingale convergence theorem to analyze the convergence of the particle swarm optimization algorithm; the results showed that PSO achieves the global optima in probability, and quantum-behaved particle swarm optimization (QPSO) [37] has also been proved to converge globally.
  • Parameter setting: A particle swarm optimization with fitness adjustment parameters (PSOFAP) [38], based on fitness performance, was proposed in order to converge to an approximate optimal solution. The experimental results were analyzed by the Wilcoxon signed rank test, which showed that PSOFAP [38] is effective in increasing convergence speed and solution quality; it adapts the parameter values accurately without requiring a parametric sensitivity analysis. The inertia weight of the hybrid particle swarm optimization incorporating fuzzy reasoning and weighted particles (HPSOFW) [39] is changed based on the defuzzification output. The chaotic binary PSO with time-varying acceleration coefficients (CBPSOTVAC) [40], tested on 116 benchmark problems from the OR-Library, applies time-varying acceleration coefficients to the multidimensional knapsack problem. A self-organizing hierarchical PSO [41] also uses time-varying acceleration coefficients.
  • Topology improvement: In 2006, Kennedy and Mendes [42] explained in detail the neighborhood topologies in fully informed and best-of-neighborhood particle swarms. A dynamic multiswarm particle swarm optimizer (DMSPSO) [43] was proposed; it adopts a neighborhood topology consisting of randomly selected small swarms with small neighborhoods, and the regrouped swarms are dynamic and randomly assigned. In 2014, FNNPSO [44] used Fluid Neural Networks (FNNs) to create a dynamic neighborhood mechanism; the results showed that FNNPSO can outperform both the standard PSO algorithm and PCGT-PSO. Sun and Li proposed a two-swarm cooperation particle swarm optimization (TCPSO) [45] in which a slave swarm and a master swarm exchange information, which benefits both convergence speed and swarm diversity. In TCPSO, particles are updated with information from neighboring particles rather than from their own history best solution and current velocity; this strategy makes the particles of the slave swarm more inclined to local optimization, thus accelerating convergence. Inspired by the cluster reaction of starlings, Netjinda et al. [46] used a collective response mechanism in which seven adjacent particles influence the velocity and position of the current particle, thereby increasing the diversity of the particles. A nonparametric particle swarm optimization (NP-PSO) [47] combines local and global topologies with two quadratic interpolation operations to enhance PSO capability without tuning any algorithmic parameter.
  • Updating formula improvement: Mendes [48] changed PSO's velocity and personal best solution updating formulas and proposed a fully informed particle swarm (FIPS) algorithm to make good use of the whole swarm. The comprehensive learning particle swarm optimizer (CLPSO) [49] eliminates the influence of the global best solution from the velocity updating formula to suit multimodal functions, and uses two tournament-selected particles to help particles learn from better exemplars during iteration. The results showed that CLPSO performs better than other PSO variants on multimodal problems. A learning particle swarm optimization (*LPSO) algorithm [50] was proposed with a new framework that changes the velocity updating formula so as to organically hybridize PSO with another optimization technique; *LPSO is composed of two cascading layers: an exemplar generation layer and a basic PSO updating layer. A new global particle swarm optimization (NGPSO) algorithm [51] uses a new position updating equation that relies on the global best particle to guide the searching activities of all particles; in the latter part of the NGPSO search, a uniform random distribution is used to increase swarm diversity and avoid premature convergence. Kiran proposed a PSO with a distribution-based position update rule (PSOd) [52] whose position updating formula combines three variables.
  • Hybrid mechanism: In 2014, Wang et al. [53] proposed a series of chaotic particle-swarm krill herd (CPKH) algorithms for global numerical optimization. CPKH hybridizes the krill herd (KH) [54] algorithm with APSO [55], which has a mutation operator, and with chaotic theory. This hybrid algorithm, equipped with an appropriate chaotic map, performs superiorly to the standard KH and other population-based optimizers and exploits solutions quickly. DPSO [56] is an accelerated PSO (APSO) [55] algorithm hybridized with a DE mutation operator; it performs well because it combines the advantages of both APSO and DE. Wang et al., finally, studied and analyzed the effect of the DPSO parameters on convergence and performance through detailed parameter sensitivity studies. In the hybrid learning particle swarm optimizer with genetic disturbance (HLPSO-GD) [57], the genetic disturbance crosses the corresponding particles in the external archive to generate new individuals, which improves the swarm's ability to escape from local optima. Gong et al. proposed a genetic learning particle swarm optimization (GLPSO) algorithm that uses genetic evolution to breed promising exemplars based on *LPSO [50], enhancing the global search ability and search efficiency of PSO. PSOTD [58], namely a particle swarm optimization algorithm with two differential mutations, has a novel structure with two swarms and two layers (a bottom layer and a top layer) and was evaluated on 44 benchmark functions. HNPPSO [59] is a novel particle swarm optimization combining a multi-crossover operation, a vertical crossover, and an exemplar-based learning strategy. To deal with production scheduling optimization in foundries, a hybrid PSO combined with the SA [7] algorithm [60] was proposed.
  • Practical application: Zou et al. used NGPSO [51] to solve economic emission dispatch (EED) problems, and the results showed that NGPSO is the most efficient approach for them. PS-CTPSO [61], based on the predatory search strategy, was proposed to deal with web service combinatorial optimization, which is an NP problem, and it improves overall ergodicity. To improve the changeability of ship inner shells, IPSO [62] was proposed for a 50,000 DWT product oil tanker. MBPSO [63] was proposed for the sensor management of a low Earth orbit (LEO) infrared constellation used to track midcourse ballistic missiles. GLNPSO [64] targets a capacitated location-routing problem. The particle swarm algorithm is also applied to many other practical problems, e.g., PID (Proportion Integration Differentiation) controller tuning [65], optimal strategies of energy management integrated with transmission control for a hybrid electric vehicle [66], production scheduling optimization in foundries [60], etc.
In view of the shortcomings of PSO [21,22], three improvements are proposed in this paper. The first is the Simple Particle Swarm Optimization (SPSO) algorithm. It drops the velocity updating formula and abandons the self-cognitive term. Although the speed of the algorithm is thereby greatly improved, some deficiencies appear in actual tests: the differences between particles become too small for the swarm to jump out of local optimal solutions, which is unsuitable for multimodal problems. For this purpose, a second improvement named Simple Particle Swarm Optimization with Confidence term (SPSOC) is proposed, which introduces a confidence term into SPSO's position updating formula. Although it costs slightly more time than SPSO, the results show that SPSOC is better for multimodal function optimization. On this basis, the inertia weight is improved by introducing the difference between a stochastic objective function value and the worst one, and the final improvement is called Simple Particle Swarm Optimization based on Random weight and Confidence term (SPSORC). The inertia weight not only has a crucial effect on convergence but also plays an important role in balancing exploration and exploitation during evolution; the strategy in this paper makes particle position movements more random. A large number of experiments suggest that all three improvements are very effective, and their combination greatly improves the search efficiency of the particle swarm algorithm.
The rest of this paper is organized as follows: Section 2 introduces the basic PSO [21,22] and three recently improved PSO methods. In Section 3, three improvements are presented in detail. In Section 4, some analysis of PSO is further discussed. The experimental results are discussed and analyzed between four state-of-the-art PSOs and three improved ones proposed in this paper. Finally, this paper presents some important conclusions and the outlook of future work in Section 5.

2. Related Works

2.1. The Basic PSO

In general, the particle swarm optimization algorithm is composed of the position updating formula and the velocity updating formula. Each particle iterates with reference to its own history best solution $p_{best}$ and the global best value $g_{best}$ to change its position and velocity information. The basic particle swarm optimization (bPSO) [21,22] algorithm iteration formulas are as follows:
$v_{in}^{t+1} = v_{in}^{t} + c_1 r_1 (p_{best}^{t} - x_{in}^{t}) + c_2 r_2 (g_{best}^{t} - x_{in}^{t}),$ (1)
$x_{in}^{t+1} = x_{in}^{t} + v_{in}^{t+1}.$ (2)
As shown above, Equations (1) and (2) are the velocity updating formula and the position updating formula, respectively. The particles, whose population size is m, search for the optima in the n-dimensional space. In that process, the i-th particle's position in the n-dimensional space is $x_{in}$ and its current velocity is $v_{in}$. $p_{best}$ is the individual history best solution and $g_{best}$ is the global one. t is the current iteration number. $c_1$ and $c_2$ are the cognitive and social factors, and $r_1$ and $r_2$ are random numbers belonging to [0,1). Figure 1 shows an optimization procedure of PSO.
In Figure 1, the area U is the solution space of a function. O is the theoretical optima that needs to be found. $x_i^t$ is the position of the initial particle, and $v_i^t$ is the current particle velocity. $v_i^{t+1}$ is the velocity after all influences have acted. The particle memory influence and the swarm influence are parallel to the lines connecting $x_i^t$ with $p_{best}$ and $g_{best}$, respectively, indicating the influence of $p_{best}$ and $g_{best}$. In this generation, particle i is affected by $v_i^t$ first. After the particle memory influence and the swarm influence, i arrives at $x_i^{t+1}$ from $x_i^t$ with velocity $v_i^{t+1}$. From the next iteration, the particle moves onward from $x_i^{t+1}$ to a new position. It keeps iterating in this way and moves closer and closer to the theoretical optima.
The velocity updating formula changed to Equation (3) when Shi and Eberhart put the inertia weight ω into it, while the position updating formula remained unchanged:
$v_{in}^{t+1} = \omega v_{in}^{t} + c_1 r_1 (p_{best}^{t} - x_{in}^{t}) + c_2 r_2 (g_{best}^{t} - x_{in}^{t}).$ (3)
The introduction of the inertia weight effectively keeps a balance between the local and global search capability. The larger the inertia weight, the stronger the global search capability of the algorithm; conversely, a smaller weight makes the local search capability more prominent. This particle swarm optimization model is the most commonly used nowadays, and many scholars have improved it.
The steps to achieve it are as follows:
Step 1:
Initialize the population randomly. Set the maximum number of iterations, population size, inertia weight, cognitive factors, social factors, position limits and the maximum velocity limit.
Step 2:
Calculate the fitness of each particle according to fitness function or models.
Step 3:
Compare the fitness of each particle with its own history best solution p best . If the fitness is smaller than p best , the smaller value is assigned to p best , otherwise, p best is reserved. Then, the fitness is compared with the global best solution g best , and the method is the same as selecting p best .
Step 4:
Use Equations (2) and (3) to update the particle position and velocity. In addition, we must make sure that its velocity and position are, respectively, within the maximum velocity limit and position limits.
Step 5:
If the theoretical optimum is reached, output the value and stop the operation; otherwise, return to Step 2 (Section 2.1) until the theoretical optimum is found or the maximum number of iterations is reached.
In this paper, a basic particle swarm optimization with decreasing linear inertia weight is used. The weight formula is as Equation (4):
$\omega = \omega_{max} - \dfrac{\omega_{max} - \omega_{min}}{T} \times t.$ (4)
In Equation (4), $\omega_{max}$ is the starting weight, $\omega_{min}$ is the final weight, and T is the maximum number of iterations. Given the influence of the inertia weight on the search capability of PSO, a larger starting weight $\omega_{max}$ is set so that the early search pays more attention to the global optima. As the number of iterations increases, the weight decreases and the search process becomes more inclined to exploit local regions, which is more conducive to final convergence.
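The steps above, together with the linearly decreasing weight of Equation (4), can be sketched in a few lines of Python. This is a minimal illustrative implementation rather than the authors' code; the population size, iteration count, and the 20% velocity cap are common default assumptions.

```python
import random

def bpso(f, dim, bounds, n_particles=40, max_iter=200,
         w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Basic PSO with linearly decreasing inertia weight (Eqs. (2)-(4))."""
    lo, hi = bounds
    v_max = (hi - lo) * 0.2                      # velocity cap (assumption)
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for t in range(max_iter):
        w = w_max - (w_max - w_min) / max_iter * t   # Eq. (4)
        for i in range(n_particles):
            for n in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][n] = (w * v[i][n]
                           + c1 * r1 * (pbest[i][n] - x[i][n])
                           + c2 * r2 * (gbest[n] - x[i][n]))      # Eq. (3)
                v[i][n] = max(-v_max, min(v_max, v[i][n]))        # velocity limit
                x[i][n] = max(lo, min(hi, x[i][n] + v[i][n]))     # Eq. (2) + limits
            fx = f(x[i])
            if fx < pbest_f[i]:                                   # Step 3
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i][:], fx
    return gbest, gbest_f
```

For example, `bpso(lambda p: sum(v*v for v in p), 2, (-5.0, 5.0))` minimizes the 2-D Sphere function and converges close to the origin.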

2.2. The PSO with a Distribution-Based Position Update Rule

In 2017, a distribution-based update rule for PSO (PSOd) [52] algorithm was proposed by Kiran. This improved strategy changed PSO’s iteration formula.
$x_{in}^{t+1} = \mu + \sigma \times Z.$ (5)
The three variables in Equation (5) are given by Equations (6)–(8):
$\mu = \dfrac{x_{in}^{t} + p_{best}^{t} + g_{best}^{t}}{3},$ (6)
$\sigma = \sqrt{\dfrac{(x_{in}^{t} - \mu)^2 + (p_{best}^{t} - \mu)^2 + (g_{best}^{t} - \mu)^2}{3}},$ (7)
$Z = (-2 \ln k_1)^{\frac{1}{2}} \times \cos(2 \pi k_2).$ (8)
It works as follows:
Step 1:
The population is initialized randomly.
Step 2:
The fitness is calculated and compared to get the best individual history solution and the best global one.
Step 3:
Equation (5) is used to update the particle position that is limited in the upper and lower limits.
Step 4:
If the termination condition is met, the best solution is reported.
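Equations (5)–(8) amount to sampling each new coordinate from a Gaussian whose mean and standard deviation are built from the current position, $p_{best}$, and $g_{best}$, with Z a standard normal variate produced by the Box–Muller transform. A minimal sketch; the function name is ours, and $k_1$ is kept strictly positive so the logarithm is defined:

```python
import math
import random

def psod_position(x, pbest, gbest, k1=None, k2=None):
    """One PSOd coordinate update (Eqs. (5)-(8))."""
    mu = (x + pbest + gbest) / 3.0                                  # Eq. (6)
    sigma = math.sqrt(((x - mu) ** 2 + (pbest - mu) ** 2
                       + (gbest - mu) ** 2) / 3.0)                  # Eq. (7)
    # Box-Muller transform; 1 - random() lies in (0, 1], avoiding log(0).
    k1 = 1.0 - random.random() if k1 is None else k1
    k2 = random.random() if k2 is None else k2
    z = math.sqrt(-2.0 * math.log(k1)) * math.cos(2.0 * math.pi * k2)  # Eq. (8)
    return mu + sigma * z                                           # Eq. (5)
```

When the three guiding points coincide, σ = 0 and the particle stays at μ; otherwise the spread of the three points controls the step size.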

2.3. A Hybrid PSO with Sine Cosine Acceleration Coefficients

In order to make better use of parameters on PSO algorithm, such as inertia weight, learning factors, etc., Chen et al. proposed a hybrid PSO algorithm with the sine cosine acceleration coefficients (HPSOscac) [67].
Step 1:
The population is initialized randomly.
Step 2:
The reverse population of the initial population is calculated by Equation (9)
$x'_{in} = x_{max} + x_{min} - x_{in}.$ (9)
In this equation, $x_{in}$ and $x'_{in}$ are the initial population and the reverse population, respectively. $x_{max}$ and $x_{min}$ are the upper and lower limits of the particle positions, i.e., the solution space boundary.
Step 3:
Fitness values of those two populations are sorted, and the best half is used as the initial population. Then, the p best and g best are obtained by comparing.
Step 4:
Equations (10) and (11) are used to update the inertia weight and learning factors, respectively:
$\omega^{t+1} = \dfrac{c}{4} \times \sin(\pi \omega^{t}), \quad \omega^{1} = 0.4; \; c \in (0, 4],$ (10)
$c_1 = 2 \times \sin\!\left(\left(1 - \dfrac{t}{T}\right) \times \dfrac{\pi}{2}\right) + 0.5, \quad c_2 = 2 \times \cos\!\left(\left(1 - \dfrac{t}{T}\right) \times \dfrac{\pi}{2}\right) + 0.5.$ (11)
Among them, c is a constant between 0 and 4. $c_1$ and $c_2$ are the cognitive and social factors, respectively.
Step 5:
Update the particle velocity and position using Equations (1) and (12). The particle position updating formula is as follows:
$x_{in}^{t+1} = x_{in}^{t} \times W_{in}^{t} + v_{in}^{t} \times W_{in}^{t} + \rho \times g_{best}^{t} \times W_{in}^{\prime t}.$ (12)
$W_{in}^{t}$ and $W_{in}^{\prime t}$ are the dynamic weights that control the position/velocity terms and the $g_{best}$ term, respectively; they follow Equation (13). ρ is a random value between 0 and 1:
$W_{in}^{t} = \dfrac{\exp(f_i / f_{avg})}{\left(1 + \exp(-f_i / f_{avg})\right)^{t}}, \quad W_{in}^{\prime t} = 1 - W_{in}^{t}.$ (13)
In this formula, $f_i$ is the particle's fitness value, and $f_{avg}$ is the average fitness.
Step 6:
The iteration ends if the termination condition is reached; otherwise, it goes back to Step 2 (Section 2.3).
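Two self-contained pieces of HPSOscac can be stated precisely: the reverse (opposition-based) point of Equation (9) and the sine cosine acceleration coefficients of Equation (11). A brief sketch; the function names are ours:

```python
import math

def opposition(x, x_min, x_max):
    """Reverse (opposition-based) point of Eq. (9)."""
    return x_max + x_min - x

def sc_coefficients(t, T):
    """Sine cosine acceleration coefficients of Eq. (11)."""
    c1 = 2.0 * math.sin((1.0 - t / T) * math.pi / 2.0) + 0.5
    c2 = 2.0 * math.cos((1.0 - t / T) * math.pi / 2.0) + 0.5
    return c1, c2
```

At t = 0 the cognitive factor dominates (c1 ≈ 2.5, c2 ≈ 0.5, favoring exploration), and by t = T the social factor dominates (c1 ≈ 0.5, c2 ≈ 2.5, favoring exploitation).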

2.4. A Two-Swarm Cooperative PSO

A two-swarm cooperative particle swarm optimization (TCPSO) [45] was proposed which uses two particle swarms, a slave swarm and a master swarm with a clear division of labor, to overcome shortcomings such as lack of diversity and slow convergence in the later period. It works as follows:
Step 1:
Initialization. Initialize the slave swarm and the master swarm’s velocity and position randomly.
Step 2:
Calculate the fitness of these two swarms and get $g_{best}^{S}$, $p_{best}^{S}$, $g_{best}$, and $p_{best}^{M}$. The first two come from the slave swarm and the last two come from the master swarm.
Step 3:
Reproduction and updating.
Step 3.1:
Update the slave swarm by Equations (14) and (15). Ensure that velocity and position are within the limits:
$v_{in}^{S,t+1} = c_1^{S} r_1 (1 - r_2)(x_{kn}^{S,t} - x_{in}^{S,t}) + c_2^{S} (1 - r_1) r_2 (g_{best} - x_{in}^{S,t}),$ (14)
$x_{in}^{S,t+1} = x_{in}^{S,t} + v_{in}^{S,t+1}.$ (15)
The superscript S in these two formulas means that the variable comes from the slave swarm, except for $g_{best}$ in Equation (14), which comes from the master swarm. Finally, we obtain $g_{best}^{S}$. $x_k$ is randomly chosen from the neighborhood of $x_i$ according to Equation (16) [42]:
$k \in \begin{cases} \left[\, i - \frac{l}{2} + 1, \; i + \frac{l}{2} \,\right], & \text{if } l \text{ is even}, \\ \left[\, i - \frac{l-1}{2}, \; i + \frac{l-1}{2} \,\right], & \text{if } l \text{ is odd}. \end{cases}$ (16)
Here, l is the size of the neighborhood. Sun and Li found in their experiments that a neighborhood size of 2 works best.
Step 3.2:
Update the master swarm by Equations (17) and (18). Ensure that velocity and position are within the limits:
$v_{in}^{M,t+1} = \omega^{M} v_{in}^{M,t} + c_1^{M} r_1 (1 - r_2)(1 - r_3)(p_{best}^{M} - x_{in}^{M,t}) + c_2^{M} r_2 (1 - r_1)(1 - r_3)(g_{best}^{S} - x_{in}^{M,t}) + c_3^{M} r_3 (1 - r_1)(1 - r_2)(g_{best} - x_{in}^{M,t}),$ (17)
$x_{in}^{M,t+1} = x_{in}^{M,t} + v_{in}^{M,t+1}.$ (18)
The superscript M means that the variable comes from the master swarm. At the end of Step 3.2 (Section 2.4), $g_{best}$ will be obtained for the next iteration.
Step 4:
Get the optima if it meets the termination condition; otherwise, go to Step 2 (Section 2.4).
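The neighborhood rule of Equation (16) is compact enough to check directly. A small sketch; the function name is ours, and indices are returned raw rather than wrapped to the population size, which a full ring-topology implementation would have to do:

```python
def neighborhood(i, l):
    """Candidate indices k for particle i under Eq. (16).
    Even l: [i - l/2 + 1, i + l/2]; odd l: [i - (l-1)/2, i + (l-1)/2]."""
    if l % 2 == 0:
        return list(range(i - l // 2 + 1, i + l // 2 + 1))
    return list(range(i - (l - 1) // 2, i + (l - 1) // 2 + 1))
```

With the neighborhood size l = 2 recommended by Sun and Li, particle i draws its exemplar from {i, i + 1}.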

3. SPSO, SPSOC, SPSORC

3.1. Simple PSO

Zou et al. proposed a novel harmony search algorithm [68] that uses the optimal harmony and the worst harmony in the harmony memory to guide the configuration of the harmony vector, and it obtained very good results. Inspired by this idea, we try to discard the velocity formula and the cognitive term of PSO and directly use the social term to control the optimization, so that the updating formula, Equation (21), is maximally simplified; the result is the Simple Particle Swarm Optimization (SPSO) algorithm. According to the results of the literature [69], the influence of the velocity term on the performance of the particle swarm algorithm can be neglected. Drawing on [69], we can make the following simple derivation. Before the velocity updating formula is abandoned, the SPSO velocity updating formula is as shown below, with particle positions updated according to Equation (2):
$v_{in}^{t+1} = \omega v_{in}^{t} + c r (g_{best}^{t} - x_{in}^{t}).$ (19)
According to Equations (19) and (2), we make the following assumptions:
Hypothesis 1.
The update of each particle dimension is independent of the others, except that $g_{best}$ connects information across dimensions.
Hypothesis 2.
When particle i is updated, the other particles’ velocities and positions are not changed.
Hypothesis 3.
The particles’ positions are moving continuously.
According to the above assumptions, it is only necessary to analyze the search process of one dimension of one particle; the result holds universally. Iterating Equations (19) and (2) yields a second-order difference equation:
$x^{t+2} + (rc - \omega - 1) x^{t+1} + \omega x^{t} = r c \, g_{best}^{t}.$ (20)
We can observe that there is no velocity term in Equation (20). This result can be applied to each dimension update of every other particle. Now, we get SPSO's updating formula:
$x_{in}^{t+1} = \omega x_{in}^{t} + c r (g_{best}^{t} - x_{in}^{t}).$ (21)
SPSO only uses this formula to iterate. The experimental results show that this strategy improves the search efficiency and stability of the bPSO.
It works like the following:
Step 1:
The maximum generation, population number, inertia weight, learning factor are set up. Population is initialized.
Step 2:
Fitness is calculated according to the function.
Step 3:
Every particle compares with its history best solution to get the p best and compares with the global best one to get the g best .
Step 4:
Particle position is updated by Equation (21).
Step 5:
If the theoretical optimal value is not found, the program returns to Step 2 (Section 3.1); otherwise, the program stops.
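The steps above can be sketched directly, since Equation (21) is the entire position update; the $p_{best}$ bookkeeping of Step 3 is omitted because Equation (21) uses only $g_{best}$. This is an illustrative sketch, not the authors' code; the fixed inertia weight and the other parameter values are assumptions:

```python
import random

def spso(f, dim, bounds, n_particles=40, max_iter=200, w=0.7, c=2.0):
    """SPSO sketch: no velocity and no cognitive term; every particle
    moves only by Eq. (21), guided by the global best alone."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    fx = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: fx[i])
    gbest, gbest_f = x[g][:], fx[g]
    for _ in range(max_iter):
        for i in range(n_particles):
            for n in range(dim):
                r = random.random()
                x[i][n] = w * x[i][n] + c * r * (gbest[n] - x[i][n])  # Eq. (21)
                x[i][n] = max(lo, min(hi, x[i][n]))                   # position limits
            fi = f(x[i])
            if fi < gbest_f:
                gbest, gbest_f = x[i][:], fi
    return gbest, gbest_f
```

With one multiply-add per coordinate and no velocity array, the per-iteration cost is visibly lower than that of bPSO, which is the speed advantage discussed below.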
After this change, the particle direction is only affected by the global optima. A graphical display of one particle's optimization process is shown in Figure 2.
As the figure shows, compared with the bPSO optimization process diagram in Section 2.1, the particles in SPSO are affected only by $g_{best}$, and their direction always faces $g_{best}$. This feature also brings some drawbacks. For example, whether the algorithm can find the theoretical optima depends entirely on the position of the global best, which makes it likely that particles develop toward some local optimum. If the movement is fast enough, a particle may reach the current $g_{best}$ position directly. This describes the trajectory of a single particle; when all particles optimize in only one direction, the differences within the population are obviously reduced. This lack of diversity directly causes SPSO to be easily trapped in local optimal solutions.
What is gratifying is that SPSO is very fast because of the simplification, which makes it very suitable for single-peak problems. This advantage is clearly reflected in the experimental results in Section 4. However, unconstrained single-peak problems are a minority after all. In order to make this improvement apply to more functions and environments, we propose adding a confidence term so that each particle can determine the distance to advance based on its own level of trust in $g_{best}$, thereby escaping the defect of all particles searching toward one point.

3.2. SPSO with Confidence Term

In order to better solve multimodal problems and make the improvement universal, we add a confidence term (SPSOC), which rewrites Equation (21) as Equation (22):
$x_{in}^{t+1} = \omega_1 x_{in}^{t} + c r_1 (g_{best}^{t} - x_{in}^{t}) - \omega_2 r_2 g_{best}^{t}.$ (22)
Compared with SPSO, the formula adds one item, namely the confidence term. $\omega_2$ is the inertia weight of the confidence term, and $r_2$ is a random value in [0, 1].
Referring to Figure 3, the principle of this term can be understood as follows: at a certain iteration, the position calculated by SPSO is shifted by the confidence influence. The effect is equivalent to the particle first being optimized from $x_i^t$ to an intermediate position, then retreating a distance in the direction opposite to $g_{best}$, and finally reaching the position $x_i^{t+1}$. Through the inertia weight $\omega_2$ and the random number $r_2$, the distance the particle retreats is made uncertain. One can imagine that the particles' degree of trust differs from generation to generation, that is, the influence of $g_{best}$ differs. This improvement effectively slows the convergence of the particles so that they do not become too dense, thus maintaining particle diversity.
A discussion of the impact of using different weight combinations in this improved algorithm is given in the experiment of Section 4.4. In order to minimize the program running time and keep the program structure simple while the effect remains optimal, this paper sets $\omega_1 = \omega_2$. SPSOC's iteration process is otherwise the same as SPSO's.

3.3. SPSOC Based on Random Weight

Adding a confidence term to SPSO does significantly enhance the search ability of the algorithm, but it still does not reach the theoretical optima when searching most of the benchmark functions. Compared with many recently improved PSOs, SPSOC has no big advantage except its short running time. Therefore, we consider a randomization improvement of the inertia weight, named SPSOC based on random weight (SPSORC). The improved inertia weight formula is shown in Equation (23):
$\omega = \begin{cases} \dfrac{p_{best}^{r} - f_{best}}{f_{worst} - f_{best}}, & \text{if the minimum is set as the target}, \\[2mm] \dfrac{f_{best} - p_{best}^{r}}{f_{best} - f_{worst}}, & \text{if the maximum is set as the target}. \end{cases}$ (23)
In this formula, if we set the minimum as the target, $f_{best}$ is the best (minimum) fitness in the current iteration, $f_{worst}$ is the worst fitness value in the current iteration, and $p_{best}^{r}$ is the personal best of a particle chosen at random from the whole swarm.
The use of Equation (23) allows the weights to be generated randomly, which effectively reduces the possibility that the algorithm falls into a local solution and enhances its exploitation capability. This strategy will at least make the algorithm better on some multimodal problems. The random weight, however, also increases the risk of finding non-optimal solutions. This is reflected in the large amount of experimental data in Section 4, but the experimental results show that the overall search ability improves very significantly compared with SPSOC.
The flow of SPSORC is similar to that of bPSO and is more concise, since it only adds the computation of the random weight. Its procedure is shown in Table 1.

4. Experimental Study and Results Analysis

4.1. Benchmark Functions

The aim of this improved strategy is to better solve unconstrained optimization problems. To demonstrate the effectiveness of the algorithm more fully, this experiment uses 22 commonly used benchmark functions for simulation and comparison, including unimodal benchmark functions represented by the Sphere Function, complex multimodal functions such as the Rastrigin Problem, the ill-conditioned quadratic Rosenbrock Function, Xin–She Yang 3 with a discontinuity and a trap near the optimal solution, noise-containing functions like the Quartic Function, and other functions whose best solutions are hard to find. These 22 functions also contain four test functions ( f 7 , f 15 , f 20 and f 21 ) with negative optima.
These 22 benchmark functions, arranged in alphabetical order, are shown in Table A1. Because many versions of these test functions exist, some may change slightly in form for consistency or convenience, but the test results are not affected. The last column, ‘Accuracy (50)’, is the convergence accuracy we want to reach for each test function in the 50-dimensional case, which will be used in Section 4.5.3, ‘Success Rate and Average Iteration Times’.
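For reference, two representative entries of Table A1 written as N-dimensional Python functions (vectorised with numpy): the unimodal Sphere Function f 16 and the multimodal Rastrigin Problem f 10 .

```python
import numpy as np

def sphere(x):
    """f16, unimodal: global minimum f(0, ..., 0) = 0."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """f10, highly multimodal: global minimum f(0, ..., 0) = 0."""
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```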

4.2. Parameters Setting and Simulation Environment

One reason why particle swarm optimization, although proposed relatively late, is so widely used is that it needs few parameters and is simple to set up. For general problems, its requirements on population size, maximum iteration number and other parameters are modest, which also gives the algorithm a small footprint and fast search speed in practice. Under normal circumstances, a population of 40 yields a good solution for most problems; more complex problems can be handled by increasing the population size and the maximum number of iterations.
Table 2 gives the specific parameter settings. N R is the number of times each algorithm searches each benchmark function; m is the population size; T is the maximum number of iterations per search; ω m a x and ω m i n are the maximum and minimum weights; c 1 and c 2 are the acceleration factors.
The simulation environment is shown in Table 3.

4.3. Discussion on Improvement Necessity For SPSO

Search speed is a great advantage of SPSO because of its simple structure. However, its advantage is also its disadvantage: the over-simplified structure leaves SPSO's population lacking diversity, so it converges to local optima quickly, making further improvement of SPSO indispensable. Therefore, in Section 3, we presented two improvements to SPSO. In this section, we let SPSO, SPSOC and SPSORC solve high-dimensional benchmark functions, and then discuss the necessity of the two improvement steps of Section 3 by analyzing the results.
In this experiment, the function dimension is set to 200. The other parameters are consistent with the parameter settings in Table 2 of Section 4.2. Table 4 shows the optimal results of the experiment; the minimum number in each set of data (min and mean) is shown in bold.
From the experimental results for the 200-dimensional benchmark functions in Table 4, we can see that, except on the Quartic Function with noise, SPSORC finds the theoretical optimal solution or a better solution than SPSO and SPSOC on the other 21 functions. The optimization results of SPSO and SPSOC are, by comparison, in straitened circumstances: SPSO obtains better solutions four times, while SPSOC obtains better solutions six times. Comparing the 30-run average solutions of SPSO and SPSOC, the optima of SPSOC are also smaller. This indicates that, after adding the confidence term, the solutions found remain smaller, and that performance is greatly improved and the optimization capability further enhanced after using the random inertia weight. Thus, the two improvements to SPSO are clearly necessary. SPSO is more inclined toward exploration, which is more conducive to the local search of particles; the confidence term changes the trajectories of some particles, which increases particle diversity; meanwhile, the random inertia weight balances the exploitation capability of the algorithm, significantly improving its search range and robustness.

4.4. Discussion on Weight Selection for SPSOC

The proposed SPSOC has two inertia weights. The first balances global and local search ability, while the second determines the degree to which the particle converges toward the global optimum in the current generation. Whether these two weights are set properly obviously has a significant impact on the performance of the algorithm, so a discussion of how they should be selected is necessary. The experiment compares the optimal and average solutions found by the algorithm under different weights, introducing three kinds of inertia weight strategies combined into six configurations. The three inertia weights used in the experiment are as follows:
  • Linear decreasing inertia weight, i.e., Equation (4);
  • Classic nonlinear dynamic inertia weight, i.e., Equation (24);
    \omega = \begin{cases} \omega_{max}, & f(x_i^n) > f_{avg}, \\[4pt] \omega_{min} + (\omega_{max} - \omega_{min}) \times \dfrac{f(x_i^n) - f_{min}}{f_{avg} - f_{min}}, & f(x_i^n) \le f_{avg}. \end{cases}
  • Random inertia weight proposed in this paper, i.e., Equation (23).
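For comparison with the random weight, a sketch of the classic nonlinear dynamic inertia weight of Equation (24) as reconstructed above (the default bounds w_max = 0.9 and w_min = 0.4 are assumed stand-ins for the Table 2 values):

```python
import numpy as np

def nonlinear_dynamic_weight(f_i, fitness, w_max=0.9, w_min=0.4):
    """Classic nonlinear dynamic inertia weight, Equation (24) (sketch).

    Particles worse than the swarm average get the large weight w_max;
    better particles get a weight scaled between w_min and w_max.
    """
    f_avg, f_min = fitness.mean(), fitness.min()
    if f_i > f_avg:
        return w_max
    if f_avg == f_min:             # degenerate swarm: avoid division by zero
        return w_min
    return w_min + (w_max - w_min) * (f_i - f_min) / (f_avg - f_min)
```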
Table 5 reports the results of this experiment. Taking ω 2 , 1 for example, the first subscript 2 indicates that ω 1 in Equation (22) uses the second weight formula, i.e., Equation (24), and the second subscript 1 indicates that ω 2 uses the first weight formula, i.e., Equation (4); the others are analogous. The benchmark functions' dimension is set at 100. We represent the minimum value of min and mean in bold in the following table.
As can be seen from the experimental data in Table 5, when ω 1 and ω 2 both take the random weights proposed in this paper, the obtained optima are satisfactory: this configuration finds the best solution 19 times and is dominated by the other configurations only on three functions ( f 9 , f 11 and f 15 ). It is followed by the configuration in which ω 1 uses the second weight strategy and ω 2 the first, which obtains smaller results six times. The conclusion of this discussion is that the algorithm performs better when ω 1 equals ω 2 ; if both use the randomized weights proposed in this paper, the capability of SPSOC is the best and it easily handles most of the benchmark functions. Comparing the 30-run average values, we likewise find that, when ω 1 equals ω 2 , the averages are smaller and the randomization strategy proposed in this paper is the best among them. If both weights use the same equation, the algorithm is also simpler and faster because only one weight needs to be calculated.
However, this paper uses only six configurations, combining three improvement strategies, in its simulation experiments. Whether a better weight improvement strategy exists that would give SPSOC still better performance needs to be investigated further.

4.5. Comparison and Analysis with Other PSOs

The most common way to show that an improved algorithm is excellent is to compare it with other classical improvements. In this section, we compare the three improved strategies proposed in this paper against bPSO and three representative improved variants, namely PSOd, HPSO-SCAC and TCPSO. The experiment consists mainly of three parts. The first tests the seven particle swarm algorithms separately on 10-dimensional, 50-dimensional and 100-dimensional functions; each function is searched 30 times, and a t-test is used to analyze the large amount of experimental data obtained. Twenty-two evolution curves of fitness from optimizing the 100-dimensional functions are also analyzed briefly. All experiments were conducted under the same conditions as Zhang et al. [70,71]. The second part analyzes complexity via the Big O notation [72] and the actual running time when searching for the optima of 50-dimensional problems. The third part calculates the success rate and the average iteration times of the seven algorithms in solving the twenty-two 50-dimensional problems; the stability and effectiveness of the algorithms are analyzed through these two indices. The details of these three parts are elaborated in sequence in the following subsections.

4.5.1. Different Dimensional Experiments and t-Test Analysis

Student's t-test is a frequently used method of data analysis in statistics for comparing whether two sets of data lie in one solution space, that is, for assessing differences between data. In this paper, a two-independent-samples t-test, given by the following formulas, is used to analyze the difference between the 30 optima found by SPSORC and the 30 found by each of the other algorithms:
t = \dfrac{(\bar{X}_1 - \bar{X}_2) - (\mu_1 - \mu_2)}{S_{\bar{X}_1 - \bar{X}_2}} = \dfrac{\bar{X}_1 - \bar{X}_2}{S_{\bar{X}_1 - \bar{X}_2}},
S_{\bar{X}_1 - \bar{X}_2} = \sqrt{\dfrac{S_c^2}{n_1} + \dfrac{S_c^2}{n_2}},
where X ¯ 1 and X ¯ 2 are, respectively, the averages of the two data sets; S c 2 is the combined (pooled) variance; the sample size is 30; and the two-tailed test level is 0.05. The Matlab R2014a ttest2 function (MathWorks, Natick, MA, USA) is used for the calculation so as to avoid unnecessary calculation error. Table 6 shows the optima of the seven improved algorithms for the 10-dimensional, 50-dimensional and 100-dimensional benchmark functions from Table A1. In Table 6, each algorithm solves the specified function 30 times, and the minimum value (min), average value (mean) and standard deviation (std) of the results are calculated. The minimum of each of these three sets of data is highlighted in boldface. ‘+’, ‘−’ and ‘=’ respectively indicate that the SPSORC result is ‘ b e t t e r ’ than, ‘ w o r s e ’ than and the ‘ s a m e ’ as that of the compared algorithm. For convenience in calculating SPSORC's net score, the three symbols are scored as ‘1’, ‘−1’ and ‘0’, respectively.
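The paper uses Matlab's ttest2; Equations (25) and (26) can equally be computed directly, e.g.:

```python
import numpy as np

def two_sample_t(x1, x2):
    """Two-independent-samples t statistic, Equations (25) and (26).

    Uses the combined (pooled) variance S_c^2; in the paper's setting
    n1 = n2 = 30 and the two-tailed level is 0.05.
    """
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    n1, n2 = len(x1), len(x2)
    # pooled variance from the two sample variances
    sc2 = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sc2 / n1 + sc2 / n2)    # Equation (26)
    return (x1.mean() - x2.mean()) / se  # Equation (25), under H0: mu1 = mu2
```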
The search results of a heuristic algorithm are random, so the average value (mean) over multiple searches is the most valuable statistic. Observing the mean values in Table 6 for the 10-dimensional functions, SPSORC outperforms the other six PSOs on 16 functions ( f 1 , f 2 , f 3 , f 4 , f 6 , f 7 , f 10 , f 12 , f 13 , f 14 , f 16 , f 17 , f 18 , f 20 , f 21 and f 22 ) in terms of the criterion ‘mean’, and on 15 of those 16 it reaches the theoretical optimal solution. Next, SPSOC finds the minimum mean for the 10-dimensional functions six times ( f 1 , f 5 , f 7 , f 8 , f 9 , f 10 ). PSOd, TCPSO and SPSO find the minimum mean only once each, while bPSO and HPSOscac never do. On three functions ( f 1 , f 7 , f 10 ), SPSOC and SPSORC obtain the same mean.
On the other hand, both SPSORC and HPSOscac find the same minimum values on many functions, including f 2 , f 3 , f 4 , f 6 , f 12 , f 13 , f 14 , f 16 , f 17 , f 21 and f 22 . In addition, SPSORC achieves the best minimum nine times, indicating that both SPSORC and HPSOscac can yield solutions better than their averages, but the results are volatile, especially for HPSOscac.
The standard deviation (std) reveals the volatility of the algorithms' results. The standard deviation measures how far a set of data is dispersed from its average: a larger standard deviation means most values differ more from their average, while a smaller one means the values are closer to it. It is clear that the standard deviation of SPSORC is almost always the smallest of all the algorithms.
The comparison of ‘min’, ‘mean’ and ‘std’ for the 50-dimensional and 100-dimensional functions between SPSORC and the other six PSO variants is also given in Table 6. Clearly, for both the 50-dimensional and 100-dimensional functions, except for f 9 , f 11 , f 15 and f 19 , SPSORC obtains a better ‘mean’ than most of the other improved strategies. On the 50-dimensional problems, SPSORC outperforms the other six PSOs on 17 functions ( f 1 , f 2 , f 3 , f 4 , f 5 , f 6 , f 7 , f 10 , f 12 , f 13 , f 14 , f 16 , f 17 , f 18 , f 20 , f 21 and f 22 ), reaching the theoretical optima on 14 of the 17. SPSOC finds the best solution five times ( f 1 , f 5 , f 7 , f 8 , f 10 ); PSOd does so twice ( f 15 , f 19 ) and SPSO once ( f 9 ); regrettably, the other algorithms never do. Almost the same situation appears in the 100-dimensional results. In addition, the ‘std’ of SPSORC, often showing no fluctuation at all, is markedly superior to that of the others. Finally, regarding the 100-dimensional results in Table 6, SPSOC also achieves good performance, with smaller mean values on f 1 , f 5 , f 7 , f 9 and f 10 out of the twenty-two 100-dimensional test functions.
To sum up, Table 6 shows experimentally that SPSORC is superior to, or highly competitive with, several improved PSO variants, and that this improved strategy can find fairly good solutions for most of the well-known benchmark functions.
Table 7 summarizes the scores based on the t-test analysis of the search results in the three dimensionalities. S c o r e is the net score of SPSORC: the number of functions on which it is better minus the number on which it is worse. Take the 100-dimensional comparison between SPSORC and SPSOC (algorithm A6) in Table 7 as an example: SPSORC is better than SPSOC seven times and worse three times, so its net score is S c o r e = 7 − 3 = 4 . The other net scores are calculated in the same way.
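The net score computation can be stated compactly (a trivial sketch; `marks` is a row of ‘+’, ‘−’ and ‘=’ symbols from Table 6, written here with the ASCII character '-'):

```python
def net_score(marks):
    """Net t-test score of SPSORC against one competitor:
    '+' scores 1, '-' scores -1, '=' scores 0."""
    return sum({'+': 1, '-': -1, '=': 0}[m] for m in marks)
```

For the 100-dimensional SPSORC-vs-SPSOC comparison, 7 pluses, 3 minuses and 12 equals over the 22 functions give a net score of 7 − 3 = 4.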
As observed from Table 7, SPSORC has a stable net score against all six algorithms, always greater than 0. Careful observation shows that its performance is slightly higher in the 50-dimensional scores than in the 10- and 100-dimensional ones. This again proves that the capability of SPSORC exceeds that of bPSO, PSOd, HPSOscac, TCPSO, SPSO and SPSOC, especially on 50-dimensional problems. The convergence curves of the seven improved PSO algorithms on the twenty-two benchmark functions with 100 dimensions are plotted in Figure 4.
Figure 4a is the legend for the other twenty-two convergence curves. Figure 4b indicates that, on f 1 , SPSORC converges fastest in the early stage among the seven improvements; HPSOscac converges relatively slowly compared to SPSORC. The order of performance on f 1 is SPSORC, HPSOscac, SPSOC, SPSO, bPSO, PSOd, TCPSO. Almost the same situation occurs simultaneously on 11 of the other function convergence curves. This should be the effect of simplifying the algorithm, which lets it converge very quickly in the early stage. On f 3 , f 4 , f 5 , et al., SPSORC finds the best solution within the maximum number of generations. Figure 4l,m, on f 11 and f 12 , show that SPSORC converges relatively slowly compared to HPSOscac at the beginning but surpasses it at about the 20th generation.
Next, the results are further analyzed through the box plots in Figure 5. A box plot mainly reflects the distribution of the original data and can also compare the distributions of several data sets: arranged in parallel on the same axes, the box plots reveal at a glance the median, tail length, outliers and distribution intervals of several batches of data. ‘+’ indicates an outlier.
From Figure 5a, the order of the boxes from high to low is bPSO, TCPSO, PSOd, HPSOscac, SPSO, SPSORC and SPSOC. The upper quartiles and medians of bPSO and TCPSO are close to the upper edge, indicating that the data of these two algorithms are biased toward larger values. In comparison, the box of PSOd is more symmetrical and its data distribution relatively uniform. Unfortunately, the boxes of these three algorithms sit too high, and their search results are not good. The box of HPSOscac lies at the bottom of the coordinate system, but its data clearly contain many outliers, some even exceeding the median of PSOd; its distribution is skewed toward smaller values, yet the data are scattered. Compared with these four algorithms, the box distributions of the three algorithms proposed in this paper are obviously more favorable: their optimization results are almost uniformly concentrated and smaller. The other box plots differ little from Figure 5a. Across the 22 box plots in Figure 5, bPSO, PSOd, HPSOscac and TCPSO seem to have more difficulty locating the solution than SPSORC. The boxes of PSOd mostly sit too high, followed by TCPSO. HPSOscac has many outliers; the span between its maximum and minimum is large, and its distribution is extremely non-uniform and scattered. Thus the results of HPSOscac are highly volatile and the improvement unstable, which may be related to its weight being mixed with trigonometric functions. For these reasons, the boxes of SPSORC and SPSOC are barely visible in the plots, almost all pressed flat against the bottom. Combined with the results of Table 6, we can conclude that SPSORC performs better, and its flatter box shows that its 30 search results differ little and its performance is very stable.
To sum up, the test results indicate that both the confidence term and the random weight can enhance diversity: the former yields a significant improvement in performance, while the latter preserves much more diversity. The two methods are compatible, and combining both with SPSO preserves the highest diversity and achieves the best overall performance among the six improved strategies.

4.5.2. Algorithm Complexity Analysis

Compared with the steps of the bPSO algorithm, the time complexity of SPSO's two improvements depends mainly on two parts: (1) random initialization, and (2) particle velocity and position updating. Both parts can be expressed as O ( m × N ) in the Big O notation [72], where m is the population size and N is the problem dimension. Since we have not changed the initialization method, we compare time complexity only for the velocity and position updates. SPSO, which drops the inertia weight update and has no confidence term to compute, has a reduced computational cost compared with the basic particle swarm optimization algorithm, but its Big O complexity is still O ( m × N ) . The most complex algorithm we propose, SPSORC, adds the weight calculation and the confidence term, which certainly increase the computational cost; however, Table 1 shows that its loop body is unchanged, so in the Big O notation its time complexity remains O ( m × N ) . Overall, the complexity of SPSO and its two improvements does not increase by any order of magnitude.
Next, we analyze the actual computational time from Table 8, which shows the computational times for the three function dimensionalities.
The times in Table 8 are the averages over 30 independent runs; the average varies slightly with the problem. Observing the running times of the seven algorithms, the running time of SPSORC is clearly similar to that of the other algorithms. The lowest running time is obtained by SPSO, since it greatly simplifies bPSO; SPSOC's time increases slightly because of the confidence term. HPSOscac's trigonometric-function improvement makes the algorithm better suited to multimodal problems, but the regular distribution of the trigonometric function increases particle diversity, so the particles struggle to converge in the later stage and the actual computation time is longer. TCPSO uses two populations that optimize the problem through information exchange. Thus, SPSO and its improved strategies do not simply trade runtime for performance. The distribution of real computational times is shown in Figure 6, where A1–A7 denote bPSO, PSOd, HPSOscac, TCPSO, SPSO, SPSOC and SPSORC, respectively.
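The timing methodology behind Table 8 amounts to averaging wall-clock time over 30 independent runs; a sketch (the helper name is hypothetical):

```python
import time

def average_runtime(optimizer, runs=30):
    """Average wall-clock time of `optimizer` over independent runs,
    mirroring how the Table 8 entries are produced (30 repetitions)."""
    start = time.perf_counter()
    for _ in range(runs):
        optimizer()
    return (time.perf_counter() - start) / runs
```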

4.5.3. Success Rate and Average Iteration Times

The success rate (SR) is the percentage of runs in which an algorithm reaches the convergence accuracy for a function out of the total number of runs. The average iteration times (AIT) is the average number of iterations the algorithm needs to reach the convergence accuracy. The former examines the stability and accuracy of the algorithm, while the latter mainly examines its efficiency. The convergence accuracy used for both measures is the accuracy we wish to meet on the 50-dimensional test functions; the specific values are in the last column of Table A1. The other parameters are set according to Table 2. Figure 7 shows radar charts of the average iteration times for eight functions; in Figure 7 the points of SPSORC are always close to the center origin.
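The two indices can be computed from per-run convergence records as follows (a sketch; the record format, with `None` marking a failed run, is our own convention):

```python
def sr_and_ait(iterations_to_converge):
    """Success rate (SR, in percent) and average iteration times (AIT).

    `iterations_to_converge` holds, for each run, the generation at
    which the target accuracy was reached, or None if the run failed
    within the maximum number of generations.
    """
    hits = [it for it in iterations_to_converge if it is not None]
    sr = 100.0 * len(hits) / len(iterations_to_converge)
    ait = sum(hits) / len(hits) if hits else None  # None corresponds to '-' in Table 9
    return sr, ait
```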
Specific data for the average iteration times and success rates of the seven algorithms on the 50-dimensional problems are given in Table 9. ‘AIT’ is the average iteration times and ‘SR’ the success rate; to ease comparison, ‘SR’ is expressed as a percentage, with results rounded to two decimal places. ‘−’ means that the algorithm fails to reach the convergence accuracy on that function within the maximum number of generations. For convenience of observation, the minimum AIT and the maximum SR for each function are shown in bold.
The results can be analyzed from Table 9. bPSO, PSOd, TCPSO and SPSO each achieve the highest success rate three times; SPSOC and HPSOscac achieve it six and seven times, respectively. SPSORC, remarkably, achieves it 15 times, and on seven functions ( f 12 , f 13 , f 14 , f 17 , f 18 , f 20 , f 21 ) it attains a best success rate that none of the other six algorithms reach. One can see that the SPSORC strategy covers a wider range of problem types with high precision and stability. Comparing the average iteration times, TCPSO, SPSO and SPSOC clearly do not obtain the lowest values.
These two comparisons reveal that SPSORC has a large advantage over the other six algorithms: it is not only more stable, but also searches more efficiently. Faced with a unimodal function, SPSORC converges to the required precision more quickly; on other multimodal complex problems it is not to be outdone, except for extremely difficult functions such as the Rosenbrock Function and Schwefel's Problem 2.26, on which it shows weak stability. The improved method thus adapts to a variety of test environments, and the results are excellent.

5. Conclusions

In this paper, a Simple Particle Swarm Optimization based on Random inertia weight and Confidence term, namely SPSORC, has been proposed. SPSORC adopts three improving strategies: first, the particle update formula uses only the positional and social terms, enhancing exploration capability; second, the confidence term is introduced to increase particle diversity and avoid excessive particle convergence; finally, a random inertia weight is formulated to keep the balance between exploration and exploitation. Extensive experiments in Section 4 on twenty-two benchmark functions validate the effectiveness, efficiency, robustness and scalability of SPSO and its further improvements. It has been demonstrated that, in most cases, SPSORC exhibits better exploitation and exploration than, or is at least highly competitive with, basic PSO and the state-of-the-art improved variants introduced in this paper.
In future work, we intend to incorporate different initialization strategies, multi-swarm schemes and hybrid algorithms into SPSORC, which may yield very competitive algorithms. Many adaptive methods for PSO have been proposed, and research on particle swarm optimization and its improvements remains a promising direction for improving the performance of the proposed approach and its applications. Furthermore, we will apply the proposed approach to other practical engineering optimization problems, e.g., machine-tool spindle design, the logistics distribution region partitioning problem and the economic load dispatch problem. With such evolutionary algorithms, it is unnecessary to know the computing environment or to calculate gradients and other information, which helps save computing cost; better still, problems with more dimensions and goals, including some discontinuous problems, can be tackled at once.

Author Contributions

X.Z. suggested the improving strategy and wrote the original draft preparation. D.Z. was responsible for checking this paper. X.S. provided a provincial project.

Funding

The National Natural Science Foundation of China (No. 61403174) and the Postgraduate Research and Practice Innovation Program of Jiangsu Province (No. KYCX17_1575, No. KYCX17_1576).

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61403174) and the Postgraduate Research and Practice Innovation Program of Jiangsu Province (No. KYCX17_1575, No. KYCX17_1576).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PSO: Particle Swarm Optimization
bPSO: The basic PSO [21,22]
PSOd: A distribution-based update rule for PSO [52]
HPSOscac: A hybrid PSO with sine cosine acceleration coefficients [67]
TCPSO: A two-swarm cooperative PSO [45]
SPSO: Simple PSO
SPSOC: Simple PSO with Confidence term
SPSORC: Simple PSO based on Random weight and Confidence term
v: Particle velocity
x: Particle position
p best: Personal historical best solution
g best: Global best solution
ω: Inertia weight
ω m a x : The maximum weight
ω m i n : The minimum weight
c 1 : Self-cognitive factor
c 2 : Social communication factor
U (in Figure 1, Figure 2 and Figure 3): Solution space of a function
O (in Figure 1, Figure 2 and Figure 3): The theoretical optima of a function
i: The current particle
n: The current dimension
N: The maximum dimension
t: The current generation
T: The upper limit of generations
N R : The number of times the algorithm searches a problem
m: Population size
min: The minimum of the optima found over the 30 runs on a problem
mean: The average of the optima found over the 30 runs on a problem
std: The standard deviation of the optima found over the 30 runs on a problem
ttest (in Table 6): t-test results
AIT (in Section 4.5.3): Average iteration times
SR (in Section 4.5.3): Success rate
A1–A7 (in Figure 5 and Figure 6): bPSO, PSOd, HPSOscac, TCPSO, SPSO, SPSOC and SPSORC, respectively

Appendix A. Benchmark Function Appendix

Table A1. Benchmark functions.
Instance | Expression | Domain | Analytical Solution | Accuracy (50)
Ackley’s Path Function | $f_1(x) = -20\,e^{-0.2\sqrt{\frac{1}{30}\sum_{i=1}^{N} x_i^2}} - e^{\frac{1}{30}\sum_{i=1}^{N}\cos 2\pi x_i} + 20 + e$ | $[-32, 32]$ | $f_1(0,\dots,0) = 8.88\times10^{-16}$ | $1\times10^{-15}$
Alpine Function | $f_2(x) = \sum_{i=1}^{N} |x_i \sin(x_i) + 0.1x_i|$ | $[-10, 10]$ | $f_2(0,\dots,0) = 0$ | $1\times10^{-60}$
Axis Parallel Hyperellipsoid | $f_3(x) = \sum_{i=1}^{N} i\,x_i^2$ | $[-5.12, 5.12]$ | $f_3(0,\dots,0) = 0$ | $1\times10^{-15}$
De Jong’s Function 4 (no noise) | $f_4(x) = \sum_{i=1}^{N} i\,x_i^4$ | $[-1.28, 1.28]$ | $f_4(0,\dots,0) = 0$ | $1\times10^{-240}$
Griewank Problem | $f_5(x) = \frac{1}{4000}\sum_{i=1}^{N} x_i^2 - \prod_{i=1}^{N}\cos\!\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | $[-600, 600]$ | $f_5(0,\dots,0) = 0$ | $1\times10^{-15}$
High Conditioned Elliptic Function | $f_6(x) = \sum_{i=1}^{N} (10^6)^{\frac{i-1}{N-1}} x_i^2$ | $[-100, 100]$ | $f_6(0,\dots,0) = 0$ | $1\times10^{-110}$
Inverted Cosine Wave Function | $f_7(x) = -\sum_{i=1}^{N-1}\left(e^{-\frac{x_i^2 + x_{i+1}^2 + 0.5x_i x_{i+1}}{8}} \times \cos\!\left(4\sqrt{x_i^2 + x_{i+1}^2 + 0.5x_i x_{i+1}}\right)\right)$ | $[-5, 5]$ | $f_7(0,\dots,0) = -N+1$ | $-4.9\times10^{1}$
Pathological Function | $f_8(x) = \sum_{i=1}^{N-1}\left(0.5 + \frac{\sin^2\sqrt{100x_i^2 + x_{i+1}^2} - 0.5}{1 + \frac{1}{1000}(x_i^2 - 2x_i x_{i+1} + x_{i+1}^2)}\right)$ | $[-100, 100]$ | $f_8(0,\dots,0) = 0$ | $1\times10^{-5}$
Quartic Function, i.e., with noise | $f_9(x) = \sum_{i=1}^{N} i\,x_i^4 + \mathrm{rand}[0,1)$ | $[-10, 10]$ | $f_9(0,\dots,0) = 0$ | $1\times10^{-1}$
Rastrigin Problem | $f_{10}(x) = \sum_{i=1}^{N} [x_i^2 - 10\cos(2\pi x_i) + 10]$ | $[-5.12, 5.12]$ | $f_{10}(0,\dots,0) = 0$ | $1\times10^{-20}$
Rosenbrock Problem | $f_{11}(x) = \sum_{i=1}^{N-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$ | $[-30, 30]$ | $f_{11}(1,\dots,1) = 0$ | $5\times10^{1}$
Schwefel’s Problem 1.2 | $f_{12}(x) = \sum_{i=1}^{N}\left(\sum_{j=1}^{i} x_j\right)^2$ | $[-100, 100]$ | $f_{12}(0,\dots,0) = 0$ | $1\times10^{-100}$
Schwefel’s Problem 2.21 | $f_{13}(x) = \max_i \{|x_i|\},\ 1 \le i \le 30$ | $[-100, 100]$ | $f_{13}(0,\dots,0) = 0$ | $1\times10^{-80}$
Schwefel’s Problem 2.22 | $f_{14}(x) = \sum_{i=1}^{N} |x_i| + \prod_{i=1}^{N} |x_i|$ | $[-10, 10]$ | $f_{14}(0,\dots,0) = 0$ | $1\times10^{-60}$
Schwefel’s Problem 2.26 | $f_{15}(x) = -\sum_{i=1}^{N} x_i \sin\!\left(\sqrt{|x_i|}\right)$ | $[-500, 500]$ | $f_{15}(s,\dots,s) = -419N,\ s \approx 420.97$ | $-2.5\times10^{3}$
Sphere Function | $f_{16}(x) = \sum_{i=1}^{N} x_i^2$ | $[-100, 100]$ | $f_{16}(0,\dots,0) = 0$ | $1\times10^{-120}$
Sum of Different Power Function | $f_{17}(x) = \sum_{i=1}^{N} |x_i|^{i+1}$ | $[-1, 1]$ | $f_{17}(0,\dots,0) = 0$ | $1\times10^{-300}$
Xin–She Yang 1 | $f_{18}(x) = \sum_{i=1}^{N} \mathrm{rand}[0,1) \times |x_i|^{i}$ | $[-5, 5]$ | $f_{18}(0,\dots,0) = 0$ | $1\times10^{-60}$
Xin–She Yang 2 | $f_{19}(x) = \left(\sum_{i=1}^{N} |x_i|\right) e^{-\sum_{i=1}^{N} \sin x_i^2}$ | $[-2\pi, 2\pi]$ | $f_{19}(0,\dots,0) = 0$ | $1\times10^{-8}$
Xin–She Yang 3 | $f_{20}(x) = e^{-\sum_{i=1}^{N} (x_i/\beta)^{2\alpha}} - 2e^{-\sum_{i=1}^{N} x_i^2} \prod_{i=1}^{N} \cos^2 x_i,\ \beta = 15,\ \alpha = 3$ | $[-20, 20]$ | $f_{20}(0,\dots,0) = -1$ | $-1$
Xin–She Yang 4 | $f_{21}(x) = \left[\sum_{i=1}^{N} \sin^2 x_i - e^{-\sum_{i=1}^{N} x_i^2}\right] e^{-\sum_{i=1}^{N} \sin^2 \sqrt{|x_i|}}$ | $[-10, 10]$ | $f_{21}(0,\dots,0) = -1$ | $-1$
Zakharov Function | $f_{22}(x) = \sum_{i=1}^{N} x_i^2 + \left(\sum_{i=1}^{N} 0.5\,i\,x_i\right)^2 + \left(\sum_{i=1}^{N} 0.5\,i\,x_i\right)^4$ | $[-5, 10]$ | $f_{22}(0,\dots,0) = 0$ | $1\times10^{-80}$

References

  1. Denysiuk, R.; Gaspar-Cunha, A. Multiobjective Evolutionary Algorithm Based on Vector Angle Neighborhood. Swarm Evol. Comput. 2017, 37, 663–670. [Google Scholar] [CrossRef]
  2. Wang, G.G. Moth search algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Memet. Comput. 2016, 10, 1–14. [Google Scholar] [CrossRef]
  3. Feng, Y.H.; Wang, G.G. Binary moth search algorithm for discounted 0–1 knapsack problem. IEEE Access 2018, 6, 10708–10719. [Google Scholar] [CrossRef]
  4. Grefenstette, J.J. Genetic algorithms and machine learning. Mach. Learn. 1988, 3, 95–99. [Google Scholar] [CrossRef]
  5. Dorigo, M.; Gambardella, L.M. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1997, 1, 53–66. [Google Scholar] [CrossRef]
  6. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  7. Kirkpatrick, S.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  8. Wang, G.G.; Gandomi, A.H.; Alavi, A.H.; Deb, S. A multi-stage krill herd algorithm for global numerical optimization. Int. J. Artif. Intell. Tools 2016, 25, 1550030. [Google Scholar] [CrossRef]
  9. Wang, G.G.; Deb, S.; Gandomi, A.H.; Alavi, A.H. Opposition-based krill herd algorithm with Cauchy mutation and position clamping. Neurocomputing 2016, 177, 147–157. [Google Scholar] [CrossRef]
  10. Wang, G.G.; Gandomi, A.H.; Alavi, A.H. An effective krill herd algorithm with migration operator in biogeography-based optimization. Appl. Math. Model. 2014, 38, 2454–2462. [Google Scholar] [CrossRef]
  11. Wang, G.G.; Guo, L.H.; Wang, H.Q.; Duan, H.; Liu, L.; Li, J. Incorporating mutation scheme into krill herd algorithm for global numerical optimization. Neural Comput. Appl. 2014, 24, 853–871. [Google Scholar] [CrossRef]
  12. Wang, G.G.; Gandomi, A.H.; Hao, G.S. Hybrid krill herd algorithm with differential evolution for global numerical optimization. Neural Comput. Appl. 2014, 25, 297–308. [Google Scholar] [CrossRef]
  13. Ding, X.; Guo, H.; Guo, S. Efficiency Enhancement of Traction System Based on Loss Models and Golden Section Search in Electric Vehicle. Energy Procedia 2017, 105, 2923–2928. [Google Scholar] [CrossRef]
  14. Ramos, H.; Monteiro, M.T.T. A new approach based on the Newton’s method to solve systems of nonlinear equations. J. Comput. Appl. Math. 2016, 318, 3–13. [Google Scholar] [CrossRef]
  15. Fazio, A.R.D.; Russo, M.; Valeri, S.; Santis, M.D. Linear method for steady-state analysis of radial distribution systems. Int. J. Electr. Power Energy Syst. 2018, 99, 744–755. [Google Scholar] [CrossRef]
  16. Du, X.; Zhang, P.; Ma, W. Some modified conjugate gradient methods for unconstrained optimization. J. Comput. Appl. Math. 2016, 305, 92–114. [Google Scholar] [CrossRef]
  17. Pooranian, Z.; Shojafar, M.; Abawajy, J.H.; Abraham, A. An efficient meta-heuristic algorithm for grid computing. J. Comb. Optim. 2015, 30, 413–434. [Google Scholar] [CrossRef]
  18. Shojafar, M.; Chiaraviglio, L.; Blefari-Melazzi, N.; Salsano, S. P5G: A Bio-Inspired Algorithm for the Superfluid Management of 5G Networks. In Proceedings of the GLOBECOM 2017: 2017 IEEE Global Communications Conference, Singapore, 4–8 December 2017; pp. 1–7. [Google Scholar] [CrossRef]
  19. Shojafar, M.; Cordeschi, N.; Abawajy, J.H.; Baccarelli, E. Adaptive Energy-Efficient QoS-Aware Scheduling Algorithm for TCP/IP Mobile Cloud. In Proceedings of the IEEE Globecom Workshops, San Diego, CA, USA, 6–10 December 2015; pp. 1–6. [Google Scholar] [CrossRef]
  20. Zou, D.X.; Deb, S.; Wang, G.G. Solving IIR system identification by a variant of particle swarm optimization. Neural Comput. Appl. 2018, 30, 685–698. [Google Scholar] [CrossRef]
  21. Kennedy, J. Particle Swarm Optimization. In Proceedings of the 1995 International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  22. Shi, Y.H.; Eberhart, R.C. Empirical study of particle swarm optimization. In Proceedings of the 1999 Congress on Evolutionary Computation, CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; pp. 1945–1950. [Google Scholar] [CrossRef]
  23. Chen, Y.; Li, L.; Xiao, J.; Yang, Y.; Liang, J.; Li, T. Particle swarm optimizer with crossover operation. Eng. Appl. Artif. Intell. 2018, 70, 159–169. [Google Scholar] [CrossRef]
  24. Zou, D.; Gao, L.; Li, S.; Wu, J.; Wang, X. A novel global harmony search algorithm for task assignment problem. J. Syst. Softw. 2010, 83, 1678–1688. [Google Scholar] [CrossRef]
  25. Niu, W.J.; Feng, Z.K.; Cheng, C.T.; Wu, X.Y. A parallel multi-objective particle swarm optimization for cascade hydropower reservoir operation in southwest China. Appl. Soft Comput. 2018, 70, 562–575. [Google Scholar] [CrossRef]
  26. Feng, Y.; Wang, G.G.; Deb, S.; Lu, M.; Zhao, X.J. Solving 0–1 knapsack problem by a novel binary monarch butterfly optimization. Neural Comput. Appl. 2017, 28, 1619–1634. [Google Scholar] [CrossRef]
  27. Wang, G.G.; Deb, S.; Coelho, L.D.S. Elephant Herding Optimization. In Proceedings of the International Symposium on Computational and Business Intelligence, Bali, Indonesia, 7–9 December 2015; pp. 1–5. [Google Scholar] [CrossRef]
  28. Wang, G.; Guo, L.; Gandomi, A.H.; Cao, L.; Alavi, A.H.; Duan, H.; Li, J. Levy-flight krill herd algorithm. Math. Probl. Eng. 2013, 2013, 682073. [Google Scholar] [CrossRef]
  29. Wang, G.G.; Gandomi, A.H.; Alavi, A.H. Stud krill herd algorithm. Neurocomputing 2014, 128, 363–370. [Google Scholar] [CrossRef]
  30. Wang, G.G.; Deb, S.; Coelho, L. Earthworm optimization algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Int. J. Bio-Inspired Comput. 2018, 12, 1–22. [Google Scholar] [CrossRef]
  31. Wang, G.G.; Chu, H.; Mirjalili, S. Three-dimensional path planning for UCAV using an improved bat algorithm. Aerosp. Sci. Technol. 2016, 49, 231–238. [Google Scholar] [CrossRef]
  32. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation Proceedings, IEEE World Congress on Computational Intelligence (Cat. No.98TH8360), Anchorage, AK, USA, 4–9 May 1998; pp. 69–73. [Google Scholar] [CrossRef]
  33. Parsopoulos, K.E.; Vrahatis, M.N. Particle swarm optimizer in noisy and continuously changing environments. In Artificial Intelligence and Soft Computing; Hamza, M.H., Ed.; IASTED/ACTA Press: Anaheim, CA, USA, 2001; pp. 289–294. [Google Scholar]
  34. Kennedy, J.; Mendes, R. Population structure and particle swarm performance. In Proceedings of the IEEE Congress on Evolutionary Computation, Honolulu, HI, USA, 12–17 May 2002; Volume 2, pp. 1671–1676. [Google Scholar] [CrossRef]
  35. Clerc, M.; Kennedy, J. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73. [Google Scholar] [CrossRef]
  36. Xu, G.; Yu, G. Reprint of: On convergence analysis of particle swarm optimization algorithm. J. Shanxi Norm. Univ. 2018, 4, 25–32. [Google Scholar] [CrossRef]
  37. Sun, J.; Wu, X.; Palade, V.; Fang, W.; Lai, C.H.; Xu, W.B. Convergence analysis and improvements of quantum-behaved particle swarm optimization. Inf. Sci. 2012, 193, 81–103. [Google Scholar] [CrossRef]
  38. Li, S.F.; Cheng, C.Y. Particle Swarm Optimization with Fitness Adjustment Parameters. Comput. Ind. Eng. 2017, 113, 831–841. [Google Scholar] [CrossRef]
  39. Li, N.J.; Wang, W.; Hsu, C.C.J. Hybrid particle swarm optimization incorporating fuzzy reasoning and weighted particle. Neurocomputing 2015, 167, 488–501. [Google Scholar] [CrossRef]
  40. Chih, M.; Lin, C.J.; Chern, M.S.; Ou, T.Y. Particle swarm optimization with time-varying acceleration coefficients for the multidimensional knapsack problem. J. Chin. Inst. Ind. Eng. 2014, 33, 77–102. [Google Scholar] [CrossRef]
  41. Ratnaweera, A.; Halgamuge, S.K.; Watson, H.C. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans. Evol. Comput. 2004, 8, 240–255. [Google Scholar] [CrossRef]
  42. Kennedy, J.; Mendes, R. Neighborhood topologies in fully informed and best-of-neighborhood particle swarms. IEEE Trans. Syst. Man Cybern. Part C 2006, 36, 515–519. [Google Scholar] [CrossRef]
  43. Zhao, S.Z.; Suganthan, P.N.; Pan, Q.K.; Tasgetiren, M.F. Dynamic multi-swarm particle swarm optimizer with harmony search. Expert Syst. Appl. 2011, 38, 3735–3742. [Google Scholar] [CrossRef]
  44. Majercik, S.M. Using Fluid Neural Networks to Create Dynamic Neighborhood Topologies in Particle Swarm Optimization. In Proceedings of the International Conference on Swarm Intelligence, Brussels, Belgium, 10–12 September 2014; Springer: Cham, Switzerland; New York, NY, USA, 2014; Volume 8667, pp. 270–277. [Google Scholar] [CrossRef]
  45. Sun, S.; Li, J. A two-swarm cooperative particle swarms optimization. Swarm Evol. Comput. 2014, 15, 1–18. [Google Scholar] [CrossRef]
  46. Netjinda, N.; Achalakul, T.; Sirinaovakul, B. Particle Swarm Optimization inspired by starling flock behavior. Appl. Soft Comput. 2015, 35, 411–422. [Google Scholar] [CrossRef]
  47. Beheshti, Z.; Shamsuddin, S.M. Non-parametric particle swarm optimization for global optimization. Appl. Soft Comput. 2015, 28, 345–359. [Google Scholar] [CrossRef]
  48. Mendes, R.; Kennedy, J.; Neves, J. The fully informed particle swarm: Simpler, maybe better. IEEE Trans. Evol. Comput. 2004, 8, 204–210. [Google Scholar] [CrossRef]
  49. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Subramanian, B. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  50. Gong, Y.J.; Li, J.J.; Zhou, Y.; Li, Y.; Chung, H.S.H.; Shi, Y.H.; Zhang, J. Genetic Learning Particle Swarm Optimization. IEEE Trans. Cybern. 2017, 46, 2277–2290. [Google Scholar] [CrossRef] [PubMed]
  51. Zou, D.; Li, S.; Li, Z.; Kong, X. A new global particle swarm optimization for the economic emission dispatch with or without transmission losses. Energy Convers. Manag. 2017, 139, 45–70. [Google Scholar] [CrossRef]
  52. Kiran, M.S. Particle Swarm Optimization with a New Update Mechanism. Appl. Soft Comput. 2017, 60, 607–680. [Google Scholar] [CrossRef]
  53. Wang, G.G.; Gandomi, A.H.; Alavi, A.H. A chaotic particle-swarm krill herd algorithm for global numerical optimization. Kybernetes 2013, 42, 962–978. [Google Scholar] [CrossRef]
  54. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4831–4845. [Google Scholar] [CrossRef]
  55. Yang, X.S. Nature-Inspired Metaheuristic Algorithm; Luniver Press: Beckington, UK, 2008; ISBN1 1905986106. ISBN2 9781905986101. [Google Scholar]
  56. Wang, G.G.; Gandomi, A.H.; Yang, X.S.; Alavi, A.H. A novel improved accelerated particle swarm optimization algorithm for global numerical optimization. Eng. Comput. 2014, 31, 1198–1220. [Google Scholar] [CrossRef]
  57. Liu, Y.; Niu, B.; Luo, Y. Hybrid learning particle swarm optimizer with genetic disturbance. Neurocomputing 2015, 151, 1237–1247. [Google Scholar] [CrossRef]
  58. Chen, Y.; Li, L.; Peng, H.; Xiao, J.; Yang, Y.; Shi, Y. Particle Swarm Optimizer with two differential mutation. Appl. Soft Comput. 2017, 61, 314–330. [Google Scholar] [CrossRef]
  59. Liu, Z.G.; Ji, X.H.; Liu, Y.X. Hybrid Non-parametric Particle Swarm Optimization and its Stability Analysis. Expert Syst. Appl. 2017, 92, 256–275. [Google Scholar] [CrossRef]
  60. Bewoor, L.A.; Prakash, V.C.; Sapkal, S.U. Production scheduling optimization in foundry using hybrid Particle Swarm Optimization algorithm. Procedia Manuf. 2018, 22, 57–64. [Google Scholar] [CrossRef]
  61. Xu, X.; Rong, H.; Pereira, E.; Trovati, M.W. Predatory Search-based Chaos Turbo Particle Swarm Optimization (PS-CTPSO): A new particle swarm optimisation algorithm for Web service combination problems. Future Gener. Comput. Syst. 2018, 89, 375–386. [Google Scholar] [CrossRef]
  62. Guan, G.; Yang, Q.; Gu, W.W.; Jiang, W.; Lin, Y. Ship inner shell optimization based on the improved particle swarm optimization algorithm. Adv. Eng. Softw. 2018, 123, 104–116. [Google Scholar] [CrossRef]
  63. Qin, Z.; Liang, Y.G. Sensor Management of LEO Constellation Using Modified Binary Particle Swarm Optimization. Optik 2018, 172, 879–891. [Google Scholar] [CrossRef]
  64. Peng, Z.; Manier, H.; Manier, M.A. Particle Swarm Optimization for Capacitated Location-Routing Problem. IFAC PapersOnLine 2017, 50, 14668–14673. [Google Scholar] [CrossRef]
  65. Copot, C.; Thi, T.M.; Ionescu, C. PID based Particle Swarm Optimization in Offices Light Control. IFAC PapersOnLine 2018, 51, 382–387. [Google Scholar] [CrossRef]
  66. Chen, S.Y.; Wu, C.H.; Hung, Y.H.; Chung, C.T. Optimal Strategies of Energy Management Integrated with Transmission Control for a Hybrid Electric Vehicle using Dynamic Particle Swarm Optimization. Energy 2018, 160, 154–170. [Google Scholar] [CrossRef]
  67. Chen, K.; Zhou, F.; Yin, L.; Wang, S.; Wang, Y.; Wan, F. A Hybrid Particle Swarm Optimizer with Sine Cosine Acceleration Coefficients. Inf. Sci. 2017, 422, 218–241. [Google Scholar] [CrossRef]
  68. Zou, D.; Gao, L.; Wu, J.; Li, S. Novel global harmony search algorithm for unconstrained problems. Neurocomputing 2010, 73, 3308–3318. [Google Scholar] [CrossRef]
  69. Hu, W.; Li, Z.S. A Simpler and More Effective Particle Swarm Optimization Algorithm. J. Softw. 2007, 18, 861–868. [Google Scholar] [CrossRef]
  70. Zhang, X.; Zou, D.X.; Kong, Z.; Shen, X. A Hybrid Gravitational Search Algorithm for Unconstrained Problems. In Proceedings of the 30th Chinese Control and Decision Conference, Shenyang, China, 9–11 June 2018; pp. 5277–5284. [Google Scholar] [CrossRef]
  71. Zhang, X.; Zou, D.X.; Shen, X. A Simplified and Efficient Gravitational Search Algorithm for Unconstrained Optimization Problems. In Proceedings of the 2017 International Conference on Vision, Image and Signal Processing, Osaka, Japan, 22–24 September 2017; pp. 11–17. [Google Scholar] [CrossRef]
  72. Müller, P. Analytische Zahlentheorie. In Funktionentheorie 1; Springer: Berlin/Heidelberg, Germany, 2006; pp. 386–456. [Google Scholar]
Figure 1. Optimization procedure of bPSO.
Figure 2. Optimization procedure of SPSO.
Figure 3. Optimization procedure of SPSOC.
Figure 4. Average convergence curves of seven improved PSO algorithms for twenty-two functions in 10 dimensions.
Figure 5. Box diagram of thirty results, each function with 100 dimensions.
Figure 6. Real computational time of f 11 in 50 dimensions.
Figure 7. Radar charts of average iteration times.
Table 1. The procedure of SPSORC.
Line | Procedure of SPSORC
1  Initialize parameters: dimension N, population size m, iteration number T, weight ω, learning factors c 1 , c 2 , etc.; % Step 1
2  Initialize and reserve matrix space: p best = [Inf 11 ⋯ Inf m N ], g best = [Inf 1 ⋯ Inf m ], x min = lower limits of position, x max = upper limits of position;
3  For i = 1:m
4   For j = 1:N
5    Randomly initialize velocity and position: v i n , x i n ; % Step 2
6   End For
7  End For
8  For i = 1:m
9   Calculate the fitness and compare to get p best 1 and g best 1 ;
10 End For
11 While the optimum is not found and the termination condition is not met
12  Calculate f b e s t and f w o r s t ; then, get ω by Equation (23); % Step 3
13  For i = 1:m
14   For j = 1:N
15    Update the particle position according to Equation (22); % Step 4
16    If x i n t > x max
17      x i n t = x max ;
18    ElseIf x i n t < x min
19      x i n t = x min ;
20    End If
21   End For
22   Substitute the current particle into the fitness function to calculate its fitness value;
23   Compare to get p best and g best ;
24  End For
25 End While
26 Return results. % Step 5
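The control flow of Table 1 can be rendered as a short executable sketch. The paper's experiments used MATLAB; the Python/NumPy version below mirrors Steps 1–5 only. Equations (22) and (23) are not reproduced in this excerpt, so the random-weight rule and the velocity-free position update with a confidence term shown here are illustrative placeholders, not the authors' exact formulas.

```python
import numpy as np

def spsorc_sketch(fitness, n_dim, m=40, t_max=100, x_min=-100.0, x_max=100.0,
                  c2=2.0, seed=0):
    """Sketch of the SPSORC loop in Table 1 (placeholder update rules)."""
    # Steps 1-2: initialize parameters and random positions (SPSO drops velocity)
    rng = np.random.default_rng(seed)
    x = rng.uniform(x_min, x_max, size=(m, n_dim))
    fit = np.apply_along_axis(fitness, 1, x)
    pbest, pbest_fit = x.copy(), fit.copy()
    g = int(np.argmin(fit))
    gbest, gbest_fit = x[g].copy(), float(fit[g])
    for _ in range(t_max):
        # Step 3: random inertia weight driven by the swarm's fitness spread
        # (placeholder for Equation (23))
        f_best, f_worst = float(fit.min()), float(fit.max())
        omega = rng.random((m, 1)) * 0.9 * (f_worst - f_best) / (abs(f_worst) + 1e-12)
        omega = np.clip(omega, 0.0, 0.9)
        # Step 4: velocity-free update toward gbest plus a "confidence term"
        # pulling back to each particle's pbest (placeholder for Equation (22))
        r = rng.random((m, n_dim))
        x = omega * x + c2 * r * (gbest - x) + (1.0 - r) * (pbest - x)
        x = np.clip(x, x_min, x_max)          # boundary handling (lines 16-20)
        fit = np.apply_along_axis(fitness, 1, x)
        improved = fit < pbest_fit            # lines 22-23: update pbest/gbest
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
        g = int(np.argmin(pbest_fit))
        if pbest_fit[g] < gbest_fit:
            gbest, gbest_fit = pbest[g].copy(), float(pbest_fit[g])
    return gbest, gbest_fit                   # Step 5: return results
```

The structural point survives the placeholders: unlike bPSO, there is no velocity array to store or update, which is what gives SPSO-family variants their simpler structure and lower per-iteration cost.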
Table 2. Parameters for candidates.
Algorithm | R | m | T | ω max | ω min | c 1 | c 2 | c 3
bPSO | 30 | 40 | 100 | 0.9 | 0.4 | 2 | 2 | -
PSOd | 30 | 40 | 100 | - | - | - | - | -
HPSOscac | 30 | 40 | 100 | Equation (10) | - | Equation (11) | Equation (11) | -
TCPSO | 30 | 80 | 100 | 0.9 | - | 1.6 | 1.6 | 1.6
SPSO | 30 | 40 | 100 | 0.9 | 0.4 | - | 2 | -
SPSOC | 30 | 40 | 100 | 0.9 | 0.4 | - | 2 | -
SPSORC | 30 | 40 | 100 | Equation (23) | - | - | 2 | -
Table 3. Simulation environment.
Operating System | Windows 7 Professional (×32)
CPU | Core 2 Duo 2.26 GHz
Memory | 4.00 GB
Platform | MATLAB R2014a
Network | Gigabit Ethernet
Table 4. Discussion on the necessity of improving SPSO.
Instance | SPSO: min, mean | SPSOC: min, mean | SPSORC: min, max
f 1 4.44 × 10 15 4.44 × 10 15 8.88 × 10 16 8.88 × 10 16 8.88 × 10 16 1.48 × 10 15
f 2 1.53 × 10 18 5.91 × 10 3 6.94 × 10 56 6.93 × 10 50 06.65 × 10 268
f 3 2.16 × 10 35 2.68 × 10 35 3.84 × 10 106 1.33 × 10 97 00
f 4 5.40 × 10 76 7.42 × 10 76 2.27 × 10 216 8.96 × 10 195 00
f 5 08.32 × 10 3 0003.37×10 16
f 6 1.61 × 10 30 2.63 × 10 30 2.88 × 10 102 4.12 × 10 93 00
f 7 −1.12 × 10 1 7.87 × 10 2 −1.49 × 10 2 −1.49 × 10 2 −1.49 × 10 2 −1.49 × 10 2
f 8 9.51 × 10 1 6.61 × 10 1 0001.87 × 10 1
f 9 2.33 × 10 4 3.15 × 10 2 9.12 × 10 4 2.30 × 10 2 5.75 × 10 3 3.63 × 10 1
f 10 04.52 × 10 1 0006.51 × 10 16
f 11 1.49 × 10 2 1.49 × 10 2 1.49 × 10 2 1.49 × 10 2 1.48 × 10 2 1.49 × 10 2
f 12 7.97 × 10 33 2.79 × 10 32 2.95 × 10 82 8.39 × 10 21 01.77 × 10 04
f 13 6.25 × 10 49 1.07 × 10 45 1.89 × 10 75 1.24 × 10 60 00
f 14 1.20 × 10 17 1.36 × 10 17 5.46 × 10 55 3.75 × 10 47 01.48 × 10 270
f 15 −5.26 × 10 3 −3.37 × 10 3 −4.31 × 10 3 −2.61 × 10 3 −5.99 × 10 3 −3.80 × 10 3
f 16 1.21 × 10 34 1.47 × 10 34 1.91 × 10 106 1.81 × 10 96 01.90 × 10 321
f 17 01.64 × 10 315 0001.36 × 10 256
f 18 1.76 × 10 29 2.81 × 10 6 2.34 × 10 60 1.85 × 10 18 00
f 19 7.48 × 10 25 1.02 × 10 16 1.75 × 10 21 1.04 × 10 14 03.96 × 10 20
f 20 3.63 × 10 55 3.95 × 10 43 7.93 × 10 46 1.69 × 10 33 −1−1
f 21 1.75 × 10 44 1.78 × 10 42 1.59 × 10 36 4.71 × 10 32 −1−1
f 22 2.61 × 10 36 3.15 × 10 36 2.20 × 10 33 6.20 × 10 14 02.14 × 10 8
Table 5. Discussion on the weights selection of SPSOC.
Instance | Different weight matching: ω 21 , ω 31 , ω 32 , ω 11 , ω 22 , ω 22
f 1 min8.88 × 10 16 8.88 × 10 16 8.88 × 10 16 8.88 × 10 16 8.88 × 10 16 8.88 × 10 16
mean1.80 × 10 1 3.85 × 10 15 1.27 × 10 1 8.88 × 10 16 1.53 × 10 1 1.36 × 10 15
f 2 min1.54 × 10 85 4.67 × 10 65 9.25 × 10 83 8.09 × 10 61 2.10 × 10 80 0
mean4.34 × 10 16 1.53 × 10 57 7.90 × 10 75 2.73 × 10 53 4.14 × 10 63 0
f 3 min7.53 × 10 163 3.54 × 10 125 1.81 × 10 169 8.92 × 10 122 2.07 × 10 156 0
mean1.75 × 10 38 4.85 × 10 109 4.84 × 10 91 1.37 × 10 101 4.97 × 10 50 0
f 4 min1.78 × 10 240 8.32 × 10 258 04.48 × 10 235 2.22 × 10 304 0
mean1.68 × 10 34 7.32 × 10 214 5.03 × 10 106 1.99 × 10 205 2.65 × 10 21 0
f 5 min002.17 × 10 1 000
mean1.20 × 10 2 02.17 × 10 1 02.17 × 10 2 0
f 6 min2.20 × 10 161 2.59 × 10 121 1.45 × 10 155 6.26 × 10 114 1.41 × 10 151 0
mean3.58 × 10 67 4.98 × 10 104 7.88 × 10 142 2.66 × 10 92 9.16 × 10 72 0
f 7 min−9−9−9−9−9−9
mean−9−9−9−9−9−9
f 8 min1.3009.02 × 10 1 09.02 × 10 1 0
mean1.501.541.607.29 × 10−21.253.25 × 10 1
f 9 min9.13 × 10 4 1.37 × 10 3 1.51 × 10 3 1.05 × 10 3 5.92 × 10 4 2.71 × 10 3
mean2.01 × 10 2 3.16 × 10 2 4.61 × 10 2 1.51 × 10 2 3.11 × 10 2 4.44 × 10−2
f 10 min000000
mean2.89 × 10 1 0005.67 × 10 7 0
f 11 min7.288.037.698.047.867.97
mean8.098.838.818.458.228.54
f 12 min1.60 × 10 157 1.76 × 10 116 8.56 × 10 158 5.08 × 10 105 2.32 × 10 148 0
mean3.04 × 10 22 7.01 × 10 8 9.69 × 10 1 6.71 × 10 67 3.90 × 10 35 0
f 13 min1.35 × 10 96 2.36 × 10 73 1.39 × 10 93 2.91 × 10 77 1.92 × 10 92 0
mean9.62 × 10 26 3.02 × 10 59 3.63 × 10 43 7.25 × 10 58 2.42 × 10 41 0
f 14 min1.56 × 10 84 1.14 × 10 64 2.91 × 10 85 1.66 × 10 60 6.95 × 10 84 0
mean4.04 × 10 56 1.89 × 10 55 1.94 × 10 67 3.04 × 10 52 5.80 × 10 35 0
f 15 min−1.60 × 10 3 −1.57 × 10 3 −1.52 × 10 3 −1.36 × 10 3 −1.38 × 10 3 −1.34 × 10 3
mean−8.99 × 10 2 −9.81 × 10 2 −1.05 × 10 3 −8.06 × 10 2 −8.09 × 10 2 −8.84 × 10 2
f 16 min9.16 × 10 164 1.56 × 10 121 3.15 × 10 164 3.83 × 10 117 1.96 × 10 150 0
mean2.09 × 10 27 9.99 × 10 101 1.05 × 10 144 1.05 × 10 97 1.04 × 10 62 0
f 17 min1.78 × 10 199 2.74 × 10 154 1.09 × 10 192 8.20 × 10 148 2.80 × 10 158 0
mean6.33 × 10 1 1.43 × 10 116 1.63 × 10 33 2.37 × 10 118 6.53 × 10 24 0
f 18 min5.61 × 10 84 1.49 × 10 52 1.97 × 10 82 2.24 × 10 59 7.48 × 10 80 0
mean3.89 × 10 13 7.36 × 10 19 1.94 × 10 19 4.60 × 10 29 1.48 × 10 19 0
f 19 min3.54 × 10 3 1.01 × 10 2 9.08 × 10 3 1.47 × 10 2 1.09 × 10 2 0
mean6.15 × 10 2 4.33 × 10 2 4.10 × 10 2 9.36 × 10 2 7.27 × 10 2 2.24 × 10 2
f 20 min3.97 × 10 25 7.05 × 10 17 3.97 × 10 25 1.81 × 10 12 3.97 × 10 25 −1
mean3.68 × 10 14 2.75 × 10 11 4.58 × 10 10 1.01 × 10 7 3.97 × 10 25 −1
f 21 min9.28 × 10 4 5.44 × 10 4 1.88 × 10 3 1.61 × 10 3 2.95−1
mean2.852.35 × 10 3 1.07 × 10 1 3.14 × 10 3 2.95−1
f 22 min7.70 × 10 155 7.35 × 10 115 3.27 × 10 159 1.75 × 10 113 1.61 × 10 146 0
mean3.08 × 10 30 2.71 × 10 45 5.07 × 10 100 2.32 × 10 88 1.26 × 10 40 0
Table 6. Optimization results for the function in 3 kinds of dimension.
Dimension: 10 | 50 | 100
(for each dimension: min, mean, std, t-test)
f 1 bPSO3.16 × 10 1 1.296.68 × 10 1 +(1)1.26 × 10 1 1.77 × 10 1 1.82+(1)1.95 × 10 1 2.02 × 10 1 3.23 × 10 1 +(1)
PSOd1.163.321.64+(1)1.09 × 10 1 1.36 × 10 1 1.16+(1)1.39 × 10 1 1.54 × 10 1 6.67 × 10 1 +(1)
HPSOscac2.08 × 10 10 2.205.69+(1)1.68 × 10 11 1.132.98+(1)6.66 × 10 10 6.94 × 10 1 2.65=(0)
TCPSO8.92 × 10 01 2.086.35 × 10 1 +(1)1.03 × 10 1 1.39 × 10 1 2.08+(1)1.72 × 10 1 1.85 × 10 1 7.74 × 10 1 +(1)
SPSO8.88 × 10 16 3.38 × 10 15 1.66 × 10 15 +(1)8.88 × 10 16 3.73 × 10 15 1.45 × 10 15 +(1)8.88 × 10 16 3.85 × 10 15 1.35 × 10 15 +(1)
SPSOC8.88 × 10 16 8.88 × 10 16 0=(0)8.88 × 10 16 8.88 × 10 16 0=(0)8.88 × 10 16 8.88 × 10 16 0−(−1)
SPSORC8.88 × 10 16 8.88 × 10 16 0 8.88 × 10 16 8.88 × 10 16 0 8.88 × 10 16 2.19 × 10 15 4.01 × 10 15
f 2 bPSO3.46 × 10 2 6.10 × 10 1 6.44 × 10 1 +(1)3.97 × 10 1 5.60 × 10 1 8.63+(1)1.43 × 10 2 1.68 × 10 2 1.25 × 10 1 +(1)
PSOd4.28 × 10 3 1.29 × 10 1 1.79 × 10 1 +(1)1.53 × 10 1 2.12 × 10 1 4.77+(1)5.40 × 10 1 6.94 × 10 1 1.07 × 10 1 +(1)
HPSOscac06.88 × 10 77 3.77 × 10 76 =(0)01.27 × 10 68 6.95 × 10 68 =(0)08.67 × 10 49 4.75 × 10 48 =(0)
TCPSO6.42 × 10 2 1.461.69+(1)2.24 × 10 1 4.92 × 10 1 1.33 × 10 1 +(1)1.05 × 10 2 1.35 × 10 2 2.11 × 10 1 +(1)
SPSO2.58 × 10 24 2.81 × 10 2 1.49 × 10 1 =(0)4.28 × 10 20 2.40 × 10 4 1.25 × 10 3 =(0)3.06 × 10 19 1.94 × 10 3 1.06 × 10 2 =(0)
SPSOC7.18 × 10 59 1.01 × 10 45 5.54 × 10 45 =(0)2.12 × 10 56 5.22 × 10 41 2.73 × 10 40 =(0)1.07 × 10 53 5.71 × 10 42 2.35 × 10 41 =(0)
SPSORC000 05.61 × 10 281 0 000
f 3 bPSO1.31 × 10 3 8.95 × 10 1 4.78=(0)7.69 × 10 2 1.37 × 10 3 4.27 × 10 2 +(1)1.12 × 10 4 1.37 × 10 4 1.33 × 10 3 +(1)
PSOd6.53 × 10 4 2.91 × 10 1 3.50 × 10 1 +(1)2.95 × 10 2 5.68 × 10 2 1.66 × 10 2 +(1)2.88 × 10 3 4.32 × 10 3 1.02 × 10 3 +(1)
HPSOscac08.09 × 10 132 4.43 × 10 131 =(0)01.62 × 10 155 8.88 × 10 155 =(0)03.60 × 10 120 1.97 × 10 119 =(0)
TCPSO1.66 × 10 2 7.68 × 10 2 6.09 × 10 2 +(1)1.70 × 10 2 4.56 × 10 2 2.85 × 10 2 +(1)3.40 × 10 3 5.24 × 10 3 1.15 × 10 3 +(1)
SPSO1.80 × 10 52 2.39 × 10 46 6.91 × 10 46 =(0)2.97 × 10 38 1.11 × 10 32 4.16 × 10 32 =(0)3.39 × 10 35 2.39 × 10 29 1.31 × 10 28 =(0)
SPSOC4.92 × 10 112 3.50 × 10 90 1.89 × 10 89 =(0)1.91 × 10 108 9.54 × 10 73 5.22 × 10 72 =(0)7.38 × 10 104 5.55 × 10 69 3.04 × 10 68 =(0)
SPSORC000 000 000
f 4 bPSO2.10 × 10 8 5.60 × 10 7 5.78 × 10 7 +(1)7.193.05 × 10 1 1.35 × 10 1 +(1)2.32 × 10 2 4.93 × 10 2 1.54 × 10 2 +(1)
PSOd3.81 × 10 7 4.00 × 10 4 9.29 × 10 4 +(1)1.694.842.59+(1)1.94 × 10 1 6.03 × 10 1 2.18 × 10 1 +(1)
HPSOscac09.26 × 10 285 0+(1)06.20 × 10 224 0+(1)01.36 × 10 274 0+(1)
TCPSO2.30 × 10 8 3.68 × 10 6 4.79 × 10 6 +(1)4.50 × 10 1 4.356.46+(1)4.11 × 10 1 1.07 × 10 2 5.20 × 10 1 +(1)
SPSO6.17 × 10 108 4.10 × 10 94 2.23 × 10 93 =(0)9.65 × 10 83 3.38 × 10 72 1.77 × 10 71 =(0)4.66 × 10 78 1.25 × 10 69 4.73 × 10 69 =(0)
SPSOC5.88 × 10 234 1.19 × 10 183 0=(0)2.87 × 10 218 1.52 × 10 152 8.31 × 10 152 =(0)1.56 × 10 222 4.13 × 10 155 2.26 × 10 154 =(0)
SPSORC01.73 × 10 321 0 02.96 × 10 323 0 02.02 × 10 320 0
f 5 bPSO4.22 × 10 1 9.36 × 10 1 1.64 × 10 1 +(1)6.46 × 10 1 2.11 × 10 2 6.99 × 10 1 +(1)8.91 × 10 2 1.15 × 10 3 1.48 × 10 2 +(1)
PSOd1.37 × 10 1 8.46 × 10 1 6.67 × 10 1 +(1)4.36 × 10 1 8.91 × 10 1 2.69 × 10 1 +(1)2.38 × 10 2 3.18 × 10 2 4.27 × 10 1 +(1)
HPSOscac3.38 × 10 6 6.26 × 10 1 1.14 × 10 2 +(1)7.84 × 10 2 4.02 × 10 2 5.68 × 10 2 +(1)1.57 × 10 5 1.37 × 10 3 1.67 × 10 3 +(1)
TCPSO1.041.169.72 × 10 2 +(1)1.90 × 10 1 6.24 × 10 1 3.63 × 10 1 +(1)2.46 × 10 2 4.14 × 10 2 7.97 × 10 1 +(1)
SPSO03.87 × 10 1 3.41 × 10 1 +(1)05.40 × 10 2 1.49 × 10 1 =(0)07.33 × 10 3 2.37 × 10 2 =(0)
SPSOC000−(−1)000=(0)000=(0)
SPSORC03.70 × 10 18 2.03 × 10 17 000 05.18 × 10 17 1.23 × 10 16
f 6 bPSO4.16 × 10 3 6.97 × 10 5 1.22 × 10 6 +(1)1.38 × 10 8 3.70 × 10 8 1.82 × 10 8 +(1)7.95 × 10 8 1.90 × 10 9 6.34 × 10 8 +(1)
PSOd2.60 × 10 3 5.58 × 10 4 7.67 × 10 4 +(1)1.80 × 10 7 8.68 × 10 7 4.80 × 10 7 +(1)2.99 × 10 8 5.38 × 10 8 1.37 × 10 8 +(1)
HPSOscac03.69 × 10 149 2.02 × 10 148 =(0)03.77 × 10 157 2.06 × 10 156 =(0)03.40 × 10 124 1.86 × 10 123 =(0)
TCPSO5.51 × 10 4 8.43 × 10 5 1.26 × 10 6 +(1)4.79 × 10 7 1.91 × 10 8 1.33 × 10 8 +(1)3.51 × 10 8 1.04 × 10 9 6.43 × 10 8 +(1)
SPSO1.44 × 10 45 5.73 × 10 41 1.56 × 10 40 +(1)1.71 × 10 33 1.72 × 10 28 7.68 × 10 28 =(0)8.51 × 10 33 2.42 × 10 27 5.20 × 10 27 +(1)
SPSOC3.77 × 10 113 8.59 × 10 84 4.71 × 10 83 +(1)1.15 × 10 101 1.94 × 10 76 7.40 × 10 76 =(0)1.47 × 10 102 1.67 × 10 76 9.16 × 10 76 +(1)
SPSORC000 000 000
f 7 bPSO−6.76−5.326.63 × 10 1 +(1)−1.20 × 10 1 −9.161.64+(1)−1.66 × 10 1 −1.23 × 10 1 2.52+(1)
PSOd−7.93−6.496.48 × 10 1 +(1)−2.64 × 10 1 −2.13 × 10 1 2.01+(1)−3.78 × 10 1 −3.27 × 10 1 2.57+(1)
HPSOscac−2.46−2.464.73 × 10 1 +(1)−3.51−4.394.35+(1)−1.48 × 10 1 −5.404.02+(1)
TCPSO−6.83−4.919.48 × 10 1 +(1)−1.43 × 10 1 −8.663.01+(1)−1.58 × 10 1 −9.523.65+(1)
SPSO−9−5.472.81+(1)−4.90 × 10 1 −1.20 × 10 1 1.79 × 10 1 +(1)−3.31 × 10 1 −3.968.44+(1)
SPSOC−9−90=(0)−4.90 × 10 1 −4.90 × 10 1 0=(0)−9.90 × 10 1 −9.90 × 10 1 0−(−1)
SPSORC−9−90 −4.90 × 10 1 −4.90 × 10 1 0 −9.90 × 10 1 −9.90 × 10 1 2.64 × 10 15
f 8 bPSO9.02 × 10 1 1.715.13 × 10 1 +(1)1.04 × 10 1 1.24 × 10 1 1.36+(1)2.39 × 10 1 2.69 × 10 1 1.36+(1)
PSOd1.652.413.26 × 10 1 +(1)1.94 × 10 1 2.11 × 10 1 6.65 × 10 1 +(1)4.29 × 10 1 4.53 × 10 1 9.08 × 10 1 +(1)
HPSOscac2.22 × 10 16 1.821.47+(1)6.75 × 10 12 1.43 × 10 1 9.26+(1)9.03 × 10 6 2.68 × 10 1 2.09 × 10 1 +(1)
TCPSO7.05 × 10 1 1.687.04 × 10 1 +(1)9.911.19 × 10 1 1.05+(1)2.33 × 10 1 2.56 × 10 1 1.65+(1)
SPSO1.472.815.56 × 10 1 +(1)2.11 × 10 4 1.87 × 10 1 6.17+(1)6.62 × 10 1 4.30 × 10 1 8.94+(1)
SPSOC02.87 × 10 2 1.57 × 10 1 −(−1)06.58 × 10 1 3.60−(−1)01.508.22+(1)
SPSORC04.43 × 10 1 1.16 01.475.59 000
f 9 bPSO1.97 × 10 2 9.62 × 10 2 5.02 × 10 2 +(1)4.212.94 × 10 1 1.63 × 10 1 +(1)2.64 × 10 2 4.90 × 10 2 1.22 × 10 2 +(1)
PSOd1.23 × 10 2 5.86 × 10 2 3.47 × 10 2 +(1)3.096.443.01+(1)3.35 × 10 1 6.07 × 10 1 2.06 × 10 1 +(1)
HPSOscac4.17 × 10 1 1.27 × 10 1 1.70 × 10 1 +(1)6.768.72 × 10 2 6.17 × 10 2 +(1)7.34 × 10 1 3.51 × 10 3 2.21 × 10 3 +(1)
TCPSO2.68 × 10 2 7.16 × 10 2 4.05 × 10 2 +(1)2.395.043.05+(1)6.29 × 10 1 1.07 × 10 2 4.38 × 10 1 +(1)
SPSO2.16 × 10 4 2.27 × 10 2 2.16 × 10 2 =(0)2.39 × 10 3 2.13 × 10 2 1.70 × 10 2 −(−1)2.52 × 10 3 3.08 × 10 2 2.90 × 10 2 −(−1)
SPSOC9.79 × 10 4 2.07 × 10 2 2.35 × 10 2 =(0)1.00 × 10 3 2.15 × 10 2 1.65 × 10 2 +(1)9.11 × 10 4 2.15 × 10 2 1.66 × 10 2 −(−1)
SPSORC7.03 × 10 4 3.65 × 10 2 3.08 × 10 2 4.43 × 10 3 6.27 × 10 2 5.32 × 10 2 5.05 × 10 3 1.72 × 10 1 2.82 × 10 1
f 10 bPSO8.032.76 × 10 1 1.21 × 10 1 +(1)4.55 × 10 2 5.47 × 10 2 5.13 × 10 1 +(1)1.05 × 10 3 1.31 × 10 3 9.25 × 10 1 +(1)
PSOd1.05 × 10 1 8.864.17+(1)1.65 × 10 2 2.06 × 10 2 2.38 × 10 1 +(1)5.15 × 10 2 6.36 × 10 2 5.58 × 10 1 +(1)
HPSOscac9.35 × 10 5 5.13 × 10 1 5.40 × 10 1 +(1)2.50 × 10 1 3.47 × 10 2 3.03 × 10 2 +(1)3.08 × 10 1 6.13 × 10 2 5.71 × 10 2 +(1)
TCPSO1.12 × 10 1 4.68 × 10 1 2.10 × 10 1 +(1)4.02 × 10 2 5.42 × 10 2 7.35 × 10 1 +(1)1.03 × 10 3 1.22 × 10 3 1.15 × 10 2 +(1)
SPSO01.78 × 10 1 2.31 × 10 1 +(1)01.093.67=(0)03.24 × 10 1 1.24=(0)
SPSOC000=(0)000=(0)000=(0)
SPSORC000 000 02.96 × 10 16 9.43 × 10 16
f 11 bPSO1.31 × 10 1 6.35 × 10 3 2.28 × 10 04 =(0)5.06 × 10 6 1.71 × 10 7 8.00 × 10 6 +(1)1.80 × 10 8 2.98 × 10 8 7.48 × 10 7 +(1)
PSOd6.508.95 × 10 2 2.03 × 10 3 +(1)1.89 × 10 6 6.49 × 10 6 3.13 × 10 6 +(1)2.08 × 10 7 4.22 × 10 7 1.46 × 10 7 +(1)
HPSOscac9.00 × 10 3 8.68 × 10 7 8.12 × 10 7 +(1)5.64 × 10 7 1.01 × 10 9 6.13 × 10 8 +(1)2.42 × 10 8 2.41 × 10 9 1.33 × 10 9 +(1)
TCPSO4.24 × 10 1 8.01 × 10 2 1.04 × 10 3 +(1)5.18 × 10 5 2.45 × 10 6 1.01 × 10 6 +(1)3.84 × 10 7 7.88 × 10 7 3.19 × 10 7 +(1)
SPSO7.74E+008.151.28 × 10 1 −(−1)4.81 × 10 1 4.87 × 10 1 3.08 × 10 1 −(−1)9.81 × 10 1 9.88 × 10 1 2.02 × 10 1 =(0)
SPSOC8.118.321.99 × 10 1 +(1)4.81 × 10 1 4.87 × 10 1 3.05 × 10 1 +(1)9.82 × 10 1 9.89 × 10 1 1.48 × 10 1 =(0)
SPSORC8.008.646.52 × 10 1 4.86 × 10 1 4.89 × 10 1 1.05 × 10 1 9.82 × 10 1 9.89 × 10 1 1.54 × 10 1
f 12 bPSO6.41 × 10 2 3.89 × 10 3 1.89 × 10 3 +(1)3.09 × 10 5 1.53 × 10 6 1.12 × 10 6 +(1)5.44 × 10 6 1.94 × 10 7 1.18 × 10 7 +(1)
PSOd1.66 × 10 2 1.73 × 10 3 7.79 × 10 2 +(1)1.23 × 10 5 3.16 × 10 5 1.43 × 10 5 +(1)1.26 × 10 6 4.46 × 10 6 2.47 × 10 6 +(1)
HPSOscac07.30 × 10 131 4.00 × 10 130 =(0)01.56 × 10 144 8.56 × 10 144 =(0)01.10 × 10 120 6.01 × 10 120 =(0)
TCPSO4.44 × 10 2 3.89 × 10 3 1.64 × 10 3 +(1)5.48 × 10 5 2.50 × 10 6 1.72 × 10 6 +(1)1.05 × 10 7 3.86 × 10 7 3.14 × 10 7 +(1)
SPSO2.39 × 10 45 9.73 × 10 42 3.27 × 10 41 =(0)1.84 × 10 38 1.73 × 10 36 3.39 × 10 36 +(1)1.09 × 10 36 5.55 × 10 35 8.02 × 10 35 +(1)
SPSOC3.41 × 10 117 3.44 × 10 52 1.56 × 10 51 =(0)8.83 × 10 74 2.38 × 10 24 9.28 × 10 24 +(1)8.79 × 10 66 2.061.13 × 10 1 +(1)
SPSORC000 000 000
f 13 bPSO5.56 × 10 9 1.23 × 10 6 2.22 × 10 6 +(1)7.47 × 10 10 1.01 × 10 6 1.96 × 10 6 +(1)1.25 × 10 8 9.08 × 10 7 1.18 × 10 6 +(1)
PSOd1.42 × 10 34 1.88 × 10 30 4.49 × 10 30 +(1)7.15 × 10 36 6.07 × 10 31 1.23 × 10 30 +(1)6.10 × 10 35 4.29 × 10 30 1.65 × 10 29 =(0)
HPSOscac01.06 × 10 83 5.79 × 10 83 =(0)01.13 × 10 73 6.15 × 10 73 =(0)3.62 × 10 321 3.14 × 10 86 1.25 × 10 85 =(0)
| f | Algorithm | N = 10 | | | | N = 50 | | | | N = 100 | | | |
| | | Best | Mean | St. dev | t | Best | Mean | St. dev | t | Best | Mean | St. dev | t |
| | TCPSO | 3.23 × 10 4 | 2.43 × 10 2 | 2.73 × 10 2 | +(1) | 4.45 × 10 4 | 1.88 × 10 2 | 1.83 × 10 2 | +(1) | 2.17 × 10 4 | 2.45 × 10 2 | 2.75 × 10 2 | +(1) |
| | SPSO | 3.11 × 10 51 | 5.41 × 10 47 | 1.16 × 10 46 | +(1) | 3.93 × 10 53 | 6.62 × 10 47 | 1.57 × 10 46 | +(1) | 8.32 × 10 52 | 1.34 × 10 45 | 5.05 × 10 45 | =(0) |
| | SPSOC | 7.15 × 10 77 | 4.92 × 10 60 | 2.68 × 10 59 | +(1) | 2.13 × 10 77 | 7.72 × 10 60 | 4.23 × 10 59 | +(1) | 3.79 × 10 79 | 1.10 × 10 60 | 6.01 × 10 60 | =(0) |
| | SPSORC | 0 | 0 | 0 | | 0 | 0 | 0 | | 0 | 0 | 0 | |
| f14 | bPSO | 6.81 × 10 2 | 2.34 × 10 1 | 1.12 × 10 1 | +(1) | 8.67 × 10 1 | 3.08 × 10 2 | 9.07 × 10 2 | =(0) | 2.94 × 10 2 | 3.55 × 10 2 | 2.47 × 10 1 | +(1) |
| | PSOd | 2.74 × 10 2 | 6.15 × 10 1 | 5.46 × 10 1 | +(1) | 3.43 × 10 1 | 5.81 × 10 1 | 1.55 × 10 1 | +(1) | 1.19 × 10 2 | 1.62 × 10 2 | 2.39 × 10 1 | +(1) |
| | HPSOscac | 0 | 1.04 × 10 59 | 5.71 × 10 59 | =(0) | 1.79 × 10 238 | 3.77 × 10 61 | 1.45 × 10 60 | =(0) | 0 | 5.56 × 10 63 | 2.57 × 10 62 | =(0) |
| | TCPSO | 2.30 × 10 1 | 4.88 × 10 1 | 1.66 × 10 1 | +(1) | 9.95 × 10 1 | 8.94 × 10 13 | 4.83 × 10 14 | =(0) | 3.25 × 10 2 | 1.60 × 10 37 | 7.61 × 10 37 | =(0) |
| | SPSO | 1.38 × 10 25 | 1.28 × 10 23 | 2.21 × 10 23 | +(1) | 6.30 × 10 20 | 3.34 × 10 17 | 6.71 × 10 17 | +(1) | 1.70 × 10 19 | 2.24 × 10 15 | 5.95 × 10 15 | +(1) |
| | SPSOC | 8.49 × 10 58 | 9.72 × 10 48 | 3.86 × 10 47 | +(1) | 7.86 × 10 55 | 1.25 × 10 34 | 6.82 × 10 34 | +(1) | 2.92 × 10 56 | 3.42 × 10 37 | 1.85 × 10 36 | +(1) |
| | SPSORC | 0 | 0 | 0 | | 0 | 0 | 0 | | 0 | 0 | 0 | |
| f15 | bPSO | −3.83 × 10 3 | −3.41 × 10 3 | 3.02 × 10 2 | +(1) | −1.31 × 10 4 | −1.12 × 10 4 | 1.03 × 10 3 | −(−1) | −1.91 × 10 4 | −1.57 × 10 4 | 1.40 × 10 3 | −(−1) |
| | PSOd | −3.83 × 10 3 | −3.18 × 10 3 | 3.54 × 10 2 | +(1) | −1.10 × 10 4 | −9.40 × 10 3 | 7.22 × 10 2 | +(1) | −1.77 × 10 4 | −1.46 × 10 4 | 1.31 × 10 3 | −(−1) |
| | HPSOscac | −1.91 × 10 3 | −1.28 × 10 3 | 3.96 × 10 2 | +(1) | −5.62 × 10 3 | −3.34 × 10 3 | 1.02 × 10 3 | +(1) | −7.58 × 10 3 | −4.87 × 10 3 | 1.20 × 10 3 | +(1) |
| | TCPSO | −4.06 × 10 3 | −3.41 × 10 3 | 3.01 × 10 2 | −(−1) | −1.34 × 10 4 | −1.11 × 10 4 | 9.24 × 10 2 | +(1) | −2.04 × 10 4 | −1.72 × 10 4 | 1.78 × 10 3 | −(−1) |
| | SPSO | −1.51 × 10 3 | −9.53 × 10 2 | 2.22 × 10 2 | =(0) | −3.05 × 10 3 | −1.91 × 10 3 | 5.22 × 10 2 | =(0) | −4.12 × 10 3 | −2.69 × 10 3 | 6.78 × 10 2 | =(0) |
| | SPSOC | −1.19 × 10 3 | −7.02 × 10 2 | 1.89 × 10 2 | =(0) | −2.61 × 10 3 | −1.54 × 10 3 | 4.49 × 10 2 | =(0) | −3.62 × 10 3 | −2.09 × 10 3 | 6.72 × 10 2 | =(0) |
| | SPSORC | −1.35 × 10 3 | −8.69 × 10 2 | 2.55 × 10 2 | | −2.62 × 10 3 | −1.87 × 10 3 | 4.16 × 10 2 | | −4.26 × 10 3 | −2.82 × 10 3 | 6.11 × 10 2 | |
| f16 | bPSO | 2.15 × 10 1 | 2.48 | 2.76 | +(1) | 7.73 × 10 3 | 1.87 × 10 4 | 6.62 × 10 3 | +(1) | 9.11 × 10 4 | 1.15 × 10 5 | 1.47 × 10 4 | +(1) |
| | PSOd | 9.32 × 10 1 | 1.02 × 10 1 | 1.21 × 10 1 | +(1) | 4.26 × 10 3 | 9.67 × 10 3 | 2.81 × 10 3 | +(1) | 2.34 × 10 4 | 3.40 × 10 4 | 6.67 × 10 3 | +(1) |
| | HPSOscac | 0 | 6.95 × 10 126 | 3.81 × 10 125 | =(0) | 0 | 5.30 × 10 144 | 2.90 × 10 143 | =(0) | 0 | 1.15 × 10 92 | 6.31 × 10 92 | =(0) |
| | TCPSO | 1.41 | 7.52 | 5.62 | +(1) | 1.74 × 10 3 | 6.23 × 10 3 | 4.27 × 10 3 | +(1) | 3.12 × 10 4 | 5.11 × 10 4 | 1.05 × 10 4 | +(1) |
| | SPSO | 1.38 × 10 50 | 3.20 × 10 44 | 1.11 × 10 43 | =(0) | 2.69 × 10 36 | 5.43 × 10 32 | 1.61 × 10 31 | =(0) | 3.40 × 10 35 | 8.65 × 10 30 | 2.53 × 10 29 | =(0) |
| | SPSOC | 2.11 × 10 114 | 3.46 × 10 89 | 1.66 × 10 88 | =(0) | 8.39 × 10 107 | 5.65 × 10 70 | 3.10 × 10 69 | =(0) | 2.21 × 10 103 | 5.27 × 10 79 | 2.85 × 10 78 | =(0) |
| | SPSORC | 0 | 0 | 0 | | 0 | 0 | 0 | | 0 | 0 | 0 | |
| f17 | bPSO | 2.52 × 10 12 | 1.09 × 10 8 | 2.37 × 10 8 | +(1) | 7.44 × 10 5 | 2.75 × 10 3 | 2.89 × 10 3 | +(1) | 1.48 × 10 2 | 2.36 × 10 1 | 2.58 × 10 1 | +(1) |
| | PSOd | 2.07 × 10 11 | 3.03 × 10 6 | 9.79 × 10 6 | =(0) | 5.86 × 10 8 | 4.53 × 10 5 | 7.70 × 10 5 | +(1) | 7.17 × 10 7 | 1.85 × 10 4 | 3.29 × 10 4 | +(1) |
| | HPSOscac | 0 | 5.38 × 10 135 | 2.95 × 10 134 | =(0) | 0 | 2.62 × 10 148 | 1.34 × 10 147 | =(0) | 0 | 2.87 × 10 154 | 1.57 × 10 153 | =(0) |
| | TCPSO | 1.51 × 10 8 | 6.10 × 10 7 | 6.72 × 10 7 | +(1) | 6.45 × 10 6 | 5.69 × 10 4 | 1.10 × 10 3 | +(1) | 1.48 × 10 3 | 1.27 × 10 1 | 3.65 × 10 1 | =(0) |
| | SPSO | 1.16 × 10 94 | 4.96 × 10 86 | 1.50 × 10 85 | =(0) | 6.17 × 10 94 | 1.10 × 10 84 | 5.95 × 10 84 | =(0) | 1.78 × 10 92 | 8.69 × 10 87 | 1.71 × 10 86 | +(1) |
| | SPSOC | 8.84 × 10 141 | 1.97 × 10 115 | 1.08 × 10 114 | =(1) | 6.16 × 10 158 | 1.01 × 10 121 | 5.52 × 10 121 | =(0) | 2.52 × 10 146 | 1.22 × 10 118 | 6.19 × 10 118 | +(1) |
| | SPSORC | 0 | 0 | 0 | | 0 | 0 | 0 | | 0 | 4.94 × 10 324 | 0 | |
| f18 | bPSO | 5.10 × 10 4 | 6.20 × 10 1 | 2.15 | =(0) | 1.82 × 10 13 | 2.27 × 10 19 | 1.11 × 10 20 | =(0) | 2.83 × 10 34 | 6.97 × 10 46 | 3.76 × 10 47 | =(0) |
| | PSOd | 5.79 × 10 4 | 3.73 × 10 1 | 6.15 × 10 1 | +(1) | 2.48 × 10 4 | 8.59 × 10 10 | 4.26 × 10 11 | =(0) | 8.86 × 10 20 | 5.66 × 10 32 | 2.80 × 10 33 | =(0) |
| | HPSOscac | 5.26 × 10 319 | 1.20 × 10 3 | 3.72 × 10 3 | +(1) | 4.63 × 10 237 | 3.53 × 10 28 | 1.66 × 10 29 | =(0) | 0 | 2.88 × 10 54 | 1.34 × 10 55 | =(0) |
| | TCPSO | 2.75 × 10 3 | 3.15 × 10 1 | 4.89 × 10 1 | +(1) | 1.10 × 10 8 | 4.18 × 10 16 | 2.20 × 10 17 | =(0) | 4.98 × 10 30 | 1.22 × 10 42 | 6.28 × 10 42 | =(0) |
| | SPSO | 2.25 × 10 31 | 1.23 × 10 4 | 6.69 × 10 4 | =(0) | 1.65 × 10 33 | 3.98 × 10 6 | 2.12 × 10 5 | =(0) | 1.97 × 10 34 | 1.67 × 10 5 | 6.35 × 10 5 | =(0) |
| | SPSOC | 4.34 × 10 70 | 2.51 × 10 21 | 1.37 × 10 20 | =(0) | 5.00 × 10 64 | 2.23 × 10 8 | 1.22 × 10 7 | =(0) | 7.22 × 10 63 | 3.73 × 10 20 | 2.04 × 10 19 | =(0) |
| | SPSORC | 0 | 0 | 0 | | 0 | 0 | 0 | | 0 | 0 | 0 | |
| f19 | bPSO | 9.08 × 10 4 | 2.66 × 10 3 | 4.25 × 10 4 | =(0) | 1.59 × 10 19 | 1.68 × 10 19 | 3.35 × 10 21 | +(1) | 1.88 × 10 40 | 1.94 × 10 40 | 3.45 × 10 42 | −(−1) |
| | PSOd | 5.66 × 10 4 | 9.64 × 10 4 | 3.49 × 10 4 | −(−1) | 8.35 × 10 18 | 1.41 × 10 16 | 2.64 × 10 16 | +(1) | 3.03 × 10 31 | 1.19 × 10 28 | 3.03 × 10 28 | −(−1) |
| | HPSOscac | 1.59 × 10 3 | 3.43 × 10 3 | 3.74 × 10 4 | =(0) | 1.79 × 10 19 | 1.79 × 10 19 | 4.90 × 10 35 | +(1) | 2.04 × 10 40 | 2.04 × 10 40 | 4.15 × 10 56 | −(−1) |
| | TCPSO | 2.12 × 10 3 | 3.50 × 10 3 | 2.34 × 10 3 | =(0) | 1.45 × 10 19 | 9.59 × 10 16 | 5.21 × 10 15 | +(1) | 1.69 × 10 40 | 4.24 × 10 34 | 2.32 × 10 33 | −(−1) |
| | SPSO | 7.91 × 10 3 | 5.49 × 10 2 | 3.83 × 10 2 | +(1) | 1.32 × 10 10 | 4.32 × 10 7 | 8.07 × 10 7 | =(0) | 1.24 × 10 16 | 2.36 × 10 12 | 1.22 × 10 11 | =(0) |
| | SPSOC | 2.34 × 10 2 | 9.80 × 10 2 | 5.49 × 10 2 | +(1) | 1.79 × 10 8 | 3.53 × 10 5 | 4.99 × 10 5 | =(0) | 2.94 × 10 13 | 1.58 × 10 9 | 4.91 × 10 9 | =(0) |
| | SPSORC | 0 | 9.35 | 2.22 × 10 2 | | 0 | 3.81 × 10 7 | 6.51 × 10 7 | | 3.57 × 10 18 | 1.13 × 10 13 | 2.35 × 10 13 | |
| f20 | bPSO | 3.97 × 10 25 | 3.97 × 10 25 | 1.87 × 10 40 | +(1) | 9.83 × 10 123 | 9.83 × 10 123 | 0 | +(1) | 9.66 × 10 245 | 9.66 × 10 245 | 0 | +(1) |
| | PSOd | 4.66 × 10 25 | 2.90 × 10 19 | 1.51 × 10 18 | +(1) | 4.66 × 10 63 | 1.10 × 10 51 | 5.13 × 10 51 | +(1) | 7.68 × 10 90 | 1.59 × 10 73 | 8.73 × 10 73 | +(1) |
| | HPSOscac | 3.97 × 10 25 | 3.97 × 10 25 | 1.87 × 10 40 | +(1) | 9.83 × 10 123 | 9.83 × 10 123 | 0 | +(1) | 9.66 × 10 245 | 9.66 × 10 245 | 0 | +(1) |
| | TCPSO | 3.97 × 10 25 | 3.97 × 10 25 | 1.87 × 10 40 | +(1) | 9.83 × 10 123 | 9.83 × 10 123 | 0 | +(1) | 9.66 × 10 245 | 9.66 × 10 245 | 0 | +(1) |
| | SPSO | 9.48 × 10 14 | 2.11 × 10 8 | 9.45 × 10 8 | +(1) | 8.01 × 10 31 | 1.32 × 10 21 | 4.61 × 10 21 | +(1) | 1.95 × 10 49 | 3.65 × 10 34 | 1.91 × 10 33 | +(1) |
| | SPSOC | 1.15 × 10 12 | 4.77 × 10 7 | 1.37 × 10 6 | +(1) | 1.19 × 10 22 | 2.22 × 10 16 | 4.89 × 10 16 | +(1) | 1.69 × 10 41 | 5.40 × 10 22 | 2.89 × 10 21 | +(1) |
| | SPSORC | −1.00 | −1.00 | 0 | | −1.00 | −9.33 × 10 1 | 2.54 × 10 1 | | −1.00 | −9.00 × 10 1 | 3.05 × 10 1 | |
| f21 | bPSO | 3.73 × 10 7 | 3.57 × 10 6 | 3.53 × 10 6 | +(1) | 5.81 × 10 19 | 9.30 × 10 17 | 1.28 × 10 16 | +(1) | 1.04 × 10 32 | 4.59 × 10 29 | 9.83 × 10 29 | +(1) |
| | PSOd | 2.54 × 10 8 | 8.53 × 10 6 | 1.06 × 10 5 | +(1) | 1.35 × 10 20 | 6.03 × 10 20 | 5.19 × 10 20 | +(1) | 3.97 × 10 39 | 7.51 × 10 38 | 9.24 × 10 38 | +(1) |
| | HPSOscac | −1.00 | −5.27 × 10 1 | 5.13 × 10 1 | +(1) | −1.00 | −3.33 × 10 2 | 1.83 × 10 1 | +(1) | 3.14 × 10 31 | 1.87 × 10 14 | 5.55 × 10 14 | +(1) |
| | TCPSO | 8.71 × 10 7 | 1.33 × 10 5 | 1.67 × 10 5 | +(1) | 6.93 × 10 20 | 2.42 × 10 18 | 3.80 × 10 18 | +(1) | 1.30 × 10 38 | 1.17 × 10 32 | 5.38 × 10 32 | +(1) |
| | SPSO | 4.39 × 10 4 | 1.04 × 10 3 | 3.19 × 10 4 | +(1) | 4.00 × 10 16 | 2.15 × 10 14 | 1.99 × 10 14 | +(1) | 2.51 × 10 29 | 6.77 × 10 27 | 1.10 × 10 26 | +(1) |
| | SPSOC | 6.88 × 10 4 | 3.24 × 10 3 | 1.69 × 10 3 | +(1) | 9.35 × 10 14 | 7.28 × 10 12 | 1.12 × 10 11 | +(1) | 1.13 × 10 25 | 3.10 × 10 22 | 6.75 × 10 22 | +(1) |
| | SPSORC | −1.00 | −1.00 | 0 | | −1.00 | −1.00 | 2.92 × 10 −17 | | −1.00 | −1.00 | 2.92 × 10 −17 | |
| f22 | bPSO | 2.26 | 2.62 × 10 1 | 2.76 × 10 1 | +(1) | 1.16 × 10 3 | 2.76 × 10 3 | 4.53 × 10 3 | +(1) | 3.39 × 10 3 | 1.70 × 10 5 | 5.90 × 10 5 | =(0) |
| | PSOd | 5.39 × 10 1 | 4.98 | 2.55 | +(1) | 1.80 × 10 2 | 3.28 × 10 2 | 9.49 × 10 1 | +(1) | 7.23 × 10 2 | 9.14 × 10 2 | 1.20 × 10 2 | +(1) |
| | HPSOscac | 0 | 1.93 × 10 174 | 0 | +(1) | 0 | 1.17 × 10 167 | 0 | +(1) | 0 | 1.89 × 10 143 | 1.04 × 10 142 | =(0) |
| | TCPSO | 5.77 × 10 1 | 1.15 × 10 1 | 1.52 × 10 1 | +(1) | 1.03 × 10 3 | 1.60 × 10 4 | 5.36 × 10 4 | =(0) | 2.51 × 10 3 | 6.23 × 10 5 | 2.09 × 10 6 | =(0) |
| | SPSO | 1.63 × 10 48 | 1.59 × 10 43 | 6.83 × 10 43 | =(0) | 1.69 × 10 40 | 3.13 × 10 38 | 6.42 × 10 38 | +(1) | 1.89 × 10 40 | 3.88 × 10 38 | 8.35 × 10 38 | =(0) |
| | SPSOC | 1.29 × 10 113 | 1.69 × 10 81 | 8.46 × 10 81 | =(0) | 3.35 × 10 61 | 2.71 × 10 29 | 1.08 × 10 28 | +(1) | 8.00 × 10 61 | 3.24 × 10 20 | 1.76 × 10 19 | =(0) |
| | SPSORC | 0 | 0 | 0 | | 0 | 0 | 0 | | 0 | 1.64 × 10 4 | 8.99 × 10 4 | |
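The significance marks in the table above could be reproduced, in outline, from each algorithm's summary statistics. The sketch below assumes a Welch-type two-sample t-statistic over 30 independent runs and an approximate two-sided 0.05 critical value of 2.0; the function `t_mark`, the run count, and the critical value are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch: assign a +(1)/=(0)/-(-1) mark from summary statistics.
# Assumptions (not from the paper): Welch t-statistic, 30 runs per algorithm,
# and a fixed two-sided critical value of ~2.0 for alpha = 0.05.
import math

def t_mark(mean_a, std_a, mean_b, std_b, runs=30, crit=2.0):
    """Compare a rival's (mean_a, std_a) against the reference (mean_b, std_b)."""
    se = math.sqrt(std_a ** 2 / runs + std_b ** 2 / runs)
    if se == 0.0:
        # Degenerate case: zero spread in both samples.
        if mean_a == mean_b:
            return "=(0)"
        return "+(1)" if mean_a > mean_b else "-(-1)"
    t = (mean_a - mean_b) / se
    if abs(t) < crit:
        return "=(0)"          # no significant difference
    return "+(1)" if t > 0 else "-(-1)"

# Example: a rival whose mean error is well above the reference's zero.
print(t_mark(2.4e-2, 1.1e-2, 0.0, 0.0))  # -> '+(1)'
```

Here "+(1)" is read as the rival being significantly worse (larger error) than the reference algorithm, which matches how the marks pair with the tabulated means.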
Table 7. Statistical summary of the t-test results (counts of +, =, and − marks and the resulting Score for each rival algorithm).
| N | Result | bPSO | PSOd | HPSOscac | TCPSO | SPSO | SPSOC |
| 10 | + | 18 | 20 | 12 | 20 | 11 | 6 |
| | = | 4 | 1 | 10 | 1 | 10 | 14 |
| | − | 0 | 1 | 0 | 1 | 1 | 2 |
| | Score | 18 | 19 | 12 | 19 | 10 | 4 |
| 50 | + | 19 | 21 | 13 | 19 | 9 | 8 |
| | = | 2 | 1 | 9 | 3 | 11 | 13 |
| | − | 1 | 0 | 0 | 0 | 2 | 1 |
| | Score | 18 | 21 | 13 | 19 | 7 | 7 |
| 100 | + | 18 | 18 | 10 | 16 | 9 | 7 |
| | = | 2 | 2 | 11 | 4 | 12 | 12 |
| | − | 2 | 2 | 1 | 2 | 1 | 3 |
| | Score | 16 | 16 | 9 | 14 | 8 | 4 |
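The bookkeeping behind Table 7 is simple: each algorithm's per-function marks are tallied, and Score is the number of "+" marks minus the number of "−" marks. A minimal sketch (the helper `tally_marks` is illustrative, not from the paper):

```python
# Tally per-function t-test marks into the +/=/- counts and the
# Score = #(+) - #(-) used in Table 7.
from collections import Counter

def tally_marks(marks):
    """marks: list of '+', '=', '-' strings, one entry per benchmark function."""
    counts = Counter(marks)
    score = counts["+"] - counts["-"]
    return counts["+"], counts["="], counts["-"], score

# Example: the SPSOC column for N = 10 (11 wins, 10 ties, 1 loss over 22 functions).
print(tally_marks(["+"] * 11 + ["="] * 10 + ["-"] * 1))  # -> (11, 10, 1, 10)
```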
Table 8. Real computational time.
NAlgbPSOPSOdHPSOscacTCPSOSPSOSPSOCSPSORC
10 f 1 0.03460.04080.06810.07550.01930.02930.0298
f 2 0.03240.03960.06510.07310.01880.02950.0296
f 3 0.03090.03740.06060.06730.01780.02800.0274
f 4 0.03780.04650.07000.08570.02690.03740.0367
f 5 0.03340.04210.06550.07830.01960.02980.0304
f 6 0.03580.04440.06760.08250.02450.03490.0348
f 7 0.03800.04490.13430.08560.02440.03280.0332
f 8 0.05420.06260.08670.12140.04230.05170.0561
f 9 0.04840.05590.08150.10660.03620.04700.0475
f 10 0.03290.04100.06540.07580.01990.02940.0311
f 11 0.03030.03880.06230.07040.01740.02780.0286
f 12 0.04280.05170.07600.09670.03130.04210.0423
f 13 0.02880.03780.06220.06760.01800.02850.0289
f 14 0.02900.03810.06080.06900.01810.02860.0280
f 15 0.03550.04350.07360.08030.02150.03250.0324
f 16 0.02880.03790.06240.06810.01810.02890.0280
f 17 0.03650.04620.06810.08520.02650.03680.0363
f 18 0.04710.05550.08000.10430.03470.04660.0466
f 19 0.03580.04280.06760.08120.02200.03190.0319
f 20 0.04470.05430.07690.10060.03390.04390.0407
f 21 0.04060.04880.07450.09140.02690.03690.0347
f 22 0.02940.03810.06170.06950.01810.02920.0287
50 f 1 0.15150.19120.28890.35020.08130.13110.1325
f 2 0.15150.18740.28970.34650.08150.13290.1329
f 3 0.13230.17180.27260.31450.07320.12470.1241
f 4 0.17980.21720.31790.40420.11740.17130.1704
f 5 0.15890.19760.29640.36330.08680.13780.1383
f 6 0.17610.21570.31300.39690.11630.16870.1678
f 7 0.18200.21900.53960.40800.11500.15430.1552
f 8 0.28470.31950.42060.59780.21370.26200.2682
f 9 0.23300.27270.38090.51600.17160.22460.2250
f 10 0.15540.19330.29210.35460.08510.13430.1356
f 11 0.13800.17750.28190.32360.07650.12890.1295
f 12 0.26960.30120.40740.56770.19960.25330.2527
f 13 0.13060.17090.27130.30720.07020.12360.1246
f 14 0.13240.17330.26290.31360.07440.12870.1276
f 15 0.15690.19650.33680.35810.09130.14590.1466
f 16 0.13480.17270.27030.31130.07270.12620.1244
f 17 0.17380.21250.30160.39520.11260.16230.1649
f 18 0.22770.26880.37190.50350.16820.21760.2197
f 19 0.16440.20270.29850.37240.09850.14690.1495
f 20 0.20890.25070.33800.46810.15010.19960.1881
f 21 0.18920.22410.32400.42260.12090.17070.1541
f 22 0.13080.16800.26940.30770.07000.12200.1222
100 f 1 0.29770.37190.56420.68300.15610.25710.2612
f 2 0.30230.37930.57750.69440.15950.26550.2660
f 3 0.26650.34310.54170.62470.14160.24850.2491
f 4 0.35880.43690.63810.80830.23790.34000.3387
f 5 0.31580.39290.58940.72320.17160.27290.2756
f 6 0.35550.42880.62870.79920.22840.33520.3343
f 7 0.36550.44191.30280.81070.23230.30490.3061
f 8 0.55800.62710.83951.20180.43110.52300.5311
f 9 0.46510.54430.75051.02110.33800.44470.4465
f 10 0.30980.38410.57710.71010.16670.26460.2687
f 11 0.27440.35210.55830.64130.14770.25290.2522
f 12 0.67520.75240.95441.43320.54680.65180.6535
f 13 0.26510.34260.54000.61930.13850.24420.2468
f 14 0.32170.36460.51210.61650.14310.24780.2477
f 15 0.31540.39180.61780.71550.18520.28990.2911
f 16 0.26320.34130.54160.65190.14660.25180.2498
f 17 0.35990.44500.63260.83350.22550.32790.3377
f 18 0.47580.55810.76311.05890.34910.44830.4592
f 19 0.34060.40930.61890.75890.19800.29750.3253
f 20 0.43810.52380.67350.93850.29700.40000.3777
f 21 0.38120.46040.64130.85130.24330.34740.3103
f 22 0.26580.34240.54210.62040.13670.24400.2443
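Runtimes such as those in Table 8 are typically obtained by averaging wall-clock time over repeated independent runs. A minimal measurement sketch, under the assumption that the tabulated values are mean wall-clock times (the helper `mean_runtime` and the placeholder objective are illustrative, not the authors' harness):

```python
# Sketch: average wall-clock runtime of an optimizer over repeated runs,
# using a monotonic high-resolution clock.
import time

def mean_runtime(optimizer, runs=30):
    """Return the mean wall-clock duration (seconds) of `optimizer()` over `runs` calls."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        optimizer()
        total += time.perf_counter() - start
    return total / runs

# Usage with a trivial stand-in for a PSO variant:
fake_pso = lambda: sum(i * i for i in range(1000))
t = mean_runtime(fake_pso, runs=5)
print(t >= 0.0)  # -> True
```

Using `time.perf_counter` rather than `time.time` avoids distortions from system clock adjustments during timing.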
Table 9. Success rate and average iteration times.
| f | bPSO | PSOd | HPSOscac | TCPSO | SPSO | SPSOC | SPSORC |
| | AIT / SR | AIT / SR | AIT / SR | AIT / SR | AIT / SR | AIT / SR | AIT / SR |
| f1 | - / 0.00% | - / 0.00% | 32.70 / 0.00% | - / 0.00% | 15.87 / 3.33% | 53.50 / 100.00% | 18.90 / 93.33% |
| f2 | - / 0.00% | - / 0.00% | 51.53 / 100.00% | - / 0.00% | - / 0.00% | - / 0.00% | 23.37 / 100.00% |
| f3 | - / 0.00% | - / 0.00% | 35.33 / 100.00% | - / 0.00% | - / 100.00% | 77.33 / 100.00% | 17.50 / 100.00% |
| f4 | - / 0.00% | - / 0.00% | 58.60 / 100.00% | - / 0.00% | - / 0.00% | - / 0.00% | 16.47 / 100.00% |
| f5 | - / 0.00% | - / 0.00% | 16.43 / 0.00% | - / 0.00% | 56.80 / 76.67% | 38.97 / 100.00% | 11.00 / 100.00% |
| f6 | - / 0.00% | - / 0.00% | 54.73 / 100.00% | - / 0.00% | - / 0.00% | - / 0.00% | 20.70 / 100.00% |
| f7 | - / 100.00% | - / 100.00% | - / 36.67% | - / 100.00% | - / 100.00% | 34.50 / 100.00% | 13.77 / 100.00% |
| f8 | - / 0.00% | - / 0.00% | 25.00 / 0.00% | - / 0.00% | - / 0.00% | 37.03 / 96.67% | 21.80 / 96.67% |
| f9 | - / 0.00% | - / 0.00% | 6.27 / 0.00% | - / 0.00% | 12.30 / 96.67% | 7.40 / 100.00% | 11.80 / 66.67% |
| f10 | - / 0.00% | - / 0.00% | 23.37 / 0.00% | - / 0.00% | 63.07 / 53.33% | 36.80 / 100.00% | 14.40 / 100.00% |
| f11 | - / 0.00% | - / 0.00% | 7.87 / 0.00% | - / 0.00% | 28.30 / 100.00% | 15.30 / 100.00% | 5.33 / 100.00% |
| f12 | - / 0.00% | - / 0.00% | 56.93 / 100.00% | - / 0.00% | - / 0.00% | - / 0.00% | 20.97 / 100.00% |
| f13 | 2.83 / 0.00% | - / 0.00% | 62.93 / 93.33% | 4.67 / 0.00% | - / 0.00% | - / 0.00% | 19.57 / 100.00% |
| f14 | - / 0.00% | - / 0.00% | 66.70 / 96.67% | - / 0.00% | - / 0.00% | - / 0.00% | 16.50 / 100.00% |
| f15 | 1.20 / 100.00% | 1.10 / 100.00% | 1.20 / 60.00% | 1.37 / 100.00% | 1.23 / 16.67% | 4.27 / 0.00% | 2.30 / 10.00% |
| f16 | - / 0.00% | - / 0.00% | 57.73 / 100.00% | - / 0.00% | - / 0.00% | - / 0.00% | 24.50 / 100.00% |
| f17 | - / 0.00% | - / 0.00% | 26.63 / 43.33% | - / 0.00% | - / 0.00% | - / 0.00% | 20.07 / 100.00% |
| f18 | - / 0.00% | - / 0.00% | 14.63 / 36.67% | - / 0.00% | - / 0.00% | - / 0.00% | 8.20 / 100.00% |
| f19 | 1.67 / 100.00% | 6.30 / 100.00% | 1.00 / 100.00% | 1.60 / 100.00% | 9.20 / 3.33% | 4.07 / 0.00% | 1.20 / 16.67% |
| f20 | - / 0.00% | - / 0.00% | - / 0.00% | - / 0.00% | - / 0.00% | - / 0.00% | 2.53 / 90.00% |
| f21 | - / 0.00% | - / 0.00% | 0.63 / 0.00% | - / 0.00% | - / 0.00% | - / 0.00% | 16.73 / 96.67% |
| f22 | - / 0.00% | - / 0.00% | 38.30 / 100.00% | - / 0.00% | - / 0.00% | - / 0.00% | 13.17 / 100.00% |
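The two metrics in Table 9 can be computed directly from per-run logs. The sketch below assumes a run "succeeds" when it first reaches the accuracy threshold, that SR is the fraction of successful runs, and that AIT averages the first-hit iteration over successful runs only (shown as "-" when no run succeeds); the function `sr_and_ait` is illustrative, not the authors' code.

```python
# Sketch of the Table 9 metrics under the stated assumptions.
def sr_and_ait(hit_iters):
    """hit_iters: per-run iteration of first success, or None for a failed run.

    Returns (AIT, SR): mean first-hit iteration over successful runs
    (None when no run succeeded, i.e. '-' in the table) and the success
    rate as a percentage.
    """
    hits = [it for it in hit_iters if it is not None]
    sr = 100.0 * len(hits) / len(hit_iters)
    ait = sum(hits) / len(hits) if hits else None
    return ait, sr

# Example: 3 of 4 runs succeed, at iterations 12, 15, and 9.
print(sr_and_ait([12, 15, None, 9]))  # -> (12.0, 75.0)
```

Averaging AIT over successful runs only explains entries such as an AIT value paired with a low SR: the few successful runs may still converge in few iterations.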