Article

Stochastic Triad Topology Based Particle Swarm Optimization for Global Numerical Optimization

1 School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 Department of Electrical and Electronic Engineering, Hanyang University, Ansan 15588, Korea
3 Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung 413310, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(7), 1032; https://doi.org/10.3390/math10071032
Submission received: 21 February 2022 / Revised: 15 March 2022 / Accepted: 21 March 2022 / Published: 24 March 2022
(This article belongs to the Topic Soft Computing)

Abstract:
Particle swarm optimization (PSO) has exhibited well-recognized effectiveness in problem optimization. However, its optimization performance still encounters challenges when confronted with complicated optimization problems with many local areas. In PSO, the interaction among particles and the utilization of the communication information play crucial roles in improving the learning effectiveness and learning diversity of particles. To promote the communication effectiveness among particles, this paper proposes a stochastic triad topology to allow each particle to communicate with two random ones in the swarm via their personal best positions. Then, unlike existing studies that employ the personal best position of the updated particle and the neighboring best position of the topology to direct its update, this paper adopts the best one and the mean position of the three personal best positions in the associated triad topology as the two guiding exemplars to direct the update of each particle. To further promote the interaction diversity among particles, an archive is maintained to store the obsolete personal best positions of particles and is then used to interact with particles in the triad topology. To enhance the chance of escaping from local regions, a random restart strategy is probabilistically triggered to introduce initialized solutions to the archive. To alleviate sensitivity to parameters, dynamic adjustment strategies are designed to dynamically adjust the associated parameter settings during the evolution. Integrating the above mechanisms, a stochastic triad topology-based PSO (STTPSO) is developed to effectively search complex solution spaces. With the above techniques, the learning diversity and learning effectiveness of particles are largely promoted and thus the developed STTPSO is expected to explore and exploit the solution space appropriately to find high-quality solutions. Extensive experiments conducted on the commonly used CEC 2017 benchmark problem set with different dimension sizes substantiate that the proposed STTPSO achieves highly competitive or even much better performance than state-of-the-art and representative PSO variants.

1. Introduction

Particle swarm optimization (PSO) has witnessed tremendous success in solving optimization problems, especially non-differentiable ones [1,2,3,4,5], since its advent in 1995 [6,7]. Specifically, it maintains a swarm of particles, each of which represents a feasible solution, to iteratively search the solution space to find the global optima. Due to its good interpretability and great convenience of implementation [8,9,10], PSO has been widely applied to solve various real-world problems in daily life and industrial engineering, such as supply chain network design [11], control of pollutant spreading on social networks [12], and industrial power load forecasting [13].
In the classical PSO [6,7], a fully connected topology with all particles is utilized to select guiding exemplars for particles to update, leading to the global best position (usually denoted as gbest) discovered by the whole swarm being shared by all particles. As a result, the learning diversity of particles is limited and thus the swarm falls into local areas when dealing with multimodal problems. To alleviate this limitation, researchers have paid extensive attention to designing novel learning strategies [14,15,16,17,18,19] to improve the optimization effectiveness of PSO.
In essence, the key to the learning strategies in PSO lies in the selection of guiding exemplars to direct the update of particles [17,20]. Broadly speaking, existing exemplar selection mechanisms can be classified into two categories, namely topology-based exemplar selection methods [15,16,21,22,23], and exemplar construction approaches [17,18,19,24,25].
Topology-based exemplar selection methods have been widely utilized in PSO research. In most cases, these methods aim to determine a less greedy exemplar to replace the social exemplar, namely gbest, in the classical PSO [6,7]. Based on different topologies, such as the ring topology [26], the pyramid topology [27], the Von Neumann topology [29], and random topologies [22], an abundance of remarkable PSO variants have been developed [26,27,28]. Different topologies usually preserve different characteristics and merits. Therefore, a natural idea is to hybridize them to combine the merits of different topologies and improve the optimization performance of PSO. Along this line, many PSO variants [28,30,31,32,33] have been developed based on different methods of hybridization. In addition, to alleviate the shortcoming of static topologies, where each particle can only interact with fixed peers, some researchers have further proposed dynamic topologies [34,35,36] that dynamically change the topology (its type or size) based on the evolution state of the swarm.
Although topology-based PSO variants have been shown to be highly promising in problem optimization, the guiding exemplars selected by different topologies to direct the update of particles are all historical promising positions found by particles. Therefore, the learning effectiveness of particles is limited by these historical positions [14,18,37]. Once all the historical positions converge to local areas, it is difficult for the swarm to jump out of the local basin. To alleviate this issue, researchers have attempted to develop novel PSOs from another perspective, namely constructing new guiding exemplars for particles to learn from [14,38].
Different from topology-based exemplar selection methods, exemplar construction methods generally build new guiding exemplars for particles by combining dimensions of historical positions. In general, it is highly possible that the built exemplars have not been visited by particles. The most representative PSO variant in this direction is the comprehensive learning PSO (CLPSO) [17], which constructs a new guiding exemplar dimension by dimension from the personal best positions of all particles. Inspired by this method of constructing new exemplars, researchers have developed many other construction approaches, such as orthogonal learning PSO (OLPSO) [37] and genetic learning PSO (GLPSO) [18].
Although most existing PSO variants have shown very promising performance on simple optimization problems, such as unimodal problems and simple multimodal problems, their performance encounters great challenges or even deteriorates dramatically when dealing with complicated optimization problems, such as multimodal problems with many wide and flat local areas [39,40] and composition problems with many interacting variables. However, in the era of big data and the Internet of Things (IoT) [41], optimization problems are becoming increasingly complicated and are ubiquitous in every field of daily life and industrial engineering [11,13]. As a consequence, the ability of PSO to solve complicated optimization problems warrants urgent and careful research, rendering ongoing research into PSO an important frontier in the evolutionary computation community [42,43].
To improve the optimization effectiveness of PSO in solving complicated problems, this paper proposes a stochastic triad topology-based PSO (STTPSO). Specifically, during the evolution, for each particle, two personal best positions are first randomly selected from those of the rest particles. Then, the personal best position of the particle to be updated and the two random best positions form a triad topology. Based on this topology, the best one in the topology and the mean position of the topology are employed as the guiding exemplars to direct the update of the particle. In this way, each particle likely preserves different guiding exemplars and thus the learning diversity of particles can be largely improved, which is beneficial for the swarm to escape from local areas.
Overall, the main components of the proposed STTPSO are summarized as follows:
(1)
A stochastic triad topology is employed to connect the personal best position of each particle and two different personal best positions randomly selected from those of the rest particles to select guiding exemplars for particles to update. Different from existing studies [22,37], which only utilize the topologies to determine the best position to replace the social exemplar, namely gbest, in the classical PSO (with another guiding exemplar as the personal best position of the particle), the proposed STTPSO utilizes the stochastic triad topology to select the best one and computes the mean position of the triad best positions as the two guiding exemplars to direct the update of each particle. Since the topology is stochastic, it is likely that different particles preserve different guiding exemplars. As a result, the learning diversity of particles can be largely promoted, and thus the probability of the swarm escaping from local areas can be promoted.
(2)
An archive is maintained to store the obsolete personal best positions and then is combined with the personal best positions of all particles in the current generation to form the triad topologies for particles. In this way, valuable historical information can be utilized to direct the update of particles, which is helpful for improving swarm diversity.
(3)
A random restart strategy is designed by randomly initializing a solution with a small probability. However, instead of employing this restart strategy on the swarm, we utilize it on the archive. That is to say, a randomly initialized solution is inserted into the archive with a small probability. In this way, the swarm diversity can be promoted without significant sacrifice of convergence speed.
(4)
A dynamic strategy for the acceleration coefficients is devised to alleviate the sensitivity of STTPSO. Instead of utilizing fixed values for the two acceleration coefficients, this paper randomly samples the two acceleration coefficients based on the Gaussian distribution with the mean value set as the classical setting of the two coefficients and a small deviation. With this dynamic strategy, different particles can have different settings, and thus the learning diversity can be further promoted.
The above four components collaborate cohesively to help STTPSO explore and exploit the solution space appropriately to locate the optima of optimization problems. In order to verify the effectiveness of the proposed STTPSO, extensive experiments are conducted on the widely used CEC 2017 benchmark problem set [44] with three different dimension sizes by comparing STTPSO with several state-of-the-art and popular PSO variants. In addition, deep investigations of the devised STTPSO are also conducted to observe the influence of each component in STTPSO.
The rest of this paper is organized as follows. Closely related studies are reviewed briefly in Section 2. In Section 3, the proposed STTPSO is elucidated in detail. Then, extensive comparison experiments and analysis of the associated results are conducted and discussed in Section 4. Finally, the conclusion of this paper is provided in Section 5.

2. Related Works

2.1. Basic PSO

PSO is a heuristic search algorithm and was first proposed in 1995 by Kennedy and Eberhart [6,7]. Specifically, PSO maintains a population of particles to search the solution space, and each particle is represented by a position vector xi and a velocity vector vi, where the position vector represents a feasible solution to the problem, while the velocity vector represents the moving direction of the particle. Moreover, each particle memorizes its own historical best position pbesti, and the global best position gbest of the whole population is also memorized during the evolution. Then, each particle is updated by cognitively learning from its own experience, namely its personal best position pbesti, and socially learning from the experience of the whole swarm, namely the global best position gbest. Specifically, each particle is updated as follows:
$$v_i = w \cdot v_i + c_1 r_1 (pbest_i - x_i) + c_2 r_2 (gbest - x_i) \tag{1}$$
$$x_i = x_i + v_i \tag{2}$$
where w is the inertia weight, c1 and c2 are called acceleration coefficients, and r1 and r2 are two real random numbers uniformly sampled within [0, 1].
In Equation (1), the first part on the right-hand side is the “inertia part”, which controls the memory of the velocity of each particle from the last generation. The second part is the “self-cognition” part, where each particle learns from its own experience. The third part is the “social” part, where each particle learns from the experience of the whole swarm.
As for the inertia weight w, in the literature [17,18,37,45], a linear decay method is widely utilized in PSO variants, which is presented below:
$$w = 0.9 - 0.5 \times \frac{fes}{FE_{max}} \tag{3}$$
where fes denotes the number of fitness evaluations consumed so far and FEmax is the maximum number of fitness evaluations.
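For concreteness, the following is a minimal NumPy sketch of the classical PSO update in Equations (1)–(3). The test function, swarm size, bounds, and random seed are illustrative assumptions rather than settings taken from this paper.

```python
# A minimal sketch of the classical PSO update, Equations (1)-(3).
import numpy as np

def classical_pso(f, dim=10, ps=20, fe_max=10000, lb=-100.0, ub=100.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (ps, dim))          # positions
    v = np.zeros((ps, dim))                     # velocities
    fit = np.apply_along_axis(f, 1, x)
    pbest, pbest_fit = x.copy(), fit.copy()     # personal best positions
    g = np.argmin(pbest_fit)                    # index of gbest
    fes = ps
    c1 = c2 = 1.49618
    while fes < fe_max:
        w = 0.9 - 0.5 * fes / fe_max            # Equation (3): linear decay
        r1 = rng.random((ps, dim))
        r2 = rng.random((ps, dim))
        # Equation (1): inertia + self-cognition + social learning
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (pbest[g] - x)
        x = np.clip(x + v, lb, ub)              # Equation (2)
        fit = np.apply_along_axis(f, 1, x)
        fes += ps
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
        g = np.argmin(pbest_fit)
    return pbest[g], pbest_fit[g]

best_x, best_f = classical_pso(lambda z: np.sum(z * z))   # sphere function as a toy example
print(best_f)
```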
In the literature [10,17,18,37,45,46,47], it is widely recognized that the learning strategy in Equation (1) plays the most important role in helping PSO achieve satisfactory performance. As a result, researchers have been devoted to designing novel effective learning strategies for PSO to improve its optimization ability.

2.2. Advanced Learning Strategies for PSO

To improve the optimization performance of PSO, researchers have designed a wealth of effective learning strategies to improve the learning diversity and the learning effectiveness of particles [18,48,49,50,51]. Broadly speaking, existing learning strategies for PSO can be classified into two main categories, as shown in Table 1, namely topology-based learning strategies [22,26,52] and exemplar construction-based learning strategies [14,17,37,38,53].
Topology-based learning strategies [21,22,26,54,55] mainly utilize a specific topology to organize the interaction among particles and select guiding exemplars to update particles. In fact, the classical PSO [6,7] introduced above is a topology-based learning PSO, where the topology is the full topology connecting all particles. Such a full topology usually leads to an excessively greedy guiding exemplar (namely the global best position gbest), which likely attracts particles into local areas. To alleviate this issue, researchers have developed many kinds of local topologies to select less greedy guiding exemplars to direct the update of particles. For instance, in [26], the ring topology was utilized to organize particles into a ring, and each particle then interacts with its two neighbors to select one guiding exemplar to replace gbest in Equation (1). In [27], the pyramid topology, a three-dimensional wire-frame triangle, was used to select the guiding exemplars for particles. In [29,55], the star topology was employed for particles to interact with others. In this topology, the central node shares information with the other particles, and the other particles also share information with the central node; therefore, the communication is a two-way information exchange. In [29], the Von Neumann topology, which is a two-dimensional lattice, was adopted to select guiding exemplars. Specifically, this topology connects the top, bottom, left and right neighbors of each particle to form a neighborhood of size five. Such a topology can be regarded as a two-dimensional ring topology, extended from a one-dimensional line into a two-dimensional plane.
Table 1. The rough classification of existing PSO variants.

| Category | Methods | Representative Algorithms | Characteristics |
|---|---|---|---|
| Topology-based methods | Static topology: full topology | PSO [6,7] | Each particle can only communicate with fixed peers. The learning diversity of particles is limited. |
| | Static topology: ring topology | MRTPSO [26], GGL-PSOD [56] | |
| | Static topology: pyramid topology | PMKPSO [27] | |
| | Static topology: star topology | PSO-Star [29,55] | |
| | Static topology: Von Neumann topology | PSO-Von-Neumann [29] | |
| | Static topology: hybrid topology | XPSO [23] | |
| | Dynamic topology | DNSPSO [28], DMSPSO [15], SPSO [22] | Each particle communicates with dynamic peers. The learning diversity of particles is high. |
| | Dynamic topology: dynamic size topology | RPSO [16] | |
| Exemplar construction-based methods | Random construction | CLPSO_LS [14], CLPSO [17], HCLPSO [25], TCSPSO [19] | Randomly recombine dimensions of personal best positions. The exemplar construction efficiency is low, but no fitness evaluations are consumed in exemplar construction. |
| | Operator-based construction | MPSOEG [24], GLPSO [18] | Recombine dimensions of personal best positions based on evolutionary operators from other EAs. The exemplar construction efficiency is high, but many fitness evaluations are consumed in exemplar construction. |
| | Orthogonal recombination | OLPSO [37] | Recombine dimensions of personal best positions based on orthogonal experimental design. The exemplar construction efficiency is high, but many fitness evaluations are consumed in exemplar construction. |
All topologies mentioned above are static topologies. In these topologies, each particle interacts with fixed peers during the evolution, and thus they bear limitations in improving the learning effectiveness of particles. To compensate for this shortcoming, researchers have attempted to develop dynamic topologies to select guiding exemplars for particles. Along this line, Liang and Suganthan [15] designed a random topology, which connects each particle with several randomly selected particles. Their experimental results demonstrated that the randomly constructed topological structure exhibits the best performance when its size is equal to three. In [22], each particle interacts with k particles randomly selected from the swarm, where k is set between one and the population size; in particular, it can be the same for all particles or different for different particles. In [16], the authors proposed adaptively adjusting the size of the topology based on the evolution state. Specifically, in the early stage, a small size is maintained to preserve high search diversity, so that particles focus on exploring the solution space, whereas in the late stage, a large topology size is maintained to guarantee convergence, so that particles focus on exploiting the solution space. In [23], the authors combined the global topology and the local topology to select guiding exemplars for particles, so that a good balance between exploration and exploitation could be maintained.
The aforementioned topology-based learning strategies mainly select guiding exemplars for particles from the existing personal best positions found by all particles. To further promote the learning effectiveness of particles, some researchers have attempted to construct novel exemplars for particles, which might not have been visited by any particle, leading to exemplar construction-based learning strategies [14,17,19,24,37,38].
Different from topology-based learning strategies, exemplar construction-based learning strategies construct new exemplars that have not been visited by particles based on the personal best positions of all particles. The most representative algorithm in this direction is the comprehensive learning PSO (CLPSO) [17]. Specifically, this algorithm uses a binary tournament selection mechanism to select a learning object for each dimension of the current particle. From a macro perspective, CLPSO constructs, dimension by dimension, a new guiding position for each particle that does not exist in the current population. Since CLPSO randomly recombines dimensions of different personal best positions, it ignores the correlation between variables and thus cannot effectively integrate useful evolutionary information. To further improve the optimization performance of CLPSO, a heterogeneous CLPSO (HCLPSO) was proposed in [25]. This algorithm divides the population into two sub-populations: one sub-population is used for exploration and is updated by the original CL strategy, while the other sub-population is used for exploitation and is guided by the global best position.
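As an illustration of the dimension-wise construction idea described above, the following is a simplified sketch of a CLPSO-style exemplar builder. The learning probability value and the helper signature are assumptions introduced for illustration; they do not reproduce the exact CLPSO procedure of [17].

```python
# A simplified sketch of CLPSO-style dimension-wise exemplar construction:
# for each dimension, with a learning probability the particle learns from the
# better of two randomly chosen peers' pbests; otherwise it keeps its own
# pbest value on that dimension. The value of pc is an illustrative assumption.
import numpy as np

def build_cl_exemplar(i, pbest, pbest_fit, pc=0.3, rng=np.random.default_rng()):
    ps, dim = pbest.shape
    exemplar = pbest[i].copy()
    for d in range(dim):
        if rng.random() < pc:
            a, b = rng.choice([k for k in range(ps) if k != i], 2, replace=False)
            winner = a if pbest_fit[a] < pbest_fit[b] else b   # binary tournament
            exemplar[d] = pbest[winner, d]
    return exemplar
```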
Although the above introduced CLPSO variants have achieved good performance in solving multimodal problems, the construction of promising exemplars for particles is inefficient since the recombination of dimensions is totally random. To improve the exemplar construction effectiveness and efficiency, in [37], Zhan et al. proposed an orthogonal learning PSO (OLPSO) that uses orthogonal experimental design to seek useful dimension combinations of the historical positions found by particles. Specifically, OLPSO adopts an orthogonal matrix to evaluate the effectiveness of the dimension combinations and then combines the most useful ones to construct promising exemplars. Though the exemplar construction efficiency is improved, it consumes many fitness evaluations in the exemplar construction stage. To reduce fitness evaluation consumption in exemplar construction, in [18], Gong et al. employed genetic operators, such as crossover, mutation, and selection, to construct guiding exemplars for particles, leading to a genetic learning PSO (GLPSO). With these operators, GLPSO is expected to generate diversified and high-quality exemplars for particles. To further improve its optimization effectiveness, in [56], a global GLPSO with a ring topology (GGL-PSOD) was devised, where the ring topology is adopted to generate diversified exemplars based on neighbor particles. Though the constructed exemplars are promising, GLPSO and its variants still consume many fitness evaluations in exemplar construction. To further construct diversified yet promising guiding exemplars for particles, in [19], terminal crossover and steering-based PSO with distribution (TCSPSO) was proposed by devising a new crossover mechanism to construct exemplars; meanwhile, a global disturbance was designed to improve the population diversity to escape from local areas. In [24], a modified particle swarm optimization with effective guides (MPSOEG) was devised by generating two types of guiding exemplars with an optimal guide creation (OGC) module. In particular, a global exemplar is constructed by the OGC module to guide the swarm towards promising regions, whereas a local exemplar is constructed for each particle to escape from local areas.
Beyond the abovementioned learning strategies, some researchers have also attempted to utilize multiple learning strategies to direct the evolution of the swarm in PSO. For instance, in [20], the concept of evolutionary game theory was introduced, and four classical learning strategies were taken as game strategies. The swarm then adaptively selects the most suitable learning strategy based on the current evolution state. In [28], a dynamic-neighborhood-based switching PSO (DNSPSO) was proposed, which adjusts the personal best position and the global best position based on a distance-based dynamic neighborhood and hybridizes the differential evolution algorithm to alleviate premature convergence.
Although many of the original limitations and shortcomings of PSO have been greatly alleviated since its introduction, its optimization performance on complex optimization problems with many interacting variables and wide saddle-point regions still encounters great challenges. Therefore, how to improve the optimization performance of PSO on widely emerging complicated problems remains an open issue and deserves careful attention, which keeps the research of PSO a highly popular frontier topic in the evolutionary computation community. To improve the learning effectiveness of particles in complicated environments, this paper proposes a stochastic triad topology-based PSO (STTPSO), which is elucidated in the following section.

3. Stochastic Triad Topology-Based Particle Swarm Optimization

The most crucial aspect of PSO is the interaction among particles to select guiding exemplars to direct the update of particles [22,26,27,29]. Most existing topology-based PSO variants [21,22,55] only adopt the topologies to select one exemplar to replace the social exemplar (namely gbest) in the classical PSO (shown in Equation (1)). Such utilization of topologies is limited since it only changes one exemplar in Equation (1), resulting in limited improvement in the learning effectiveness and learning diversity of particles. To alleviate this predicament, this paper proposes a stochastic triad topology-based PSO (STTPSO), which utilizes a stochastic triad topology for each particle to select two novel guiding exemplars to replace the two exemplars in the classical PSO, thereby promoting the learning effectiveness and learning diversity of particles.

3.1. Stochastic Triad Topology

During the evolution, given that PS particles are maintained in the swarm, for each particle xi (1 ≤ i ≤ PS), a stochastic triad topology is employed to connect the personal best position (pbesti) of this particle and two different personal best positions, which are randomly selected from those of the other particles. Denoting the two randomly selected personal best positions as pbestr1 and pbestr2, this paper utilizes the best one among the triad pbests (pbesti, pbestr1, and pbestr2) and the mean position of these triad pbests as the two guiding exemplars to replace the two exemplars in Equation (1) when updating each particle. Specifically, the velocity of each particle is updated as follows:
$$v_i = w \cdot v_i + c_1 r_1 (tpbest_i - x_i) + c_2 r_2 (\overline{tmean}_i - x_i) \tag{4}$$
where $tpbest_i$ is the best one among the triad pbests, which is determined as follows:
$$tpbest_i = \arg\min \{ f(pbest_{r1}),\, f(pbest_{r2}),\, f(pbest_i) \} \tag{5}$$
and $\overline{tmean}_i$ represents the mean position of the triad pbests, which is calculated as follows:
$$\overline{tmean}_i = \frac{1}{3}\left(pbest_{r1} + pbest_{r2} + pbest_i\right) \tag{6}$$
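To make the exemplar selection concrete, the following is a minimal sketch of the triad-based velocity update in Equations (4)–(6) for a single particle. The function signature and variable names are illustrative assumptions, not part of the paper.

```python
# A minimal sketch of the STTPSO velocity update for particle i:
# tpbest is the fittest of the three pbests in its triad (Equation (5)),
# tmean is their mean position (Equation (6)), and both replace the two
# exemplars of the classical update (Equation (4)).
import numpy as np

def triad_velocity_update(i, r1_idx, r2_idx, x, v, pbest, pbest_fit,
                          w, c1, c2, rng=np.random.default_rng()):
    triad = [i, r1_idx, r2_idx]
    tpbest = pbest[min(triad, key=lambda k: pbest_fit[k])]      # Equation (5)
    tmean = pbest[triad].mean(axis=0)                           # Equation (6)
    r1 = rng.random(x.shape[1])
    r2 = rng.random(x.shape[1])
    # Equation (4): learn from the triad best and the triad mean position
    return w * v[i] + c1 * r1 * (tpbest - x[i]) + c2 * r2 * (tmean - x[i])
```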
A close observation of Equation (4) reveals the following findings: (1) The first guiding exemplar (tpbesti) is likely different for different particles. This is because the triad topology of each particle is constructed by randomly selecting two different personal best positions (pbestr1 and pbestr2) from those of other particles along with the personal best position (pbesti) of this particle. Due to such randomness, the diversity of the first exemplar can be largely promoted. (2) The second guiding exemplar ($\overline{tmean}_i$) is also likely different due to the random construction of the triad topology. Therefore, the diversity of the second exemplar is also promoted to a large extent. Together with the high diversity of the first exemplar, the learning diversity of particles is high, which is of great benefit for particles to search the solution space dispersedly and thus helpful for the swarm to escape from local areas. (3) The first exemplar is expectedly better than the personal best position of the particle to be updated. As a result, the learning effectiveness and efficiency of particles are expected to be promoted, which is beneficial for the swarm to rapidly converge to promising areas. (4) The second exemplar can be considered a kind of distribution estimation of the triad pbests. Utilizing it as a guiding exemplar is also expected to quickly direct the updated particle to promising areas. (5) However, compared with the first exemplar, the quality of the second exemplar is uncertain. In particular, we can consider that the first exemplar is responsible for convergence, while the second is in charge of swarm diversity. Therefore, in Equation (4), a promising balance between fast convergence and high diversity is implicitly maintained in the update of particles.
As for the triad topology structure, to guarantee the learning effectiveness of particles, instead of frequently changing the topology structure, we first keep the structure fixed for each particle. That is to say, the indexes (r1 and r2) of the two randomly selected personal best positions of each particle are not changed. Then, we observe the change of the personal best position of each particle. For particle xi (1 ≤ i ≤ PS), if its personal best position pbesti remains unchanged for stopmax consecutive generations, this indicates that the learning effectiveness of this particle under the current triad structure degrades. In this situation, to improve the learning effectiveness of this particle, we randomly reselect two different personal best positions from those of other particles to rebuild the triad topology structure. In this way, the learning effectiveness and learning diversity of particles can be largely promoted.
In Section 4.3, investigative experiments are conducted to verify the effectiveness of the adaption strategy for the triad topology structure of each particle. Experimental results show that stopmax = 30 helps STTPSO achieve the best overall performance, and thus in this paper, we set stopmax = 30 for STTPSO.

Remark

In essence, the proposed stochastic triad topology belongs to a kind of random topology. In the literature, many studies [21,22,52,57] have adopted random topologies to select guiding exemplars for particles to learn from. Compared with these existing studies, this paper distinguishes itself from them in the following ways:
(1)
Unlike existing studies that use random topologies to determine only one guiding exemplar to replace the social exemplar (gbest) in the classical PSO [6,7], the proposed STTPSO utilizes the stochastic triad topology of each particle to select the best one among the triad personal best positions and to compute the mean position of these pbests as the two guiding exemplars to direct the update of this particle. In this way, due to the randomness of the triad topology, not only is the diversity of the first exemplar largely promoted, but the diversity of the second exemplar is also promoted to a large extent. Therefore, the learning diversity of particles is improved, which enhances the chance of the swarm escaping from local areas.
(2)
Unlike existing studies that change the random topology structure every generation, this paper adaptively changes the triad topology structure based on the evolution state of each particle. In particular, we record stagnation times of each particle (xi), which is actually the number of continuous generations where the personal best position (pbesti) of the particle remains unchanged. When such a number exceeds a predefined threshold stopmax, the triad topology structure is reconstructed by randomly reselecting two different personal best positions from those of other particles. In this way, the triad topology structure of each particle is changed asynchronously, which guarantees the learning effectiveness of particles.

3.2. Dynamic Acceleration Coefficients

As for the parameters in Equation (4), with respect to the inertia weight w, we directly adopt the widely used dynamic strategy as shown in Equation (3) to dynamically adjust w during the evolution.
As for the acceleration coefficients c1 and c2, in the classical PSO, a large body of research has recommended setting them as c1 = c2 = 1.49618 [18]. Such a setting makes all particles share the same configuration, which, in our view, is not beneficial for improving the learning diversity of particles. Therefore, to alleviate this issue and further enhance the learning diversity of particles, we first randomly sample two values v1 and v2 from the Gaussian distribution Gaussian(1.49618, 0.1) as follows:
$$v_1 = Gaussian(1.49618,\, 0.1), \qquad v_2 = Gaussian(1.49618,\, 0.1) \tag{7}$$
Then, we set c1 and c2 based on the sampled two values as follows:
$$c_1 = \max(v_1, v_2), \qquad c_2 = \min(v_1, v_2) \tag{8}$$
First, the Gaussian distribution Gaussian(1.49618, 0.1), whose mean is the classical setting of c1 and c2 and whose standard deviation is a small value, allows the two sampled values to be close to the classical setting but with a small difference. In this way, the diversity of the settings of c1 and c2 is slightly increased, resulting in a slight improvement in the learning diversity of particles.
Second, between the two sampled values, the larger one is set to c1, while the smaller one is set to c2. This is because, as aforementioned in Equation (4), the first guiding exemplar (the best one among the triad pbests) is expectedly better than the second guiding exemplar (the mean position of the triad pbests), and thus we can consider that the first exemplar is responsible for the convergence, while the second exemplar takes charge of the diversity. Since the second exemplar is the mean position of the triad pbests, which is expectedly different from the first exemplar, we set c1 with the larger sampled value and c2 with the smaller value to guarantee that the updated particle learns more from the first exemplar, so that it can approach promising areas fast without serious loss of diversity by learning slightly less from the second exemplar.
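The following is a minimal sketch of this sampling scheme in Equations (7) and (8); the helper name is an illustrative assumption.

```python
# A minimal sketch of the dynamic acceleration coefficients: two values are
# sampled from Gaussian(1.49618, 0.1); the larger becomes c1 (weighting the
# triad best exemplar) and the smaller becomes c2 (weighting the triad mean).
import numpy as np

def sample_coefficients(rng=np.random.default_rng()):
    v1, v2 = rng.normal(loc=1.49618, scale=0.1, size=2)   # Equation (7)
    return max(v1, v2), min(v1, v2)                        # Equation (8): c1 >= c2

c1, c2 = sample_coefficients()
```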
Lastly, experiments conducted in Section 4.3 will demonstrate the effectiveness of the proposed dynamic acceleration coefficient strategy.

3.3. Historical Information Utilization

In PSO, obsolete historical information may also contain useful evolutionary information. As a consequence, many studies [2,58] have maintained an additional archive to store historical information to evolve the swarm. Inspired by this, this paper also maintains an archive of size PS/2 to store the obsolete personal best positions of particles.
Specifically, during the evolution, once a particle discovers a better position, its old personal best position is first inserted into the archive and then replaced by the new, better position. Once the archive is full, namely when inserting a new position would cause its size to exceed PS/2, the obsolete personal best position is inserted by randomly replacing an existing solution in the archive.
During the update of particles, the archive along with the personal best positions of all particles are used to construct the triad topology structure of each particle. In particular, when the stagnation times of the personal best position of one particle exceeds stopmax, two different personal best positions are randomly selected from the archive and those of the other particles to rebuild the triad topology structure of this particle. In this way, the historical evolutionary information is employed to evolve the swarm.
Due to the utilization of historical information, the number of candidates used to build the triad topology of each particle is increased and thus the learning diversity of particles can be improved largely, which is beneficial for the swarm to escape from local areas. Experiments conducted in Section 4.3 will demonstrate the effectiveness of this additional archive.
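The following is a minimal sketch, under the description above, of the bounded archive and the stagnation-triggered reselection of triad partners from the other pbests together with the archive. The list-based representation and function names are illustrative assumptions.

```python
# A minimal sketch of the archive maintenance and triad rebuilding described
# above: obsolete pbests enter an archive of size PS/2 (random replacement when
# full), and a stagnated particle reselects its two triad partners from the
# other particles' pbests together with the archived positions.
import numpy as np

def archive_insert(archive, old_pbest, max_size, rng):
    if len(archive) < max_size:
        archive.append(old_pbest.copy())
    else:                                       # archive full: replace a random entry
        archive[rng.integers(len(archive))] = old_pbest.copy()

def rebuild_triad(i, pbest, archive, rng):
    # candidates: pbests of all other particles plus the archived positions
    candidates = [p for k, p in enumerate(pbest) if k != i] + list(archive)
    a, b = rng.choice(len(candidates), 2, replace=False)
    return candidates[a], candidates[b]         # new pbest_r1 and pbest_r2
```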

3.4. Random Restart Strategy

To further enhance the chance of the swarm escaping from local areas, this paper further proposes a random restart strategy to introduce initialized solutions. Specifically, given a small restart probability pm, in each generation, when a uniformly sampled value within [0, 1] is smaller than pm, a feasible solution is randomly initialized within the range of the variables. Then, instead of inserting this solution into the current swarm as in existing studies [59,60,61,62,63], this paper inserts the initialized solution into the archive. If the archive is full, the initialized solution randomly replaces a solution in the archive.
In particular, such a restart strategy, with the initialized solution inserted into the archive, does not seriously disrupt the convergence of the swarm but effectively improves the learning diversity of particles. Most existing studies [60,61,62,63] only replace one particle in the current swarm with the initialized solution. Such a strategy usually brings only a very small improvement in the learning diversity and learning effectiveness of particles, because the personal best positions of all particles remain unchanged and thus the learning effectiveness of most particles does not improve. In our strategy, however, the randomly initialized solution is inserted into the archive, which is then used to build the triad topology structure of each particle. Therefore, once the initialized solution is selected to build the triad structure of one particle, at least the second exemplar (namely the mean position of the triad positions in the topology) is changed. Consequently, the learning diversity of particles can be effectively improved, which is beneficial for the swarm to escape from local areas. Experiments conducted in Section 4.3 will demonstrate the effectiveness of this restart strategy.
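The following is a minimal sketch of this restart mechanism, assuming the list-based archive sketched above; the default value of pm and the bounds are illustrative assumptions.

```python
# A minimal sketch of the random restart strategy: with a small probability pm,
# a freshly initialized solution is inserted into the archive (not the swarm).
import numpy as np

def maybe_restart(archive, dim, lb, ub, max_size, pm=0.01, rng=np.random.default_rng()):
    if rng.random() < pm:
        new_solution = rng.uniform(lb, ub, dim)             # random feasible solution
        if len(archive) < max_size:
            archive.append(new_solution)
        else:                                               # archive full: random replacement
            archive[rng.integers(len(archive))] = new_solution
```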

3.5. Overall Procedure

Integrating the above components, the overall procedure of the developed STTPSO is shown in Algorithm 1. Specifically, as shown in Lines 1 to 4, the triad topology is constructed for each of the PS particles after they are randomly initialized and evaluated, and the stagnation time of each particle is initialized as 0. Then, the algorithm enters the main iteration (Lines 5–27). First, for each particle xi, the inertia weight w is computed (Line 7) and then the acceleration coefficients c1 and c2 are set based on the Gaussian distribution (Lines 8–11). Subsequently, the particle is updated, and then its personal best position (pbesti) and its stagnation time stopi are updated as well (Lines 12–19). Once the stagnation time of particle xi reaches the allowed maximum stagnation time stopmax, two different personal best positions are randomly selected from those of all particles and the archive to rebuild the triad topology structure (Lines 20–22). After all particles are updated, the random restart strategy is conditionally triggered to insert a randomly initialized solution into the archive (Lines 24–26). The above main iteration proceeds until the maximum number of fitness evaluations is exhausted, and at the end of the program, the global best position is obtained as the final output.
From Algorithm 1, it can be observed that during each iteration, O(PS) is needed to compute the parameters such as w, c1 and c2. Following this, O(PS) is needed to obtain the best one among the triad pbests and O(PS × D) to calculate the mean position of the triad pbests for all particles. Then, O(PS × D) is used to update particles. During the update of the archive, O(PS × D) is needed in each generation. Overall, the time complexity of STTPSO is O(PS × D), which is the same as the classical PSO. Therefore, we can see that STTPSO remains as efficient as the classical PSO.
Algorithm 1: The pseudocode of STTPSO
Input: swarm size PS, maximum fitness evaluations FEmax, maximum stagnation times stopmax, restart probability pm;
     1: Initialize PS particles randomly and calculate their fitness;
     2: Set fes = PS, and set the archive empty;
     3: Randomly select two different personal best positions (pbestr1 and pbestr2) from the personal best positions of
        other particles and the archive for each particle to form the associated triad topology;
     4: Set the stagnation time stopi = 0 (1 ≤ i ≤ PS) for each particle;
     5: While (fes ≤ FEmax) do
     6:  For i = 1:PS do
     7:     Compute w according to Equation (3);
     8:     Randomly sample c1 and c2 from Gaussian(1.49618,0.1);
     9:     If c1 < c2 then
     10:     Swap c1 and c2;
     11:     End If
     12:     Update xi and vi according to Equations (2) and (4);
     13:     Calculate the fitness of the updated xi: f(xi), and fes += 1;
     14:     If f(xi) < f(pbesti) then
     15:     Put pbesti in the archive and set stopi = 0;
     16:     pbesti = xi;
     17:     Else
     18:     stopi += 1;
     19:     End If
     20:     If stopi >= stopmax then
     21:     Reselect two different personal best positions (pbestr1 and pbestr2) from those of other particles and the archive for xi to form the associated triad topology;
     22:     End If
     23:  End For
     24:  If rand(0, 1) < pm then
     25:     Randomly initialize a solution and store it into the archive;
     26:  End If
     27: End While
     28: Obtain the global best solution gbest and its fitness f(gbest);
Output: f(gbest) and gbest

4. Experiments

This section verifies the effectiveness of the proposed STTPSO through extensive experiments conducted on the widely used CEC 2017 benchmark function set [44]. Specifically, this benchmark set contains 29 optimization problems in four categories, namely unimodal functions, simple multimodal functions, hybrid functions, and composition functions. Compared with the former two categories, the latter two kinds of optimization problems are more difficult to optimize.

4.1. Experimental Setup

Firstly, in order to verify the effectiveness of STTPSO, we select seven of the most advanced PSO variants as compared methods, namely DNSPSO [28], XPSO [23], DPLPSO [45], TCSPSO [19], GLPSO [18], HCLPSO [25] and CLPSO [17]. DNSPSO, XPSO and DPLPSO are state-of-the-art topology-based PSO variants, while TCSPSO, GLPSO, HCLPSO and CLPSO are state-of-the-art exemplar construction-based PSO variants.
Secondly, in order to verify the optimization performance of STTPSO in a comprehensive way, we conduct comparative experiments on the CEC 2017 benchmark set with three dimension sizes, namely 30-D, 50-D and 100-D. For the sake of fairness, the maximum number of fitness evaluations is set to 10,000 × D for all algorithms.
Thirdly, for fair comparisons, except for the population size, we adopt the parameter settings for all key parameters in the compared PSO variants as recommended in the associated papers. As for the population size, due to its sensitivity to optimization problems, we fine-tune its settings for all compared PSO variants. After preliminary parameter fine-tuning experiments, Table 2 lists the specific parameter settings of all algorithms.
Finally, in order to comprehensively evaluate the optimization performance of all algorithms, we independently execute each algorithm 30 times and use the median, the mean and the standard deviation to measure the optimization performance of each algorithm. To distinguish the statistical significance with respect to the performance difference between two algorithms, the Wilcoxon rank sum test at the significance level of 0.05 is conducted. Furthermore, to obtain the overall performance of each algorithm on the whole benchmark set, the Friedman test is conducted to obtain the overall ranks of all algorithms on the whole benchmark set.

4.2. Comparison with State-of-the-Art PSO Variants

In this section, we conduct extensive comparative experiments on the CEC 2017 benchmark set with the three dimension sizes to compare STTPSO with the seven state-of-the-art PSO variants. Table 3, Table 4 and Table 5, respectively, show the comparison results between the proposed STTPSO and the seven PSO variants on the 30-D, the 50-D and the 100-D CEC 2017 benchmark functions. In these tables, the symbols ‘+’, ‘−’ and ‘=’ indicate that the proposed STTPSO is significantly superior to, significantly inferior to and roughly equivalent to the associated compared algorithm on the associated functions. As shown in the second to last row of each table, ‘w/t/l’ counts the numbers of functions on which the proposed STTPSO achieves significantly better performance, obtains roughly equivalent performance, and exhibits significantly worse performance than the compared algorithm, respectively; in fact, these are the numbers of ‘+’, ‘=’ and ‘−’. In the last row of each table, the average rank of each algorithm obtained by the Friedman test is displayed. Moreover, the statistical comparison results between the proposed STTPSO and the seven state-of-the-art PSO variants on the CEC 2017 benchmark set with different dimension sizes in terms of “w/t/l” are summarized in Table 6.
As shown in Table 3, the comparison results on the 30-D CEC 2017 benchmark functions are summarized below:
(1)
According to the Friedman test results as shown in the last row, STTPSO achieves the lowest rank among all eight algorithms and its rank value (1.86) is much smaller than those (at least 2.55) of the seven compared algorithms. This demonstrates that STTPSO achieves the best overall performance on the 30-D CEC 2017 benchmark functions, and presents significant superiority over the seven compared algorithms.
(2)
The second to last row of Table 3 shows that STTPSO is significantly superior to each of the compared algorithms, except XPSO, on at least 21 problems, and presents inferior performance on, at most, five problems. Compared with XPSO, STTPSO obtains significantly better performance on 18 problems, while only performing worse than XPSO on three problems.
(3)
In terms of the comparison results on different types of optimization problems, STTPSO achieves highly competitive performance with all the compared algorithms on the two unimodal problems. In particular, it shows significant dominance over DNSPSO and DPLPSO on both of the two problems. In terms of the six simple multimodal problems, except for DNSPSO, STTPSO shows significantly better performance than the other six compared algorithms on all these problems. Compared with DNSPSO, STTPSO presents significant superiority on five problems and shows inferiority on only one problem. Regarding the 10 hybrid problems, STTPSO shows much better performance than DPLPSO on all 10 problems. Compared with DNSPSO, TCSPSO, and HCLPSO, STTPSO obtains significantly better performance on seven, six, and seven problems, respectively, and only shows inferiority to them on, at most, two problems. In comparison with XPSO, GLPSO, and CLPSO, STTPSO achieves no worse performance on at least seven problems and displays inferiority to them on, at most, three problems. Concerning the 11 composition problems, STTPSO outperforms the seven compared algorithms on at least nine problems, and only shows inferiority on, at most, two problems. In particular, STTPSO significantly outperforms both TCSPSO and GLPSO on all these problems and obtains much better performance than both HCLPSO and DPLPSO on 10 problems with no inferiority to them on any of the 11 problems. Overall, it is demonstrated that STTPSO shows promise in solving various kinds of problems and particularly obtains good performance on complicated problems, such as multimodal problems, hybrid problems, and composition problems.
As shown in Table 4, the comparison results on the 50-D CEC 2017 benchmark problems are summarized below:
(1)
According to the Friedman test results shown in the last row, STTPSO achieves the lowest rank. This indicates that STTPSO still achieves the best overall performance on the whole 50-D CEC 2017 benchmark set. In particular, except for XPSO, its rank value (2.17) is much smaller than those (at least 4.14) of the other six compared algorithms. This demonstrates that STTPSO displays significantly better overall performance than the six compared algorithms.
(2)
From the perspective of the Wilcoxon rank sum test, as shown in the second to last row, STTPSO achieves significantly better performance than the seven compared algorithms on at least 19 problems and shows inferiority to them on, at most, five problems. In particular, compared with DNSPSO, TCSPSO, GLPSO, and CLPSO, STTPSO significantly dominates them all on 23 problems. In comparison with DPLPSO, STTPSO presents significant superiority on all the 29 problems.
(3)
In terms of different types of optimization problems, STTPSO achieves highly competitive performance with the seven compared state-of-the-art PSO variants regarding the two unimodal problems. Particularly, STTPSO defeats DPLPSO on both of these problems. On the six simple multimodal problems, STTPSO performs much better than the seven compared algorithms on at least five problems. In particular, STTPSO presents significant dominance over XPSO, TCSPSO, GLPSO, DPLPSO, and CLPSO on all six problems. Regarding the 10 hybrid problems, except for XPSO, STTPSO is significantly superior to the other six compared algorithms on at least seven problems, and shows inferiority on, at most, three problems. In particular, STTPSO significantly outperforms DPLPSO on all the 10 problems and obtains significantly better performance than DNSPSO on nine problems. Concerning the 11 composition problems, STTPSO displays significantly better performance than the seven state-of-the-art PSO variants on at least eight problems, and performs worse than them on, at most, two problems. Particularly, STTPSO shows significant dominance over DPLPSO on all the 11 problems and obtains much better performance than both TCSPSO and GLPSO on 10 problems. Overall, it is still demonstrated that STTPSO is a promising approach for problem optimization and displays sound optimization ability in solving complicated optimization problems, such as multimodal problems, hybrid problems, and composition problems.
As shown in Table 5, the comparison results on the 100-D CEC 2017 benchmark set are summarized below:
(1)
According to the Friedman test results, STTPSO achieves the lowest rank among all algorithms. This verifies that STTPSO still obtains the best overall performance on the 100-D CEC 2017 benchmark set. In particular, its rank value (1.52) is much smaller than those (at least 2.72) of the seven compared algorithms. This further demonstrates that STTPSO displays significant dominance over the seven compared algorithms. Together with the observations on the 30-D and 50-D CEC 2017 benchmark sets, we can see that STTPSO consistently performs the best among all eight algorithms on the CEC 2017 benchmark set with different dimension sizes, and consistently presents significant superiority to the seven compared algorithms on the benchmark set with the three dimension sizes. Therefore, it is demonstrated that STTPSO preserves good scalability in solving optimization problems.
(2)
Regarding the Wilcoxon rank sum test, from the second to last row, it is observed that STTPSO achieves significantly better performance than the seven compared algorithms on at least 20 problems and shows inferiority to them on, at most, four problems. In particular, STTPSO outperforms DPLPSO significantly on all the 29 problems, and obtains much better performance than TCSPSO, GLPSO, HCLPSO, and CLPSO on 24, 24, 26, and 27 problems, respectively.
(3)
With respect to the optimization performance on different types of optimization problems, STTPSO obtains highly competitive or even much better performance than the seven compared algorithms on the two unimodal problems. Particularly, STTPSO shows significant dominance over DPLPSO and CLPSO on the two problems. As for the six simple multimodal problems, except for DNSPSO, STTPSO exhibits significant superiority to the other six compared algorithms on all these six problems. Compared with DNSPSO, STTPSO also shows much better performance on five problems. In terms of the 10 hybrid problems, except for XPSO, STTPSO is significantly superior to the other six compared algorithms on at least seven problems. Compared with XPSO, STTPSO shows significantly better performance on five problems and does not show inferiority on any of the problems. In particular, it is discovered that STTPSO is significantly better than HCLPSO and DPLPSO on all the 10 problems. Regarding the 11 composition problems, except for DNSPSO, STTPSO achieves much better performance than the other six compared algorithms on at least nine problems. Compared with DNSPSO, it still performs much better on seven problems. Particularly, STTPSO shows significant superiority to DPLPSO on all the 11 problems, and obtains much better performance than TCSPSO, GLPSO, and CLPSO on 10 problems with no inferiority to these three compared methods on this kind of problem. Overall, it is demonstrated that STTPSO is still effective at solving optimization problems, especially complicated problems, such as multimodal problems, hybrid problems, and composition problems.
To summarize, as shown in Table 6, on the CEC 2017 benchmark set with different dimension sizes, we find that the proposed STTPSO not only shows highly competitive performance against the compared state-of-the-art PSO variants on simple optimization problems, such as unimodal problems, but also achieves much better performance on complicated optimization problems, such as multimodal problems, hybrid problems and composition problems. In particular, we find that the superiority of STTPSO over the compared state-of-the-art methods is far more conspicuous on complicated problems, such as hybrid problems and composition problems. On the other hand, it can be concluded that STTPSO preserves good scalability in solving optimization problems, since it consistently achieves the best overall performance on the CEC 2017 set with the three dimension sizes. Moreover, it is found that as the dimensionality increases, the superiority of STTPSO to certain compared algorithms becomes much more evident.
The above extensive experiments have demonstrated the effectiveness of STTPSO in solving optimization problems. To further demonstrate its efficiency in tackling optimization problems, we conduct experiments on the 50-D CEC 2017 benchmark set to compare the convergence behavior of STTPSO with that of the seven compared algorithms. Figure 1 presents the comparison results on 16 50-D CEC 2017 problems of different categories.
From Figure 1, the following observations can be obtained. (1) At first glance, STTPSO obtains much better performance in terms of both convergence speed and solution quality on 12 problems (f5, f7, f9, f11, f12, f16, f17, f20, f21, f23, f24, and f29). (2) On f19 and f26, STTPSO shows clear dominance over six compared methods regarding both convergence speed and solution quality, and presents inferiority to only one of the compared methods. (3) On f1 and f14, STTPSO displays conspicuously faster convergence speed and higher solution quality than five compared methods, and presents only slight inferiority to two compared methods. (4) Overall, it is demonstrated that STTPSO can solve optimization problems with both high effectiveness and efficiency.
The superiority of STTPSO mainly benefits from the proposed stochastic triad strategy along with the devised archive, the restart mechanism and the dynamic parameter adjustment strategy. These strategies cooperate cohesively to improve the learning diversity and learning effectiveness of particles, which help the swarm explore and exploit the solution space properly to find the optima of optimization problems. To investigate the influence of the four components, we will conduct a thorough investigation on STTPSO in the following subsection.

4.3. Deep Investigation on STTPSO

In this section, we aim to verify the effectiveness of each component in STTPSO by conducting experiments on the 50-D CEC 2017 benchmark set.

4.3.1. Effectiveness of the Reformulation of the Stochastic Triad Topology

First, we conduct experiments to verify the effectiveness of the reformulation of the stochastic triad topology. In Section 3.1, we mentioned that, in order to retain the learning effectiveness of particles, we let the triad topology structure remain fixed for each particle and then adjust it based on the evolution state of this particle. In particular, when the personal best position (pbest) of one particle remains unchanged for stopmax consecutive generations, we reformulate the triad topology by randomly selecting two personal best positions from those of the current swarm and the archive. In this way, both the learning effectiveness and learning diversity of particles can be guaranteed.
To verify the effectiveness of this strategy, we set stopmax to different values, as shown in Table 7. It should be mentioned that the larger the value of stopmax, the less frequently the triad topology structure changes. In particular, when stopmax = 0, the triad structure is changed in every generation. The comparison results among STTPSO variants with different settings of stopmax are shown in Table 7.
From Table 7, the following can be found. (1) From the perspective of the Friedman test, stopmax = 30 helps STTPSO achieve the best overall performance on the 50-D CEC 2017 benchmark set. In particular, the rank value (2.14) of STTPSO with stopmax = 30 is much smaller than those (at least 4.31) of the other settings, indicating that the superiority of stopmax = 30 is significant. Moreover, STTPSO with a small stopmax, such as stopmax = 0 or stopmax = 5, achieves much worse performance, which demonstrates that frequently changing the triad structure brings no benefit. (2) A closer look shows that STTPSO with stopmax = 30 achieves the best performance on 22 problems; on the other seven problems, its performance is very close to that of STTPSO with the best-performing setting of stopmax.
Based on the above observations, it is verified that the reformulation of the triad topology is very helpful for STTPSO to achieve promising performance. In particular, the reformulation should be performed neither too frequently nor too rarely.

4.3.2. Effectiveness of the Additional Archive and the Proposed Random Restart Strategy

Subsequently, we conduct experiments to verify the effectiveness of the additional archive and the random restart strategy. To this end, we first remove the additional archive from STTPSO, deriving a variant denoted STTPSO_WA. Then, we remove the restart strategy from STTPSO, deriving another variant denoted STTPSO_WR. The three versions of STTPSO are then compared on the 50-D CEC 2017 benchmark set, and Table 8 displays the comparison results.
From Table 8, it is found that STTPSO with both the archive and the restart strategy achieves the best overall performance among the three versions. In particular, STTPSO_WR obtains the worst performance, which indicates that the restart strategy is even more helpful than the archive. This is because, compared with the archive, which only stores obsolete historical information, the restart strategy is more effective at improving swarm diversity, since it introduces newly initialized solutions into the archive and thereby promotes the learning diversity of particles.
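To make the cooperation between the archive and the restart strategy concrete, a minimal Python sketch is given below. The function name, the first-in-first-out truncation used to bound the archive, and the way the restart probability is passed in are assumptions of this sketch; only the overall behavior, namely storing replaced personal best positions and probabilistically inserting a freshly initialized solution, follows the description above.

import numpy as np

def update_archive(archive, replaced_pbest, archive_size, restart_prob, lb, ub, rng):
    # Store the obsolete personal best position that has just been replaced.
    archive.append(np.array(replaced_pbest, copy=True))
    # Random restart: with a small probability, insert a uniformly
    # re-initialized solution to refresh the archive.
    if rng.random() < restart_prob:
        archive.append(rng.uniform(lb, ub))
    # Bound the archive size (oldest-first removal is an assumption of this sketch).
    while len(archive) > archive_size:
        archive.pop(0)
    return archive

In STTPSO itself, the archive size is set to half of the swarm size (AS = PS/2 in Table 2).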

4.3.3. Effectiveness of the Dynamic Acceleration Coefficients

Finally, we conduct experiments to verify the effectiveness of the devised dynamic acceleration coefficient strategy. As described in Section 3.2, instead of using fixed acceleration coefficients, STTPSO first samples two different values from a Gaussian distribution and then uses the larger one as c1 and the smaller one as c2. In this way, a promising balance between exploration and exploitation can be preserved. To validate this, we denote the original strategy in this paper as “Dynamic”. Then, we replace the settings of c1 and c2 with two alternatives: the first directly uses the two sampled values as c1 and c2 without comparison, denoted as “Dynamic2”; the second uses the smaller sampled value as c1 and the larger one as c2, which is the converse of the setting used in this paper, denoted as “Dynamic3”. Lastly, as baselines, we adopt fixed settings for c1 and c2 by varying them from 1.0 to 2.0. Table 9 shows the comparison results among STTPSO with these different settings of c1 and c2 on the 50-D CEC 2017 benchmark set.
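As an illustration of the “Dynamic” setting, the short Python sketch below samples the two coefficients for one update. The mean and the second parameter of the Gaussian distribution (1.49618 and 0.1, interpreted here as the standard deviation) follow the STTPSO settings listed in Table 2; the function name and the use of NumPy's random generator are assumptions of this sketch.

import numpy as np

def sample_acceleration_coefficients(rng, mu=1.49618, sigma=0.1):
    # Draw two candidate coefficients from the Gaussian distribution N(mu, sigma).
    a, b = rng.normal(mu, sigma, size=2)
    # "Dynamic": the larger sample becomes c1 and the smaller becomes c2.
    # ("Dynamic2" would keep (a, b) as drawn; "Dynamic3" would swap the assignment.)
    return (a, b) if a >= b else (b, a)

rng = np.random.default_rng(2022)
c1, c2 = sample_acceleration_coefficients(rng)

The fixed baselines in this comparison simply bypass such sampling and keep c1 and c2 constant throughout the run.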
From Table 9, it is observed from the average ranks obtained by the Friedman test that the proposed dynamic strategy for c1 and c2 helps STTPSO achieve the best overall performance among all the tested settings. In particular, it outperforms all the fixed settings, which demonstrates that dynamically sampling c1 and c2 is more effective than fixing them. It also achieves significantly better performance than the other two dynamic strategies, which demonstrates that using the larger sampled value as c1 and the smaller one as c2 is the most effective assignment. In summary, the proposed dynamic acceleration coefficient strategy is helpful for STTPSO to achieve good performance.

5. Conclusions

This paper has devised a stochastic triad topology-based particle swarm optimization (STTPSO) algorithm to solve optimization problems. In this optimizer, each particle is connected by a triad topology to its own personal best position and two other personal best positions randomly selected from those of the particles in the current swarm and an additional archive that stores obsolete historical best positions. The best position in the topology and the mean position of the three connected personal best positions are then employed to update the particle. In addition, during the evolution, the triad topology of each particle is dynamically reformulated based on its evolution state. In this way, the learning diversity and learning effectiveness of particles can be largely promoted, so that the swarm can explore and exploit the solution space appropriately. To further improve the swarm diversity, a random restart strategy is proposed, which randomly initializes a feasible solution and inserts it into the archive. To alleviate the sensitivity of STTPSO to the acceleration coefficients, a dynamic acceleration coefficient strategy based on the Gaussian distribution is devised. With the above mechanisms, the proposed STTPSO is expected to search the solution space with proper intensification and diversification and thus achieve promising performance.
Extensive comparative experiments conducted on the CEC 2017 benchmark set with three different dimension sizes have demonstrated the effectiveness of STTPSO. Compared with seven state-of-the-art PSO variants, the proposed STTPSO consistently achieves the best overall performance on the CEC 2017 set with the three dimension sizes. In particular, STTPSO exhibits much better performance than the compared methods on complicated optimization problems, such as hybrid and composition problems. The experimental results also verified that STTPSO scales well to problems of different sizes. In-depth investigations of the proposed STTPSO were further conducted to validate the effectiveness of each of its components, and the results demonstrated that each component is of great benefit for STTPSO to achieve good performance.
However, from Table 3, Table 4 and Table 5, we can see that the results obtained by STTPSO on certain problems remain far from the true optima; therefore, its optimization performance still requires improvement. In this paper, the parameters in STTPSO were adjusted dynamically without considering the evolution state of particles or the differences between particles. In future work, we will therefore focus on devising adaptive parameter adjustment strategies that take both the differences between particles and their evolution states into account, so as to further promote the optimization ability of STTPSO.

Author Contributions

Q.Y.: Conceptualization, supervision, methodology, formal analysis, and writing—original draft preparation. Y.-W.B.: Implementation, formal analysis, and writing—original draft preparation. X.-D.G.: Methodology, and writing—review and editing. D.-D.X.: Writing—review and editing. Z.-Y.L.: Writing—review and editing, and funding acquisition. S.-W.J.: Writing—review and editing. J.Z.: Conceptualization and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 62006124 and U20B2061, in part by the Natural Science Foundation of Jiangsu Province under Project BK20200811, in part by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant 20KJB520006, in part by the National Research Foundation of Korea (NRF-2021H1D3A2A01082705), and in part by the Startup Foundation for Introducing Talent of NUIST.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Convergence behavior comparisons between STTPSO and the seven compared algorithms on the 16 50D CEC 2017 benchmark problems.
Table 2. Parameter settings of the proposed STTPSO and the compared algorithms.
Algorithm | PS (D = 30 / 50 / 100) | Other Parameter Settings
STTPSO | 300 / 300 / 300 | AS = PS/2; w = 0.9~0.4; c ~ N(1.49618, 0.1); pm = 0.01; stopmax = 30
DNSPSO | 50 / 50 / 60 | w = 0.4~0.9; k = 5; F = 0.5; CR = 0.9
XPSO | 100 / 150 / 150 | η = 0.2; Stagmax = 5; p = 0.5; σ = 0.1
TCSPSO | 50 / 50 / 50 | w = 0.9~0.4; c1 = c2 = 2
GLPSO | 40 / 40 / 50 | w = 0.7298; c = 1.49618; pm = 0.1; sg = 7
HCLPSO | 160 / 180 / 180 | w = 0.99~0.2; c1 = 2.5~0.5; c2 = 0.5~2.5; c = 3~1.5
DPLPSO | 40 / 40 / 40 | c1 = c2 = 2; L = 50
CLPSO | 40 / 40 / 40 | Pc = 0.05~0.5
Table 3. Comparison results between the proposed STTPSO and the seven state-of-the-art and popular PSO variants on the 30-D CEC 2017 benchmark functions. The highlighted p-values mean that the proposed STTPSO is significantly better than the associated compared algorithm on the corresponding problem.
fCategoryQualitySTTPSODNSPSOXPSOTCSPSOGLPSOHCLPSODPLPSOCLPSO
f1Unimodal FunctionsMedian1.19 × 1031.95 × 1052.26 × 1033.20 × 1032.30 × 1035.49 × 1032.64 × 1091.52 × 102
Mean2.10 × 1032.11 × 1054.05 × 1033.66 × 1033.06 × 1038.65 × 1032.87 × 1093.88 × 102
Std2.28 × 1031.30 × 1054.72 × 1034.08 × 1032.42 × 1037.34 × 1031.11 × 1097.31 × 102
p-value-1.83 × 10−6+2.17 × 10−1=1.95 × 10−1=5.85 × 10−2=8.55 × 10−5+1.83 × 10−6+7.84 × 10−5
f3Median1.52 × 1041.51 × 1056.26 × 10−29.94 × 1031.14 × 10−134.54 × 1013.91 × 1044.30 × 104
Mean1.53 × 1041.54 × 1057.91 × 10−11.15 × 1041.33 × 10−136.87 × 1013.87 × 1044.41 × 104
Std4.39 × 1033.37 × 1042.00 × 1003.68 × 1035.36 × 10−148.17 × 1018.36 × 1031.00 × 104
p-value-1.83 × 10−6+1.83 × 10−62.51 × 10−41.83 × 10−61.83 × 10−61.83 × 10−6+1.83 × 10−6+
f1–3w/t/l-2/0/00/1/10/1/10/1/11/0/12/0/01/0/1
f4Simple Multimodal FunctionsMedian8.47 × 1012.54 × 1011.24 × 1021.30 × 1021.47 × 1028.56 × 1017.62 × 1029.09 × 101
Mean8.48 × 1012.56 × 1011.20 × 1021.32 × 1021.48 × 1028.73 × 1017.99 × 1029.12 × 101
Std3.74 × 10−11.07 × 1002.71 × 1014.84 × 1014.27 × 1017.92 × 1001.86 × 1021.57 × 100
p-value-1.83 × 10−63.56 × 10−5+3.89 × 10−5+4.97 × 10−6+7.84 × 10−5+1.83 × 10−6+1.82 × 10−6+
f5Median4.98 × 1002.04 × 1024.18 × 1018.56 × 1016.77 × 1016.45 × 1012.00 × 1027.66 × 101
Mean4.71 × 1002.03 × 1024.38 × 1018.92 × 1016.68 × 1016.77 × 1011.93 × 1027.52 × 101
Std1.96 × 1001.25 × 1011.58 × 1012.54 × 1011.96 × 1011.67 × 1013.28 × 1016.69 × 100
p-value-1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.82 × 10−6+1.82 × 10−6+1.83 × 10−6+1.83 × 10−6+
f6Median1.14 × 10−131.87 × 10−13.82 × 10−38.01 × 10−16.34 × 10−31.62 × 10−42.97 × 1012.66 × 10−6
Mean1.12 × 10−71.91 × 10−11.50 × 10−21.04 × 1009.69 × 10−32.41 × 10−32.98 × 1012.86 × 10−6
Std2.91 × 10−75.92 × 10−23.82 × 10−21.15 × 1008.77 × 10−35.85 × 10−35.82 × 1001.90 × 10−6
p-value-1.83 × 10−6+1.83 × 10−6+1.82 × 10−6+1.83 × 10−6+1.83 × 10−6+1.82 × 10−6+1.83 × 10−6+
f7Median3.44 × 1012.36 × 1027.94 × 1011.45 × 1029.75 × 1011.06 × 1022.90 × 1029.23 × 101
Mean3.46 × 1012.33 × 1028.17 × 1011.42 × 1029.86 × 1011.01 × 1022.88 × 1029.05 × 101
Std1.12 × 1001.69 × 1011.81 × 1012.87 × 1011.53 × 1011.86 × 1012.45 × 1017.88 × 100
p-value-1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f8Median3.98 × 1002.01 × 1023.83 × 1019.55 × 1015.97 × 1015.66 × 1011.94 × 1028.09 × 101
Mean4.15 × 1002.02 × 1023.98 × 1019.33 × 1016.08 × 1016.35 × 1011.90 × 1028.18 × 101
Std1.67 × 1001.03 × 1011.37 × 1012.22 × 1011.65 × 1012.08 × 1013.20 × 1019.86 × 100
p-value-1.82 × 10−6+1.82 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.82 × 10−6+1.83 × 10−6+
f9Median5.69 × 10−141.50 × 1001.45 × 1003.01 × 1025.98 × 1014.90 × 1011.27 × 1036.58 × 102
Mean5.69 × 10−142.09 × 1002.73 × 1003.85 × 1027.13 × 1018.90 × 1011.50 × 1036.76 × 102
Std5.69 × 10−141.39 × 1003.44 × 1003.36 × 1024.83 × 1011.49 × 1026.80 × 1022.80 × 102
p-value-1.83 × 10−6+1.82 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.82 × 10−6+1.83 × 10−6+
f4–9w/t/l-5/0/16/0/06/0/06/0/06/0/06/0/06/0/0
f10Hybrid FunctionsMedian1.81 × 1036.21 × 1032.80 × 1032.98 × 1033.26 × 1032.87 × 1036.39 × 1033.00 × 103
Mean2.82 × 1035.90 × 1032.62 × 1032.97 × 1033.23 × 1032.90 × 1036.33 × 1032.94 × 103
Std1.82 × 1031.01 × 1036.13 × 1024.22 × 1028.64 × 1025.17 × 1024.47 × 1022.77 × 102
p-value-4.50 × 10−6+5.37 × 10−1=7.89 × 10−1=2.41 × 10−1=8.69 × 10−1=1.83 × 10−6+8.53 × 10−1=
f11Median1.79 × 1019.12 × 1018.01 × 1011.16 × 1027.24 × 1011.09 × 1024.10 × 1021.21 × 102
Mean2.79 × 1019.19 × 1018.63 × 1011.18 × 1027.82 × 1011.08 × 1024.23 × 1021.16 × 102
Std2.33 × 1018.54 × 1004.60 × 1014.27 × 1013.83 × 1014.49 × 1011.19 × 1021.72 × 101
p-value-1.83 × 10−6+9.77 × 10−6+4.50 × 10−6+1.68 × 10−5+3.03 × 10−6+1.83 × 10−6+2.02 × 10−6+
f12Median5.38 × 1045.52 × 1072.64 × 1041.86 × 1051.13 × 1062.39 × 1051.55 × 1081.74 × 106
Mean6.09 × 1046.25 × 1071.93 × 1055.12 × 1053.35 × 1062.55 × 1051.77 × 1082.01 × 106
Std3.97 × 1042.90 × 1075.57 × 1056.68 × 1054.39 × 1061.70 × 1059.46 × 1071.11 × 106
p-value-1.83 × 10−6+2.17 × 10−1=1.36 × 10−5+1.72 × 10−5+8.88 × 10−6+1.83 × 10−6+1.83 × 10−6+
f13Median5.18 × 1031.32 × 1067.54 × 1038.24 × 1037.23 × 1036.10 × 1046.71 × 1063.39 × 103
Mean1.10 × 1041.52 × 1069.83 × 1033.28 × 1051.19 × 1043.80 × 1042.47 × 1073.40 × 103
Std1.11 × 1047.16 × 1051.02 × 1041.14 × 1061.45 × 1042.64 × 1047.42 × 1071.42 × 103
p-value-1.83 × 10−6+9.34 × 10−1=4.59 × 10−1=1.00 × 100=2.26 × 10−5+1.83 × 10−6+1.14 × 10−2
f14Median3.89 × 1031.90 × 1023.75 × 1033.49 × 1041.66 × 1031.43 × 1041.20 × 1054.42 × 104
Mean6.63 × 1031.93 × 1025.36 × 1035.20 × 1043.38 × 1041.55 × 1041.66 × 1054.93 × 104
Std6.41 × 1032.22 × 1014.38 × 1037.90 × 1047.81 × 1049.99 × 1032.35 × 1053.41 × 104
p-value-1.83 × 10−62.49 × 10−1=5.93 × 10−4+9.51 × 10−1=2.51 × 10−4+4.08 × 10−6+4.50 × 10−6+
f15Median3.91 × 1034.24 × 1041.61 × 1031.08 × 1045.46 × 1038.96 × 1031.28 × 1044.08 × 102
Mean7.84 × 1034.70 × 1043.59 × 1031.33 × 1048.80 × 1031.35 × 1042.61 × 1044.59 × 102
Std8.32 × 1032.04 × 1044.74 × 1031.04 × 1048.48 × 1031.22 × 1043.16 × 1042.45 × 102
p-value-1.83 × 10−6+8.04 × 10−2=3.41 × 10−2+7.89 × 10−1=4.38 × 10−2+2.33 × 10−3+3.34 × 10−6
f16Median2.22 × 1011.88 × 1035.70 × 1028.65 × 1028.49 × 1027.46 × 1021.57 × 1036.61 × 102
Mean5.93 × 1011.87 × 1035.33 × 1028.54 × 1028.21 × 1027.11 × 1021.52 × 1036.24 × 102
Std6.79 × 1011.61 × 1021.96 × 1022.56 × 1022.22 × 1022.08 × 1022.55 × 1021.64 × 102
p-value-1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f17Median4.48 × 1018.55 × 1021.64 × 1023.18 × 1022.02 × 1023.12 × 1024.36 × 1021.96 × 102
Mean4.69 × 1018.68 × 1021.45 × 1022.96 × 1022.28 × 1023.22 × 1024.31 × 1021.88 × 102
Std1.01 × 1011.21 × 1028.08 × 1011.44 × 1021.38 × 1021.54 × 1021.50 × 1026.59 × 101
p-value-1.83 × 10−6+4.97 × 10−6+1.83 × 10−6+3.34 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f18Median1.87 × 1051.88 × 1059.24 × 1041.41 × 1051.68 × 1041.90 × 1058.67 × 1051.87 × 105
Mean2.64 × 1052.16 × 1051.46 × 1052.78 × 1051.23 × 1052.05 × 1051.03 × 1062.46 × 105
Std2.51 × 1058.08 × 1041.27 × 1052.93 × 1054.87 × 1051.47 × 1058.56 × 1051.55 × 105
p-value-1.00 × 100=5.58 × 10−2=9.18 × 10−1=1.20 × 10−49.34 × 10−1=6.60 × 10−5+9.18 × 10−1=
f19Median5.77 × 1031.95 × 1033.65 × 1037.66 × 1032.99 × 1031.31 × 1041.52 × 1041.05 × 102
Mean1.10 × 1042.25 × 1034.56 × 1031.49 × 1047.89 × 1031.63 × 1043.47 × 1041.35 × 102
Std1.32 × 1041.04 × 1034.83 × 1031.58 × 1041.05 × 1041.76 × 1047.74 × 1048.36 × 101
p-value-2.86 × 10−33.78 × 10−23.24 × 10−1=3.34 × 10−1=2.94 × 10−1=1.80 × 10−2+1.83 × 10−6
f10–19w/t/l-7/1/23/6/16/4/04/5/17/3/010/0/05/2/3
fCategoryQualitySTTPSODNSPSOXPSOTCSPSOGLPSOHCLPSODPLPSOCLPSO
f20Composition FunctionsMedian3.72 × 1013.50 × 1021.74 × 1023.84 × 1021.96 × 1022.13 × 1023.66 × 1021.94 × 102
Mean4.65 × 1013.87 × 1021.84 × 1023.70 × 1022.16 × 1022.09 × 1024.00 × 1021.89 × 102
Std3.35 × 1011.20 × 1026.75 × 1011.39 × 1021.01 × 1021.04 × 1021.28 × 1026.57 × 101
p-value-1.83 × 10−6+2.48 × 10−6+1.83 × 10−6+3.69 × 10−6+5.48 × 10−6+1.83 × 10−6+3.69 × 10−6+
f21Median2.12 × 1024.04 × 1022.35 × 1022.81 × 1022.66 × 1022.75 × 1024.02 × 1022.88 × 102
Mean2.13 × 1024.04 × 1022.38 × 1022.84 × 1022.67 × 1022.76 × 1024.00 × 1022.83 × 102
Std3.76 × 1009.54 × 1001.05 × 1012.25 × 1012.07 × 1011.25 × 1012.25 × 1012.53 × 101
p-value-1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.82 × 10−6+1.82 × 10−6+1.82 × 10−6+2.47 × 10−6+
f22Median1.00 × 1026.38 × 1031.00 × 1021.05 × 1021.00 × 1021.02 × 1025.79 × 1022.06 × 102
Mean1.00 × 1026.29 × 1034.30 × 1021.70 × 1031.02 × 1028.37 × 1025.92 × 1028.25 × 102
Std0.00 × 1007.48 × 1029.91 × 1021.75 × 1033.08 × 1001.49 × 1031.44 × 1021.18 × 103
p-value-1.82 × 10−6+3.82 × 10−3+1.43 × 10−4+1.63 × 10−3+4.67 × 10−4+1.83 × 10−6+1.83 × 10−6+
f23Median3.85 × 1025.83 × 1023.99 × 1024.44 × 1024.26 × 1024.53 × 1026.77 × 1024.46 × 102
Mean3.86 × 1025.87 × 1023.98 × 1024.47 × 1024.33 × 1024.51 × 1026.84 × 1024.45 × 102
Std7.70 × 1003.62 × 1011.15 × 1012.85 × 1013.02 × 1012.02 × 1014.57 × 1011.09 × 101
p-value-1.83 × 10−6+6.89 × 10−4+1.82 × 10−6+2.24 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f24Median4.60 × 1026.68 × 1024.70 × 1025.37 × 1024.88 × 1025.35 × 1027.32 × 1025.60 × 102
Mean4.61 × 1026.82 × 1024.73 × 1025.38 × 1024.99 × 1025.39 × 1027.37 × 1025.60 × 102
Std8.66 × 1004.48 × 1012.40 × 1015.08 × 1013.90 × 1012.34 × 1013.79 × 1011.66 × 101
p-value-1.83 × 10−6+1.28 × 10−2+2.24 × 10−6+2.06 × 10−5+1.83 × 10−6+1.82 × 10−6+1.83 × 10−6+
f25Median3.87 × 1023.79 × 1023.91 × 1024.14 × 1024.09 × 1023.89 × 1026.00 × 1023.89 × 102
Mean3.87 × 1023.78 × 1023.92 × 1024.12 × 1024.04 × 1023.89 × 1026.18 × 1023.89 × 102
Std2.06 × 10−11.04 × 1004.86 × 1001.56 × 1011.24 × 1018.23 × 1008.24 × 1015.62 × 10−1
p-value-1.77 × 10−66.65 × 10−6+1.83 × 10−6+1.82 × 10−6+1.74 × 10−2+1.83 × 10−6+1.78 × 10−6+
f26Median1.47 × 1033.28 × 1033.00 × 1022.32 × 1031.94 × 1032.04 × 1031.61 × 1031.85 × 103
Mean1.49 × 1033.45 × 1036.97 × 1022.23 × 1031.92 × 1031.82 × 1031.95 × 1031.58 × 103
Std1.09 × 1024.89 × 1026.06 × 1026.97 × 1024.77 × 1026.36 × 1021.02 × 1034.70 × 102
p-value-1.83 × 10−6+3.56 × 10−51.97 × 10−4+1.67 × 10−4+1.52 × 10−2+1.92 × 10−1=1.88 × 10−1=
f27Median5.13 × 1025.00 × 1025.36 × 1025.61 × 1025.48 × 1025.14 × 1028.09 × 1025.11 × 102
Mean5.18 × 1025.00 × 1025.35 × 1025.61 × 1025.50 × 1025.16 × 1028.19 × 1025.10 × 102
Std1.40 × 1010.00 × 1001.10 × 1011.95 × 1011.28 × 1011.59 × 1015.73 × 1014.56 × 100
p-value-2.24 × 10−69.31 × 10−5+2.87 × 10−6+2.48 × 10−6+8.05 × 10−1=1.82 × 10−6+1.70 × 10−2
f28Median4.08 × 1025.00 × 1024.03 × 1024.40 × 1024.72 × 1024.55 × 1028.24 × 1024.74 × 102
Mean3.79 × 1025.00 × 1023.86 × 1024.51 × 1024.51 × 1024.48 × 1028.80 × 1024.83 × 102
Std5.79 × 1010.00 × 1006.85 × 1015.21 × 1017.00 × 1013.48 × 1011.61 × 1022.88 × 101
p-value-1.81 × 10−6+7.76 × 10−1=3.73 × 10−4+1.72 × 10−3+1.54 × 10−4+1.83 × 10−6+2.02 × 10−6+
f29Median4.84 × 1021.66 × 1035.66 × 1028.62 × 1027.74 × 1026.92 × 1021.34 × 1036.58 × 102
Mean5.11 × 1021.56 × 1035.88 × 1029.05 × 1028.17 × 1026.98 × 1021.34 × 1036.46 × 102
Std7.29 × 1012.89 × 1029.03 × 1011.86 × 1022.37 × 1021.56 × 1022.29 × 1027.20 × 101
p-value-1.83 × 10−6+2.95 × 10−4+1.83 × 10−6+1.72 × 10−5+1.30 × 10−5+1.83 × 10−6+7.33 × 10−6+
f30Median4.19 × 1034.00 × 1048.04 × 1031.20 × 1049.45 × 1037.39 × 1031.96 × 1061.27 × 104
Mean5.03 × 1036.20 × 1049.78 × 1031.80 × 1042.08 × 1048.59 × 1032.55 × 1061.37 × 104
Std2.02 × 1035.12 × 1046.72 × 1031.77 × 1042.96 × 1044.52 × 1031.97 × 1063.99 × 103
p-value-1.83 × 10−6+4.36 × 10−4+4.08 × 10−6+1.88 × 10−5+1.54 × 10−3+1.83 × 10−6+2.02 × 10−6+
f20–30w/t/l-9/0/29/1/111/0/011/0/010/1/010/1/09/1/1
w/t/l-23/1/518/8/323/5/121/6/224/4/128/1/021/3/5
rank1.866.002.555.484.004.417.454.24
Table 4. Comparison results between the proposed STTPSO and the seven state-of-the-art and popular PSO variants on the 50-D CEC 2017 benchmark functions. The highlighted p-values mean that the proposed STTPSO is significantly better than the associated compared algorithm on the corresponding problem.
fCategoryQualitySTTPSODNSPSOXPSOTCSPSOGLPSOHCLPSODPLPSOCLPSO
f1Unimodal FunctionsMedian2.89 × 1035.13 × 1038.51 × 1025.47 × 1033.11 × 1039.83 × 1031.92 × 10102.06 × 103
Mean4.33 × 1038.53 × 1033.66 × 1032.84 × 1062.76 × 1067.15 × 1071.84 × 10102.59 × 103
Std4.38 × 1031.13 × 1045.11 × 1031.52 × 1071.37 × 1072.72 × 1084.36 × 1091.88 × 103
p-value-2.02 × 10−1=4.97 × 10−1=2.02 × 10−1=7.69 × 10−2=2.51 × 10−4+1.83 × 10−6+2.67 × 10−1=
f3Median5.80 × 1043.84 × 1054.35 × 1035.55 × 1044.55E−134.71 × 1031.23 × 1051.31 × 105
Mean5.85 × 1043.82 × 1054.72 × 1035.83 × 1043.92E−125.11 × 1031.22 × 1051.31 × 105
Std8.60 × 1036.15 × 1041.64 × 1039.39 × 1031.49E−112.41 × 1031.62 × 1042.22 × 104
p-value-1.83 × 10−6+1.83 × 10−65.51 × 10−1=1.83 × 10−61.83 × 10−61.83 × 10−6+1.83 × 10−6+
f1–3w/t/l-1/1/00/1/10/2/00/1/11/0/12/0/01/1/0
f4Simple Multimodal FunctionsMedian1.75 × 1024.57 × 1012.46 × 1022.88 × 1023.22 × 1021.61 × 1023.81 × 1031.90 × 102
Mean1.69 × 1025.24 × 1012.33 × 1022.93 × 1023.29 × 1021.48 × 1024.02 × 1031.87 × 102
Std3.35 × 1012.36 × 1015.15 × 1019.09 × 1018.97 × 1015.23 × 1011.06 × 1032.00 × 101
p-value-2.24 × 10−65.08 × 10−5+6.65 × 10−6+2.02 × 10−6+9.99 × 10−2=1.83 × 10−6+1.90 × 10−2+
f5Median8.96 × 1004.11 × 1028.21 × 1011.87 × 1021.47 × 1021.65 × 1024.58 × 1022.02 × 102
Mean9.09 × 1004.12 × 1028.33 × 1011.91 × 1021.42 × 1021.66 × 1024.46 × 1021.98 × 102
Std2.70 × 1001.88 × 1011.50 × 1013.82 × 1013.83 × 1013.23 × 1014.64 × 1011.55 × 101
p-value-1.82 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.82 × 10−6+1.82 × 10−6+1.83 × 10−6+
f6Median1.22 × 10−61.02 × 10−15.59 × 10−23.01 × 1001.19 × 10−21.85 × 10−35.28 × 1011.23 × 10−8
Mean4.48 × 10−61.16 × 10−11.53 × 10−13.93 × 1002.00 × 10−22.65 × 10−35.18 × 1012.45 × 10−3
Std8.10 × 10−64.66 × 10−22.59 × 10−13.68 × 1002.08 × 10−22.37 × 10−34.60 × 1001.32 × 10−2
p-value-1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+3.56 × 10−5+
f7Median6.27 × 1014.70 × 1021.51 × 1023.18 × 1022.27 × 1021.94 × 1027.71 × 1022.11 × 102
Mean6.33 × 1014.70 × 1021.53 × 1023.35 × 1022.36 × 1022.02 × 1027.70 × 1022.10 × 102
Std3.22 × 1001.85 × 1012.57 × 1016.19 × 1013.89 × 1013.06 × 1016.79 × 1011.44 × 101
p-value-1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f8Median8.96 × 1003.99 × 1028.81 × 1011.97 × 1021.36 × 1021.56 × 1024.60 × 1021.96 × 102
Mean9.35 × 1004.00 × 1029.29 × 1012.09 × 1021.41 × 1021.56 × 1024.45 × 1021.97 × 102
Std3.22 × 1001.86 × 1012.34 × 1016.12 × 1013.13 × 1012.68 × 1014.48 × 1011.61 × 101
p-value-1.82 × 10−6+1.83 × 10−6+1.82 × 10−6+1.82 × 10−6+1.83 × 10−6+1.82 × 10−6+1.82 × 10−6+
f9Median4.99 × 10−11.72 × 1011.55 × 1012.85 × 1034.51 × 1021.25 × 1031.08 × 1043.95 × 103
Mean8.56 × 10−13.56 × 1014.97 × 1013.34 × 1035.98 × 1021.26 × 1031.10 × 1044.22 × 103
Std1.05 × 1005.56 × 1018.96 × 1011.87 × 1035.23 × 1025.35 × 1021.90 × 1031.20 × 103
p-value-3.34 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f4−9w/t/l-5/0/16/0/06/0/06/0/05/1/06/0/06/0/0
f10Hybrid FunctionsMedian3.60 × 1031.16 × 1045.17 × 1035.57 × 1035.59 × 1035.58 × 1031.23 × 1046.66 × 103
Mean3.49 × 1031.15 × 1045.00 × 1035.56 × 1036.27 × 1035.57 × 1031.23 × 1046.59 × 103
Std5.89 × 1021.43 × 1038.85 × 1026.71 × 1021.85 × 1036.10 × 1025.33 × 1024.43 × 102
p-value-1.83 × 10−6+6.04 × 10−6+1.83 × 10−6+2.02 × 10−6+1.82 × 10−6+1.82 × 10−6+1.82 × 10−6+
f11Median6.11 × 1012.04 × 1021.58 × 1022.15 × 1022.90 × 1022.08 × 1022.34 × 1031.93 × 102
Mean6.12 × 1012.02 × 1021.56 × 1022.36 × 1023.81 × 1022.07 × 1022.38 × 1031.92 × 102
Std1.18 × 1012.05 × 1013.37 × 1019.86 × 1013.27 × 1026.30 × 1015.66 × 1024.10 × 101
p-value-1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f12Median8.59 × 1052.78 × 1075.18 × 1051.83 × 1061.19 × 1062.94 × 1063.95 × 1091.86 × 107
Mean8.89 × 1053.23 × 1079.85 × 1058.82 × 1065.72 × 1063.61 × 1064.04 × 1091.96 × 107
Std5.26 × 1051.68 × 1071.13 × 1062.40 × 1071.24 × 1072.48 × 1061.72 × 1099.49 × 106
p-value-1.83 × 10−6+7.11 × 10−1=2.97 × 10−5+1.52 × 10−2+2.24 × 10−6+1.83 × 10−6+1.83 × 10−6+
f13Median1.88 × 1042.62 × 1062.30 × 1033.81 × 1033.32 × 1032.15 × 1043.11 × 1081.11 × 104
Mean1.63 × 1042.73 × 1064.40 × 1037.76 × 1035.58 × 1032.29 × 1045.22 × 1081.13 × 104
Std1.12 × 1041.72 × 1064.74 × 1039.15 × 1036.22 × 1031.32 × 1047.74 × 1083.04 × 103
p-value-1.83 × 10−6+1.30 × 10−41.44 × 10−21.54 × 10−49.99 × 10−2=1.83 × 10−6+4.60 × 10−2
f14Median5.52 × 1047.90 × 1033.86 × 1044.09 × 1044.01 × 1048.86 × 1041.56 × 1064.64 × 105
Mean7.09 × 1048.09 × 1033.97 × 1042.30 × 1051.43 × 1051.22 × 1052.21 × 1065.29 × 105
Std5.64 × 1043.51 × 1032.64 × 1045.29 × 1052.20 × 1051.15 × 1052.01 × 1062.65 × 105
p-value-8.07 × 10−61.36 × 10−23.65 × 10−1=7.42 × 10−1=4.38 × 10−2+2.02 × 10−6+1.83 × 10−6+
f15Median1.18 × 1044.55 × 1052.69 × 1037.12 × 1033.89 × 1031.81 × 1042.93 × 1068.15 × 102
Mean1.32 × 1044.80 × 1054.24 × 1031.46 × 1045.84 × 1031.92 × 1041.79 × 1079.31 × 102
Std9.68 × 1031.58 × 1054.08 × 1032.54 × 1046.28 × 1031.01 × 1042.92 × 1074.49 × 102
p-value-1.83 × 10−6+4.71 × 10−42.76 × 10−1=1.07 × 10−36.41 × 10−2=1.83 × 10−6+6.65 × 10−6
f16Median4.55 × 1023.79 × 1039.27 × 1021.63 × 1031.60 × 1031.52 × 1033.13 × 1031.45 × 103
Mean4.23 × 1023.78 × 1039.56 × 1021.70 × 1031.57 × 1031.53 × 1033.06 × 1031.41 × 103
Std1.90 × 1022.33 × 1023.27 × 1024.35 × 1024.06 × 1023.57 × 1025.41 × 1022.00 × 102
p-value-1.83 × 10−6+6.04 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f17Median2.45 × 1022.41 × 1038.83 × 1021.15 × 1031.03 × 1031.30 × 1031.85 × 1031.06 × 103
Mean3.11 × 1022.40 × 1038.56 × 1021.17 × 1031.05 × 1031.20 × 1031.83 × 1031.04 × 103
Std1.43 × 1022.00 × 1022.56 × 1022.97 × 1022.23 × 1023.24 × 1022.86 × 1021.94 × 102
p-value-1.82 × 10−6+3.69 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+2.02 × 10−6+
f18Median3.71 × 1053.27 × 1061.71 × 1053.12 × 1061.09 × 1063.18 × 1057.61 × 1061.14 × 106
Mean4.28 × 1053.47 × 1063.94 × 1055.58 × 1063.42 × 1064.33 × 1051.04 × 1071.31 × 106
Std2.34 × 1051.61 × 1064.88 × 1055.71 × 1065.02 × 1063.40 × 1059.88 × 1067.63 × 105
p-value-1.83 × 10−6+2.58 × 10−1=4.50 × 10−6+1.90 × 10−3+6.81 × 10−1=1.83 × 10−6+2.24 × 10−6+
f19Median1.60 × 1032.67 × 1049.19 × 1031.30 × 1041.42 × 1041.03 × 1041.44 × 1063.36 × 102
Mean4.33 × 1033.18 × 1041.12 × 1041.51 × 1041.74 × 1041.48 × 1043.65 × 1065.26 × 102
Std6.31 × 1031.73 × 1048.07 × 1031.38 × 1041.07 × 1041.36 × 1048.78 × 1065.03 × 102
p-value-1.83 × 10−6+1.42 × 10−4+1.54 × 10−3+3.26 × 10−5+1.90 × 10−3+1.83 × 10−6+1.10 × 10−4
f10–19w/t/l-9/0/15/2/37/2/17/1/27/3/010/0/07/0/3
fCategoryQualitySTTPSODNSPSOXPSOTCSPSOGLPSOHCLPSODPLPSOCLPSO
f20Composition FunctionsMedian9.66 × 1011.66 × 1034.66 × 1029.58 × 1027.45 × 1029.48 × 1021.44 × 1035.92 × 102
Mean1.76 × 1021.59 × 1034.70 × 1029.01 × 1027.43 × 1028.97 × 1021.38 × 1036.14 × 102
Std1.38 × 1023.96 × 1022.19 × 1022.96 × 1022.66 × 1022.33 × 1022.58 × 1021.30 × 102
p-value-1.83 × 10−6+2.97 × 10−5+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f21Median2.22 × 1026.01 × 1022.79 × 1023.77 × 1023.37 × 1023.86 × 1026.59 × 1024.21 × 102
Mean2.22 × 1026.00 × 1022.80 × 1024.02 × 1023.46 × 1023.80 × 1026.53 × 1024.21 × 102
Std3.65 × 1002.02 × 1011.96 × 1016.26 × 1014.16 × 1012.91 × 1013.35 × 1011.52 × 101
p-value-1.82 × 10−6+1.83 × 10−6+1.82 × 10−6+1.82 × 10−6+1.82 × 10−6+1.83 × 10−6+1.82 × 10−6+
f22Median3.04 × 1031.28 × 1045.63 × 1036.39 × 1036.49 × 1036.24 × 1031.26 × 1047.13 × 103
Mean3.10 × 1031.27 × 1044.80 × 1036.14 × 1035.88 × 1035.76 × 1031.13 × 1047.14 × 103
Std1.34 × 1039.78 × 1022.20 × 1031.52 × 1033.61 × 1031.63 × 1033.48 × 1032.75 × 102
p-value-1.83 × 10−6+1.01 × 10−2+1.18 × 10−5+1.33 × 10−3+8.55 × 10−5+2.48 × 10−6+2.02 × 10−6+
f23Median5.06 × 1028.64 × 1025.27 × 1026.43 × 1026.65 × 1026.66 × 1021.22 × 1036.66 × 102
Mean5.09 × 1028.88 × 1025.24 × 1026.50 × 1026.81 × 1026.65 × 1021.22 × 1036.66 × 102
Std1.71 × 1016.87 × 1013.11 × 1015.85 × 1018.50 × 1013.99 × 1016.75 × 1011.90 × 101
p-value-1.83 × 10−6+4.38 × 10−2+1.82 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f24Median5.81 × 1021.06 × 1035.90 × 1027.06 × 1027.38 × 1027.27 × 1021.31 × 1038.04 × 102
Mean5.83 × 1021.08 × 1036.06 × 1027.09 × 1027.49 × 1027.23 × 1021.33 × 1038.05 × 102
Std1.63 × 1011.26 × 1025.78 × 1017.38 × 1019.38 × 1013.46 × 1019.27 × 1012.65 × 101
p-value-1.82 × 10−6+3.41 × 10−2+1.83 × 10−6+2.02 × 10−6+1.83 × 10−6+1.82 × 10−6+1.82 × 10−6+
f25Median4.80 × 1024.31 × 1025.98 × 1026.75 × 1026.61 × 1024.80 × 1022.60 × 1035.31 × 102
Mean5.06 × 1024.41 × 1025.97 × 1026.76 × 1026.66 × 1025.01 × 1022.77 × 1035.30 × 102
Std3.55 × 1012.14 × 1012.43 × 1016.58 × 1017.00 × 1013.54 × 1016.68 × 1026.29 × 100
p-value-3.14 × 10−61.83 × 10−6+1.82 × 10−6+2.02 × 10−6+2.33 × 10−1=1.82 × 10−6+5.91 × 10−4+
f26Median2.23 × 1036.70 × 1039.43 × 1023.98 × 1032.96 × 1033.70 × 1037.22 × 1033.60 × 103
Mean2.27 × 1037.10 × 1031.15 × 1034.07 × 1033.04 × 1033.64 × 1036.89 × 1033.52 × 103
Std1.23 × 1021.63 × 1038.75 × 1021.02 × 1036.50 × 1023.13 × 1022.09 × 1033.37 × 102
p-value-1.83 × 10−6+1.42 × 10−51.18 × 10−5+6.87 × 10−6+1.83 × 10−6+1.83 × 10−6+2.02 × 10−6+
f27Median7.00 × 1025.00 × 1027.19 × 1029.01 × 1028.35 × 1026.54 × 1021.98 × 1036.35 × 102
Mean6.94 × 1025.00 × 1027.35 × 1029.06 × 1028.39 × 1026.89 × 1021.99 × 1036.33 × 102
Std5.44 × 1010.00 × 1008.89 × 1019.24 × 1018.57 × 1011.06 × 1021.76 × 1022.76 × 101
p-value-1.82 × 10−67.03 × 10−2=1.83 × 10−6+7.69 × 10−6+7.50 × 10−1=1.83 × 10−6+7.84 × 10−5
f28Median5.08 × 1025.00 × 1025.38 × 1026.68 × 1027.00 × 1024.92 × 1022.91 × 1031.71 × 103
Mean9.06 × 1025.00 × 1025.47 × 1026.77 × 1026.97 × 1024.92 × 1023.04 × 1031.79 × 103
Std1.14 × 1030.00 × 1003.99 × 1016.99 × 1015.82 × 1013.40 × 1015.38 × 1024.45 × 102
p-value-6.21 × 10−2=7.35 × 10−2=1.52 × 10−21.52 × 10−29.56 × 10−2=3.69 × 10−6+4.71 × 10−4+
f29Median5.58 × 1023.32 × 1038.90 × 1021.43 × 1031.03 × 1031.19 × 1033.44 × 1031.01 × 103
Mean5.90 × 1023.27 × 1038.66 × 1021.43 × 1031.08 × 1031.16 × 1033.52 × 1031.02 × 103
Std1.75 × 1022.69 × 1021.87 × 1022.49 × 1022.75 × 1023.49 × 1025.07 × 1021.49 × 102
p-value-1.83 × 10−6+2.07 × 10−5+1.83 × 10−6+7.33 × 10−6+2.02 × 10−6+1.83 × 10−6+2.48 × 10−6+
f30Median8.17 × 1051.79 × 1061.93 × 1062.00 × 1062.00 × 1061.16 × 1061.38 × 1087.30 × 105
Mean8.55 × 1052.22 × 1061.92 × 1062.36 × 1062.40 × 1061.23 × 1061.38 × 1087.40 × 105
Std1.54 × 1051.12 × 1063.32 × 1058.05 × 1051.44 × 1064.15 × 1055.12 × 1077.50 × 104
p-value-4.50 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+4.36 × 10−4+1.83 × 10−6+1.15 × 10−3
f20–30w/t/l-8/1/28/2/110/0/110/0/18/3/011/0/09/0/2
w/t/l-23/2/419/5/523/4/223/2/421/7/129/0/023/1/5
rank2.175.722.455.144.314.147.794.28
Table 5. Comparison results between the proposed STTPSO and the seven state-of-the-art and popular PSO variants on the 100-D CEC 2017 benchmark functions. The highlighted p-values mean that the proposed STTPSO is significantly better than the associated compared algorithm on the corresponding problem.
fCategoryQualitySTTPSODNSPSOXPSOTCSPSOGLPSOHCLPSODPLPSOCLPSO
f1Unimodal FunctionsMedian2.53 × 1033.31 × 1033.50 × 1032.62 × 1036.15 × 1031.37 × 1041.10 × 10111.63 × 109
Mean4.29 × 1037.08 × 1038.27 × 1036.22 × 1031.21 × 1041.93 × 1071.10 × 10111.78 × 109
Std4.43 × 1031.04 × 1049.67 × 1037.24 × 1031.60 × 1041.04 × 1081.17 × 10101.48 × 109
p-value-7.89 × 10−1=2.41 × 10−1=4.34 × 10−1=5.15 × 10−3+7.20 × 10−5+1.83 × 10−6+4.08 × 10−6+
f3Median2.22 × 1051.07 × 1067.04 × 1042.55 × 1057.98 × 1018.06 × 1043.82 × 1055.10 × 105
Mean2.23 × 1051.04 × 1066.94 × 1042.53 × 1053.93 × 1038.23 × 1043.73 × 1055.12 × 105
Std2.17 × 1041.25 × 1059.80 × 1033.05 × 1049.85 × 1032.40 × 1044.19 × 1043.83 × 104
p-value-1.83 × 10−6+1.83 × 10−61.74 × 10−4+1.83 × 10−61.82 × 10−61.83 × 10−6+1.82 × 10−6+
f1–3w/t/l-1/1/00/1/11/1/01/0/11/0/12/0/02/0/0
f4Simple Multimodal FunctionsMedian2.16 × 1021.99 × 1024.80 × 1026.28 × 1028.16 × 1022.45 × 1022.20 × 1043.17 × 102
Mean2.18 × 1022.07 × 1024.77 × 1027.03 × 1028.54 × 1022.46 × 1022.25 × 1043.26 × 102
Std1.89 × 1015.44 × 1015.85 × 1011.82 × 1021.91 × 1022.61 × 1013.31 × 1034.30 × 101
p-value-2.41 × 10−1=1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+4.36 × 10−4+1.83 × 10−6+1.83 × 10−6+
f5Median3.08 × 1011.02 × 1032.29 × 1025.36 × 1023.76 × 1024.78 × 1021.20 × 1037.46 × 102
Mean2.96 × 1011.03 × 1032.28 × 1025.55 × 1023.85 × 1024.96 × 1021.20 × 1037.46 × 102
Std4.78 × 1004.29 × 1014.81 × 1011.09 × 1025.70 × 1017.09 × 1014.04 × 1014.13 × 101
p-value-1.82 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.82 × 10−6+1.83 × 10−6+
f6Median1.13 × 10−32.42 × 10−13.95 × 1001.85 × 1014.94 × 10−29.66 × 10−37.63 × 1011.07 × 10−2
Mean1.90 × 10−32.82 × 10−14.43 × 1001.78 × 1015.62 × 10−21.54 × 10−27.59 × 1012.65 × 10−2
Std2.09 × 10−31.54 × 10−13.28 × 1006.21 × 1002.65 × 10−21.99 × 10−24.73 × 1003.21 × 10−2
p-value-1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.07 × 10−5+1.83 × 10−6+5.09 × 10−4+
f7Median1.80 × 1021.14 × 1034.41 × 1021.22 × 1037.93 × 1026.74 × 1022.78 × 1037.07 × 102
Mean1.80 × 1021.13 × 1034.48 × 1021.23 × 1037.99 × 1026.80 × 1022.77 × 1037.02 × 102
Std1.07 × 1014.09 × 1018.25 × 1011.99 × 1021.16 × 1021.08 × 1022.21 × 1026.34 × 101
p-value-1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.82 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f8Median2.89 × 1011.02 × 1032.14 × 1025.29 × 1024.02 × 1025.48 × 1021.26 × 1037.41 × 102
Mean2.98 × 1011.02 × 1032.15 × 1025.54 × 1024.08 × 1025.46 × 1021.26 × 1037.48 × 102
Std5.21 × 1003.27 × 1014.03 × 1018.00 × 1016.80 × 1018.92 × 1014.77 × 1013.22 × 101
p-value-1.82 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f9Median2.53 × 1018.78 × 1024.97 × 1021.38 × 1048.82 × 1037.70 × 1035.07 × 1042.24 × 104
Mean2.81 × 1012.10 × 1035.37 × 1021.40 × 1048.58 × 1038.23 × 1035.10 × 1042.32 × 104
Std1.44 × 1012.82 × 1033.40 × 1024.05 × 1032.37 × 1032.91 × 1035.93 × 1034.46 × 103
p-value-1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f4–9w/t/l-5/1/06/0/06/0/06/0/06/0/06/0/06/0/0
f10Hybrid FunctionsMedian8.46 × 1033.03 × 1041.23 × 1041.37 × 1043.05 × 1041.35 × 1042.90 × 1042.17 × 104
Mean8.52 × 1033.00 × 1041.24 × 1041.34 × 1043.04 × 1041.32 × 1042.91 × 1042.18 × 104
Std8.64 × 1027.81 × 1021.41 × 1031.08 × 1033.27 × 1021.10 × 1038.24 × 1024.97 × 102
p-value-1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f11Median5.25 × 1022.87 × 1041.16 × 1032.74 × 1031.41 × 1048.15 × 1027.75 × 1041.34 × 103
Mean5.22 × 1022.90 × 1041.20 × 1033.42 × 1031.61 × 1047.93 × 1027.82 × 1041.34 × 103
Std1.17 × 1027.16 × 1032.46 × 1021.92 × 1037.38 × 1032.03 × 1029.38 × 1031.60 × 102
p-value-1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+3.89 × 10−5+1.83 × 10−6+1.83 × 10−6+
f12Median7.69 × 1051.49 × 1071.15 × 1075.19 × 1075.53 × 1071.55 × 1073.01 × 10107.81 × 107
Mean7.68 × 1051.63 × 1071.94 × 1078.56 × 1071.07 × 1082.18 × 1073.03 × 10108.97 × 107
Std2.66 × 1058.03 × 1062.04 × 1079.57 × 1071.58 × 1082.60 × 1076.25 × 1094.08 × 107
p-value-1.83 × 10−6+1.43 × 10−5+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f13Median1.40 × 1034.13 × 1033.05 × 1034.13 × 1033.44 × 1031.18 × 1042.58 × 1093.82 × 104
Mean4.47 × 1039.58 × 1034.62 × 1036.81 × 1031.87 × 1041.86 × 1042.78 × 1094.13 × 104
Std5.03 × 1031.16 × 1044.04 × 1035.71 × 1037.41 × 1041.27 × 1041.14 × 1091.59 × 104
p-value-8.78 × 10−2=4.72 × 10−1=1.50 × 10−1=3.55 × 10−1=2.26 × 10−5+1.83 × 10−6+1.83 × 10−6+
f14Median1.92 × 1052.02 × 1062.23 × 1059.21 × 1058.66 × 1054.17 × 1051.20 × 1074.85 × 106
Mean2.04 × 1052.19 × 1064.45 × 1051.48 × 1061.30 × 1068.45 × 1051.39 × 1075.01 × 106
Std7.61 × 1048.82 × 1056.26 × 1051.37 × 1061.34 × 1061.08 × 1067.91 × 1061.20 × 106
p-value-1.83 × 10−6+8.78 × 10−2=7.33 × 10−6+1.67 × 10−4+1.30 × 10−5+1.83 × 10−6+1.83 × 10−6+
f15Median1.10 × 1034.68 × 1041.43 × 1032.33 × 1032.57 × 1032.77 × 1043.46 × 1084.82 × 103
Mean3.87 × 1031.16 × 1052.18 × 1034.88 × 1033.92 × 1031.85 × 1043.92 × 1085.32 × 103
Std5.76 × 1031.61 × 1051.96 × 1035.47 × 1034.10 × 1031.10 × 1042.20 × 1082.70 × 103
p-value-1.83 × 10−6+5.65 × 10−1=3.76 × 10−1=6.51 × 10−1=7.20 × 10−5+1.83 × 10−6+2.77 × 10−2+
f16Median1.57 × 1038.87 × 1032.91 × 1033.71 × 1033.98 × 1034.01 × 1039.81 × 1034.02 × 103
Mean1.58 × 1038.88 × 1032.84 × 1033.82 × 1034.06 × 1034.16 × 1039.62 × 1034.07 × 103
Std4.82 × 1023.59 × 1024.77 × 1026.74 × 1028.46 × 1026.93 × 1025.78 × 1023.37 × 102
p-value-1.83 × 10−6+4.08 × 10−6+1.83 × 10−6+2.02 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f17Median1.16 × 1035.97 × 1032.52 × 1033.23 × 1032.77 × 1033.72 × 1036.77 × 1033.23 × 103
Mean1.22 × 1035.95 × 1032.43 × 1033.07 × 1032.80 × 1033.81 × 1037.04 × 1033.21 × 103
Std3.81 × 1023.03 × 1024.78 × 1025.17 × 1025.82 × 1026.74 × 1021.41 × 1032.81 × 102
p-value-1.83 × 10−6+2.48 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+1.83 × 10−6+
f18Median3.35 × 1053.02 × 1073.87 × 1052.76 × 1063.39 × 1051.17 × 1062.20 × 1077.77 × 106
Mean3.70 × 1053.11 × 1074.91 × 1053.38 × 1065.25 × 1051.63 × 1062.45 × 1077.50 × 106
Std1.72 × 1051.18 × 1073.58 × 1052.09 × 1064.73 × 1051.25 × 1061.25 × 1072.41 × 106
p-value-1.83 × 10−6+1.62 × 10−1=1.83 × 10−6+4.97 × 10−1=2.74 × 10−6+1.83 × 10−6+1.83 × 10−6+
f19Median5.57 × 1034.75 × 1033.13 × 1032.30 × 1034.20 × 1091.28 × 1046.56 × 1081.79 × 103
Mean7.46 × 1037.38 × 1034.46 × 1034.86 × 1034.12 × 1091.93 × 1046.13 × 1081.97 × 103
Std7.38 × 1036.43 × 1035.99 × 1036.41 × 1037.32 × 1081.52 × 1042.78 × 1087.13 × 102
f | Category | Quality | STTPSO | DNSPSO | XPSO | TCSPSO | GLPSO | HCLPSO | DPLPSO | CLPSO
f19 | | p-value | - | 9.67 × 10^-1 = | 1.33 × 10^-1 = | 8.04 × 10^-2 = | 1.83 × 10^-6 + | 3.06 × 10^-3 + | 1.83 × 10^-6 + | 2.18 × 10^-3
f10–19 | | w/t/l | - | 8/2/0 | 5/5/0 | 7/3/0 | 7/3/0 | 10/0/0 | 10/0/0 | 9/0/1
f20 | Composition Functions | Median | 7.56 × 10^2 | 5.57 × 10^3 | 2.10 × 10^3 | 2.88 × 10^3 | 5.26 × 10^3 | 2.86 × 10^3 | 4.69 × 10^3 | 2.29 × 10^3
f20 | | Mean | 7.88 × 10^2 | 5.20 × 10^3 | 2.16 × 10^3 | 2.83 × 10^3 | 5.23 × 10^3 | 2.76 × 10^3 | 4.69 × 10^3 | 2.34 × 10^3
f20 | | Std | 2.72 × 10^2 | 9.09 × 10^2 | 3.96 × 10^2 | 4.93 × 10^2 | 1.93 × 10^2 | 3.59 × 10^2 | 4.05 × 10^2 | 2.09 × 10^2
f20 | | p-value | - | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 +
f21 | | Median | 2.98 × 10^2 | 1.22 × 10^3 | 4.64 × 10^2 | 7.91 × 10^2 | 6.06 × 10^2 | 8.46 × 10^2 | 1.60 × 10^3 | 9.66 × 10^2
f21 | | Mean | 2.96 × 10^2 | 1.22 × 10^3 | 4.66 × 10^2 | 7.87 × 10^2 | 6.26 × 10^2 | 8.42 × 10^2 | 1.60 × 10^3 | 9.62 × 10^2
f21 | | Std | 9.91 × 10^0 | 3.32 × 10^1 | 5.19 × 10^1 | 8.43 × 10^1 | 7.42 × 10^1 | 6.64 × 10^1 | 8.73 × 10^1 | 3.02 × 10^1
f21 | | p-value | - | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.82 × 10^-6 + | 1.83 × 10^-6 + | 1.82 × 10^-6 + | 1.83 × 10^-6 +
f22 | | Median | 8.16 × 10^3 | 3.08 × 10^4 | 1.36 × 10^4 | 1.46 × 10^4 | 1.80 × 10^4 | 1.41 × 10^4 | 3.08 × 10^4 | 2.25 × 10^4
f22 | | Mean | 8.11 × 10^3 | 3.02 × 10^4 | 1.22 × 10^4 | 1.46 × 10^4 | 1.81 × 10^4 | 1.42 × 10^4 | 3.07 × 10^4 | 2.23 × 10^4
f22 | | Std | 1.17 × 10^3 | 1.39 × 10^3 | 4.91 × 10^3 | 1.35 × 10^3 | 3.93 × 10^3 | 1.27 × 10^3 | 1.24 × 10^3 | 6.52 × 10^2
f22 | | p-value | - | 1.83 × 10^-6 + | 7.05 × 10^-3 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 +
f23 | | Median | 7.10 × 10^2 | 1.84 × 10^3 | 8.13 × 10^2 | 1.04 × 10^3 | 1.11 × 10^3 | 8.83 × 10^2 | 2.82 × 10^3 | 9.01 × 10^2
f23 | | Mean | 7.18 × 10^2 | 1.89 × 10^3 | 8.09 × 10^2 | 1.07 × 10^3 | 1.12 × 10^3 | 8.89 × 10^2 | 2.85 × 10^3 | 9.01 × 10^2
f23 | | Std | 2.95 × 10^1 | 2.76 × 10^2 | 4.95 × 10^1 | 1.09 × 10^2 | 1.66 × 10^2 | 4.44 × 10^1 | 2.15 × 10^2 | 2.66 × 10^1
f23 | | p-value | - | 1.83 × 10^-6 + | 2.48 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 +
f24 | | Median | 1.14 × 10^3 | 3.00 × 10^3 | 1.20 × 10^3 | 1.52 × 10^3 | 1.68 × 10^3 | 1.50 × 10^3 | 4.68 × 10^3 | 1.50 × 10^3
f24 | | Mean | 1.14 × 10^3 | 3.14 × 10^3 | 1.25 × 10^3 | 1.55 × 10^3 | 1.64 × 10^3 | 1.51 × 10^3 | 4.69 × 10^3 | 1.49 × 10^3
f24 | | Std | 5.71 × 10^1 | 6.83 × 10^2 | 1.17 × 10^2 | 1.46 × 10^2 | 2.19 × 10^2 | 6.83 × 10^1 | 4.23 × 10^2 | 2.50 × 10^1
f24 | | p-value | - | 1.83 × 10^-6 + | 1.67 × 10^-4 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.82 × 10^-6 + | 1.83 × 10^-6 + | 1.82 × 10^-6 +
f25 | | Median | 8.21 × 10^2 | 7.62 × 10^2 | 1.09 × 10^3 | 1.29 × 10^3 | 1.38 × 10^3 | 7.63 × 10^2 | 1.10 × 10^4 | 9.02 × 10^2
f25 | | Mean | 7.97 × 10^2 | 7.67 × 10^2 | 1.10 × 10^3 | 1.35 × 10^3 | 1.37 × 10^3 | 7.80 × 10^2 | 1.10 × 10^4 | 9.08 × 10^2
f25 | | Std | 5.15 × 10^1 | 5.19 × 10^1 | 7.56 × 10^1 | 2.93 × 10^2 | 2.10 × 10^2 | 6.35 × 10^1 | 1.35 × 10^3 | 4.80 × 10^1
f25 | | p-value | - | 3.00 × 10^-2 − | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.68 × 10^-1 = | 1.82 × 10^-6 + | 1.83 × 10^-6 +
f26 | | Median | 6.53 × 10^3 | 2.88 × 10^4 | 5.34 × 10^3 | 1.06 × 10^4 | 8.54 × 10^3 | 1.13 × 10^4 | 2.89 × 10^4 | 1.09 × 10^4
f26 | | Mean | 6.55 × 10^3 | 2.92 × 10^4 | 3.90 × 10^3 | 1.14 × 10^4 | 8.61 × 10^3 | 1.12 × 10^4 | 2.85 × 10^4 | 1.10 × 10^4
f26 | | Std | 4.55 × 10^2 | 5.67 × 10^3 | 2.58 × 10^3 | 2.40 × 10^3 | 1.57 × 10^3 | 5.79 × 10^2 | 2.39 × 10^3 | 3.16 × 10^2
f26 | | p-value | - | 1.83 × 10^-6 + | 4.97 × 10^-6 − | 1.83 × 10^-6 + | 4.08 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 +
f27 | | Median | 7.39 × 10^2 | 5.00 × 10^2 | 8.54 × 10^2 | 1.12 × 10^3 | 1.02 × 10^3 | 8.05 × 10^2 | 4.00 × 10^3 | 7.58 × 10^2
f27 | | Mean | 7.53 × 10^2 | 5.00 × 10^2 | 8.75 × 10^2 | 1.12 × 10^3 | 1.01 × 10^3 | 8.17 × 10^2 | 4.05 × 10^3 | 7.59 × 10^2
f27 | | Std | 4.28 × 10^1 | 0.00 × 10^0 | 7.68 × 10^1 | 1.75 × 10^2 | 8.73 × 10^1 | 8.19 × 10^1 | 4.27 × 10^2 | 2.21 × 10^1
f27 | | p-value | - | 1.83 × 10^-6 − | 3.34 × 10^-6 + | 1.83 × 10^-6 + | 1.82 × 10^-6 + | 1.15 × 10^-3 + | 1.83 × 10^-6 + | 3.99 × 10^-1 =
f28 | | Median | 5.85 × 10^2 | 5.00 × 10^2 | 8.26 × 10^2 | 1.33 × 10^3 | 1.27 × 10^3 | 5.85 × 10^2 | 1.43 × 10^4 | 1.28 × 10^4
f28 | | Mean | 4.99 × 10^3 | 5.00 × 10^2 | 8.26 × 10^2 | 1.37 × 10^3 | 1.32 × 10^3 | 1.12 × 10^3 | 1.41 × 10^4 | 1.28 × 10^4
f28 | | Std | 5.82 × 10^3 | 0.00 × 10^0 | 4.81 × 10^1 | 3.31 × 10^2 | 1.61 × 10^2 | 2.21 × 10^3 | 1.54 × 10^3 | 5.96 × 10^1
f28 | | p-value | - | 1.82 × 10^-6 − | 3.88 × 10^-1 = | 3.88 × 10^-1 = | 3.88 × 10^-1 = | 2.33 × 10^-1 = | 3.34 × 10^-6 + | 1.13 × 10^-5 +
f29 | | Median | 1.76 × 10^3 | 6.76 × 10^3 | 3.08 × 10^3 | 3.91 × 10^3 | 3.53 × 10^3 | 3.95 × 10^3 | 1.03 × 10^4 | 3.30 × 10^3
f29 | | Mean | 1.82 × 10^3 | 6.79 × 10^3 | 3.10 × 10^3 | 3.91 × 10^3 | 3.71 × 10^3 | 3.94 × 10^3 | 1.05 × 10^4 | 3.30 × 10^3
f29 | | Std | 3.67 × 10^2 | 3.36 × 10^2 | 5.11 × 10^2 | 5.41 × 10^2 | 7.04 × 10^2 | 6.27 × 10^2 | 1.15 × 10^3 | 2.71 × 10^2
f29 | | p-value | - | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.82 × 10^-6 + | 1.83 × 10^-6 + | 1.82 × 10^-6 +
f30 | | Median | 4.41 × 10^3 | 7.97 × 10^2 | 2.59 × 10^4 | 1.04 × 10^5 | 2.24 × 10^5 | 1.13 × 10^4 | 2.38 × 10^9 | 5.76 × 10^4
f30 | | Mean | 4.74 × 10^3 | 8.61 × 10^2 | 3.28 × 10^4 | 1.46 × 10^5 | 6.40 × 10^5 | 1.50 × 10^4 | 2.49 × 10^9 | 7.32 × 10^4
f30 | | Std | 1.39 × 10^3 | 2.29 × 10^2 | 2.36 × 10^4 | 1.34 × 10^5 | 1.01 × 10^6 | 1.62 × 10^4 | 7.38 × 10^8 | 5.20 × 10^4
f30 | | p-value | - | 1.83 × 10^-6 − | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.83 × 10^-6 + | 1.07 × 10^-5 + | 1.83 × 10^-6 + | 1.83 × 10^-6 +
f20–30 | | w/t/l | - | 7/0/4 | 9/1/1 | 10/1/0 | 10/1/0 | 9/2/0 | 11/0/0 | 10/1/0
Whole set | | w/t/l | - | 21/4/4 | 20/7/2 | 24/5/0 | 24/4/1 | 26/2/1 | 29/0/0 | 27/1/1
Whole set | | Rank | 1.52 | 5.31 | 2.72 | 4.83 | 4.83 | 4.00 | 7.72 | 5.07
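In the table above, the "+", "−", and "=" marks attached to the p-values indicate that the associated compared method performs significantly worse than, significantly better than, or equivalently to STTPSO, and the last two rows report the aggregated w/t/l counts and the average rank over the whole benchmark set. As a purely illustrative aid (not the paper's exact statistical procedure), the Python sketch below shows how such pairwise marks could be produced from per-run results with a rank-sum test at a 0.05 significance level; the helper name compare_runs, the use of scipy.stats.ranksums, and the synthetic data are assumptions for demonstration.

```python
import numpy as np
from scipy.stats import ranksums

def compare_runs(stt_results, rival_results, alpha=0.05):
    """Return the p-value and a '+', '=', or '−' mark for one benchmark function.

    '+' : the rival is significantly worse (STTPSO wins),
    '−' : the rival is significantly better (STTPSO loses),
    '=' : no significant difference at level alpha.
    """
    stat, p_value = ranksums(stt_results, rival_results)
    if p_value >= alpha:
        return p_value, "="
    # Lower objective values are better on the CEC 2017 minimization problems.
    return p_value, "+" if np.mean(stt_results) < np.mean(rival_results) else "−"

# Toy usage with synthetic per-run errors for one function (30 runs each).
rng = np.random.default_rng(42)
stt = rng.normal(7.5e2, 2.5e2, size=30)     # stand-in for STTPSO results
rival = rng.normal(2.2e3, 4.0e2, size=30)   # stand-in for a compared PSO variant
p, mark = compare_runs(stt, rival)
print(f"p-value = {p:.2e}, mark = {mark}")
```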
Table 6. Statistical comparison results between the proposed STTPSO and the 7 state-of-the-art and popular PSO variants on the CEC 2017 benchmark set with different dimensions in terms of “w/t/l”.
Category | D | DNSPSO | XPSO | TCSPSO | GLPSO | HCLPSO | DPLPSO | CLPSO
Unimodal Functions | 30 | 2/0/0 | 0/1/1 | 0/1/1 | 0/1/1 | 1/0/1 | 2/0/0 | 1/0/1
Unimodal Functions | 50 | 1/1/0 | 0/1/1 | 0/2/0 | 0/1/1 | 1/0/1 | 2/0/0 | 1/1/0
Unimodal Functions | 100 | 1/1/0 | 0/1/1 | 1/1/0 | 1/0/1 | 1/0/1 | 2/0/0 | 2/0/0
Simple Multimodal Functions | 30 | 5/0/1 | 6/0/0 | 6/0/0 | 6/0/0 | 6/0/0 | 6/0/0 | 6/0/0
Simple Multimodal Functions | 50 | 5/0/1 | 6/0/0 | 6/0/0 | 6/0/0 | 5/1/0 | 6/0/0 | 6/0/0
Simple Multimodal Functions | 100 | 5/1/0 | 6/0/0 | 6/0/0 | 6/0/0 | 6/0/0 | 6/0/0 | 6/0/0
Hybrid Functions | 30 | 7/1/2 | 3/6/1 | 6/4/0 | 4/5/1 | 7/3/0 | 10/0/0 | 5/2/3
Hybrid Functions | 50 | 9/0/1 | 5/2/3 | 7/2/1 | 7/1/2 | 7/3/0 | 10/0/0 | 7/0/3
Hybrid Functions | 100 | 8/2/0 | 5/5/0 | 7/3/0 | 7/3/0 | 10/0/0 | 10/0/0 | 9/0/1
Composition Functions | 30 | 9/0/2 | 9/1/1 | 11/0/0 | 11/0/0 | 10/1/0 | 10/1/0 | 9/1/1
Composition Functions | 50 | 8/1/2 | 8/2/1 | 10/0/1 | 10/0/1 | 8/3/0 | 11/0/0 | 9/0/2
Composition Functions | 100 | 7/0/4 | 9/1/1 | 10/1/0 | 10/1/0 | 9/2/0 | 11/0/0 | 10/1/0
Whole Set | 30 | 23/1/5 | 18/8/3 | 23/5/1 | 21/6/2 | 24/4/1 | 28/1/0 | 21/3/5
Whole Set | 50 | 23/2/4 | 19/5/5 | 23/4/2 | 23/2/4 | 21/7/1 | 29/0/0 | 23/1/5
Whole Set | 100 | 21/4/4 | 20/7/2 | 24/5/0 | 24/4/1 | 26/2/1 | 29/0/0 | 27/1/1
Table 7. Comparison results of STTPSO with different settings of the maximum stagnation times (stopmax) on the 50-D CEC 2017 functions. The best results are highlighted in bold.
f | stopmax = 0 | stopmax = 5 | stopmax = 10 | stopmax = 15 | stopmax = 20 | stopmax = 25 | stopmax = 30 | stopmax = 35 | stopmax = 40
f1 | 9.66 × 10^6 | 1.07 × 10^4 | 7.82 × 10^3 | 6.64 × 10^3 | 6.20 × 10^3 | 9.07 × 10^3 | 4.33 × 10^3 | 7.55 × 10^3 | 8.64 × 10^3
f3 | 1.75 × 10^5 | 7.17 × 10^4 | 6.60 × 10^4 | 6.66 × 10^4 | 6.17 × 10^4 | 6.53 × 10^4 | 5.85 × 10^4 | 6.16 × 10^4 | 6.14 × 10^4
f4 | 2.32 × 10^2 | 1.97 × 10^2 | 1.88 × 10^2 | 1.91 × 10^2 | 1.95 × 10^2 | 1.83 × 10^2 | 1.69 × 10^2 | 1.74 × 10^2 | 1.87 × 10^2
f5 | 3.99 × 10^2 | 9.49 × 10^1 | 1.50 × 10^1 | 1.47 × 10^1 | 1.77 × 10^1 | 1.67 × 10^1 | 9.09 × 10^0 | 1.84 × 10^1 | 1.69 × 10^1
f6 | 6.74 × 10^0 | 8.56 × 10^-7 | 1.51 × 10^-6 | 1.26 × 10^-6 | 1.78 × 10^-6 | 1.50 × 10^-6 | 4.48 × 10^-6 | 1.83 × 10^-6 | 2.53 × 10^-6
f7 | 4.76 × 10^2 | 3.35 × 10^2 | 1.16 × 10^2 | 6.12 × 10^1 | 6.52 × 10^1 | 6.75 × 10^1 | 6.33 × 10^1 | 6.90 × 10^1 | 6.32 × 10^1
f8 | 3.97 × 10^2 | 4.72 × 10^1 | 1.68 × 10^1 | 1.74 × 10^1 | 1.63 × 10^1 | 1.73 × 10^1 | 9.35 × 10^0 | 1.80 × 10^1 | 1.81 × 10^1
f9 | 1.46 × 10^2 | 6.68 × 10^-1 | 1.62 × 10^0 | 1.39 × 10^0 | 1.92 × 10^0 | 1.93 × 10^0 | 8.56 × 10^-1 | 1.19 × 10^0 | 1.17 × 10^0
f10 | 1.32 × 10^4 | 1.25 × 10^4 | 1.22 × 10^4 | 1.13 × 10^4 | 1.15 × 10^4 | 1.14 × 10^4 | 3.49 × 10^3 | 1.09 × 10^4 | 1.09 × 10^4
f11 | 5.40 × 10^2 | 4.67 × 10^1 | 5.27 × 10^1 | 5.62 × 10^1 | 5.81 × 10^1 | 6.07 × 10^1 | 6.12 × 10^1 | 5.77 × 10^1 | 6.07 × 10^1
f12 | 7.61 × 10^7 | 2.82 × 10^6 | 1.83 × 10^6 | 2.12 × 10^6 | 2.16 × 10^6 | 1.64 × 10^6 | 8.89 × 10^5 | 1.64 × 10^6 | 1.69 × 10^6
f13 | 3.65 × 10^4 | 3.01 × 10^4 | 2.19 × 10^4 | 3.04 × 10^4 | 2.92 × 10^4 | 2.72 × 10^4 | 1.63 × 10^4 | 2.70 × 10^4 | 2.64 × 10^4
f14 | 4.53 × 10^5 | 2.17 × 10^5 | 2.34 × 10^5 | 1.77 × 10^5 | 1.74 × 10^5 | 2.21 × 10^5 | 7.09 × 10^4 | 2.19 × 10^5 | 1.83 × 10^5
f15 | 3.11 × 10^4 | 3.13 × 10^4 | 3.13 × 10^4 | 2.98 × 10^4 | 3.12 × 10^4 | 3.11 × 10^4 | 1.32 × 10^4 | 3.00 × 10^4 | 3.00 × 10^4
f16 | 3.04 × 10^3 | 1.71 × 10^3 | 6.33 × 10^2 | 4.70 × 10^2 | 4.38 × 10^2 | 5.28 × 10^2 | 4.23 × 10^2 | 5.27 × 10^2 | 5.81 × 10^2
f17 | 1.95 × 10^3 | 1.04 × 10^3 | 7.59 × 10^2 | 5.10 × 10^2 | 4.22 × 10^2 | 3.80 × 10^2 | 3.11 × 10^2 | 5.28 × 10^2 | 6.00 × 10^2
f18 | 5.94 × 10^6 | 2.78 × 10^6 | 2.51 × 10^6 | 2.00 × 10^6 | 1.82 × 10^6 | 1.82 × 10^6 | 4.28 × 10^5 | 1.51 × 10^6 | 1.64 × 10^6
f19 | 2.46 × 10^3 | 2.29 × 10^3 | 2.11 × 10^3 | 2.02 × 10^3 | 1.91 × 10^3 | 1.90 × 10^3 | 4.33 × 10^3 | 1.69 × 10^3 | 1.81 × 10^3
f20 | 1.63 × 10^3 | 1.29 × 10^3 | 1.17 × 10^3 | 1.01 × 10^3 | 8.36 × 10^2 | 8.01 × 10^2 | 1.76 × 10^2 | 6.15 × 10^2 | 6.60 × 10^2
f21 | 6.00 × 10^2 | 2.79 × 10^2 | 2.28 × 10^2 | 2.28 × 10^2 | 2.30 × 10^2 | 2.29 × 10^2 | 2.22 × 10^2 | 2.32 × 10^2 | 2.32 × 10^2
f22 | 1.33 × 10^4 | 1.21 × 10^4 | 1.19 × 10^4 | 1.05 × 10^4 | 9.73 × 10^3 | 1.02 × 10^4 | 3.10 × 10^3 | 9.29 × 10^3 | 9.76 × 10^3
f23 | 8.36 × 10^2 | 5.13 × 10^2 | 5.11 × 10^2 | 5.14 × 10^2 | 5.18 × 10^2 | 5.18 × 10^2 | 5.09 × 10^2 | 5.21 × 10^2 | 5.20 × 10^2
f24 | 8.85 × 10^2 | 5.98 × 10^2 | 5.89 × 10^2 | 5.90 × 10^2 | 5.94 × 10^2 | 5.91 × 10^2 | 5.83 × 10^2 | 5.96 × 10^2 | 5.95 × 10^2
f25 | 5.48 × 10^2 | 4.81 × 10^2 | 4.82 × 10^2 | 4.83 × 10^2 | 4.81 × 10^2 | 4.82 × 10^2 | 5.06 × 10^2 | 4.84 × 10^2 | 4.83 × 10^2
f26 | 5.49 × 10^3 | 2.37 × 10^3 | 2.40 × 10^3 | 2.45 × 10^3 | 2.51 × 10^3 | 2.51 × 10^3 | 2.27 × 10^3 | 2.54 × 10^3 | 2.57 × 10^3
f27 | 6.92 × 10^2 | 7.72 × 10^2 | 7.66 × 10^2 | 7.52 × 10^2 | 7.59 × 10^2 | 7.56 × 10^2 | 6.94 × 10^2 | 7.59 × 10^2 | 7.63 × 10^2
f28 | 5.01 × 10^3 | 4.99 × 10^3 | 4.30 × 10^3 | 5.00 × 10^3 | 4.92 × 10^3 | 4.96 × 10^3 | 9.06 × 10^2 | 4.77 × 10^3 | 4.78 × 10^3
f29 | 2.07 × 10^3 | 1.32 × 10^3 | 1.16 × 10^3 | 1.00 × 10^3 | 1.03 × 10^3 | 9.13 × 10^2 | 5.90 × 10^2 | 9.35 × 10^2 | 1.01 × 10^3
f30 | 1.57 × 10^6 | 1.52 × 10^6 | 1.45 × 10^6 | 1.36 × 10^6 | 1.37 × 10^6 | 1.34 × 10^6 | 8.55 × 10^5 | 1.15 × 10^6 | 1.24 × 10^6
Rank | 8.55 | 6.52 | 5.14 | 4.31 | 4.69 | 4.62 | 2.14 | 4.31 | 4.72
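Table 7 varies stopmax, the number of consecutive stagnant iterations allowed before the restart strategy fires. As a minimal illustration only, the Python sketch below shows one plausible way such a stagnation counter can gate the injection of randomly re-initialized solutions into an archive; the deterministic trigger, the helper names (maybe_restart_archive, sphere), and the toy update rule are assumptions that simplify the probabilistic restart described for STTPSO.

```python
import numpy as np

def sphere(x):
    """Simple stand-in objective (not a CEC 2017 function)."""
    return float(np.sum(x ** 2))

def maybe_restart_archive(archive, stop, stopmax, dim, lb, ub, n_new=5, rng=None):
    """Inject randomly initialized solutions into the archive once the
    stagnation counter reaches stopmax, then reset the counter."""
    rng = np.random.default_rng() if rng is None else rng
    if stop >= stopmax:
        archive.extend(rng.uniform(lb, ub, size=(n_new, dim)))
        stop = 0
    return archive, stop

# Toy usage: track a global best and count how long it has stagnated.
rng = np.random.default_rng(0)
dim, lb, ub, stopmax = 10, -100.0, 100.0, 20
archive, stop = [], 0
gbest = rng.uniform(lb, ub, dim)
gbest_fit = sphere(gbest)

for _ in range(200):
    candidate = gbest + rng.normal(0.0, 1.0, dim)  # stand-in for a PSO update
    fit = sphere(candidate)
    if fit < gbest_fit:
        gbest, gbest_fit, stop = candidate, fit, 0  # improvement: reset counter
    else:
        stop += 1                                   # stagnation: count it
    archive, stop = maybe_restart_archive(archive, stop, stopmax, dim, lb, ub, rng=rng)

print(f"best fitness: {gbest_fit:.3e}, archived restart solutions: {len(archive)}")
```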
Table 8. Comparison results of STTPSO with and without the archive and the restart strategy on the 50-D CEC 2017 functions. The best results are highlighted in bold.
f | STTPSO | STTPSO_WR | STTPSO_WA
f1 | 4.33 × 10^3 | 8.59 × 10^3 | 5.75 × 10^3
f3 | 5.85 × 10^4 | 6.82 × 10^4 | 6.08 × 10^4
f4 | 1.69 × 10^2 | 1.96 × 10^2 | 1.96 × 10^2
f5 | 9.09 × 10^0 | 1.64 × 10^1 | 1.46 × 10^1
f6 | 4.48 × 10^-6 | 2.38 × 10^-6 | 3.85 × 10^-6
f7 | 6.33 × 10^1 | 9.71 × 10^1 | 5.95 × 10^1
f8 | 9.35 × 10^0 | 1.56 × 10^1 | 1.45 × 10^1
f9 | 8.56 × 10^-1 | 1.59 × 10^0 | 1.57 × 10^0
f10 | 3.49 × 10^3 | 1.21 × 10^4 | 1.18 × 10^4
f11 | 6.12 × 10^1 | 5.07 × 10^1 | 5.07 × 10^1
f12 | 8.89 × 10^5 | 2.46 × 10^6 | 1.85 × 10^6
f13 | 1.63 × 10^4 | 2.52 × 10^4 | 2.46 × 10^4
f14 | 7.09 × 10^4 | 2.17 × 10^5 | 1.88 × 10^5
f15 | 1.32 × 10^4 | 3.13 × 10^4 | 2.70 × 10^4
f16 | 4.23 × 10^2 | 6.87 × 10^2 | 6.41 × 10^2
f17 | 3.11 × 10^2 | 5.46 × 10^2 | 3.78 × 10^2
f18 | 4.28 × 10^5 | 1.99 × 10^6 | 1.63 × 10^6
f19 | 4.33 × 10^3 | 2.16 × 10^3 | 2.03 × 10^3
f20 | 1.76 × 10^2 | 1.12 × 10^3 | 8.45 × 10^2
f21 | 2.22 × 10^2 | 2.29 × 10^2 | 2.27 × 10^2
f22 | 3.10 × 10^3 | 1.15 × 10^4 | 1.01 × 10^4
f23 | 5.09 × 10^2 | 5.12 × 10^2 | 5.16 × 10^2
f24 | 5.83 × 10^2 | 5.94 × 10^2 | 5.98 × 10^2
f25 | 5.06 × 10^2 | 4.81 × 10^2 | 4.82 × 10^2
f26 | 2.27 × 10^3 | 2.40 × 10^3 | 2.47 × 10^3
f27 | 6.94 × 10^2 | 7.56 × 10^2 | 7.47 × 10^2
f28 | 9.06 × 10^2 | 4.96 × 10^3 | 4.34 × 10^3
f29 | 5.90 × 10^2 | 1.01 × 10^3 | 8.27 × 10^2
f30 | 8.55 × 10^5 | 1.30 × 10^6 | 1.12 × 10^6
Rank | 1.31 | 2.62 | 2.07
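Table 8 isolates the contribution of the archive and the restart strategy. To make the archive idea concrete, the sketch below illustrates, under stated assumptions, a bounded archive that stores obsolete personal best positions: whenever a particle improves its personal best, the replaced position is kept and can later be sampled for interaction. The class name PbestArchive, the random-eviction policy, and the toy swarm update are illustrative choices rather than the exact STTPSO implementation.

```python
import numpy as np

class PbestArchive:
    """Bounded store of obsolete personal best positions (illustrative only)."""

    def __init__(self, max_size, rng=None):
        self.max_size = max_size
        self.items = []
        self.rng = np.random.default_rng() if rng is None else rng

    def add(self, obsolete_pbest):
        """Store an obsolete personal best; evict a random entry when full."""
        if len(self.items) >= self.max_size:
            self.items.pop(self.rng.integers(len(self.items)))
        self.items.append(np.copy(obsolete_pbest))

    def sample(self):
        """Return a random archived position, or None if the archive is empty."""
        if not self.items:
            return None
        return self.items[self.rng.integers(len(self.items))]

def sphere(x):
    return float(np.sum(x ** 2))

# Toy usage: a drifting swarm whose improved personal bests feed the archive.
rng = np.random.default_rng(1)
dim, swarm_size = 10, 20
positions = rng.uniform(-100, 100, size=(swarm_size, dim))
pbests = positions.copy()
pbest_fits = np.array([sphere(p) for p in pbests])
archive = PbestArchive(max_size=swarm_size, rng=rng)

for _ in range(100):
    positions += rng.normal(0.0, 1.0, size=positions.shape)  # stand-in update
    for i, x in enumerate(positions):
        f = sphere(x)
        if f < pbest_fits[i]:
            archive.add(pbests[i])              # keep the obsolete personal best
            pbests[i], pbest_fits[i] = x.copy(), f

print(f"archived positions: {len(archive.items)}")
```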
Table 9. Comparison results of STTPSO with different acceleration coefficient settings on the 50-D CEC 2017 functions. The best results are highlighted in bold.
f | Dynamic | Dynamic1 | Dynamic2 | c1 = 1.0, c2 = 1.0 | c1 = 1.0, c2 = 1.5 | c1 = 1.0, c2 = 2.0 | c1 = 1.5, c2 = 1.0 | c1 = 1.5, c2 = 1.5 | c1 = 1.5, c2 = 2.0 | c1 = 2.0, c2 = 1.0 | c1 = 2.0, c2 = 1.5 | c1 = 2.0, c2 = 2.0
f1 | 4.33 × 10^3 | 7.70 × 10^3 | 8.72 × 10^3 | 2.44 × 10^3 | 4.08 × 10^3 | 7.78 × 10^3 | 6.41 × 10^3 | 7.75 × 10^3 | 1.74 × 10^4 | 1.37 × 10^4 | 1.93 × 10^4 | 5.31 × 10^7
f3 | 5.85 × 10^4 | 6.84 × 10^4 | 6.43 × 10^4 | 6.42 × 10^4 | 7.20 × 10^4 | 7.39 × 10^4 | 5.43 × 10^4 | 6.76 × 10^4 | 9.44 × 10^4 | 6.18 × 10^4 | 9.54 × 10^4 | 1.46 × 10^5
f4 | 1.69 × 10^2 | 1.91 × 10^2 | 1.90 × 10^2 | 1.18 × 10^2 | 1.75 × 10^2 | 1.94 × 10^2 | 1.91 × 10^2 | 1.84 × 10^2 | 1.93 × 10^2 | 1.80 × 10^2 | 1.93 × 10^2 | 2.33 × 10^2
f5 | 9.09 × 10^0 | 1.68 × 10^1 | 1.82 × 10^1 | 9.78 × 10^0 | 8.50 × 10^0 | 1.08 × 10^1 | 1.72 × 10^1 | 1.71 × 10^1 | 2.90 × 10^2 | 2.89 × 10^1 | 2.94 × 10^2 | 3.90 × 10^2
f6 | 4.48 × 10^-6 | 2.37 × 10^-4 | 2.39 × 10^-6 | 7.32 × 10^-5 | 5.44 × 10^-6 | 1.27 × 10^-6 | 8.07 × 10^-4 | 1.50 × 10^-6 | 3.54 × 10^-4 | 3.90 × 10^-5 | 3.99 × 10^-3 | 3.02 × 10^0
f7 | 6.33 × 10^1 | 9.75 × 10^1 | 7.80 × 10^1 | 6.67 × 10^1 | 6.24 × 10^1 | 1.80 × 10^2 | 6.07 × 10^1 | 9.58 × 10^1 | 3.61 × 10^2 | 1.03 × 10^2 | 3.65 × 10^2 | 4.44 × 10^2
f8 | 9.35 × 10^0 | 1.70 × 10^1 | 1.76 × 10^1 | 8.57 × 10^0 | 8.18 × 10^0 | 1.13 × 10^1 | 1.68 × 10^1 | 1.77 × 10^1 | 2.85 × 10^2 | 2.87 × 10^1 | 2.74 × 10^2 | 3.88 × 10^2
f9 | 8.56 × 10^-1 | 1.74 × 10^0 | 1.60 × 10^0 | 6.85 × 10^-1 | 3.14 × 10^-1 | 8.52 × 10^-1 | 1.22 × 10^0 | 1.49 × 10^0 | 2.31 × 10^0 | 4.38 × 10^0 | 3.46 × 10^0 | 2.95 × 10^2
f10 | 3.49 × 10^3 | 1.22 × 10^4 | 1.21 × 10^4 | 3.52 × 10^3 | 7.09 × 10^3 | 1.22 × 10^4 | 7.77 × 10^3 | 1.23 × 10^4 | 1.27 × 10^4 | 1.14 × 10^4 | 1.29 × 10^4 | 1.28 × 10^4
f11 | 6.12 × 10^1 | 5.16 × 10^1 | 5.39 × 10^1 | 7.72 × 10^1 | 5.91 × 10^1 | 4.65 × 10^1 | 5.81 × 10^1 | 5.04 × 10^1 | 1.72 × 10^2 | 8.09 × 10^1 | 1.91 × 10^2 | 4.75 × 10^2
f12 | 8.89 × 10^5 | 2.18 × 10^6 | 2.40 × 10^6 | 7.14 × 10^5 | 1.23 × 10^6 | 1.87 × 10^6 | 1.46 × 10^6 | 2.16 × 10^6 | 5.52 × 10^6 | 3.38 × 10^6 | 6.65 × 10^6 | 1.08 × 10^8
f13 | 1.63 × 10^4 | 3.26 × 10^4 | 3.00 × 10^4 | 9.33 × 10^3 | 1.82 × 10^4 | 2.72 × 10^4 | 2.11 × 10^4 | 3.14 × 10^4 | 3.61 × 10^4 | 3.65 × 10^4 | 3.63 × 10^4 | 3.77 × 10^4
f14 | 7.09 × 10^4 | 2.77 × 10^5 | 2.14 × 10^5 | 7.15 × 10^4 | 1.02 × 10^5 | 1.81 × 10^5 | 1.93 × 10^5 | 2.41 × 10^5 | 2.84 × 10^5 | 2.43 × 10^5 | 2.40 × 10^5 | 4.09 × 10^5
f15 | 1.32 × 10^4 | 3.12 × 10^4 | 3.14 × 10^4 | 3.71 × 10^3 | 2.15 × 10^4 | 2.97 × 10^4 | 3.07 × 10^4 | 3.12 × 10^4 | 3.15 × 10^4 | 3.15 × 10^4 | 3.15 × 10^4 | 3.19 × 10^4
f16 | 4.23 × 10^2 | 6.14 × 10^2 | 5.99 × 10^2 | 4.96 × 10^2 | 4.02 × 10^2 | 6.90 × 10^2 | 3.92 × 10^2 | 5.46 × 10^2 | 2.26 × 10^3 | 6.08 × 10^2 | 2.16 × 10^3 | 2.80 × 10^3
f17 | 3.11 × 10^2 | 7.20 × 10^2 | 6.73 × 10^2 | 3.01 × 10^2 | 3.85 × 10^2 | 7.63 × 10^2 | 2.39 × 10^2 | 5.93 × 10^2 | 1.38 × 10^3 | 4.69 × 10^2 | 1.26 × 10^3 | 1.78 × 10^3
f18 | 4.28 × 10^5 | 2.45 × 10^6 | 2.39 × 10^6 | 3.80 × 10^5 | 6.79 × 10^5 | 1.55 × 10^6 | 1.64 × 10^6 | 2.45 × 10^6 | 3.21 × 10^6 | 2.25 × 10^6 | 3.06 × 10^6 | 4.03 × 10^6
f19 | 4.33 × 10^3 | 2.09 × 10^3 | 2.20 × 10^3 | 5.91 × 10^3 | 1.75 × 10^3 | 1.86 × 10^3 | 1.70 × 10^3 | 2.14 × 10^3 | 2.32 × 10^3 | 2.37 × 10^3 | 2.40 × 10^3 | 2.48 × 10^3
f20 | 1.76 × 10^2 | 1.13 × 10^3 | 1.12 × 10^3 | 1.73 × 10^2 | 4.35 × 10^2 | 1.11 × 10^3 | 4.18 × 10^2 | 1.17 × 10^3 | 1.32 × 10^3 | 1.11 × 10^3 | 1.36 × 10^3 | 1.47 × 10^3
f21 | 2.22 × 10^2 | 2.27 × 10^2 | 2.30 × 10^2 | 2.32 × 10^2 | 2.33 × 10^2 | 2.22 × 10^2 | 2.30 × 10^2 | 2.30 × 10^2 | 4.92 × 10^2 | 2.43 × 10^2 | 4.82 × 10^2 | 5.92 × 10^2
f22 | 3.10 × 10^3 | 1.17 × 10^4 | 1.18 × 10^4 | 3.24 × 10^3 | 4.42 × 10^3 | 1.18 × 10^4 | 4.83 × 10^3 | 1.18 × 10^4 | 1.25 × 10^4 | 1.23 × 10^4 | 1.28 × 10^4 | 1.29 × 10^4
f23 | 5.09 × 10^2 | 5.18 × 10^2 | 5.16 × 10^2 | 5.36 × 10^2 | 5.40 × 10^2 | 5.04 × 10^2 | 5.22 × 10^2 | 5.15 × 10^2 | 6.38 × 10^2 | 5.27 × 10^2 | 6.32 × 10^2 | 8.21 × 10^2
f24 | 5.83 × 10^2 | 5.89 × 10^2 | 5.87 × 10^2 | 6.04 × 10^2 | 6.06 × 10^2 | 5.73 × 10^2 | 5.90 × 10^2 | 5.90 × 10^2 | 7.90 × 10^2 | 5.88 × 10^2 | 7.45 × 10^2 | 8.85 × 10^2
f25 | 5.06 × 10^2 | 4.83 × 10^2 | 4.84 × 10^2 | 5.51 × 10^2 | 5.16 × 10^2 | 4.80 × 10^2 | 4.84 × 10^2 | 4.82 × 10^2 | 5.24 × 10^2 | 5.16 × 10^2 | 5.31 × 10^2 | 5.69 × 10^2
f26 | 2.27 × 10^3 | 2.44 × 10^3 | 2.49 × 10^3 | 2.15 × 10^3 | 2.34 × 10^3 | 2.28 × 10^3 | 2.40 × 10^3 | 2.42 × 10^3 | 2.65 × 10^3 | 2.56 × 10^3 | 2.67 × 10^3 | 5.09 × 10^3
f27 | 6.94 × 10^2 | 7.43 × 10^2 | 7.67 × 10^2 | 7.46 × 10^2 | 7.31 × 10^2 | 7.07 × 10^2 | 7.75 × 10^2 | 7.53 × 10^2 | 7.42 × 10^2 | 8.28 × 10^2 | 8.04 × 10^2 | 7.80 × 10^2
f28 | 9.06 × 10^2 | 5.13 × 10^3 | 5.10 × 10^3 | 5.29 × 10^2 | 5.27 × 10^2 | 2.37 × 10^3 | 4.60 × 10^3 | 5.14 × 10^3 | 5.30 × 10^3 | 5.67 × 10^3 | 5.66 × 10^3 | 5.74 × 10^3
f29 | 5.90 × 10^2 | 1.14 × 10^3 | 1.33 × 10^3 | 5.62 × 10^2 | 5.92 × 10^2 | 8.11 × 10^2 | 1.13 × 10^3 | 1.13 × 10^3 | 1.43 × 10^3 | 1.80 × 10^3 | 1.62 × 10^3 | 1.80 × 10^3
f30 | 8.55 × 10^5 | 1.48 × 10^6 | 1.38 × 10^6 | 8.27 × 10^5 | 9.18 × 10^5 | 1.33 × 10^6 | 1.29 × 10^6 | 1.41 × 10^6 | 1.53 × 10^6 | 1.59 × 10^6 | 1.54 × 10^6 | 1.51 × 10^6
Rank | 2.83 | 6.45 | 6.21 | 3.66 | 3.79 | 4.66 | 4.55 | 5.97 | 9.72 | 8.10 | 10.38 | 11.69
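Table 9 contrasts fixed acceleration coefficients c1 and c2 with dynamically adjusted ones. The short sketch below gives one plausible linear schedule in which c1 decreases and c2 increases with the consumed fitness evaluations; the end-point values and the linear form are assumptions chosen only for illustration and are not claimed to match the Dynamic, Dynamic1, or Dynamic2 strategies evaluated in the table.

```python
def dynamic_coefficients(fes, max_fes, c1_start=2.0, c1_end=1.0,
                         c2_start=1.0, c2_end=2.0):
    """Linearly interpolate c1 and c2 according to the used evaluation budget.

    Early in the run c1 dominates (stronger pull toward the first exemplar),
    while later c2 dominates; the end points are illustrative assumptions.
    """
    ratio = min(fes / max_fes, 1.0)
    c1 = c1_start + (c1_end - c1_start) * ratio
    c2 = c2_start + (c2_end - c2_start) * ratio
    return c1, c2

# Example: print the schedule at a few points of a 10^5-evaluation budget.
max_fes = 100_000
for fes in (0, 25_000, 50_000, 75_000, 100_000):
    c1, c2 = dynamic_coefficients(fes, max_fes)
    print(f"FEs = {fes:6d}: c1 = {c1:.2f}, c2 = {c2:.2f}")
```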