Abstract
Particle swarm optimization (PSO), a swarm intelligence-based optimization algorithm, has been widely applied to solve various real-world optimization problems. However, traditional PSO algorithms encounter issues such as premature convergence and an imbalance between global exploration and local exploitation when dealing with complex optimization tasks. To address these shortcomings, an enhanced PSO algorithm incorporating velocity pausing and adaptive strategies, termed VASPSO, is proposed. By leveraging the search characteristics of velocity pausing and a terminal replacement mechanism, the premature convergence inherent in standard PSO is mitigated. The algorithm further refines and controls the search space of the particle swarm through time-varying inertia coefficients, the concept of symmetric cooperative swarms, and adaptive strategies, balancing global search and local exploitation. The performance of VASPSO was validated on 29 standard functions from CEC2017, comparing it against five PSO variants and seven swarm intelligence algorithms. Experimental results demonstrate that VASPSO is highly competitive with all 12 compared algorithms. The relevant code can be found on our project homepage.
1. Introduction
Swarm intelligence algorithms are computational methods inspired by collective behaviors observed in nature. They simulate interactions and cooperation among individuals within a group to solve complex problems. The development of swarm intelligence algorithms stems from recognizing the limitations of traditional, individual-based algorithms in tackling complex challenges. Conventional algorithms often rely on deterministic, single-agent methods, which may struggle to find global optima or encounter significant computational complexity for certain problems [1,2]. To address these challenges, researchers have introduced a new algorithmic paradigm: swarm intelligence algorithms. These algorithms collectively solve problems by simulating information exchange, cooperation, and competition among group members. Notable examples of swarm intelligence algorithms include genetic algorithms, ant colony algorithms, and bee swarm algorithms, which have demonstrated significant success across various fields.
Particle swarm optimization (PSO), initially proposed by Eberhart and Kennedy in 1995 [3], is inspired by the behavior of bird flocks and fish schools. In PSO, solutions to problems are represented by a swarm of particles, with each particle embodying a solution, and the swarm collectively representing a potential set of solutions within the solution space. Particles optimize their positions and velocities by continuously searching and iterating within the solution space, learning from and memorizing the best solutions found. PSO offers several advantages: it has a rapid convergence rate and robust global search capabilities; its algorithmic structure is simple, facilitating easy implementation and application; and it adapts well to the continuity and non-linearity of various problems. Consequently, PSO has been extensively applied in domains such as function optimization, combinatorial optimization [4], machine learning [5], and image processing, among others.
Since its introduction, PSO has evolved through years of research and development, leading to numerous improvements. Some notable enhancements include the following:
1. Modifications to parameters, such as constant and random inertia weights [6,7], time-varying inertia weights [8,9,10], quadratic inertia weights [11], adaptive inertia weights [12,13], and sine-chaos inertia weights [14], to enhance PSO's performance. Improvements to the acceleration coefficients include time-varying acceleration coefficients [15], acceleration update strategies based on the sigmoid function [16,17], and the addition of Gaussian white noise to the acceleration coefficients [18], all aimed at balancing individual experience against group collaboration;
2. Enhancements to the topological structure, which governs how individuals within the algorithm communicate. Proposals include selecting the nearest particles as neighbors in each generation [19], recombining randomly divided sub-populations after they have evolved for a number of generations [20], and ring topologies. To increase algorithmic diversity, the HIDMS-PSO variant augments the traditional fixed topology of the unit structure with new master- and slave-dominated topologies [21], a stochastic triplet topology allows each particle to communicate with two random particles in the group based on its best position [22], and a particle swarm optimization algorithm with an extra-gradient method (EPSO) [23] has been introduced;
3. Improvements to learning strategies, including the integration of four particle swarm search strategies into a strategy-dynamics-based optimization method [24], a particle swarm optimization with a multi-dimensional learning strategy based on adaptive inertia weights [25], and an adaptive hierarchical-update particle swarm algorithm employing a multi-choice comprehensive learning strategy composed of weighted composite and average evolutionary sub-strategies. A PSO with enhanced learning strategies and crossover operators (PSOLC) [26] and a multi-objective particle swarm with alternate learning strategies (MOPSOALS) [27], which uses adaptively updated exploratory and exploitative role-learning strategies, have also been proposed to improve optimization performance. For problems with large numbers of variables, a multi-strategy learning particle swarm optimization algorithm (MSL-PSO) [28] employing different learning strategies at different stages has been introduced, and to address deficiencies in solving complex optimization problems, a hybrid particle swarm optimization algorithm based on an adaptive strategy (ASPSO) [29] has been put forward;
4. Hybrid research integrating other algorithms with PSO to better address complex optimization problems, including combining genetic operators with PSO, such as selection [30], crossover [31,32], and mutation operators [33]. Related work merges simulated annealing to improve the global-optimum update strategy of PSO [34] and hybridizes PSO with the grey wolf optimizer [35], pattern search [36], the crow search algorithm [37], particle filtering (PF) [38], and an improved symbiotic organisms search (MSOS) [39] to fully leverage each algorithm's strengths.
Despite the maturity and successful application of these classic swarm intelligence algorithms in solving real-world optimization challenges, issues like premature convergence and suboptimal performance persist in complex application areas. Moreover, as societal demands evolve, real-world optimization problems increasingly exhibit high-dimensionality, multiple constraints, and multimodality, posing new challenges to classical optimization algorithms such as swarm intelligence algorithms. Against this backdrop, enhancing swarm intelligence algorithms to better tackle increasingly complex optimization problems has emerged as a research focal point.
To augment the particle swarm algorithm’s capacity to solve complex optimization problems and to more effectively balance exploration and exploitation, this paper introduces an improved particle swarm algorithm dubbed VASPSO. VASPSO contributes in five ways as described below:
- Introducing a time-varying inertia coefficient to enhance the algorithm’s convergence speed and global search capability;
- Adopting velocity pausing concepts to mitigate premature convergence issues in particle swarm algorithms;
- Employing an adaptive strategy to dynamically fine-tune the balance between exploration and exploitation, thus boosting algorithm performance;
- Incorporating symmetric cooperative swarms concepts, treating different particles with varied strategies to aid the algorithm in discovering optimal solutions;
- Implementing a terminal replacement mechanism to excise the worst-performing particles, thereby refining the accuracy of the final solution.
Moreover, to ascertain the viability of the newly proposed VASPSO algorithm, it was tested against 29 benchmark functions from CEC2017, comparing its performance with five PSO variants and seven other swarm intelligence algorithms. Experimental outcomes reveal VASPSO’s superiority over other algorithms in most benchmark functions, underscoring its success.
The remainder of this paper is structured as follows: Section 2 elucidates the standard particle swarm algorithm. The VASPSO algorithm is expounded in Section 3, with Section 4 showcasing the test functions and outcomes of experimental simulations. Conclusions and directions for future work are presented in Section 5.
2. Standard Particle Swarm Optimization
Particle swarm optimization (PSO) is a swarm intelligence-based optimization algorithm inspired by the foraging behavior of bird flocks [3]. The core principle of PSO is to optimize a search process by simulating the behavior of particles within a swarm. Each particle in the swarm represents a candidate solution, navigating the search space by updating its position and velocity based on both its own experience and the collective information from the swarm. Specifically, the position of a particle corresponds to a candidate solution’s value, while its velocity determines the direction and magnitude of the search. Initially, particles are dispersed randomly throughout the search space with varying velocities. Their performance is assessed using a fitness function, which informs the updates to both the best position discovered by each particle (personal best) and the best overall position found by the swarm (global best). Throughout the algorithm, particles dynamically adjust their velocity and position, leveraging insights from their personal best achievements and the global best discoveries, in pursuit of optimal solutions. Below is the initialization of particles in the particle swarm optimization algorithm:
$$x_i^{0} = x_{\min} + r_1\,(x_{\max} - x_{\min}) \quad (1)$$

$$v_i^{0} = v_{\min} + r_2\,(v_{\max} - v_{\min}) \quad (2)$$

The $w$ in Formula (4) is called the inertia weight and is derived from Formula (3); a standard choice is the linearly decreasing schedule

$$w(t) = w_{\max} - (w_{\max} - w_{\min})\,\frac{t}{T} \quad (3)$$

The velocity and position of each particle are updated as follows:

$$v_i(t+1) = w\,v_i(t) + c_1 r_1\big(pbest_i - x_i(t)\big) + c_2 r_2\big(gbest - x_i(t)\big) \quad (4)$$

$$x_i(t+1) = x_i(t) + v_i(t+1) \quad (5)$$

where $c_1$ and $c_2$ are called acceleration coefficients, $r_1$ and $r_2$ are random numbers drawn uniformly from $[0, 1]$, $v_i(t)$ represents the velocity of particle $i$ at time $t$, $x_i(t)$ represents the position of particle $i$ at time $t$, $pbest_i$ represents the individual best position of particle $i$, and $gbest$ represents the global best position of the group.
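For concreteness, the following is a minimal NumPy sketch of the standard PSO loop described by Formulas (1)–(5); the population size, iteration budget, and coefficient values are illustrative choices, not the settings used later in this paper.

```python
import numpy as np

def pso(f, dim, bounds, n_particles=30, max_iter=600,
        w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
    """Minimal standard PSO minimizing f over the box [lo, hi]^dim."""
    lo, hi = bounds
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))        # random positions (Formula (1))
    v = rng.uniform(-0.1, 0.1, (n_particles, dim)) * (hi - lo)  # small random velocities (cf. Formula (2))
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()

    for t in range(max_iter):
        w = w_max - (w_max - w_min) * t / max_iter     # linearly decreasing weight (Formula (3))
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update (Formula (4))
        x = np.clip(x + v, lo, hi)                     # position update (Formula (5))
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val                    # refresh personal bests
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()     # refresh global best
    return gbest, pbest_val.min()

# Usage: minimize the sphere function in 10 dimensions.
best_x, best_val = pso(lambda z: float(np.sum(z**2)), dim=10, bounds=(-100, 100))
print(best_val)
```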
3. Particle Swarm Optimization Algorithm Using Velocity Pausing and Adaptive Strategy
The proposed approach aims to mitigate premature convergence and enhance the optimization performance of particle swarm optimization (PSO) by integrating velocity pausing and adaptive strategies. The five key enhancements of VASPSO are elaborated below.
3.1. Time-Varying Inertia Weight
The modification of the inertia weight is a critical aspect of the particle swarm optimization (PSO) algorithm that significantly enhances its efficacy. Initially, the introduction of velocity pausing aimed to augment the global search capabilities of PSO and to mitigate premature convergence to local optima. However, while this strategy can enhance exploration, it may also, over time, lead to a slower convergence rate and, in some instances, convergence challenges. To address these challenges, particularly in the later stages of execution, a time-decreasing weight function is often implemented to adjust the inertia weight. This function is designed to progressively reduce the inertia weight across iterations, maintaining robust global search capabilities while simultaneously facilitating quicker convergence towards the optimal solution. By dynamically modulating the inertia weight, PSO strikes an improved balance between exploration and exploitation, thereby optimizing both the convergence speed and the algorithm’s overall global search performance.
The inertia weight is computed from a time-decreasing function of the iteration count, in which $T$ is the maximum number of iterations and $b$ is a preset constant controlling the rate of decay.
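The snippet below only illustrates the qualitative behavior such a schedule should have: the exponential form and the value b = 1.5 are assumptions for demonstration, not necessarily the expression used by VASPSO.

```python
import numpy as np

def inertia_weight(t, T, b=1.5):
    """Hypothetical time-decreasing inertia weight: near 1 early in the run
    (strong global search), decaying toward 0 as t approaches T (fine-grained
    exploitation). The constant b controls how sharply the decay sets in."""
    return float(np.exp(-((t / T) ** b)))

T = 600
print([round(inertia_weight(t, T), 3) for t in (0, 150, 300, 450, 599)])
```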
3.2. Velocity Pausing
In 2023, Shami et al. introduced the concept of velocity pausing into the particle swarm optimization (PSO) algorithm, marking a significant advancement in its performance [40]. This modification allows each particle the option to temporarily suspend velocity updates during iterations, in effect selecting from three predefined modes of movement: slower, faster, or constant velocity. The addition of a third, constant-velocity option diverges from traditional PSO mechanics and increases the algorithm's flexibility, promoting a more balanced approach between exploration and exploitation. Such enhancements address prevalent challenges in classical PSO algorithms, including premature convergence and the tendency to become trapped in local optima.
The careful selection of the pausing coefficient is critical. When the pausing coefficient approaches 1, the particle’s velocity update closely mirrors that of the classical PSO. Conversely, when the coefficient is set very low, it forces the particle’s velocity to remain constant, potentially leading to slower convergence rates and difficulties in discovering superior solutions.
The velocity pausing rule is as follows:

$$v_i(t+1) = \begin{cases} v_i(t), & \text{rand} < a, \\ w\,v_i(t) + c_1 r_1\big(pbest_i - x_i(t)\big) + c_2 r_2\big(gbest - x_i(t)\big), & \text{otherwise,} \end{cases}$$

where $a$ is the coefficient of velocity pausing, rand is a uniform random number in $[0, 1]$, and $c_1$ and $c_2$ are the learning factors. The particle position is then updated with the resulting velocity.
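A per-particle sketch of this rule, reconstructed from the description above, is shown below; the pausing coefficient a = 0.3 and the learning-factor values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def vp_update(v, x, pbest, gbest, w, a=0.3, c1=2.0, c2=2.0):
    """Velocity pausing: with probability a the particle keeps its previous
    velocity unchanged (the 'constant' option); otherwise it performs the
    usual inertia-plus-attraction velocity update."""
    if rng.random() < a:                      # pause: velocity stays constant
        return v
    r1, r2 = rng.random((2,) + np.shape(x))   # per-dimension random factors
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

# Usage on a single 5-dimensional particle.
v_new = vp_update(np.zeros(5), np.ones(5), np.zeros(5), np.full(5, 2.0), w=0.7)
print(v_new)
```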
3.3. Adaptive Strategy
Traditional algorithms that employ a single-strategy approach for position updates often struggle to achieve a suitable balance between exploration and exploitation, particularly in complex scenarios characterized by multiple local optima or highly multimodal objective functions. In contrast, adaptive strategy position updates can significantly enhance the algorithm’s search efficiency. This approach involves dynamically optimizing the search space that may yield the best solution by adaptively selecting the position update strategy during each iteration. Such improvements have proven to markedly increase search efficiency and have been widely adopted in enhancing both traditional and classical algorithms [29].
The core of the adaptive strategy for position updates resides in the iterative modification of the parameter p, which is adjusted based on changes in the fitness value. This responsive adjustment enables the algorithm to better navigate the solution landscape, effectively balancing the dual imperatives of exploring new possibilities and exploiting known good solutions.
The parameter $p$ is computed for each particle as

$$p_i = \frac{f_i}{\frac{1}{N}\sum_{j=1}^{N} f_j},$$

where $N$ is the total population size and $f_i$ is the fitness of the corresponding particle; $p_i$ is thus the ratio of the particle's current fitness to the average fitness of the population, and a fresh estimate is obtained in each iteration.

The position update strategy is then selected according to $p_i$. When $p_i$ is small, the adaptation value of particle $i$ exceeds the average level of the population; in this scenario, the exploration-oriented position update strategy is employed to augment the global exploration capabilities of particle $i$. Conversely, when $p_i$ is large, particle $i$ is performing below the average level of the population, and the exploitation-oriented position update strategy is implemented to boost its local exploitation capabilities.
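The selection logic can be summarized in a few lines; the threshold of 1 (the population-average fitness) is the natural cut implied by the ratio definition, though the paper's exact switching rule may differ.

```python
import numpy as np

def choose_strategies(fitness):
    """Compute p_i = f_i / mean(f) and pick an update strategy per particle.
    On a minimization problem, p_i < 1 means particle i is fitter than the
    population average, so it is steered toward global exploration; p_i >= 1
    means below-average performance, so local exploitation is favored."""
    p = fitness / fitness.mean()
    explore = p < 1.0
    return p, explore, ~explore

fitness = np.array([3.2, 0.8, 1.5, 7.4])
p, explore, exploit = choose_strategies(fitness)
print(p, explore, exploit)
```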
3.4. Symmetric Cooperative Swarms
Symmetry is a prominent characteristic in algorithms and their applications, playing a crucial role across various domains. In algorithms, symmetry often reveals patterns within data, aiding in simplifying complex problems and enhancing algorithm efficiency. One standout example is the divide-and-conquer approach. By leveraging this principle, the search space is decomposed for exploration, thereby reducing time complexity. In image processing, symmetry is frequently utilized to optimize rendering and image manipulation algorithms. For instance, in 3D graphics, objects typically exhibit symmetrical properties, allowing for reduced computational overhead in rendering and processing, thus improving graphics processing efficiency. In database optimization, exploiting data symmetry can reduce redundant storage and enhance query efficiency. Techniques such as normalization and denormalization are employed to eliminate duplicate and redundant information from data, while indexing and caching techniques accelerate data access and query operations.
In this paper, we also draw inspiration from the concept of symmetry. During each iteration, the entire particle swarm is divided based on the fitness of particles, segregating them into superior and inferior particle clusters. This symmetrical partitioning enables particles to undertake different responsibilities and tasks, facilitating better exploration of the solution space.
The elite group implements advanced strategies to finely balance exploration and exploitation. Initially, a velocity pausing strategy is applied, temporarily halting the velocity updates under specific conditions to allow prolonged investigation within local solution spaces, thereby enhancing the exploration of potentially superior solutions. Additionally, an adaptive strategy modifies the positional behavior based on the problem’s characteristics and the group’s current state, dynamically adjusting the balance between exploration and exploitation to suit the complexity and diversity of the problem.
Conversely, the non-elite group primarily updates positions by referencing the global best solution, fostering broader exploration of the solution space. This group’s focus is on uncovering new potential solutions, thereby injecting diversity into the swarm. By orienting their position updates solely around the global best, the non-elite particles can explore more aggressively, expanding the search range for the algorithm.
Through this cooperative strategy, the particle swarm optimization algorithm achieves a more effective balance between exploration and exploitation. The elite group conducts targeted exploration and refined exploitation using sophisticated strategies, such as velocity pausing and adaptive positional updates. In contrast, the non-elite group pursues extensive exploration driven by the global best solution. This structured approach allows the algorithm to search more comprehensively for viable solutions, thereby enhancing both the performance and the efficacy of the algorithm.
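A sketch of the symmetric partition is given below; the even 50/50 split is an assumption consistent with the symmetry described above, and each index group would then be routed to its respective update rules.

```python
import numpy as np

def split_swarm(fitness, elite_frac=0.5):
    """Partition the swarm by fitness (minimization: smaller is better).
    The fitter half becomes the elite group (velocity pausing + adaptive
    updates); the rest becomes the non-elite group, whose updates are
    driven by the global best to widen exploration."""
    order = np.argsort(fitness)                 # best particles first
    k = int(len(fitness) * elite_frac)
    return order[:k], order[k:]                 # elite, non-elite indices

elite, non_elite = split_swarm(np.array([5.1, 0.3, 2.2, 9.9, 1.7, 4.0]))
print(elite, non_elite)
```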
A detailed flowchart of the subgroup collaborative process is shown in Figure 1.
Figure 1.
Flowchart of the VASPSO algorithm.
3.5. Terminal Replacement Mechanism
The terminal replacement mechanism, also known as the 'last-place elimination mechanism', is a widely used algorithmic method for selecting optimal solutions under conditions of limited resources or intense competition. Inspired by the biological principle of natural selection, exemplified by the concept of survival of the fittest, this mechanism is prevalent in various optimization and genetic algorithms, aiding in the efficient identification of superior solutions.
Within this mechanism, an initial set of candidate solutions is generated and subsequently ranked according to predefined evaluation criteria. Based on these rankings, candidates with the lowest scores are progressively eliminated, while those with higher scores are retained for additional evaluation rounds. This iterative process is repeated until the optimal solution is identified or a predetermined stopping condition is reached.
To better mimic the unpredictability of real-world scenarios and enhance the algorithm’s robustness, the concept of an elimination factor has been incorporated into the particle swarm optimization (PSO) algorithm. This modification involves selectively replacing particles according to specific rules, with a predetermined probability that permits some particles to avoid elimination due to various factors. If a randomly generated number falls below the established elimination factor, a replacement operation is executed; otherwise, the particle is preserved for further evaluation. By integrating this elimination factor, the algorithm not only replicates the inherent uncertainties of real-life situations but also significantly enhances its adaptability and robustness.
In this mechanism, Gbad denotes the least fit particle among all the individual best particles, and Nbest is generated by crossing the global best particle with any two individual best particles. The fitness values of the newly generated particle Nbest and of Gbad are then compared: if Nbest is better than Gbad, Gbad is replaced by Nbest; otherwise, Gbad remains unchanged.
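The following sketch captures the mechanism under stated assumptions: the crossover that produces Nbest is rendered here as a random convex mix of the global best and two personal bests, since the paper's exact crossover operator is not reproduced above, and the elimination factor of 0.5 is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def terminal_replacement(pbest, pbest_val, gbest, f, elim_factor=0.5):
    """Replace the worst personal-best particle (Gbad) with a crossover
    candidate (Nbest) only if the candidate is fitter; with probability
    1 - elim_factor the worst particle escapes elimination entirely."""
    if rng.random() >= elim_factor:             # particle avoids elimination
        return
    bad = int(np.argmax(pbest_val))             # Gbad: least fit personal best
    i, j = rng.choice(len(pbest), size=2, replace=False)
    u = rng.random(pbest.shape[1])              # illustrative crossover mask
    nbest = u * gbest + (1 - u) * 0.5 * (pbest[i] + pbest[j])
    if f(nbest) < pbest_val[bad]:               # keep whichever is fitter
        pbest[bad], pbest_val[bad] = nbest, f(nbest)
```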
In summary, VASPSO is proposed as a fusion variant, and its pseudo-code is shown in Algorithm 1.
Algorithm 1. VASPSO algorithm.
4. Experimental Results and Discussion
4.1. Test the Function and Compare the Algorithm
In this experiment, 29 benchmark functions from CEC2017 were selected to evaluate the performance of the proposed VASPSO algorithm (f2 was excluded due to its unstable behavior at higher dimensions and the significant performance variation it causes for the same algorithm implemented in MATLAB). These benchmark functions are divided into four categories: f1–f3 are unimodal functions, f4–f10 are multimodal functions, f11–f20 are hybrid functions, and f21–f30 are composition functions [41]. These functions are listed in Table 1.
Table 1.
The benchmark function used in this paper.
To validate the performance of VASPSO, five PSO variants were employed: VPPSO [40], MPSO [42], OldMPSO [43] (to distinguish from the previous MPSO), AGPSO [44], and IPSO [45]; along with seven other swarm intelligence algorithms, including DE [46], GWO [47], CSO [48], DBO [49], BWO [50], SSA [51], and ABC [52]. These algorithms are listed in Table 2.
Table 2.
Algorithm parameter information.
For fairness, each algorithm was run 50 times independently, and the termination condition for all algorithms was the maximum number of iterations, set to 600.
4.2. Comparison between VASPSO and 12 Algorithms
In this section, VASPSO is compared with five PSO variants and seven other swarm intelligence algorithms, with the results listed in Tables 3 through 6. Three quantitative metrics are used: the mean (Mean), standard deviation (Std), and minimum value (Min).
The convergence curves of the algorithms are shown in Figures 2 and 3. The trajectory curves represent the search process of the particles in each dimension: fluctuation indicates that the particles are conducting a global search, while stabilization suggests that the particles have reached a global or local optimum.
Figure 2.
Convergence curve of algorithms for variants of PSO.
Figure 3.
Convergence curve of algorithms for 7 evolutionary algorithms.
From Table 3, it is evident that VASPSO generally outperforms the other five PSO variants. In terms of mean values, VASPSO shows superior performance on functions f3, f10, f11, and f27 compared to the other variants. In terms of variance, VASPSO demonstrates greater stability on functions f3, f5, f8, f9, f11, f16, f21, f25, and f27 than the other variants. In terms of minimum values, VASPSO excels on functions f3, f4, f10, f11, f14, f18, f22, f25, f27, f28, and f29 compared to the other variants. Although VASPSO does not perform well in some cases, it still ranks as the best in performance when considering the average ranking across all functions. The convergence of the algorithm is shown in Figure 2.
Table 3.
Comparisons of experimental results between some well-known variants of PSO.
From Table 4, it is clear that VASPSO overall surpasses the other seven swarm intelligence algorithms, with superior performance on functions f3, f11, f13, f14, f15, f19, and f28 compared to the other algorithms. In terms of variance, VASPSO shows greater stability on functions f11, f13, f14, and f15 than the other swarm intelligence algorithms. In terms of minimum values, VASPSO performs better on functions f1, f3, f4, f10, f11, f13, f14, f18, f22, f25, f26, f28, f29, and f30 compared to the other algorithms. Despite some poor performances on specific functions, VASPSO still emerges as the best in the overall ranking. The convergence of the algorithm is shown in Figure 3.
Table 4.
Comparisons of experimental results between 7 evolutionary algorithms.
4.3. Analysis of Statistical Results
Statistical tests are typically required to analyze experimental results. The Friedman test, a non-parametric statistical method, is employed to assess whether significant differences exist in the median values of multiple paired samples across three or more related groups [53]. In the execution of the Friedman test, observations within each group are first ranked. Subsequently, the average ranks for each sample are calculated. The Friedman test statistic is then computed in a manner akin to that used in the analysis of variance, focusing on the sum of rank differences. This test is particularly suited for experimental designs involving multiple related samples, such as those evaluating the efficacy of different algorithms or treatment methods at various observation points. The asymptotic significance values from the Friedman test, as presented in Table 5, clearly demonstrate significant differences among the 12 algorithms analyzed.
Table 5.
Comparison of algorithms based on the Friedman test.
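As a hedged illustration of how such a comparison can be computed, SciPy's friedmanchisquare implements this test; the numbers below are made up for demonstration and are not the paper's results.

```python
from scipy import stats

# Mean errors of three hypothetical algorithms on five benchmark functions
# (illustrative values only).
alg_a = [1.2e3, 4.5e2, 8.8e1, 3.1e2, 9.0]
alg_b = [1.5e3, 6.1e2, 9.9e1, 3.5e2, 12.0]
alg_c = [2.0e3, 7.0e2, 1.4e2, 4.1e2, 25.0]

stat, p = stats.friedmanchisquare(alg_a, alg_b, alg_c)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")
# p < 0.05 -> reject the hypothesis that all algorithms perform equally.
```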
The Wilcoxon signed-rank test is a non-parametric statistical method used to determine whether significant differences exist between the median values of two related groups of samples. It is applicable to paired samples whose differences do not follow a normal distribution [54]. When conducting the Wilcoxon signed-rank test, the first step is to calculate the difference between each pair of related samples, take the absolute values of these differences, and rank them in order of magnitude, assigning corresponding ranks. The ranks of the positive and negative differences are then summed separately, and the presence of significant differences between the medians of the two groups is determined from these rank sums.
In the Wilcoxon signed-rank test, the primary statistical measures of interest are the rank sum and the test statistic Z-value. Hypothesis testing is conducted by comparing the Z-value with a critical value, or by examining the p-value. If the absolute Z-value exceeds the critical value, or the p-value is less than the significance level, the null hypothesis is rejected, indicating a significant difference between the median values of the two sample groups. At a significance level of 0.05, the statistical results presented in Table 6 show that the VASPSO algorithm outperforms the compared algorithms.
Table 6.
Comparison of algorithms based on the Wilcoxon signed-rank test.
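For reference, a pairwise comparison of this kind can be reproduced with SciPy's wilcoxon; the paired scores below are illustrative placeholders, not values from Table 6.

```python
from scipy import stats

# Per-function scores of two algorithms on the same benchmarks (illustrative).
vaspso_scores = [1.2e3, 4.5e2, 8.8e1, 3.1e2, 9.0, 55.0]
other_scores  = [1.5e3, 6.1e2, 9.9e1, 3.5e2, 12.0, 52.0]

stat, p = stats.wilcoxon(vaspso_scores, other_scores)
print(f"Wilcoxon statistic = {stat:.3f}, p = {p:.4f}")
# p < 0.05 -> significant difference between the two algorithms' medians.
```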
5. Conclusions
This study aims to mitigate several prevalent issues associated with the particle swarm optimization algorithm, such as premature convergence and the imbalance between global exploration and local exploitation, through the introduction of an enhanced version known as VASPSO. In our approach, we incorporate velocity pausing and a terminal replacement mechanism, which are specifically designed to prevent premature convergence. Additionally, VASPSO utilizes time-varying inertia coefficients, the concept of symmetric cooperative swarms, and adaptive strategies for modulating the search step length, which collectively contribute to a more balanced optimization process.
To rigorously evaluate the effectiveness of VASPSO in addressing complex optimization challenges, we conducted a series of comparative experiments using the CEC2017 benchmark. The results from these experiments suggest that VASPSO not only improves upon the performance of many existing PSO variants but also shows promising capabilities when compared with other well-regarded swarm intelligence algorithms.
However, VASPSO still has some limitations:
1. While VASPSO has shown promising results in experimental settings, the theoretical underpinnings and formal proofs supporting the VASPSO algorithm are still somewhat limited. Further theoretical analysis is crucial to fully understand the mechanisms and principles driving its performance;
2. The classification of particles within the VASPSO algorithm currently employs a rigid structure that might oversimplify the inherent complexities of diverse optimization problems. This method of division may not adequately capture the nuanced characteristics of different scenarios, potentially limiting the algorithm's adaptability and effectiveness;
3. Certain parameters within the algorithm may require adjustments to effectively respond to variations in the problem's characteristics. A static parameter setting might not always align optimally with the dynamic nature of different optimization challenges;
4. VASPSO performs poorly on some benchmark functions (such as f5 and f6): despite achieving good results on many optimization problems, the algorithm may exhibit relatively poor performance on certain specific benchmarks.
Future work will unfold in the following aspects:
1. Applying the VASPSO algorithm to real-world problems offers a significant opportunity to test and enhance its utility. For instance, in the field of engineering optimization, VASPSO could be particularly beneficial for complex tasks such as robot path planning and image processing. Integrating VASPSO with these practical applications not only allows for a robust evaluation of the algorithm's performance but also showcases its potential to provide effective and efficient solutions;
2. Integrating VASPSO with other optimization algorithms could significantly enhance its performance and robustness. By combining the strengths of multiple algorithms, a more effective balance between global exploration and local exploitation can be achieved. This hybrid approach not only improves the overall search capability of VASPSO but also enhances its convergence performance;
3. Combining VASPSO with neural networks can create a formidable optimization framework that leverages the strengths of both methodologies. By integrating the robust global search capabilities of VASPSO with the adaptive learning abilities of neural networks, this hybrid approach can facilitate more efficient parameter optimization and model training, accelerating convergence toward optimal solutions in complex parameter spaces.
Author Contributions
Conceptualization, K.T.; Methodology, C.M.; Software, C.M.; Writing—original draft preparation, C.M.; Supervision, K.T.; Project administration, K.T.; Writing—review and editing, K.T.; Visualization, C.M. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no funding.
Data Availability Statement
The data that support the findings of this study are available from the corresponding author, Meng, C.-J., upon reasonable request.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Mokeddem, D. A new improved salp swarm algorithm using logarithmic spiral mechanism enhanced with chaos for global optimization. Evol. Intell. 2022, 15, 1745–1775. [Google Scholar] [CrossRef]
- Parsopoulos, K.E.; Vrahatis, M.N. Recent approaches to global optimization problems through particle swarm optimization. Nat. Comput. 2002, 1, 235–306. [Google Scholar] [CrossRef]
- Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: New York, NY, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
- Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408. [Google Scholar] [CrossRef]
- Kiranyaz, S.; Ince, T.; Gabbouj, M.; Kiranyaz, S.; Ince, T.; Gabbouj, M. Particle swarm optimization. In Multidimensional Particle Swarm Optimization for Machine Learning and Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 2014; pp. 45–82. [Google Scholar]
- Shi, Y.; Eberhart, R.C. Parameter Selection in Particle Swarm Optimization; Springer: Berlin/Heidelberg, Germany, 1998. [Google Scholar]
- Eberhart, R.; Shi, Y. Tracking and optimizing dynamic systems with particle swarms. In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No.01TH8546), Seoul, Republic of Korea, 27–30 May 2001; Volume 1, pp. 94–100. [Google Scholar]
- Chatterjee, A.; Siarry, P. Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization. Comput. Oper. Res. 2006, 33, 859–871. [Google Scholar] [CrossRef]
- Feng, Y.; Teng, G.F.; Wang, A.X.; Yao, Y.M. Chaotic Inertia Weight in Particle Swarm Optimization. In Proceedings of the International Conference on Innovative Computing, Kumamoto, Japan, 5–7 September 2007. [Google Scholar]
- Fan, S.K.S.; Chiu, Y.Y. A decreasing inertia weight particle swarm optimizer. Eng. Optim. 2007, 39, 203–228. [Google Scholar] [CrossRef]
- Tang, Y.; Wang, Z.; Fang, J. Feedback learning particle swarm optimization. Appl. Soft Comput. 2011, 11, 4713–4725. [Google Scholar] [CrossRef]
- Agrawal, A.; Tripathi, S. Particle Swarm Optimization with Probabilistic Inertia Weight. In Harmony Search and Nature Inspired Optimization Algorithms; Yadav, N., Yadav, A., Bansal, J.C., Deep, K., Kim, J.H., Eds.; Springer: Singapore, 2019; pp. 239–248. [Google Scholar]
- Prastyo, P.H.; Hidayat, R.; Ardiyanto, I. Enhancing sentiment classification performance using hybrid query expansion ranking and binary particle swarm optimization with adaptive inertia weights. ICT Express 2022, 8, 189–197. [Google Scholar] [CrossRef]
- Singh, A.; Sharma, A.; Rajput, S.; Bose, A.; Hu, X. An investigation on hybrid particle swarm optimization algorithms for parameter optimization of PV cells. Electronics 2022, 11, 909. [Google Scholar] [CrossRef]
- Ratnaweera, A.; Halgamuge, S.; Watson, H. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans. Evol. Comput. 2004, 8, 240–255. [Google Scholar] [CrossRef]
- Beheshti, Z. A novel x-shaped binary particle swarm optimization. Soft Comput. 2021, 25, 3013–3042. [Google Scholar] [CrossRef]
- Dixit, A.; Mani, A.; Bansal, R. An adaptive mutation strategy for differential evolution algorithm based on particle swarm optimization. Evol. Intell. 2022, 15, 1571–1585. [Google Scholar] [CrossRef]
- Liu, W.; Wang, Z.; Zeng, N.; Yuan, Y.; Alsaadi, F.E.; Liu, X. A novel random particle swarm optimizer. Int. J. Mach. Learn. Cybern. 2021, 12, 529–540. [Google Scholar] [CrossRef]
- Hu, X.; Eberhart, R. Multiobjective optimization using dynamic neighborhood particle swarm optimization. In Proceedings of the 2002 Congress on Evolutionary Computation. CEC’02 (Cat. No.02TH8600), Honolulu, HI, USA, 12–17 May 2002; Volume 2, pp. 1677–1681. [Google Scholar]
- Liang, J.; Suganthan, P. Dynamic multi-swarm particle swarm optimizer. In Proceedings of the 2005 IEEE Swarm Intelligence Symposium, 2005. SIS 2005, Pasadena, CA, USA, 8–12 June 2005; pp. 124–129. [Google Scholar]
- Varna, F.T.; Husbands, P. HIDMS-PSO Algorithm with an Adaptive Topological Structure. In Proceedings of the 2021 IEEE Symposium Series on Computational Intelligence (SSCI), Orlando, FL, USA, 5–7 December 2021; pp. 1–8. [Google Scholar]
- Yang, Q.; Bian, Y.W.; Gao, X.D.; Xu, D.D.; Lu, Z.Y.; Jeon, S.W.; Zhang, J. Stochastic triad topology based particle swarm optimization for global numerical optimization. Mathematics 2022, 10, 1032. [Google Scholar] [CrossRef]
- Potu, N.; Jatoth, C.; Parvataneni, P. Optimizing resource scheduling based on extended particle swarm optimization in fog computing environments. Concurr. Comput. Pract. Exp. 2021, 33, e6163. [Google Scholar] [CrossRef]
- Liu, Z.; Nishi, T. Strategy Dynamics Particle Swarm Optimizer. Inf. Sci. 2022, 582, 665–703. [Google Scholar] [CrossRef]
- Janakiraman, S.; Priya, M.D. Hybrid grey wolf and improved particle swarm optimization with adaptive intertial weight-based multi-dimensional learning strategy for load balancing in cloud environments. Sustain. Comput. Inform. Syst. 2023, 38, 100875. [Google Scholar] [CrossRef]
- Molaei, S.; Moazen, H.; Najjar-Ghabel, S.; Farzinvash, L. Particle swarm optimization with an enhanced learning strategy and crossover operator. Knowl.-Based Syst. 2021, 215, 106768. [Google Scholar] [CrossRef]
- Koh, W.S.; Lim, W.H.; Ang, K.M.; Isa, N.A.M.; Tiang, S.S.; Ang, C.K.; Solihin, M.I. Multi-objective particle swarm optimization with alternate learning strategies. In Recent Trends in Mechatronics towards Industry 4.0: Selected Articles from iM3F 2020, Malaysia; Springer: Singapore, 2022; pp. 15–25. [Google Scholar]
- Wang, H.; Liang, M.; Sun, C.; Zhang, G.; Xie, L. Multiple-strategy learning particle swarm optimization for large-scale optimization problems. Complex Intell. Syst. 2021, 7, 1–16. [Google Scholar] [CrossRef]
- Wang, R.; Hao, K.; Chen, L.; Wang, T.; Jiang, C. A novel hybrid particle swarm optimization using adaptive strategy. Inf. Sci. 2021, 579, 231–250. [Google Scholar] [CrossRef]
- Angeline, P.J. Using selection to improve particle swarm optimization. In Proceedings of the Evolutionary Computation Proceedings, 1998. IEEE World Congress on Computational Intelligence, Anchorage, AK, USA, 4–9 May 1998. [Google Scholar]
- Løvbjerg, M.; Rasmussen, T.K.; Krink, T. Hybrid Particle Swarm Optimiser with breeding and subpopulations. In Proceedings of the Genetic and Evolutionary Computation Conference, San Francisco, CA, USA, 7–11 July 2001. [Google Scholar]
- Chen, Y.P.; Peng, W.C.; Jian, M.C. Particle Swarm Optimization With Recombination and Dynamic Linkage Discovery. IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 2007, 37, 1460–1470. [Google Scholar] [CrossRef]
- Andrews, P. An Investigation into Mutation Operators for Particle Swarm Optimization. In Proceedings of the 2006 IEEE International Conference on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; pp. 1044–1051. [Google Scholar]
- Yu, Z.; Si, Z.; Li, X.; Wang, D.; Song, H. A Novel Hybrid Particle Swarm Optimization Algorithm for Path Planning of UAVs. IEEE Internet Things J. 2022, 9, 22547–22558. [Google Scholar] [CrossRef]
- Zhang, X.; Lin, Q.; Mao, W.; Liu, S.; Liu, G. Hybrid Particle Swarm and Grey Wolf Optimizer and its application to clustering optimization. Appl. Soft Comput. 2020, 101, 107061. [Google Scholar] [CrossRef]
- Koessler, E.; Almomani, A. Hybrid particle swarm optimization and pattern search algorithm. Optim. Eng. 2021, 22, 1539–1555. [Google Scholar] [CrossRef]
- Adamu, A.; Abdullahi, M.; Junaidu, S.B.; Hassan, I.H. An hybrid particle swarm optimization with crow search algorithm for feature selection. Mach. Learn. Appl. 2021, 6, 100108. [Google Scholar] [CrossRef]
- Pozna, C.; Precup, R.E.; Horváth, E.; Petriu, E.M. Hybrid Particle Filter–Particle Swarm Optimization Algorithm and Application to Fuzzy Controlled Servo Systems. IEEE Trans. Fuzzy Syst. 2022, 30, 4286–4297. [Google Scholar] [CrossRef]
- He, W.; Qi, X.; Liu, L. A novel hybrid particle swarm optimization for multi-UAV cooperate path planning. Appl. Intell. 2021, 51, 7350–7364. [Google Scholar] [CrossRef]
- Shami, T.M.; Mirjalili, S.; Al-Eryani, Y.; Daoudi, K.; Izadi, S.; Abualigah, L. Velocity pausing particle swarm optimization: A novel variant for global optimization. Neural Comput. Appl. 2023, 35, 9193–9223. [Google Scholar] [CrossRef]
- Awad, N.; Ali, M.; Liang, J.; Qu, B.; Suganthan, P. Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective bound constrained real-parameter numerical optimization. In Technical Report; Nanyang Technological University Singapore: Singapore, 2016; pp. 1–34. [Google Scholar]
- Liu, H.; Zhang, X.W.; Tu, L.P. A modified particle swarm optimization using adaptive strategy. Expert Syst. Appl. 2020, 152, 113353. [Google Scholar] [CrossRef]
- Bao, G.Q.; Mao, K.F. Particle swarm optimization algorithm with asymmetric time varying acceleration coefficients. In Proceedings of the IEEE International Conference on Robotics and Biomimetics, Guilin, China, 18–22 December 2009; pp. 2134–2139. [Google Scholar]
- Mirjalili, S.; Lewis, A.; Sadiq, A.S. Autonomous Particles Groups for Particle Swarm Optimization. Arab. J. Sci. Eng. 2014, 39, 4683–4697. [Google Scholar] [CrossRef]
- Cui, Z.; Zeng, J.; Yin, Y. An improved PSO with time-varying accelerator coefficients. In Proceedings of the Eighth International Conference on Intelligent Systems Design and Applications, Kaohsiung, Taiwan, 26–28 November 2008; pp. 638–643. [Google Scholar]
- Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
- Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
- Meng, X.; Liu, Y.; Gao, X.; Zhang, H. A New Bio-inspired Algorithm: Chicken Swarm Optimization. In Proceedings of the International Conference in Swarm Intelligence, Hefei, China, 17–20 October 2014. [Google Scholar]
- Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
- Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar]
- Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. Open Access J. 2020, 8, 22–34. [Google Scholar] [CrossRef]
- Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
- Sheldon, M.R.; Fillyaw, M.J.; Thompson, W.D. The use and interpretation of the Friedman test in the analysis of ordinal-scale data in repeated measures designs. Physiother. Res. Int. J. Res. Clin. Phys. Ther. 1996, 14, 221–228. [Google Scholar] [CrossRef]
- Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).