Article

APSO-SL: An Adaptive Particle Swarm Optimization with State-Based Learning Strategy

1 School of Computer Science and Software Engineering, University of Science and Technology Liaoning, Anshan 114051, China
2 School of Electrical and Control Engineering, Shenyang Jianzhu University, Shenyang 110168, China
* Author to whom correspondence should be addressed.
Processes 2024, 12(2), 400; https://doi.org/10.3390/pr12020400
Submission received: 13 December 2023 / Revised: 14 January 2024 / Accepted: 26 January 2024 / Published: 17 February 2024

Abstract

Particle swarm optimization (PSO) has been extensively used to solve practical engineering problems due to its efficient performance. Although PSO is simple and efficient, it still suffers from premature convergence. To address this shortcoming, an adaptive particle swarm optimization with a state-based learning strategy (APSO-SL) is put forward. In APSO-SL, a population distribution evaluation mechanism (PDEM) is used to evaluate the state of the whole population. In contrast to using the iteration count to judge the population state, using the population's spatial distribution is more intuitive and accurate. In PDEM, only the population's center position and best position are used for the calculation, greatly reducing the algorithm's computational complexity. In addition, an adaptive learning strategy (ALS) is proposed to avoid premature convergence of the whole population. In ALS, different learning strategies are adopted according to the population state to preserve population diversity. The performance of APSO-SL is evaluated on the CEC2013 and CEC2017 test suites and one engineering problem. Experimental results show that APSO-SL performs best compared with other competitive PSO variants.

1. Introduction

Optimization problems occur frequently in various practical engineering fields [1], so solving them efficiently is crucial [2]. Optimization problems have gradually become an important issue in industry [3], and with the progress of science and technology they are becoming more and more complex [4]. They are often discontinuous, non-differentiable, computationally expensive, and riddled with local optima [5,6]. Finding solutions to optimization problems in such complex situations is crucial [7]. Traditional optimization algorithms, such as quasi-Newton methods, the steepest descent method, and the gradient descent method, require the objective function to meet strict conditions [8]. These methods are therefore difficult to apply in practice [9], and new methods are needed to solve complex optimization problems [10].
At present, evolutionary algorithms have been proposed to address complex optimization problems and have achieved satisfactory results [11,12]. Evolutionary algorithms simulate the evolution of natural species with self-organizing and adaptive characteristics, and do not require the objective function to meet strict conditions. Therefore, evolutionary algorithms are adopted to address complex optimization problems. As a type of evolutionary algorithm, swarm intelligence algorithms have received widespread attention. Examples include PSO [13], differential evolution (DE) [10], artificial bee colony optimization (ABC) [14], ant colony optimization (ACO) [15], etc. In swarm intelligence algorithms, a population can be seen as being composed of independent individuals, each of whom interacts to jointly search for the global optimum. Therefore, use of swarm intelligence methods to solve complex optimization problems is feasible [16] and they have received widespread attention in recent times [17].
As a popular evolutionary algorithm, PSO has been extensively adopted in different engineering optimization problems since it was proposed in 1995. PSO simulates the foraging behavior of natural animals, and has the advantages of robustness, efficiency and precision. In PSO, a population is composed of some particles, and each particle has two learning samples, namely its own historical best solution (pbest) and a global best solution (gbest). In the process of evolution, particles in the whole population work together to seek gbest. Although PSO has advantages, it still has the problem of poor balance between global and local search. Recently, a large number of PSO variants have been proposed to address the above issue. These can be divided into three classes: adaptive learning parameter, novel updating strategy and new topology mechanism.
Adaptive learning parameters: Parameter setting is an important part of PSO. A fitness-based PSO algorithm (FMPSO) is developed in reference [18], in which different categories of particles have different parameters. In reference [19], the authors use sine and cosine acceleration coefficients to increase algorithm diversity. In reference [20], the authors use a sigmoid function to realize dynamic parameters so that the algorithm searches efficiently; extensive experiments have proved the robustness of the algorithm. In reference [21], a nonlinearly decaying acceleration coefficient is put forward, and robustness is enhanced by using different acceleration coefficients at different evolutionary stages of the population. In reference [22], an inertia weight with a chaotic mechanism is put forward to increase the diversity of the population. Similarly, in reference [23], a dynamic nonlinear inertia weight is adopted to enhance the anti-jamming capacity of the algorithm. Rosso et al. [24] propose an enhanced multi-strategy particle swarm optimization for constrained problems with an evolutionary-strategies-based unfeasible local search operator; this method determines the parameter values that govern the evolutionary strategy during the optimization process. Rosso et al. [25] propose a new constraint-handling approach for PSO that adopts a supervised classification machine learning method, the support vector machine (SVM).
Novel updating strategies: The updating strategy is the most important part of PSO, as it determines the evolution direction of the whole population. E et al. [26] introduce human social learning intelligence into the PSO algorithm to enhance its performance. In the literature [27], five different algorithms are integrated to form a new algorithm, in which the best-performing algorithm is selected through a rating function. To increase the diversity of learning samples, a comprehensive learning particle swarm optimization algorithm (CLPSO) is put forward [28], in which the entire population is updated based on excellent information. In reference [29], a novel dual-population algorithm is proposed, in which two sub-populations conduct global and local searches, respectively; extensive experiments have proved the effectiveness of the algorithm. Xia et al. [30] propose a multi-learning-sample algorithm in which different learning samples and update mechanisms collaborate. Li et al. [31] propose an adaptive cooperative particle swarm optimization with difference learning, in which accuracy is enhanced by using different learning strategies. In reference [32], the population state is obtained from the spatial distribution, and different learning mechanisms are used in the various evolutionary states. Recently, researchers have incorporated other technologies into the PSO algorithm to enhance its performance [33,34].
New topology mechanisms: Multiple-swarm collaboration methods have been widely adopted recently. In the literature [35], a new multi-swarm interaction mechanism (ADPSO) has been put forward; the proposed strategies enhance the performance of traditional PSO. Yang and Li [36] combine evolutionary states with the collaborative mechanisms of the whole population for the first time, and the experimental results have received praise from peers. In the literature [37], three learning features are used to enhance the performance of the traditional method. As a famous family of PSO variants, dynamic multi-swarm PSO methods achieve excellent results [38] and have gained widespread attention from peers. In the literature [39], a fully informed PSO (FIPS) is proposed, in which each individual is influenced not only by the global best solution but also by the particles in its neighborhood. Lu et al. [40] let four sub-swarms work together to find the global optimum. Xia et al. [9] put forward an adaptive multi-swarm particle swarm optimization algorithm (MSPSO), in which the number of sub-populations changes according to the population state.
Although existing research has significantly improved the PSO algorithm, some problems remain unaddressed. The evolutionary state of the whole population is an important indicator. Specifically, in the early stage the population should learn more diversity information to increase its global search ability, while in the later stage it should learn more accurate information to increase its local search ability. Although many methods evaluate the population's evolutionary state by calculating its spatial distribution, doing so consumes considerable computing resources; the additional calculations far exceed the complexity of PSO itself, making efficient and fast operation difficult. Therefore, designing a lightweight method for evaluating the population's evolutionary state that still obtains satisfactory results is of great significance. In addition, using different evolutionary strategies in different evolutionary states is worthwhile, so it is crucial to adjust the learning strategy adaptively.
Inspired by the above discussion, an adaptive PSO with a state-based learning strategy (APSO-SL) is put forward. APSO-SL introduces two new features into PSO. The first is the population distribution evaluation mechanism (PDEM), in which the population's center position and best position are used to calculate the state of the whole population. In this way, the population state can be evaluated more intuitively and accurately without excessive computation. The second is the adaptive learning strategy (ALS), in which different learning strategies are adopted based on the population state to preserve population diversity. Specifically, if the population diversity is high, the method conducts a global search to increase the population diversity; conversely, if the population diversity is low, the method carries out a local search to increase the population's local search ability. Extensive experiments have been conducted to confirm the effectiveness of APSO-SL, and its main contributions are as follows:
(1)
A population distribution evaluation mechanism (PDEM) is put forward. In PDEM, instead of using the iteration count to judge the population state, APSO-SL only uses the population's center position and best position for the calculation. Thus, the population state can be evaluated accurately without significantly increasing the computational complexity.
(2)
An adaptive learning strategy (ALS) is put forward in this paper. In ALS, different learning strategies are used according to the population state to preserve population diversity. In this way, the whole population can achieve a balance between exploration and exploitation.
(3)
An efficient PSO variant, APSO-SL, is proposed. It combines PDEM and ALS and outperforms six competitive optimization approaches; APSO-SL is therefore an efficient optimization algorithm.
The organizational framework of this study is as follows: In Section 2, we describe traditional PSO. Section 3 introduces our method: APSO-SL. In Section 4, we conduct ample experiments and discussions. Finally, the study is concluded and some future works are proposed in Section 5.

2. Particle Swarm Optimization (PSO)

PSO is a stochastic optimization algorithm in which each particle represents a candidate solution in the D-dimensional feasible domain. N particles form a population, and each particle i has a position [X_i1(t), X_i2(t), …, X_iD(t)] and a velocity [V_i1(t), V_i2(t), …, V_iD(t)]. In each iteration, the learning sample of each particle consists of two parts: pbest and gbest. The update rules of PSO are shown in Equations (1) and (2):
V_id(t + 1) = w × V_id(t) + c1 × r1 × (pbest_id(t) − X_id(t)) + c2 × r2 × (gbest_d(t) − X_id(t))        (1)
X_id(t + 1) = X_id(t) + V_id(t + 1)        (2)
where w is the inertia weight, d is the current dimension, r1 and r2 are two random numbers in the interval [0, 1], and c1 and c2 are two acceleration constants.
Generally, PSO can be divided into two types: global version and local version. The aforementioned formulas are global PSO. In order to increase the diversity of particles, local optimum is used instead of global optimum. lbest stands for the best position in the particle’s neighborhood. The velocity update strategy is described as follows:
V_id(t + 1) = w × V_id(t) + c1 × r1 × (pbest_id(t) − X_id(t)) + c2 × r2 × (lbest_d(t) − X_id(t))        (3)
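The global-version update in Equations (1) and (2) can be sketched in Python as follows. The parameter values (w = 0.7, c1 = c2 = 1.5) and the bound-clamping step are illustrative assumptions, not values prescribed in this paper:

```python
import random

def pso_step(X, V, pbest, gbest, w=0.7, c1=1.5, c2=1.5, bounds=(-100.0, 100.0)):
    """One synchronous update of all N particles per Equations (1) and (2)."""
    N, D = len(X), len(X[0])
    lo, hi = bounds
    for i in range(N):
        for d in range(D):
            r1, r2 = random.random(), random.random()
            # Equation (1): inertia + cognitive (pbest) + social (gbest) terms
            V[i][d] = (w * V[i][d]
                       + c1 * r1 * (pbest[i][d] - X[i][d])
                       + c2 * r2 * (gbest[d] - X[i][d]))
            # Equation (2): move the particle, then clamp to the feasible domain
            X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
    return X, V
```

For the local version (Equation (3)), the same sketch applies with `gbest` replaced by each particle's neighborhood best `lbest`.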

3. APSO-SL

In this part, the proposed APSO-SL is described in detail. Firstly, the population distribution evaluation mechanism (PDEM) is adopted to evaluate the population state. Secondly, the adaptive learning strategy (ALS) is used to achieve a balance between exploration and exploitation. Finally, APSO-SL is described in detail in Algorithm 1. The APSO-SL proposed in this paper includes these two improvements, and we also conduct a detailed analysis of APSO-SL in Section 4.
Algorithm 1. APSO-SL
01.  Objective function: f(x)
02.  Input: x_min^d, x_max^d, v_min^d, v_max^d, pbest, gbest, D, d, t, Tmax, r1, r2, U, L, w, c1, c2;
03.  Output: Optimal solution;
04.  Initialization: x_i^d = x_min^d + rand × (x_max^d − x_min^d), v_i^d = v_min^d + rand × (v_max^d − v_min^d);
05.  while (t <= Tmax)
06.      for i = 1:N do
07.          Calculate the center position of the population based on Equation (4);
08.          Compute the spatial Euclidean distance using Equation (5);
09.          Determine the evolutionary state of the population based on Equation (6);
10.          Case 1:
11.          Update particles in the population using Equations (1) and (2);
12.          Case 2:
13.          Update particles in the population using Equations (7) and (2);
14.          Case 3:
15.          Keep the population updating method unchanged.
16.          t = t + 1;
17.      end for
18.  end while

3.1. Population Distribution Evaluation Mechanism (PDEM)

The evolutionary state of a population is an important indicator. Inspired by the diversity of natural species, in which different individuals possess different characteristics, using different evolutionary strategies in different evolutionary states is worthwhile. In the initial stage, the whole population is suited to a global search; in contrast, at the end stage it is suited to a small-scale local search. Therefore, it is crucial to judge the population state.
In general, researchers use the iteration count to judge the evolutionary state. However, this approach has problems. In an early iteration, the diversity may already be low and suitable for exploitation; in a late iteration, the diversity may still be high and suitable for global search. Recently, researchers have instead used the spatial distribution of a population to judge its evolutionary state, as the spatial distribution is the most intuitive indicator of that state. However, calculating the population's spatial distribution requires a great deal of computation; the additional calculations far exceed the complexity of PSO, making efficient and fast operation difficult. Therefore, the question of how to design a lightweight method of evaluating the population state while obtaining satisfactory results is of great significance. The calculation process is shown in Equations (4)–(6).
X_M^d = (1/N) Σ_{i=1}^{N} x_i^d,    d = 1, …, D        (4)
dist_B = sqrt( Σ_{j=1}^{D} (X_B^j − X_M^j)^2 )        (5)
f_B = dist_B / sqrt( Σ_{j=1}^{D} (U^j − L^j)^2 )        (6)
In this section, the population distribution evaluation mechanism (PDEM) is proposed to evaluate the state of the whole population. Firstly, the center position of the whole population is calculated according to Equation (4). Secondly, the spatial Euclidean distance between the center position and the best position is calculated according to Equation (5). Finally, the population state is determined from this distance using Equation (6). In this way, the evolutionary state of the whole population can be evaluated accurately without significantly increasing the computational complexity.
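A minimal sketch of the PDEM computation in Equations (4)–(6), assuming the Euclidean distance described in the text and normalization by the diagonal of the search box defined by the bounds L and U:

```python
import math

def evaluate_state(X, X_best, U, L):
    """PDEM (Equations (4)-(6)): normalized distance between the population
    center and the best position.

    X      -- list of N positions, each a length-D list
    X_best -- position of the best particle
    U, L   -- per-dimension upper and lower bounds of the search space
    """
    N, D = len(X), len(X[0])
    # Equation (4): center position of the population
    X_M = [sum(X[i][d] for i in range(N)) / N for d in range(D)]
    # Equation (5): Euclidean distance between the best position and the center
    dist_B = math.sqrt(sum((X_best[j] - X_M[j]) ** 2 for j in range(D)))
    # Equation (6): normalize by the diagonal of the search box so f_B is scale-free
    diag = math.sqrt(sum((U[j] - L[j]) ** 2 for j in range(D)))
    return dist_B / diag
```

The cost is O(N × D) per generation, of the same order as the PSO update itself, which is the point of the mechanism.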

3.2. Adaptive Learning Strategy (ALS)

Based on the above discussion, APSO-SL can evaluate the evolutionary state of the population. We classify fB into three states: case 1, case 2 and case 3. These represent the states of exploration, exploitation and balance, respectively, and we define the following three cases.
Case (1)—Exploration: In this condition, fB is a relatively large value (e.g., larger than the threshold 0.4). Specifically, the best individual of the population is far from the center of the population, indicating that the population is currently relatively dispersed and suitable for global search. The proposed method defines this state as exploration state. In this case, exploring as many regions as possible and increasing the population diversity are beneficial to the population evolution. We use traditional PSO algorithms for a large-scale search, based on Equations (1) and (2).
Case (2)—Exploitation: In contrast, a small value of fB (e.g., smaller than the threshold 0.3) signifies that the current population occupies a single region: most individuals are around the best individual, with only a few distributed elsewhere. The population distribution is relatively dense and suited to a small-scale local search, so this case is taken to represent the exploitation state. The update formula for this case is given in Equation (7). It consists of three parts: the first two are the same as in the basic PSO algorithm, representing inertia and learning from the particle's own historical best; the third is learning from a sample e, randomly selected from the historical best positions of all particles in the population.
V_id(t + 1) = w × V_id(t) + c1 × r1 × (pbest_id(t) − X_id(t)) + c2 × r2 × (e_d(t) − X_id(t))        (7)
In Equation (7), e is a new learning sample randomly selected from the historical best positions of all particles in the whole population. In this way, the population can learn high-quality diversity information.
Case (3)—Balance: When fB takes an intermediate value (e.g., between the thresholds 0.3 and 0.4), we keep the learning strategy of the whole population unchanged to achieve a balance between exploration and exploitation. Once the evolutionary state has been estimated, the mutation strategy can be controlled adaptively: different learning strategies are used according to the population state to preserve diversity, so the whole population balances exploration and exploitation. The flowchart of the algorithm is shown in Figure 1. All input and output parameters have been explained in the corresponding sections of the text and are not repeated here.
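The three ALS cases, together with the Case 2 velocity rule of Equation (7), can be sketched as follows. The 0.4/0.3 thresholds follow the examples given above; the parameter values in `velocity_case2` are illustrative assumptions:

```python
import random

def select_case(f_B, explore_th=0.4, exploit_th=0.3):
    """ALS state selection; thresholds follow the examples in the text."""
    if f_B > explore_th:
        return 1  # exploration: update with Equations (1) and (2)
    if f_B < exploit_th:
        return 2  # exploitation: update with Equations (7) and (2)
    return 3      # balance: keep the current updating method

def velocity_case2(v, x, pb, e, w=0.7, c1=1.5, c2=1.5):
    """Equation (7), one dimension: gbest is replaced by an exemplar e drawn
    at random from the historical best positions of all particles."""
    r1, r2 = random.random(), random.random()
    return w * v + c1 * r1 * (pb - x) + c2 * r2 * (e - x)
```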

4. Experiments

4.1. Experimental Fundamentals

APSO-SL is tested on the CEC2013 [41,42] test suite and one practical engineering problem. CEC2013 is an authoritative benchmark in the field of single-objective evolutionary computing, comprising 28 benchmark functions (listed in the Supplementary Materials), and is widely used as a platform for evaluating algorithm performance.
The proposed APSO-SL is compared with six competitive PSO variants. The first comparison method is traditional PSO [13], which allows a direct comparison between APSO-SL and the baseline. The second peer algorithm is HCLPSO [29], in which two populations collaborate to find the optimum. TAPSO [30] is the third peer method, which uses three different strategies to update the population. The fourth method is CLPSO [28], which uses a novel comprehensive learning mechanism. The fifth peer algorithm is DMS-PSO [38], in which multiple sub-populations work together. EPSO [27] is the sixth comparison algorithm, in which five classic algorithms are integrated and the best-performing one is selected based on scores in each iteration. The parameter settings of all comparison algorithms are shown in Table 1.

4.2. Experimental Analysis on CEC2013

To test the effect of APSO-SL, extensive experiments are conducted on CEC2013 and CEC2017 test suites, and the solutions are shown in Table 2 and Table 3, where the best results are expressed in bold.

4.2.1. Accuracy Analysis

For the five unimodal functions, the solutions in Table 2 and Table 3 show that APSO-SL obtains excellent results in both the 30-D and 50-D conditions, followed by TAPSO and HCLPSO. Although HCLPSO does not achieve the best result on every function, its overall performance is excellent. For the 15 multimodal functions, APSO-SL obtains the best solutions in the 30-D condition, achieving the best result on 11 of the 15 functions. In the 50-D condition, APSO-SL also performs excellently, obtaining the best solutions on 8 of the 15 multimodal functions. In addition, TAPSO and HCLPSO also achieve preferable results on the multimodal functions. For the eight composition functions, Table 2 and Table 3 show that APSO-SL achieves remarkable results in both the 30-D and 50-D conditions. In conclusion, the population distribution evaluation mechanism (PDEM) and adaptive learning strategy (ALS) are effective. We also use box-plot charts to evaluate the capabilities of all comparison methods; due to the page limit, only some typical test functions are selected. The performances of APSO-SL and some excellent peer algorithms are shown in Figure 2, and the box plots show that APSO-SL exhibits a more stable search performance in all conditions.

4.2.2. Convergence Analysis

In this part, we compare APSO-SL with three competitive peer methods (TAPSO, HCLPSO and DMS-PSO); the experimental results are displayed in Figure 3. The abscissa is the number of fitness evaluations (100 × D), and the ordinate is the difference between the solution obtained by each method and the true optimum. For fairness, each method is run 30 times independently. Due to space limitations, only three multimodal functions (f9, f18, f20) and three composition functions (f23, f26, f28) are selected for discussion. The results in Figure 3 show that APSO-SL has excellent convergence performance on most functions (f9, f18, f20, f23, f26). On function f28, the convergence of APSO-SL matches that of DMS-PSO and HCLPSO but is worse than TAPSO.

4.2.3. Statistical Analysis

Statistical analysis is an important part of data processing. We use the Friedman test to summarize the accuracy results of all peer algorithms in Table 2 and Table 3; the results are displayed in Table 4 and show that APSO-SL achieves the best rank in both dimensions. Hence, APSO-SL has obvious advantages.
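The mean Friedman ranks reported in tables such as Table 4 can be reproduced with a small rank-averaging routine. The sketch below is a generic implementation; the dictionary layout is an assumption for illustration, not the paper's code:

```python
def friedman_mean_ranks(errors):
    """Mean Friedman rank of each algorithm across test functions.

    errors -- dict mapping algorithm name to a list of error values,
              one per function (lower is better); ties receive averaged ranks.
    """
    algs = list(errors)
    n_funcs = len(next(iter(errors.values())))
    total = {a: 0.0 for a in algs}
    for f in range(n_funcs):
        ordered = sorted(algs, key=lambda a: errors[a][f])
        i = 0
        while i < len(ordered):
            j = i
            # group algorithms tied on this function
            while j + 1 < len(ordered) and errors[ordered[j + 1]][f] == errors[ordered[i]][f]:
                j += 1
            avg_rank = (i + j) / 2 + 1  # ranks are 1-based
            for k in range(i, j + 1):
                total[ordered[k]] += avg_rank
            i = j + 1
    return {a: total[a] / n_funcs for a in algs}
```

The algorithm with the smallest mean rank is the overall winner under the Friedman test.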

4.3. Results on CEC2017

To test the proposed APSO-SL more thoroughly, CEC2017 is used to evaluate the performance of all comparison methods. To limit the length, only the 30-D condition is reported. Table 5 shows that APSO-SL achieves the best results on the unimodal functions. For the seven multimodal functions, APSO-SL obtains the best solutions on four of the seven (f5, f6, f8 and f9), followed by EPSO and DMS-PSO. On the hybrid and composition functions, APSO-SL obtains preferable results. These results show that APSO-SL performs competitively on the CEC2017 test suite.

4.4. APSO-SL for Engineering Application Problem

The optimization problem of tension springs is a classic problem in the field of engineering. The goal is to minimize the spring weight [36] with respect to the wire diameter d (x1), mean coil diameter D (x2) and number of active coils N (x3).
Its schematic diagram is shown in Figure 4 and the principle of the formula is shown in Equation (8). The experimental results of APSO-SL and six peer algorithms are shown in Table 6 and the results reveal that APSO-SL is competitive.
min f(x) = (x3 + 2) x2 x1^2
s.t.  g1(x) = 1 − x2^3 x3 / (71785 x1^4) ≤ 0
      g2(x) = (4 x2^2 − x1 x2) / (12566 (x2 x1^3 − x1^4)) + 1 / (5108 x1^2) − 1 ≤ 0
      g3(x) = 1 − 140.45 x1 / (x2^2 x3) ≤ 0
      g4(x) = (x1 + x2) / 1.5 − 1 ≤ 0        (8)
where 0.05 ≤ x1 ≤ 2, 0.25 ≤ x2 ≤ 1.30, 2.00 ≤ x3 ≤ 15.
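The objective and constraints of Equation (8) translate directly into code. The static-penalty fitness below is one common (assumed) way to feed the constrained problem to a PSO; the paper does not specify its constraint-handling scheme:

```python
def spring_weight(x):
    """Objective of Equation (8): f(x) = (x3 + 2) * x2 * x1^2."""
    x1, x2, x3 = x
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x):
    """Constraints g1..g4 of Equation (8); the design is feasible when all are <= 0."""
    x1, x2, x3 = x
    g1 = 1.0 - (x2 ** 3 * x3) / (71785.0 * x1 ** 4)
    g2 = ((4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
          + 1.0 / (5108.0 * x1 ** 2) - 1.0)
    g3 = 1.0 - 140.45 * x1 / (x2 ** 2 * x3)
    g4 = (x1 + x2) / 1.5 - 1.0
    return [g1, g2, g3, g4]

def penalized_fitness(x, penalty=1e6):
    """Static-penalty fitness: add a large cost for each violated constraint."""
    return spring_weight(x) + penalty * sum(max(0.0, g) for g in spring_constraints(x))
```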

5. Conclusions

We propose an adaptive PSO with state-based learning strategy (APSO-SL). In PDEM, the population center position and best position are used for calculating the whole population state. In this way, the whole population state can be evaluated more intuitively and accurately without excessive computation. In the second strategy, ALS, different learning strategies are adopted based on the population state to ensure the whole population diversity. Specifically, if the population diversity is high, we conduct a global search. If the population diversity is low, we carry out a local search.
We can draw some conclusions through experimental analysis. First, PDEM can be used to evaluate the population state more intuitively and accurately. Second, ALS can be used to achieve a balance between global and local search. Therefore, the PDEM and ALS strategies proposed in this article can effectively improve the diversity and purposefulness of the population, have good universality, and can be widely promoted. In future work, we can investigate the effectiveness of PDEM and ALS strategies in solving large-scale problems and expensive optimization problems.
However, PDEM and ALS still have limitations: they lack theoretical proof and sufficient experimentation. In future work, on the methodological side we should design more efficient mechanisms, and on the application side we should apply our method to more engineering problems. In addition, we should explore the performance of the proposed method on large-scale, expensive, robust, multitasking and dynamic optimization problems.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/pr12020400/s1.

Author Contributions

Conceptualization, X.Y.; Software, X.Y.; Investigation, X.Y.; Resources, M.G.; Writing—original draft, M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article and supplementary materials.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Yuan, Q.; Sun, R.; Du, X. Path Planning of Mobile Robots Based on an Improved Particle Swarm Optimization Algorithm. Processes 2022, 11, 26. [Google Scholar] [CrossRef]
  2. Yang, X.; Li, H. Multi-sample learning particle swarm optimization with adaptive crossover operation. Math. Comput. Simul. 2023, 208, 246–282. [Google Scholar] [CrossRef]
  3. Ali, Y.A.; Awwad, E.M.; Al-Razgan, M.; Maarouf, A. Hyperparameter Search for Machine Learning Algorithms for Optimizing the Computational Complexity. Processes 2023, 11, 349. [Google Scholar] [CrossRef]
  4. Azrag, M.A.K.; Zain, J.M.; Kadir, T.A.A.; Yusoff, M.; Jaber, A.S.; Abdlrhman, H.S.M.; Ahmed, Y.H.Z.; Husain, M.S.B. Estimation of Small-Scale Kinetic Parameters of Escherichia coli (E. coli) Model by Enhanced Segment Particle Swarm Optimization Algorithm ESe-PSO. Processes 2023, 11, 126. [Google Scholar] [CrossRef]
  5. Chen, H.; Cheng, R.; Pedrycz, W.; Jin, Y. Solving Many-Objective Optimization Problems via Multistage Evolutionary Search. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 3552–3564. [Google Scholar] [CrossRef]
  6. Castillo, O.; Melin, P.; Ontiveros, E.; Peraza, C.; Ochoa, P.; Valdez, F.; Soria, J. A high-speed interval type 2 fuzzy system approach for dynamic parameter adaptation in metaheuristics. Eng. Appl. Artif. Intell. 2021, 85, 666–680. [Google Scholar] [CrossRef]
  7. Li, X.; Mao, K.; Lin, F.; Zhang, X. Particle swarm optimization with state-based adaptive velocity limit strategy. Neurocomputing 2021, 447, 64–79. [Google Scholar] [CrossRef]
  8. Yang, X.; Li, H.; Yu, X. Adaptive heterogeneous comprehensive learning particle swarm optimization with history information and dimensional mutation. Multimed. Tools Appl. 2022, 82, 9785–9817. [Google Scholar] [CrossRef]
  9. Xia, X.; Gui, L.; Zhan, Z.-H. A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting. Appl. Soft Comput. 2018, 67, 126–140. [Google Scholar] [CrossRef]
  10. Sun, G.; Yang, B.; Yang, Z.; Xu, G. An adaptive differential evolution with combined strategy for global numerical optimization. Soft Comput. 2019, 24, 6277–6296. [Google Scholar] [CrossRef]
  11. Jiang, L.; Wang, X. Research on the Participation of Household Battery Energy Storage in the Electricity Peak Regulation Ancillary Service Market. Processes 2023, 11, 794. [Google Scholar] [CrossRef]
  12. Li, L.; Li, Y.; Lin, Q.; Ming, Z.; Coello, C.A.C. A convergence and diversity guided leader selection strategy for many-objective particle swarm optimization. Eng. Appl. Artif. Intell. 2022, 115, 105249. [Google Scholar] [CrossRef]
  13. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995. [Google Scholar]
  14. Hancer, E.; Xue, B.; Zhang, M.; Karaboga, D.; Akay, B. Pareto front feature selection based on artificial bee colony optimization. Inf. Sci. 2018, 422, 462–479. [Google Scholar] [CrossRef]
  15. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  16. Rosso, M.M.; Aloisio, A.; Cucuzza, R.; Asso, R.; Marano, G.C. Structural Optimization with the Multistrategy PSO-ES Unfeasible Local Search Operator. In Proceedings of the International Conference on Data Science and Applications: ICDSA 2022, Kolkata, India, 26–27 March 2022; pp. 215–229. [Google Scholar]
  17. Marano, G.C.; Cucuzza, R. Structural Optimization Through Cutting Stock Problem. In Italian Workshop on Shell and Spatial Structures; Springer: Cham, Switzerland, 2023; pp. 210–220. [Google Scholar]
  18. Xia, X.; Xing, Y.; Wei, B.; Zhang, Y.; Li, X.; Deng, X.; Gui, L. A fitness-based multi-role particle swarm optimization. Swarm Evol. Comput. 2019, 44, 349–364. [Google Scholar] [CrossRef]
  19. Chen, K.; Zhou, F.; Yin, L.; Wang, S.; Wang, Y.; Wan, F. A hybrid particle swarm optimizer with sine cosine acceleration coefficients. Inf. Sci. 2018, 422, 218–241. [Google Scholar] [CrossRef]
  20. Lin, J.C.-W.; Yang, L.; Fournier-Viger, P.; Hong, T.-P.; Voznak, M. A binary PSO approach to mine high-utility itemsets. Soft Comput. 2016, 21, 5103–5121. [Google Scholar] [CrossRef]
  21. Tian, D.; Zhao, X.; Shi, Z. Chaotic particle swarm optimization with sigmoid-based acceleration coefficients for numerical function optimization. Swarm Evol. Comput. 2019, 51, 100573. [Google Scholar] [CrossRef]
  22. Chen, K.; Zhou, F.; Wang, Y.; Yin, L. An ameliorated particle swarm optimizer for solving numerical optimization problems. Appl. Soft Comput. 2018, 73, 482–496. [Google Scholar] [CrossRef]
  23. Liang, H.; Kang, F. Adaptive mutation particle swarm algorithm with dynamic nonlinear changed inertia weight. Optik 2016, 127, 8036–8042. [Google Scholar] [CrossRef]
  24. Rosso, M.M.; Cucuzza, R.; Aloisio, A.; Marano, G.C. Enhanced Multi-Strategy Particle Swarm Optimization for Constrained Problems with an Evolutionary-Strategies-Based Unfeasible Local Search Operator. Appl. Sci. 2022, 12, 2285. [Google Scholar] [CrossRef]
  25. Rosso, M.M.; Cucuzza, R.; Di Trapani, F.; Marano, G.C. Nonpenalty Machine Learning Constraint Handling Using PSO-SVM for Structural Optimization. Adv. Civ. Eng. 2021, 2021, 6617750. [Google Scholar] [CrossRef]
  26. Jiyue, E.; Liu, J.; Wan, Z. A novel adaptive algorithm of particle swarm optimization based on the human social learning intelligence. Swarm Evol. Comput. 2023, 80, 101336. [Google Scholar] [CrossRef]
  27. Lynn, N.; Suganthan, P.N. Ensemble particle swarm optimizer. Appl. Soft Comput. 2017, 55, 533–548. [Google Scholar] [CrossRef]
  28. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  29. Lynn, N.; Suganthan, P.N. Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm Evol. Comput. 2015, 24, 11–24. [Google Scholar] [CrossRef]
  30. Xia, X.; Gui, L.; Yu, F.; Wu, H.; Wei, B.; Zhang, Y.-L.; Zhan, Z.-H. Triple Archives Particle Swarm Optimization. IEEE Trans. Cybern. 2020, 50, 4862–4875. [Google Scholar] [CrossRef]
  31. Li, W.; Jing, J.; Chen, Y.; Chen, Y. A cooperative particle swarm optimization with difference learning. Inf. Sci. 2023, 643, 119238. [Google Scholar] [CrossRef]
  32. Xia, X.; Gui, L.; He, G.; Wei, B.; Zhang, Y.; Yu, F.; Wu, H.; Zhan, Z.-H. An expanded particle swarm optimization based on multi-exemplar and forgetting ability. Inf. Sci. 2020, 508, 105–120. [Google Scholar] [CrossRef]
  33. Shankar, R.; Ganesh, N.; Čep, R.; Narayanan, R.C.; Pal, S.; Kalita, K. Hybridized Particle Swarm—Gravitational Search Algorithm for Process Optimization. Processes 2022, 10, 616. [Google Scholar] [CrossRef]
  34. Ghorbanpour, S.; Jin, Y.; Han, S. Differential Evolution with Adaptive Grid-Based Mutation Strategy for Multi-Objective Optimization. Processes 2022, 10, 2316. [Google Scholar] [CrossRef]
  35. Yang, X.; Li, H.; Huang, Y. An adaptive dynamic multi-swarm particle swarm optimization with stagnation detection and spatial exclusion for solving continuous optimization problems. Eng. Appl. Artif. Intell. 2023, 123, 106215. [Google Scholar] [CrossRef]
  36. Yang, X.; Li, H. Evolutionary-state-driven multi-swarm cooperation particle swarm optimization for complex optimization problem. Inf. Sci. 2023, 646, 119302. [Google Scholar] [CrossRef]
  37. Yang, X.; Li, H.; Yu, X. A dynamic multi-swarm cooperation particle swarm optimization with dimension mutation for complex optimization problem. Int. J. Mach. Learn. Cybern. 2022, 13, 2581–2608. [Google Scholar] [CrossRef]
  38. Liang, J.; Suganthan, P. Dynamic Multi-Swarm Particle Swarm Optimizer with Local Search. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Scotland, UK, 2–5 September 2005. [Google Scholar]
  39. Peram, T.; Veeramachaneni, K.; Mohan, C.K. Fitness-Distance-Ratio Based Particle Swarm Optimization. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium, Indianapolis, IN, USA, 24–26 April 2003. [Google Scholar]
  40. Lu, J.; Zhang, J.; Sheng, J. Enhanced multi-swarm cooperative particle swarm optimizer. Swarm Evol. Comput. 2021, 69, 100989. [Google Scholar] [CrossRef]
  41. Liang, J.J.; Qu, B.-Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session on Real-Parameter Optimization; Technical Report; Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013. [Google Scholar]
  42. Awad, N.H.; Ali, M.Z.; Suganthan, P.N.; Liang, J. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016. [Google Scholar]
Figure 1. The overall flowchart of the proposed algorithm.
Figure 2. Box-plot figures of peer methods on CEC2013. (a) f09; (b) f18; (c) f20; (d) f23; (e) f26; (f) f28.
Figure 3. Convergence characteristics of APSO-SL and other methods. (a) f09; (b) f18; (c) f20; (d) f23; (e) f26; (f) f28.
Figure 4. Tension spring optimization problem.
Table 1. Parameters for six peer methods and APSO-SL.

| Algorithm | Year | Parameter Settings |
|---|---|---|
| PSO [13] | 1995 | w = 0.729, c1 = c2 = 1.49445 |
| HCLPSO [29] | 2015 | w = 0.729, c = 3–1.5, c1 = 2.5–0.5, c2 = 0.5–2.5, a = 0, b = 0.25 |
| TAPSO [30] | 2020 | w = 0.7298, pc = 0.5, pm = 0.01, sg = 7 |
| CLPSO [28] | 2006 | w = 0.9–0.2, c = 1.49445, G = 5 |
| DMS-PSO [38] | 2006 | w = 0.729, c1 = c2 = 1.49445 |
| EPSO [27] | 2017 | ensemble of wPSO, CLPSO, FDRPSO, HPSO-TVAC and LIPS |
| APSO-SL | 2023 | w = 0.729, c1 = c2 = 1.49445 |
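The w, c1, and c2 values in Table 1 parameterize the canonical inertia-weight PSO update, v ← w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x). The sketch below illustrates that baseline update (the generic PSO of [13], not the authors' APSO-SL code); the sphere objective, swarm size, bounds, and iteration budget are placeholder choices for illustration only.

```python
import random

# Canonical inertia-weight PSO with the Table 1 settings:
# w = 0.729, c1 = c2 = 1.49445.
W, C1, C2 = 0.729, 1.49445, 1.49445

def sphere(x):
    """Placeholder objective: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(v * v for v in x)

def pso(f, dim=10, swarm=30, iters=200, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]                  # personal best positions
    pf = [f(x) for x in P]                 # personal best fitness values
    g = min(range(swarm), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]                 # global best position / fitness
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (W * V[i][d]
                           + C1 * r1 * (P[i][d] - X[i][d])
                           + C2 * r2 * (G[d] - X[i][d]))
                # clamp the new position to the search bounds
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pf[i]:                 # update personal best
                P[i], pf[i] = X[i][:], fx
                if fx < gf:                # update global best
                    G, gf = X[i][:], fx
    return gf

best = pso(sphere)
print(best)  # a small value near 0 on this easy unimodal problem
```

On a unimodal problem like the sphere, this plain gbest PSO converges quickly; the premature-convergence issue that APSO-SL targets shows up on the multimodal CEC functions reported below.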
Table 2. Results of all methods on CEC2013 (D = 30). Entries are mean (standard deviation).

| Function | PSO | CLPSO | EPSO | HCLPSO | DMS-PSO | TAPSO | APSO-SL |
|---|---|---|---|---|---|---|---|
| F1 | 1.20E+02 (1.88E+02) | 2.20E-13 (4.21E-13) | 2.24E-13 (4.33E-13) | 2.69E-13 (1.20E-13) | 1.72E-13 (1.09E-13) | 3.77E-15 (1.62E-13) | 0.00E+00 (0.00E+00) |
| F2 | 1.32E+07 (1.28E+07) | 1.30E+06 (7.26E+05) | 2.38E+05 (1.35E+05) | 8.69E+05 (3.79E+05) | 1.31E+06 (6.20E+05) | 1.40E+06 (2.88E+06) | 1.25E+05 (7.85E+04) |
| F3 | 2.44E+08 (2.11E+08) | 4.51E+07 (4.22E+07) | 1.16E+08 (1.39E+08) | 2.90E+07 (2.80E+07) | 2.38E+07 (3.92E+07) | 1.99E+06 (1.85E+06) | 7.95E+06 (1.21E+06) |
| F4 | 2.41E+02 (2.22E+02) | 5.45E+03 (2.52E+03) | 3.49E+02 (2.44E+02) | 1.75E+03 (1.31E+03) | 4.40E+03 (1.54E+03) | 1.71E+02 (1.18E+02) | 1.10E+02 (1.02E+02) |
| F5 | 7.11E-13 (2.45E-12) | 3.20E-13 (8.68E-13) | 2.50E-13 (5.69E-13) | 3.99E-13 (8.44E-13) | 1.31E-13 (2.09E-13) | 4.19E-13 (1.21E-13) | 3.79E-13 (1.02E-13) |
| F6 | 4.49E+01 (2.69E+01) | 2.55E+01 (2.29E+01) | 1.51E+01 (5.63E+00) | 1.70E+01 (2.41E+00) | 2.11E+01 (2.10E+01) | 2.66E+01 (2.45E+01) | 7.88E-01 (2.23E+00) |
| F7 | 6.11E+01 (2.22E+01) | 3.77E+01 (1.66E+01) | 3.72E+01 (1.77E+01) | 2.21E+01 (7.66E+00) | 1.11E+01 (5.92E+00) | 3.84E+00 (2.81E+00) | 1.86E+01 (4.72E+00) |
| F8 | 2.09E+01 (5.78E-02) | 2.09E+01 (5.61E-02) | 2.09E+01 (6.44E-02) | 2.09E+01 (2.62E-02) | 2.09E+01 (4.50E-02) | 2.09E+01 (2.38E-02) | 2.09E+01 (0.00E+00) |
| F9 | 2.49E+01 (4.11E+00) | 2.20E+01 (4.55E+00) | 2.51E+01 (3.40E+00) | 1.81E+01 (3.62E+00) | 2.41E+01 (3.20E+00) | 1.77E+01 (2.32E+01) | 1.44E+01 (4.26E+00) |
| F10 | 3.77E-01 (3.44E-01) | 1.82E-01 (8.43E-02) | 2.11E-01 (1.20E-01) | 2.45E-01 (1.23E-01) | 2.62E-01 (1.45E-01) | 1.20E-01 (4.88E-02) | 6.17E-02 (3.77E-02) |
| F11 | 3.31E+01 (1.44E+01) | 2.53E+01 (6.34E+00) | 3.29E-02 (1.82E-01) | 3.95E-02 (1.86E-01) | 3.33E+01 (1.34E+01) | 1.16E+01 (3.66E+00) | 1.34E+01 (5.35E+00) |
| F12 | 8.20E+01 (2.49E+01) | 7.32E+01 (2.22E+01) | 7.31E+01 (2.35E+01) | 6.33E+01 (1.87E+01) | 3.41E+01 (9.76E+00) | 3.92E+01 (1.90E+01) | 3.13E+01 (1.23E+01) |
| F13 | 1.70E+02 (3.15E+01) | 1.31E+02 (3.11E+01) | 1.21E+02 (3.79E+01) | 1.29E+02 (2.99E+01) | 5.48E+01 (1.67E+01) | 1.22E+02 (5.11E+01) | 7.08E+01 (2.58E+01) |
| F14 | 1.61E+03 (3.33E+02) | 1.19E+03 (2.38E+02) | 1.66E+01 (2.84E+01) | 1.75E+01 (4.25E+01) | 2.42E+03 (4.35E+02) | 1.31E+03 (4.66E+02) | 1.49E+01 (9.94E+00) |
| F15 | 3.70E+03 (6.22E+02) | 3.81E+03 (9.71E+02) | 3.62E+03 (4.55E+02) | 3.58E+03 (5.71E+02) | 3.38E+03 (2.59E+02) | 3.23E+03 (1.78E+03) | 2.99E+03 (5.02E+02) |
| F16 | 1.70E+00 (3.40E-01) | 1.24E+00 (4.75E-01) | 1.51E+00 (2.81E-01) | 1.42E+00 (2.51E-01) | 1.31E+00 (1.77E-01) | 1.29E+00 (2.62E-01) | 1.18E+00 (1.18E-01) |
| F17 | 7.24E+01 (1.66E+01) | 6.81E+01 (1.33E+01) | 3.51E+01 (1.78E+00) | 3.18E+01 (1.90E+01) | 6.31E+01 (9.96E+00) | 6.16E+01 (4.55E+01) | 2.55E+01 (1.33E+00) |
| F18 | 9.29E+01 (2.15E+01) | 7.66E+01 (1.51E+01) | 1.34E+02 (5.59E+01) | 8.45E+01 (1.75E+01) | 9.11E+01 (7.49E+00) | 1.96E+02 (8.66E+00) | 7.35E+01 (2.71E+00) |
| F19 | 3.95E+00 (1.67E+00) | 3.29E+00 (7.41E-01) | 1.96E+00 (3.51E-01) | 1.71E+00 (2.66E-01) | 3.19E+00 (7.60E-01) | 2.45E+00 (5.11E-01) | 2.13E+00 (3.00E-01) |
| F20 | 1.37E+01 (7.55E-01) | 1.40E+01 (1.22E+00) | 1.21E+01 (1.76E+00) | 1.12E+01 (8.85E-01) | 1.19E+01 (5.77E-01) | 1.51E+01 (1.30E+00) | 8.99E+00 (4.77E-01) |
| F21 | 8.24E+02 (4.62E+02) | 2.95E+02 (9.38E+01) | 2.31E+02 (3.72E+01) | 2.42E+02 (4.67E+01) | 2.91E+02 (7.77E+01) | 2.18E+02 (8.22E+01) | 2.09E+02 (7.22E+01) |
| F22 | 1.52E+03 (4.90E+02) | 8.45E+02 (2.19E+02) | 1.21E+02 (5.61E+01) | 1.08E+02 (2.55E+01) | 1.90E+03 (5.42E+02) | 9.11E+01 (2.88E+01) | 8.98E+01 (7.44E+00) |
| F23 | 4.62E+03 (8.51E+02) | 4.11E+03 (6.71E+02) | 4.24E+03 (5.59E+02) | 4.23E+03 (5.66E+02) | 3.56E+03 (2.58E+02) | 4.11E+03 (2.20E+03) | 3.32E+03 (2.51E+02) |
| F24 | 2.79E+02 (1.11E+01) | 2.49E+02 (1.42E+01) | 2.56E+02 (1.22E+01) | 2.51E+02 (9.77E+00) | 2.19E+02 (8.22E+00) | 2.20E+02 (1.69E+01) | 2.28E+02 (8.72E+00) |
| F25 | 3.23E+02 (1.20E+01) | 2.78E+02 (1.30E+01) | 2.88E+02 (7.51E+00) | 2.83E+02 (1.52E+01) | 2.71E+02 (1.17E+01) | 2.59E+02 (7.22E+00) | 2.44E+02 (6.11E+00) |
| F26 | 2.50E+02 (7.41E+01) | 2.31E+02 (4.90E+01) | 2.34E+02 (2.80E+01) | 2.20E+02 (2.55E-02) | 2.33E+02 (4.55E+01) | 2.11E+02 (4.91E+01) | 2.00E+02 (2.11E-03) |
| F27 | 8.51E+02 (9.22E+01) | 8.15E+02 (1.19E+02) | 8.09E+02 (1.50E+02) | 5.44E+02 (8.99E+01) | 5.43E+02 (8.85E+01) | 4.72E+02 (1.35E+02) | 4.95E+02 (9.22E+01) |
| F28 | 4.41E+02 (5.23E+02) | 3.45E+02 (4.52E+02) | 2.96E+02 (3.55E+01) | 3.11E+02 (2.22E-13) | 3.00E+02 (2.79E-13) | 2.97E+02 (4.61E+01) | 3.00E+02 (1.55E-13) |
Table 3. Results of all methods on CEC2013 (D = 50). Entries are mean (standard deviation).

| Function | PSO | CLPSO | EPSO | HCLPSO | DMS-PSO | TAPSO | APSO-SL |
|---|---|---|---|---|---|---|---|
| F1 | 3.41E+02 (6.32E+02) | 5.42E-13 (1.42E-13) | 3.29E-13 (1.32E-13) | 6.51E-13 (1.27E-13) | 2.42E-13 (0.00E+00) | 2.22E-13 (6.23E-13) | 0.00E+00 (0.00E+00) |
| F2 | 2.60E+07 (2.51E+07) | 2.61E+06 (8.69E+05) | 6.77E+05 (2.66E+05) | 1.71E+06 (4.82E+05) | 1.51E+06 (5.09E+05) | 1.31E+07 (6.70E+06) | 3.22E+05 (1.23E+05) |
| F3 | 5.32E+08 (4.60E+08) | 7.80E+08 (1.35E+09) | 4.61E+08 (4.59E+08) | 2.69E+08 (2.32E+08) | 1.20E+08 (1.40E+08) | 3.69E+07 (6.41E+07) | 1.86E+07 (1.27E+06) |
| F4 | 7.51E+02 (8.66E+02) | 1.31E+04 (3.62E+03) | 2.11E+02 (8.51E+01) | 1.70E+03 (5.91E+02) | 7.49E+03 (2.56E+03) | 6.55E+02 (1.59E+02) | 1.86E+01 (7.77E+00) |
| F5 | 1.39E-10 (4.41E-10) | 7.73E-13 (2.73E-13) | 5.37E-13 (1.33E-13) | 7.66E-13 (1.42E-13) | 2.18E-13 (4.71E-13) | 3.21E-13 (1.88E-13) | 1.18E-13 (4.30E-13) |
| F6 | 6.21E+01 (2.51E+01) | 4.50E+01 (1.97E+00) | 4.41E+01 (1.22E+00) | 4.50E+01 (6.34E-01) | 5.37E+01 (2.33E+01) | 4.20E+01 (4.32E+00) | 4.42E+01 (8.19E-01) |
| F7 | 7.48E+01 (1.62E+01) | 7.20E+01 (2.51E+01) | 6.29E+01 (1.80E+01) | 3.92E+01 (9.95E+00) | 3.11E+01 (8.37E+00) | 2.77E+01 (6.56E+00) | 3.20E+01 (5.41E+00) |
| F8 | 2.11E+01 (0.00E+00) | 2.11E+01 (0.00E+00) | 2.11E+01 (0.00E+00) | 2.11E+01 (0.00E+00) | 2.11E+01 (0.00E+00) | 2.11E+01 (0.00E+00) | 2.11E+01 (0.00E+00) |
| F9 | 5.11E+01 (5.55E+00) | 4.52E+01 (5.66E+00) | 4.78E+01 (4.56E+00) | 4.31E+01 (5.77E+00) | 4.43E+01 (4.33E+00) | 2.60E+01 (4.33E+00) | 2.11E+01 (3.08E+00) |
| F10 | 2.20E+02 (3.71E+02) | 2.31E-01 (1.39E-01) | 2.30E-01 (1.30E-01) | 2.31E-01 (1.46E-01) | 2.82E-01 (1.41E-01) | 7.70E-02 (4.23E-02) | 4.21E-02 (1.91E-02) |
| F11 | 6.81E+01 (2.31E+01) | 7.51E+01 (1.77E+01) | 9.82E-02 (4.11E-01) | 2.33E-13 (6.61E-13) | 8.61E+01 (1.86E+01) | 6.33E+01 (1.23E+01) | 2.36E+01 (6.16E+00) |
| F12 | 2.22E+02 (5.51E+01) | 1.49E+02 (5.27E+01) | 1.64E+02 (4.72E+01) | 1.25E+02 (2.88E+01) | 7.66E+01 (1.86E+01) | 1.34E+02 (4.11E+01) | 5.69E+01 (1.24E+01) |
| F13 | 3.50E+02 (5.11E+01) | 2.92E+02 (4.49E+01) | 2.97E+02 (6.51E+01) | 2.57E+02 (6.88E+01) | 1.66E+02 (2.99E+01) | 2.11E+02 (4.76E+01) | 1.37E+02 (4.95E+01) |
| F14 | 2.70E+03 (6.26E+02) | 2.29E+03 (5.11E+02) | 3.42E+01 (5.25E+01) | 1.88E+02 (1.34E+02) | 4.77E+03 (7.60E+02) | 2.33E+03 (4.55E+02) | 2.05E+02 (1.92E+01) |
| F15 | 7.71E+03 (9.66E+02) | 7.80E+03 (1.33E+03) | 7.58E+03 (6.35E+02) | 7.24E+03 (7.11E+02) | 7.23E+03 (4.09E+02) | 6.64E+03 (2.57E+03) | 6.82E+03 (1.25E+02) |
| F16 | 2.60E+00 (5.45E-01) | 1.53E+00 (3.11E-01) | 1.87E+00 (3.34E-01) | 1.71E+00 (3.51E-01) | 1.49E+00 (2.22E-01) | 3.34E+00 (2.31E-01) | 1.49E+00 (4.36E-01) |
| F17 | 1.50E+02 (2.55E+01) | 1.52E+02 (2.29E+01) | 5.78E+01 (3.64E+00) | 5.30E+01 (1.43E-01) | 1.19E+02 (2.33E+01) | 1.41E+02 (1.55E+01) | 8.40E+01 (6.49E+00) |
| F18 | 1.90E+02 (3.51E+01) | 1.62E+02 (2.73E+01) | 2.23E+02 (1.30E+02) | 1.78E+02 (3.33E+01) | 1.82E+02 (1.50E+01) | 3.29E+02 (6.29E+01) | 1.35E+02 (3.14E+01) |
| F19 | 8.90E+00 (3.14E+00) | 7.55E+00 (1.88E+00) | 3.51E+00 (6.90E-01) | 2.40E+00 (3.95E-01) | 6.85E+00 (1.96E+00) | 5.29E+00 (1.42E+00) | 5.20E+00 (8.41E-01) |
| F20 | 2.45E+01 (8.90E-01) | 2.23E+01 (1.39E+00) | 1.90E+01 (1.09E+00) | 2.11E+01 (8.23E-01) | 1.88E+01 (6.77E-01) | 2.12E+01 (7.65E-01) | 1.88E+01 (2.30E-01) |
| F21 | 2.09E+02 (6.31E+02) | 7.70E+02 (3.55E+02) | 3.38E+02 (2.61E+02) | 2.31E+02 (6.39E+01) | 7.31E+02 (4.20E+02) | 7.74E+02 (1.66E+02) | 5.72E+02 (5.17E+02) |
| F22 | 3.22E+03 (6.77E+02) | 2.83E+03 (6.33E+02) | 7.69E+01 (6.66E+01) | 3.55E+01 (3.70E+01) | 5.58E+03 (9.11E+02) | 2.88E+03 (4.55E+02) | 2.32E+03 (2.15E+02) |
| F23 | 9.11E+03 (1.35E+03) | 7.61E+03 (1.30E+03) | 8.70E+03 (1.41E+03) | 8.29E+03 (1.33E+03) | 7.80E+03 (6.41E+02) | 8.11E+03 (1.22E+03) | 7.11E+03 (1.16E+02) |
| F24 | 3.38E+02 (1.41E+01) | 3.33E+02 (1.71E+01) | 3.11E+02 (1.61E+01) | 2.83E+02 (1.44E+01) | 2.70E+02 (1.74E+01) | 3.11E+02 (1.66E+01) | 2.33E+02 (1.19E+01) |
| F25 | 3.77E+02 (2.22E+01) | 3.61E+02 (1.51E+01) | 3.72E+02 (1.09E+01) | 3.51E+02 (1.72E+01) | 3.50E+02 (1.42E+01) | 3.00E+02 (1.41E+01) | 1.88E+02 (6.66E+00) |
| F26 | 3.33E+02 (1.29E+02) | 3.29E+02 (1.23E+02) | 2.05E+02 (2.66E-02) | 2.12E+02 (5.16E-02) | 3.41E+02 (7.20E+01) | 3.49E+02 (4.44E+01) | 1.51E+02 (6.46E+00) |
| F27 | 1.51E+03 (1.41E+02) | 1.44E+03 (1.76E+02) | 1.66E+03 (2.20E+02) | 1.42E+03 (1.81E+02) | 1.18E+03 (1.72E+02) | 1.30E+03 (8.31E+01) | 6.88E+02 (5.41E+01) |
| F28 | 1.49E+03 (1.51E+03) | 5.55E+02 (1.22E+03) | 4.00E+02 (3.21E-12) | 4.00E+02 (5.11E-13) | 4.00E+02 (0.00E+00) | 4.00E+02 (0.00E+00) | 4.00E+02 (0.00E+00) |
Table 4. Friedman test of all compared algorithms on the CEC2013 test suite.

| Rank | Method (All) | Avg. Rank (All) | Method (30-D) | Avg. Rank (30-D) | Method (50-D) | Avg. Rank (50-D) |
|---|---|---|---|---|---|---|
| 1 | APSO-SL | 1.74 | APSO-SL | 1.55 | APSO-SL | 1.93 |
| 2 | TAPSO | 3.52 | TAPSO | 3.32 | HCLPSO | 3.54 |
| 3 | HCLPSO | 3.70 | HCLPSO | 3.86 | TAPSO | 3.71 |
| 4 | DMS-PSO | 3.96 | DMS-PSO | 3.98 | EPSO | 3.89 |
| 5 | EPSO | 4.04 | EPSO | 4.14 | DMS-PSO | 3.93 |
| 6 | CLPSO | 4.88 | CLPSO | 4.86 | CLPSO | 4.89 |
| 7 | PSO | 6.20 | PSO | 6.29 | PSO | 6.11 |
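Average ranks like those in Table 4 come from the standard Friedman procedure: on each benchmark function the algorithms are ranked by their mean error (1 = best, ties sharing the mean rank), and each algorithm's ranks are then averaged over all functions. A minimal sketch of that averaging step, using made-up toy numbers rather than the paper's data:

```python
def friedman_avg_ranks(results):
    """results[p][a] = mean error of algorithm a on problem p (lower is better).
    Returns each algorithm's average rank; tied values share the mean rank."""
    n_alg = len(results[0])
    totals = [0.0] * n_alg
    for row in results:
        order = sorted(range(n_alg), key=lambda a: row[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            j = i
            # extend j over any run of tied values
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1
            mean_rank = (i + j) / 2 + 1          # ranks are 1-based
            for k in range(i, j + 1):
                ranks[order[k]] = mean_rank
            i = j + 1
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / len(results) for t in totals]

# Toy example: 3 problems, 3 algorithms (illustrative numbers only).
toy = [
    [0.5, 0.1, 0.3],
    [2.0, 1.0, 1.0],   # tie between algorithms 1 and 2
    [0.9, 0.2, 0.4],
]
print(friedman_avg_ranks(toy))  # algorithm 1 gets the best (lowest) average rank
```

The Friedman statistic itself can then be computed from these average ranks (e.g. via scipy.stats.friedmanchisquare) to test whether the ranking differences are significant.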
Table 5. Results of all methods on CEC2017 (D = 30). Entries are mean errors.

| Function | PSO | CLPSO | EPSO | HCLPSO | DMS-PSO | TAPSO | APSO-SL |
|---|---|---|---|---|---|---|---|
| F1 | 4.86E+07 | 3.29E+03 | 1.20E+02 | 8.77E+01 | 2.82E+03 | 1.92E+03 | 1.34E-01 |
| F3 | 4.79E+01 | 2.09E+02 | 5.57E-11 | 7.33E-04 | 1.72E+01 | 3.33E-02 | 1.13E+00 |
| F4 | 6.77E+01 | 7.41E+01 | 2.10E+01 | 7.46E+01 | 5.39E+01 | 1.22E+02 | 3.26E+01 |
| F5 | 6.69E+01 | 5.78E+01 | 4.41E+01 | 3.77E+01 | 2.92E+01 | 7.69E+01 | 2.07E+01 |
| F6 | 2.71E-01 | 6.69E-04 | 4.33E-13 | 3.77E-13 | 3.34E-04 | 5.24E+00 | 2.61E-13 |
| F7 | 1.20E+02 | 9.48E+01 | 7.39E+01 | 8.39E+01 | 5.88E+01 | 1.19E+02 | 9.11E+01 |
| F8 | 6.71E+01 | 5.55E+01 | 5.01E+01 | 4.33E+01 | 2.77E+01 | 8.67E+01 | 1.94E+01 |
| F9 | 6.21E+01 | 2.52E+01 | 2.31E+01 | 1.20E+01 | 1.38E+00 | 5.77E+02 | 0.00E+00 |
| F10 | 3.52E+03 | 2.77E+03 | 1.58E+03 | 2.11E+03 | 2.74E+03 | 2.69E+03 | 2.46E+03 |
| F11 | 7.41E+01 | 4.66E+01 | 4.75E+01 | 5.55E+01 | 2.66E+01 | 1.20E+01 | 9.11E+01 |
| F12 | 1.41E+06 | 3.33E+04 | 2.50E+04 | 3.33E+04 | 2.97E+04 | 2.40E+04 | 4.20E+03 |
| F13 | 1.47E+04 | 1.58E+04 | 1.34E+03 | 4.22E+02 | 8.20E+03 | 6.54E+02 | 4.20E+02 |
| F14 | 6.32E+02 | 1.51E+04 | 3.70E+03 | 5.83E+03 | 2.66E+03 | 2.45E+03 | 1.94E+03 |
| F15 | 7.76E+02 | 5.77E+03 | 6.20E+02 | 2.35E+02 | 3.90E+03 | 6.81E+02 | 5.11E+02 |
| F16 | 8.75E+02 | 7.67E+02 | 6.34E+02 | 4.98E+02 | 3.00E+02 | 3.11E+02 | 1.20E+02 |
| F17 | 3.24E+02 | 1.86E+02 | 1.48E+02 | 1.19E+02 | 7.48E+01 | 4.19E+02 | 6.02E+01 |
| F18 | 3.19E+04 | 1.44E+05 | 1.32E+05 | 8.69E+04 | 1.19E+05 | 1.66E+04 | 3.16E+03 |
| F19 | 4.88E+03 | 9.41E+03 | 8.30E+02 | 1.44E+02 | 6.11E+03 | 1.23E+02 | 3.77E+03 |
| F20 | 2.66E+02 | 2.77E+02 | 2.04E+02 | 1.61E+02 | 1.58E+02 | 5.16E+01 | 1.01E+02 |
| F21 | 2.71E+02 | 2.77E+02 | 2.50E+02 | 2.45E+02 | 2.76E+02 | 2.99E+02 | 2.40E+02 |
| F22 | 1.22E+02 | 2.22E+02 | 1.99E+02 | 1.23E+02 | 1.22E+02 | 1.11E+02 | 1.00E+02 |
| F23 | 4.38E+02 | 4.31E+02 | 3.90E+02 | 3.66E+02 | 3.67E+02 | 3.68E+02 | 3.46E+02 |
| F24 | 4.10E+02 | 4.92E+02 | 4.55E+02 | 4.71E+02 | 4.52E+02 | 4.41E+02 | 4.20E+02 |
| F25 | 4.23E+02 | 3.99E+02 | 3.99E+02 | 3.91E+02 | 3.99E+02 | 4.22E+02 | 7.33E+02 |
| F26 | 4.17E+03 | 2.55E+03 | 4.22E+02 | 3.11E+02 | 1.41E+03 | 2.18E+03 | 1.93E+03 |
| F27 | 4.51E+02 | 4.44E+02 | 4.54E+02 | 4.35E+02 | 4.44E+02 | 4.22E+02 | 3.89E+02 |
| F28 | 4.22E+02 | 3.49E+02 | 3.08E+02 | 3.48E+02 | 3.50E+02 | 3.70E+02 | 3.00E+02 |
| F29 | 6.70E+02 | 6.61E+02 | 5.55E+02 | 5.77E+02 | 5.64E+02 | 3.83E+02 | 2.87E+02 |
| F30 | 3.88E+03 | 5.66E+03 | 3.67E+03 | 3.98E+03 | 7.98E+03 | 3.69E+02 | 2.12E+03 |
Table 6. Experimental results on the engineering problem.

| Algorithm | x1 | x2 | x3 | Optimal Cost |
|---|---|---|---|---|
| PSO | 0.051677 | 0.3567355 | 11.28898 | 0.012671 |
| CLPSO | 0.051799 | 0.3615000 | 11.00000 | 0.012655 |
| TAPSO | 0.051672 | 0.3567162 | 11.28855 | 0.012677 |
| DMS-PSO | 0.053017 | 0.3895322 | 9.600166 | 0.012701 |
| HCLPSO | 0.051689 | 0.3567160 | 11.28901 | 0.012665 |
| EPSO | 0.051709 | 0.3571073 | 11.27082 | 0.012672 |
| APSO-SL | 0.050010 | 0.3499867 | 11.84687 | 0.012233 |
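In the classical tension/compression spring design benchmark behind Table 6, x1 is the wire diameter d, x2 the mean coil diameter D, and x3 the number of active coils N, and the minimized weight is commonly written f(x) = (x3 + 2)·x2·x1². The sketch below re-evaluates that objective for a Table 6 solution; the constraint functions are the standard textbook formulation of this benchmark and are assumed here, not quoted from the paper.

```python
def spring_weight(x1, x2, x3):
    """Spring weight objective: (N + 2) * D * d^2."""
    return (x3 + 2.0) * x2 * x1 ** 2

def constraints_ok(x1, x2, x3):
    """Standard inequality constraints g_i(x) <= 0 of the benchmark
    (textbook formulation; assumed, not taken from this paper)."""
    g1 = 1.0 - x2 ** 3 * x3 / (71785.0 * x1 ** 4)                 # deflection
    g2 = ((4.0 * x2 ** 2 - x1 * x2)
          / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
          + 1.0 / (5108.0 * x1 ** 2) - 1.0)                       # shear stress
    g3 = 1.0 - 140.45 * x1 / (x2 ** 2 * x3)                       # surge frequency
    g4 = (x1 + x2) / 1.5 - 1.0                                    # outer diameter
    return all(g <= 0.0 for g in (g1, g2, g3, g4))

# Re-evaluating the PSO solution reported in Table 6:
w = spring_weight(0.051677, 0.3567355, 11.28898)
print(round(w, 6))  # close to the reported cost of 0.012671
```

Plugging in each row of Table 6 this way gives weights close to the reported costs, which is a quick sanity check on the tabulated solutions.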
Gao, M.; Yang, X. APSO-SL: An Adaptive Particle Swarm Optimization with State-Based Learning Strategy. Processes 2024, 12, 400. https://doi.org/10.3390/pr12020400