Article

A High-Performance Learning Particle Swarm Optimization Based on the Knowledge of Individuals for Large-Scale Problems

1 School of Architecture and Civil Engineering, Anhui Polytechnic University, Wuhu 241000, China
2 Engineering Research Center of Anhui Green Building and Digital Construction, Anhui Polytechnic University, Wuhu 241000, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(12), 2103; https://doi.org/10.3390/sym17122103
Submission received: 5 November 2025 / Revised: 2 December 2025 / Accepted: 4 December 2025 / Published: 7 December 2025
(This article belongs to the Section Computer)

Abstract

To improve the performance of particle swarm optimization in solving large-scale problems, a High-Performance Learning Particle Swarm Optimization (HPLPSO) based on the knowledge of individuals is proposed. In HPLPSO, two strategies are designed to balance global exploration and local exploitation according to the principle of symmetry, which emphasizes balance and consistency during the optimization process. A strategy in which elite individuals guide population updates is proposed to reduce the impact of local optimal positions. Meanwhile, a synchronous opposition-based learning strategy for multiple elite and poor individuals in the current iteration population is proposed to help individuals quickly jump out of non-ideal search areas. Based on classical test functions for large-scale problems, the performance of HPLPSO is tested in 100, 200, 500 and 1000 dimensions. The results show that HPLPSO converges to the theoretical optimal value in each of its 30 independent runs on 11 functions. Moreover, the mean variation from dimension 100 to 1000 shows that HPLPSO is little affected by dimensional changes. A case application further validates the performance of the algorithm in solving practical problems. Therefore, this paper provides a method with high optimization performance for solving large-scale problems.

1. Introduction

With the development of digital transformation, the scale of data is increasing rapidly. As a result, there are more and more large-scale problems, which are characterized by high dimensionality, large search spaces and many local optimal positions. Traditional methods therefore struggle to solve these problems, and swarm intelligence optimization algorithms have become the preferred approach for large-scale optimization.
Swarm intelligence optimization algorithms include Particle Swarm Optimization (PSO), gray wolf optimization, the whale optimization algorithm, etc. The theory of PSO is derived from the process of birds searching for food. PSO has been applied in many fields, for example, wireless body area networks [1], mixed feature selection for high-dimensional data [2], robot welding process parameter optimization [3] and path planning [4]. Such applications have, to a certain extent, solved problems that are difficult for traditional optimization methods.
During the application of PSO, scholars also found that the performance of standard PSO is not ideal in terms of, for example, accuracy and symmetry in process behavior [5,6,7,8]. Moreover, individual updates depend strongly on the population. Therefore, scholars are constantly improving its performance. However, many of them ignore the role of individual knowledge during iteration. Based on test functions for large-scale problems, performance tests have been conducted on many existing optimization algorithms. The results show that the difference between the search values obtained by many algorithms and the theoretical optimal value is very large, and the optimization accuracy is poor compared with the low-dimensional case. Moreover, the search values obtained in different runs are unstable and vary greatly. Therefore, many existing optimization algorithms fail to meet the performance requirements of large-scale problems.
To obtain high-precision search values for large-scale problems through PSO, this paper studies the knowledge of individuals to improve symmetry in the behavior of the optimization process. First, this paper proposes that multiple elite individuals be used to guide population updates, and a strategy is designed to reduce the impact of local extrema. The elite individuals used to guide population updates are determined from their fitness values. Second, a novel approach that performs opposition-based learning on multiple elite individuals and individuals with poor fitness values is proposed to help individuals quickly jump out of non-ideal search areas. Based on these two ideas, a High-Performance Learning Particle Swarm Optimization (HPLPSO) is proposed. The results show that HPLPSO has high accuracy and better stability. In particular, HPLPSO has a higher probability of obtaining the theoretical optimal value.
The main contributions and innovations of this paper are summarized as follows:
(1)
To reduce the influence of local extrema on the population during large-scale spatial search, this paper proposes that multiple elite individuals be used to guide population updates and designs a corresponding population update strategy.
(2)
This paper proposes a novel approach of performing opposition-based learning on multiple elite individuals and individuals with poor fitness values. A synchronous opposition-based learning strategy for multiple elite and poor individuals is designed to help individuals quickly jump out of the poor search areas.
(3)
HPLPSO is proposed and its optimization performance is also verified in solving large-scale problems through experiments and case applications.
The remainder of the paper is organized as follows. In Section 2, the related work is introduced. Section 3 describes two strategies and HPLPSO is proposed. Section 4 presents the comparative experiments. Section 5 further analyzes the algorithm performance by case application. Finally, Section 6 concludes the paper.

2. Related Work

PSO has been studied by many scholars due to its few parameters and easy implementation. Scholars initially conducted performance research on the parameters of PSO and proposed some improvement strategies. As research deepened, many scholars began to explore performance improvements from different aspects. For low-dimensional problems, PSO can currently meet performance requirements. However, as the size of data increases, large-scale problems become more common, and PSO variants designed for lower-dimensional problems cannot obtain ideal search values for them. How to apply PSO and other algorithms to solve large-scale problems more accurately is therefore an active research topic. This section provides a review of related research on PSO.

2.1. PSO Improvement

The first idea for PSO improvement is to adjust the relevant parameters, mainly the inertia weight. Shi and Eberhart [9] initially proposed that the inertia weight should decrease linearly from 0.9 to 0.4 during the iterative process. Later, many scholars carried out in-depth research and found that linear adjustment of the inertia weight is not optimal. Nobile et al. [10] proposed a novel self-tuning algorithm called fuzzy self-tuning PSO, which exploits fuzzy logic to calculate the inertia weight and other parameters for each particle. Hou et al. [11] further optimized the global and local search abilities of PSO by improving the inertia weight adjustment equation. Xu et al. [12] analyzed the influence of the inertia weight decrease speed on PSO performance and obtained a better exponential equation for adjusting the inertia weight. Bangyal et al. [13] applied opposite rank inertia weights and pseudo-random sequences for population initialization to enhance convergence speed and population diversity. To accelerate convergence, Zong et al. [14] added fitness feedback to the inertia weight and proposed an adaptive feedback inertia weight.
The second idea for PSO improvement is to introduce other theories or operators. Gao et al. [15] introduced the artificial bee colony search operator into PSO to help individuals quickly jump out of local extrema. Wang et al. [16] introduced a chaotic sequence to initialize the positions of the particle swarm. Chen et al. [17] introduced a crossover operator into PSO to obtain an initial population more conducive to later iterations. Gao and Yao [18] integrated mutation operations with PSO to reduce the probability of falling into local optimal positions. Kumar et al. [19] introduced quantum mechanics into PSO, allowing all particles to better explore different areas. Pan et al. [20] combined PSO and the gravitational search algorithm to solve the optimal allocation of charging and discharging power for electric vehicles. To design a data acquisition scheme for wireless sensor networks assisted by a single unmanned aerial vehicle, Zeng et al. [21] applied PSO to minimize the total system consumption and introduced a dynamic tabu table to improve its convergence speed.
The third idea for PSO improvement is to design PSO with multiple strategies or populations. To further improve PSO, Du and Li [22] first divided the particle population into two parts and then introduced two new strategies, namely Gaussian local search and differential mutation. The Gaussian local search strategy enhances the convergence ability, and the differential mutation strategy expands the search area. Zhang and Li [23] proposed an improved PSO with four behavioral evolution strategies; during the iteration process, each evolutionary strategy is selected according to its immediate value, future value and comprehensive reward. Asghari et al. [24] designed a PSO with multiple populations based on chaos theory and showed that multiple populations perform better than a single population. Hu et al. [25] proposed a multi-strategy PSO method incorporating a snow ablation operator to prevent premature convergence. The experimental results show that the improved PSO has certain advantages in comparison with similar algorithms.
All the above studies have improved the performance of PSO and, to a certain extent, solved optimization problems of lower dimensions. However, simulation experiments show that the performance of many algorithms decreases as the dimension of the optimization problem increases: the difference between the obtained search value and the theoretical optimal value is large. Therefore, these algorithms are not suitable for solving large-scale problems.

2.2. PSO in Large-Scale Problems

The characteristics of large-scale problems imply that very high optimization performance is required of PSO if a better solution is to be obtained.
Ge et al. [26] proposed a cooperative hierarchical PSO framework to solve large-scale problems, in which emergency leadership, interactive cognition and self-developed operators were designed. The performance of the proposed PSO framework was verified on benchmark functions. Liang et al. [27] proposed a random dynamic coevolutionary strategy and introduced it into PSO to solve large-scale problems. Wang et al. [28] studied the large-scale cloud workflow scheduling problem based on PSO and proposed a dynamic group learning distributed particle swarm optimization; by dividing the population into multiple parts that coevolve under a distributed model, the diversity of the population is enhanced. To address the sharp performance decline of PSO on large-scale problems, Mousavirad et al. [29] proposed center-based PSO, in which a new component, named the "center of gravity factor", is added to the velocity update rule. Standard test functions verified that the proposed PSO has better performance. Wang et al. [30] further studied the performance of PSO in solving large-scale problems and proposed an adaptive granular learning distributed particle swarm optimization that integrates machine-learning techniques. The performance of the proposed method was verified on large-scale test functions.
A literature search shows that there are still few PSO-based solutions for large-scale problems. In preliminary research work, the performance of several proposed PSO variants was analyzed through simulation experiments. The results show that for many algorithms there are still significant differences between the search value and the theoretical optimal value, and the theoretical optimal value cannot be found. Moreover, many algorithms obtain unstable search values across runs, indicating that their performance on such problems needs to be improved. This paper studies corresponding strategies to address some shortcomings of PSO in solving large-scale problems. In previous research on population updating and opposition-based learning, scholars considered only one elite individual or the individual with the worst fitness value. To reduce the influence of the many local optimal positions in complex spaces and to prevent long searches in irrational regions, this paper proposes the guidance of multiple elite individuals for updating the population, as well as synchronous opposition-based learning of multiple elite individuals and multiple individuals with poor fitness values.

3. Proposed HPLPSO

The update of individual position and velocity is mainly based on two best positions during the iteration process: the best position discovered by each individual so far and the best position discovered by all individuals so far [31]. The updating equations of particle velocity and position are as follows [32]:
v_is(t + 1) = ω v_is(t) + c1 r1 (p_is(t) − x_is(t)) + c2 r2 (p_gs(t) − x_is(t))
x_is(t + 1) = x_is(t) + v_is(t + 1)
where ω is the inertia weight, c1 and c2 are acceleration factors, r1 and r2 are random values in (0, 1), p_is is the best position discovered by the individual so far, p_gs is the best position discovered by all individuals so far, x_is is the current individual position and t is the current iteration number.
Equation (1) shows that the updating of individuals in PSO depends heavily on the population. When PSO is used to solve large-scale problems, it is therefore very likely to fall into local extrema and poor search areas. This paper designs two strategies based on the knowledge of individuals to maintain symmetry between exploration and exploitation for large-scale problems.
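As an illustration, the update in Equations (1) and (2) can be sketched in Python with NumPy. This is a minimal sketch, not the paper's implementation; the array shapes, parameter values (ω = 0.7, c1 = c2 = 0.1) and the function name pso_step are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, p_best, g_best, omega=0.7, c1=0.1, c2=0.1):
    """One PSO update per Equations (1) and (2).

    x, v, p_best: (M, n) arrays; g_best: (n,) array; omega, c1, c2 are
    illustrative parameter values.
    """
    r1 = rng.random(x.shape)  # fresh random values in [0, 1) per component
    r2 = rng.random(x.shape)
    v_new = omega * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x_new = x + v_new
    return x_new, v_new

# Toy swarm: 5 particles in 3 dimensions, zero initial velocity.
x = rng.standard_normal((5, 3))
v = np.zeros((5, 3))
x_new, v_new = pso_step(x, v, p_best=x.copy(), g_best=np.zeros(3))
```

With zero initial velocity and p_best equal to the current positions, only the social term acts, pulling each particle toward g_best.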

3.1. Strategy for Elite Individuals to Guide Population Updates

This paper proposes that multiple elite individuals be used to guide population updates, and a population update strategy is designed. In this strategy, the average fitness value of individuals in the initial population is calculated to obtain a group of individuals whose fitness values are greater than the average. Then, Equation (3) determines the number of elite individuals used to guide later population updates. Let this group of elite individuals be Celite. After each iteration, some elite individuals in Celite may be replaced by better individuals.
s = ceil ( p / 2 )
where s is the number of elite individuals used to guide later population updates, p is the number of individuals with fitness values greater than the average fitness value and ceil() is the rounding-up function.
The specific population update strategy is as follows: the individual with the biggest fitness value in Celite serves as pgs in Equation (1). However, when the best position discovered by all individuals so far remains unchanged for j consecutive iterations, a random function is adopted to select an elite individual from Celite, and the selected elite individual serves as pgs in Equation (1) to update the population. The value of j can be chosen appropriately based on the maximum number of iterations T. This strategy can reduce the impact of local optimal positions. The framework of elite individuals guiding population updates is shown in Figure 1.
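The selection of pgs described above can be sketched as follows. This is an illustrative sketch assuming fitness is maximized (following the paper's "biggest fitness value" wording); the names elite_count and select_guide, and the default j = 20, are our assumptions.

```python
import math
import random

def elite_count(p):
    """Equation (3): number of guiding elites, s = ceil(p / 2)."""
    return math.ceil(p / 2)

def select_guide(celite, stagnation, j=20):
    """Pick pgs from Celite.

    celite: list of (position, fitness) pairs; stagnation: consecutive
    iterations without improvement of the global best. Normally the elite
    with the biggest fitness guides the swarm; after j stagnant iterations
    a randomly chosen elite guides it instead (Section 3.1).
    """
    if stagnation >= j:
        return random.choice(celite)[0]
    return max(celite, key=lambda e: e[1])[0]
```

The random fallback is what lets the swarm leave a local optimal position that the best elite keeps reinforcing.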

3.2. Synchronous Opposition-Based Learning Strategy for Elite and Poor Individuals

A novel approach that performs opposition-based learning on multiple elite individuals and individuals with poor fitness values is proposed in this paper. Assuming that the population size is M and the fitness value of the ith particle in iteration t is fitness_i(t), the average fitness value in the current iteration is calculated as follows:
fitness_ave(t) = (fitness_1(t) + fitness_2(t) + … + fitness_M(t)) / M
Equation (5) and Equation (6) are adopted to calculate the number of the elite and poor individuals in the current iteration population, respectively.
m = ceil ( q / 2 )
k = ceil ( h / 2 )
where m is the number of the elite individuals in the current iteration population, q is the number of the individuals with fitness values greater than the average fitness value in the current iteration population, k is the number of the poor individuals in the current iteration population and h is the number of the individuals with fitness values less than the average fitness value in the current iteration population.
To enable the elite and poor individuals in the current iteration population to change search areas in time when the iterative search encounters poor areas, opposition-based learning is applied to multiple elite and poor individuals by combining the boundary vectors of the search space. Let the upper boundary vector of the search space be ubv, the lower boundary vector be lbv and the space dimension be n. xnm and xnk represent the value of an individual's position in one dimension before opposition-based learning, and anm and ank represent the corresponding value after opposition-based learning. rand is a random vector with the same dimension as ubv. For the poor individuals in the current iteration population, random opposition-based learning is adopted to increase the global search ability. Combining the upper and lower boundary vectors, the opposition-based learning update equations for the elite and poor individuals in the current iteration population are Equation (7) and Equation (8), respectively.
a_ij(t) = ubv_i − x_ij(t) + lbv_i,  i = 1, …, n; j = 1, …, m
a_ij(t) = rand_i (ubv_i − x_ij(t) + lbv_i),  i = 1, …, n; j = 1, …, k
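A sketch of Equations (7) and (8) follows, assuming that one random vector (same dimension as ubv) scales all poor individuals, which is one reading of the text; the function names obl_elite and obl_poor are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def obl_elite(X, lbv, ubv):
    """Equation (7): a = ubv - x + lbv for every elite position
    (rows are individuals, columns are dimensions)."""
    return ubv - X + lbv

def obl_poor(X, lbv, ubv):
    """Equation (8): random opposition-based learning for poor positions;
    a single random vector rand scales all individuals per dimension."""
    r = rng.random(X.shape[1])
    return r * (ubv - X + lbv)

# Toy search space: 3 individuals in 4 dimensions, bounds [-5, 5].
lbv = np.full(4, -5.0)
ubv = np.full(4, 5.0)
X = rng.uniform(lbv, ubv, size=(3, 4))
A_elite = obl_elite(X, lbv, ubv)
A_poor = obl_poor(X, lbv, ubv)
```

With symmetric bounds (lbv = −ubv), Equation (7) simply mirrors each position through the origin, while Equation (8) additionally contracts it by a random factor.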

3.3. HPLPSO Based on Two Strategies

The pseudo-code of the HPLPSO is described in Algorithm 1.
Algorithm 1: Pseudo-code of the HPLPSO algorithm
1. Initialize parameter values, individual velocity vi (i = 1…M) and position xi (i = 1…M)
2. Calculate fitnessi (i = 1…M) for the initial individual
3. Determine the best positions discovered by each individual so far pi (i = 1…M)
4. Determine the Celite used to guide later population updates by Equations (3) and (4)
5. while (t < T)
6. Calculate fitnessave (t)
7. Determine multiple elite and poor individuals in the current iteration population by Equations (5) and (6)
8. Opposition-based learning of multiple elite and poor individuals in the current iteration by Equation (7) and Equation (8), respectively
9. Update velocity vi (i = 1…M) and position xi (i = 1…M) based on Section 3.1
10.  Calculate all individual fitness values
11.  Update pi (i = 1…M)
12.  Update Celite
13.  t = t + 1
14. end while
15. Return Best individual position and its fitness value among the elite individuals
In step 1, parameter initialization includes the maximum number of iterations T, population size M, c1, c2, ωmax and ωmin. The initial individual positions are determined through a good-point set. In step 9, the individual with the biggest fitness value in Celite serves as pgs in Equation (1); if the best position discovered by all individuals so far remains unchanged for j consecutive iterations, a random function selects an elite individual from Celite as pgs in Equation (1). The inertia weight is dynamically adjusted according to Equation (9). In step 12, if the minimum individual fitness value in Celite is smaller than that of an individual updated in step 9, the elite individual corresponding to the minimum fitness value in Celite is replaced by the updated individual.
ω = ω_min + (ω_max − ω_min)(1 − t/T)^2
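Under this reading of Equation (9), the schedule can be sketched as below; the default values ωmax = 0.9 and ωmin = 0.4 are assumptions taken from the common PSO convention cited in Section 2.1, not values stated for HPLPSO.

```python
def inertia_weight(t, T, w_min=0.4, w_max=0.9):
    """Equation (9): w = w_min + (w_max - w_min) * (1 - t / T) ** 2,
    decreasing from w_max at t = 0 to w_min at t = T."""
    return w_min + (w_max - w_min) * (1.0 - t / T) ** 2
```

The quadratic factor makes ω fall quickly in early iterations (favoring exploitation sooner) and flatten near ωmin late in the run.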
In HPLPSO, global search is conducted in the early stage. Under the strategy of elite individuals guiding population updates, individuals are more likely to avoid interference from the many local optimal positions. In addition, because the search space of large-scale problems is large and complex, some individuals may be trapped in an undesirable region for a long time; when this happens, the synchronous opposition-based learning strategy for multiple elite and poor individuals can quickly help them escape. Therefore, the HPLPSO algorithm can quickly converge to a good value in the early stage. In the later iterations, the two strategies ensure local search while continuously improving the diversity of individuals, thus balancing global exploration and local exploitation. Therefore, HPLPSO has a strong ability to avoid interference throughout the entire iteration process and can continuously explore new areas, converging to the global optimum with a higher probability.
According to the pseudo-code of HPLPSO, the time complexity of HPLPSO mainly depends on four parts: individual velocity and position initialization in step 1, fitness calculation, determination of the elite and poor individuals in each iteration, and the update operations. To distribute the computational load, parallel computing is used in the experiments.

4. Numerical Experiments

Well-known benchmark functions are selected to test algorithm performance. These functions can be divided into unimodal and multimodal functions. Unimodal functions have a single optimum and can be used to assess the convergence speed and optimization ability of algorithms. Multimodal functions have many optimal values, making them more complex than unimodal functions; they can be used to assess the global exploration ability of algorithms and their capacity to avoid falling into local optimal positions. Therefore, both unimodal and multimodal functions are selected to comprehensively evaluate algorithm performance, as shown in Table 1.
The selected dimensions are 100, 200, 500 and 1000. This paper also chooses PSO algorithms improved by other scholars for comparative experiments. The selected algorithms include Improved Chaotic Particle Swarm Optimization (ICPSO) [33], Improved Inertial Weighting Decreasing Speed Particle Swarm Optimization (IIWDSPSO), Mutation Particle Swarm Optimization (MPSO), Improved Weighted Quantum Particle Swarm Optimization (IWQPSO) and Hybrid Reverse Particle Swarm Optimization (HRPSO), whose hybrid opposition-based learning strategy was proposed by Geng et al. [34]. The difference between HRPSO and the HPLPSO proposed in this paper is that HRPSO considers only one elite individual and one individual with a poor fitness value, whereas HPLPSO considers multiple elite individuals and multiple individuals with poor fitness values. To comprehensively evaluate these algorithms, the performance of the six algorithms is compared in the same dimension, and the performance change with increasing dimension is also analyzed.
Simulation experiments are conducted in the MATLAB 2016a environment with an 8-core processor for parallel computing. The values of c1 and c2 are confirmed through multiple adjustments and repeated experiments. Among the tested values, except for ICPSO, the convergence values of IIWDSPSO, IWQPSO, HRPSO, MPSO and HPLPSO are best when c1 and c2 are set to 0.1; regardless of the values of c1 and c2, the convergence value of ICPSO is the worst among the six algorithms. For fairness, c1 and c2 in all algorithms are set to 0.1. The common parameters of the six algorithms are set to the same values. Parameters unique to an algorithm take their values directly from the corresponding references. All parameters and their values for the six algorithms are shown in Table 2 [35,36,37,38].
Under the dimensions of 100, 200, 500 and 1000, each algorithm is independently run 30 times on the 15 test functions. The evaluation indexes are the minimum, mean and standard deviation (SD), calculated as follows.
mean = (Σ_{i=1}^{30} x_i) / 30
SD = sqrt( (Σ_{i=1}^{30} (x_i − mean)^2) / 30 )
where x_i is the optimization value in the ith run.
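The two evaluation indexes can be computed as in the following minimal sketch; run_stats is our name, and the divisor N (= 30 in the paper) follows the SD equation above rather than the sample (N − 1) convention.

```python
import math

def run_stats(values):
    """Mean and standard deviation over independent runs, using the
    population divisor N as in the mean and SD equations."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in values) / n)
    return mean, sd
```

For example, run_stats applied to the 30 optimization values of one function in one dimension yields the mean and SD entries reported in Table 3.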
According to the experimental results, the performance of six algorithms in the dimensions of 100, 200, 500 and 1000 is analyzed, respectively.

4.1. Dimension 100

In dimension 100, the minimum, mean and SD obtained by the six algorithms in 15 test functions are shown in Table 3.
According to the simulation results on the 15 test functions, ICPSO, HRPSO and HPLPSO can obtain the theoretical optimal value of 0 in some test functions. The functions in which HPLPSO obtains the value of 0 are f1, f2, f3, f4, f6, f7, f9, f10, f11, f12 and f14. HRPSO obtains the theoretical optimal value in 8 functions, namely f1, f2, f4, f6, f7, f9, f11 and f12. ICPSO can obtain the value of 0 only in f4 and f6. Although the minimum values obtained by HRPSO are 0 in 8 functions, only its mean in f6 is 0, indicating that HRPSO obtains the theoretical optimal value in every run only in f6. The convergence values obtained by ICPSO, IIWDSPSO and IWQPSO are very poor in f7 and f15. The iterative process data show that these three algorithms stop improving after reaching local optima in the early iterations, indicating a poor ability to escape local optimal positions. In contrast, HRPSO and HPLPSO perform significantly better in f7 and f15, with HPLPSO achieving the theoretical optimal value in f7. The opposition-based learning strategy greatly helps the algorithms avoid becoming stuck in local optima and searching non-ideal areas for a long time in complex spaces. Compared with the other five algorithms in f5, f8, f13 and f15, the minimum and mean values obtained by HPLPSO are the smallest. According to Table 3, the SD values of ICPSO, IIWDSPSO and IWQPSO are significantly higher than those of the other three algorithms, indicating that they have the worst stability in finding optimal values. Further analysis of the SD values of HRPSO, MPSO and HPLPSO reveals that HPLPSO has the best overall optimization stability across the 15 test functions. The Friedman test and Wilcoxon test are adopted to further determine whether the performance of HPLPSO and HRPSO differs significantly; the analysis is based on the experimental mean values.
The results are shown in Table 4. The p-values are less than 0.05 in both the Friedman test and the Wilcoxon test. Therefore, the convergence performances of HPLPSO and HRPSO are significantly different in dimension 100, and HPLPSO has higher optimization accuracy.
In dimension 100, the iterative process whose optimization value is closest to the mean of the 30 independent runs is selected to draw the iteration curve. The iteration curves are shown in Figure 2.
Because all iteration curves are drawn with the semilogy function in MATLAB, the optimal value of 0 does not appear on the chart. Figure 2 shows that the iteration performance of HPLPSO and HRPSO is significantly better than that of ICPSO, IIWDSPSO, IWQPSO and MPSO. In some functions, the iteration curves of ICPSO, IIWDSPSO, IWQPSO and MPSO approximately overlap, indicating similar performance. The iteration curves of HPLPSO indicate that the theoretical optimal value is obtained in all functions except f5, f8, f13 and f15. Moreover, the downward trends of the curves show that HPLPSO has excellent overall optimization ability. Many HRPSO curves indicate that no better position is found after converging to a position in the early stage of iteration. In f6, HPLPSO converges more slowly than HRPSO but still converges to the value of 0. In f5, f8, f13 and f15, although neither HPLPSO nor HRPSO converges to the value of 0, the iterative performance of HPLPSO is obviously better. Therefore, the above analysis shows that HPLPSO has significantly higher optimization performance in dimension 100.

4.2. Dimensions 200, 500 and 1000

In dimensions 200, 500 and 1000, the performance test results of the six algorithms are shown in Appendix A. Among the 15 test functions, ICPSO, IIWDSPSO, IWQPSO and MPSO never obtain the theoretical optimal value in dimensions 200, 500 and 1000. In dimensions 200 and 1000, HRPSO can obtain the value of 0 in eight functions but, according to the mean, converges to the theoretical optimal value in every run only in f6. In dimension 500, HRPSO can obtain the value of 0 in seven functions and, again, converges to the theoretical optimal value in every run only in f6. In contrast, in dimensions 200, 500 and 1000, HPLPSO stably converges to the theoretical optimal value in f1, f2, f3, f4, f6, f7, f9, f10, f11, f12 and f14, and its minimum and mean values are also the smallest in f5, f8, f13 and f15. The SD values in the three dimensions likewise indicate that the stability of the HPLPSO search values is the best, while that of ICPSO, IIWDSPSO and IWQPSO is the worst.
The significant difference analysis results between HRPSO and HPLPSO are shown in Table 5. The p-values are all less than 0.05 in the Friedman test and Wilcoxon test. Therefore, HPLPSO achieves the best search values in dimensions 200, 500 and 1000.
Based on the data analysis of the six algorithms in dimensions 100, 200, 500 and 1000, the HPLPSO proposed in this paper has a higher probability of obtaining the theoretical optimal value for large-scale problems. Meanwhile, HPLPSO is superior to the other five algorithms in terms of search value accuracy.

4.3. Stability Analysis of Obtaining Theoretical Optimal Value

The experimental results in Table 3, Table A1, Table A2 and Table A3 indicate that HRPSO and HPLPSO can obtain theoretical optimal values in many test functions. However, in some of the 30 independent runs, the theoretical optimal value is not found. To further analyze this behavior, the number of times the theoretical optimal value is obtained in 30 independent runs is counted, yielding Figure 3.
In all four dimensions, neither HRPSO nor HPLPSO achieves the theoretical optimal value in f5, f8, f13 and f15. The number of times HRPSO obtains the theoretical optimal value is 30 only in f6; in the other functions, HRPSO obtains it fewer times than HPLPSO. In dimension 500, HRPSO performs worse than in the other three dimensions. According to Figure 3, HPLPSO obtains the theoretical optimal value in every run in 11 functions.
Overall, the number of times HRPSO obtains the theoretical optimal value decreases as the dimension increases, whereas for HPLPSO it is almost independent of the dimension. The convergence of HPLPSO to the theoretical optimal value is stable.

4.4. Performance Change Analysis of Six Algorithms with Increasing Dimensions

This part further analyzes the performance changes of the six algorithms as the dimension increases. The mean values are selected as the analysis data. Figure 4 shows the mean variation from dimension 100 to 1000 on the 15 test functions. The symbol (+) indicates that the mean increases as the dimension increases from 100 to 1000, the symbol (−) indicates that the mean decreases and the symbol (0) indicates that the mean does not change.
The values of mean from ICPSO increase with increasing dimensions in f1, f2, f4, f5, f7, f8, f9, f10, f11, f12, f13, f14 and f15. In IIWDSPSO and IWQPSO, the values of mean increase in all 15 functions. In f1, f2, f3, f4, f5, f7, f8, f12, f14 and f15, the mean values obtained by HRPSO increase with increasing dimensions too. Except for f3 and f6, the obtained mean values by MPSO also increase. However, the mean values obtained by HPLPSO only increase in f5, f13 and f15, and there is no change as the dimension increases in other test functions. According to the calculation, the maximum mean variation is 0.2228 for HPLPSO. Therefore, HPLPSO is little affected by dimensional changes.
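The (+)/(−)/(0) labelling used in Figure 4 amounts to comparing each function's mean at dimension 100 with its mean at dimension 1000. The sketch below applies it to the three HPLPSO functions whose means change (values taken from the result tables); the largest change reproduces the 0.2228 figure quoted above.

```python
# Classify the mean variation of a function between dimension 100 and 1000,
# mirroring the (+)/(-)/(0) symbols of Figure 4.
def variation_symbol(m100, m1000):
    d = m1000 - m100
    return "+" if d > 0 else "-" if d < 0 else "0"

# HPLPSO means from the tables (dimension 100 vs. dimension 1000).
means_100  = {"f5": 8.7372e-4, "f13": 5.7884e-5, "f15": 2.7046e-2}
means_1000 = {"f5": 1.3942e-2, "f13": 5.4160e-4, "f15": 2.4985e-1}

symbols = {f: variation_symbol(means_100[f], means_1000[f]) for f in means_100}
max_delta = max(abs(means_1000[f] - means_100[f]) for f in means_100)
print(symbols)                 # {'f5': '+', 'f13': '+', 'f15': '+'}
print(round(max_delta, 4))     # 0.2228
```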

5. 5G Base Station Deployment Optimization Application

To test the performance of the algorithm on practical problems, this paper applies it to a large-scale 5G base station deployment problem under certain constraints. According to the simulation experiment results, HPLPSO and HRPSO perform significantly better than the other four algorithms; therefore, HRPSO and HPLPSO are further compared and analyzed in this case study.
Considering the investment in construction costs, this paper fixes the number of base stations as a constraint; the deployment area and its shape are also fixed. The 5G base station deployment optimization problem is described as follows: t base stations, each with communication radius R, are deployed in an area S = L1 × L2 (unit: m²). The deployed area is divided into n1 × n2 grids. The coordinates of the base stations are (xi, yi) (i = 1, 2, …, t), and the coordinates of the grid points are Qj = (xj, yj) (j = 1, 2, …, (n1 + 1)(n2 + 1)). If the distance between a grid point and some base station is less than the communication radius R, the grid point is within the communication range after deploying the t base stations. The number of grid points covered by the base stations and the corresponding coverage proportion are then obtained. The algorithms optimize the coordinates of the base stations so that more grid points are covered under the constrained conditions. The deployment and grid division parameter values for this case are listed in Table 6.
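The coverage objective described above can be sketched as follows; the parameter values in the example are illustrative and are not those of Table 6.

```python
import math

def coverage_proportion(stations, L1, L2, n1, n2, R):
    """Fraction of the (n1+1)*(n2+1) grid points lying within distance R of
    at least one base station. stations: list of (x, y) coordinates."""
    covered = 0
    total = (n1 + 1) * (n2 + 1)
    for gx in range(n1 + 1):
        for gy in range(n2 + 1):
            px, py = gx * L1 / n1, gy * L2 / n2   # grid point coordinates
            if any(math.hypot(px - sx, py - sy) < R for sx, sy in stations):
                covered += 1
    return covered / total

# One station at the centre of a 100 m x 100 m area, with R large enough to
# reach the corners: every grid point is covered.
full = coverage_proportion([(50.0, 50.0)], 100, 100, 10, 10, R=80.0)
print(full)   # 1.0
```

An optimizer such as HPLPSO would treat the 2t station coordinates as one decision vector and maximize this proportion.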
HPLPSO and HRPSO are each run independently five times on the 5G base station deployment optimization. The maximum number of grid points covered and the coverage proportion are reported in Table 7. When 50 base stations are placed at the coordinates optimized by HPLPSO, the signal coverage is significantly better. Figure 5 shows the base station signal coverage areas plotted from the optimized coordinates of the two algorithms. HPLPSO achieves a more ideal coverage area, without large regions lacking signal. The results obtained by HPLPSO on the 5G base station deployment therefore further validate its performance.

6. Conclusions

This paper studies the knowledge of individuals and proposes two strategies to improve PSO for large-scale problems. To reduce the impact of local extrema in complex solution spaces, a strategy in which multiple elite individuals guide the population update was proposed. Moreover, a synchronous opposition-based learning strategy for multiple elite and poor individuals in the current iteration population was proposed to help individuals quickly escape non-ideal search areas. The paper fully exploits the role of individual knowledge in the iterative process to improve PSO performance.
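For context, a generic opposition-based learning step maps a position x in [lb, ub] to its opposite point lb + ub − x, letting a poorly placed individual probe the mirrored region of the search space. This is only the textbook operator; the synchronous multi-individual strategy proposed in this paper is more elaborate.

```python
# Generic opposition-based learning (textbook form, shown for illustration):
# the opposite of x_i within bounds [lb_i, ub_i] is lb_i + ub_i - x_i.
def opposite(x, lb, ub):
    return [l + u - xi for xi, l, u in zip(x, lb, ub)]

x = [90.0, -75.0, 10.0]
lb, ub = [-100.0] * 3, [100.0] * 3
print(opposite(x, lb, ub))   # [-90.0, 75.0, -10.0]
```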
The proposed HPLPSO was tested on the large-scale test functions in dimensions 100, 200, 500 and 1000, with PSO variants improved by other scholars selected as comparison algorithms. The results show that HPLPSO converges to the theoretical optimal value in 11 functions and has high convergence accuracy. The Friedman and Wilcoxon tests were adopted to further analyze the performance differences between HPLPSO and HRPSO; the p-values are all less than 0.05, indicating that HPLPSO is significantly better. To further analyze the stability of reaching the theoretical optimal value, the number of successes in 30 independent runs was counted: HPLPSO achieves the theoretical optimal value in every one of the 30 independent runs for 11 functions. Therefore, HPLPSO not only has high accuracy but also a greater probability of converging to the theoretical optimal value. Moreover, the analysis of performance change with increasing dimensions shows that HPLPSO is little affected by dimensional changes. In the case application, HPLPSO achieves a more ideal coverage area without large regions lacking signal. Therefore, the HPLPSO proposed in this paper has high optimization performance for solving large-scale problems.
The experimental data show that HPLPSO does not achieve the theoretical optimal value in all test functions. Future research will therefore continue to explore methods that allow algorithms to obtain the theoretical optimal value in large-scale problems.

Author Contributions

Conceptualization, Z.X.; Methodology, Z.X.; Funding acquisition, Z.X. and F.G.; Software, Z.X.; Experiments, Z.X.; Writing—original draft, Z.X.; Writing—review and editing, Z.X. and F.G. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by Anhui Province Housing and Urban Rural Construction Science and Technology Plan Project (No.2024-YF033), Anhui Polytechnic University Introduced Talents Research Fund (No.2021YQQ064), Anhui Polytechnic University Scientific Research Project (No.Xjky2022168), Innovation Team Project for Natural Sciences in Universities of Anhui Province (No.2024AH010005) and Anhui Province University Research Project (No.2024AH050128).

Data Availability Statement

The codes presented in the study are openly available at https://github.com/Zhedong-xu/HPLPSO_code-.

Acknowledgments

The authors sincerely thank the anonymous reviewers for their critical comments and suggestions for improving the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

In dimensions 200, 500, 1000, the minimum, mean and SD obtained by the six algorithms in 15 test functions are shown in Table A1, Table A2 and Table A3.
Table A1. Experimental results in dimension 200.
Function | Indicator | ICPSO | IIWDSPSO | IWQPSO | HRPSO | MPSO | HPLPSO
f1 | Minimum | 4.0297 × 10^4 | 3.4417 × 10^4 | 2.7981 × 10^4 | 0.0000 × 10^0 | 4.8726 × 10^−1 | 0.0000 × 10^0
f1 | Mean | 5.4190 × 10^4 | 4.5487 × 10^4 | 3.9420 × 10^4 | 6.1508 × 10^−27 | 8.1520 × 10^−1 | 0.0000 × 10^0
f1 | SD | 5.3564 × 10^3 | 7.2074 × 10^3 | 4.6405 × 10^3 | 1.0538 × 10^−26 | 2.1816 × 10^−1 | 0.0000 × 10^0
f2 | Minimum | 4.1497 × 10^4 | 5.3400 × 10^4 | 5.5820 × 10^4 | 0.0000 × 10^0 | 2.1772 × 10^0 | 0.0000 × 10^0
f2 | Mean | 5.1557 × 10^4 | 6.7546 × 10^4 | 7.9162 × 10^4 | 1.0074 × 10^−26 | 2.5628 × 10^1 | 0.0000 × 10^0
f2 | SD | 4.6920 × 10^3 | 7.5803 × 10^3 | 1.0774 × 10^4 | 1.4311 × 10^−26 | 3.8590 × 10^1 | 0.0000 × 10^0
f3 | Minimum | 2.3555 × 10^−5 | 1.9791 × 10^−7 | 1.0429 × 10^−6 | 7.0821 × 10^−220 | 2.5962 × 10^−35 | 0.0000 × 10^0
f3 | Mean | 1.3228 × 10^−3 | 3.1934 × 10^−6 | 6.4852 × 10^−6 | 1.2700 × 10^−49 | 2.4739 × 10^−32 | 0.0000 × 10^0
f3 | SD | 1.4087 × 10^−3 | 3.9286 × 10^−6 | 6.1184 × 10^−6 | 6.9561 × 10^−49 | 3.9578 × 10^−32 | 0.0000 × 10^0
f4 | Minimum | 2.5226 × 10^2 | 2.5389 × 10^2 | 2.8034 × 10^2 | 0.0000 × 10^0 | 6.1139 × 10^−1 | 0.0000 × 10^0
f4 | Mean | 2.8987 × 10^2 | 3.5359 × 10^2 | 3.6128 × 10^2 | 6.0975 × 10^−15 | 2.6925 × 10^0 | 0.0000 × 10^0
f4 | SD | 2.4028 × 10^1 | 4.1574 × 10^1 | 5.2945 × 10^1 | 7.7860 × 10^−15 | 3.1046 × 10^0 | 0.0000 × 10^0
f5 | Minimum | 4.1216 × 10^4 | 3.5442 × 10^4 | 2.7387 × 10^4 | 3.8783 × 10^−4 | 5.7137 × 10^−1 | 1.6372 × 10^−5
f5 | Mean | 5.3779 × 10^4 | 4.4756 × 10^4 | 3.9349 × 10^4 | 1.8719 × 10^−1 | 8.0927 × 10^−1 | 2.7297 × 10^−3
f5 | SD | 4.7378 × 10^3 | 5.5925 × 10^3 | 5.8046 × 10^3 | 4.9175 × 10^−1 | 1.2897 × 10^−1 | 3.0115 × 10^−3
f6 | Minimum | 4.6498 × 10^−148 | 9.8478 × 10^−262 | 5.3177 × 10^−117 | 0.0000 × 10^0 | 3.4867 × 10^−116 | 0.0000 × 10^0
f6 | Mean | 1.7200 × 10^−7 | 7.9778 × 10^−146 | 7.0446 × 10^−109 | 0.0000 × 10^0 | 7.9406 × 10^−45 | 0.0000 × 10^0
f6 | SD | 5.6824 × 10^−7 | 4.3696 × 10^−145 | 3.7643 × 10^−108 | 0.0000 × 10^0 | 4.3493 × 10^−44 | 0.0000 × 10^0
f7 | Minimum | 1.3754 × 10^9 | 5.3688 × 10^8 | 7.7907 × 10^8 | 0.0000 × 10^0 | 1.1779 × 10^6 | 0.0000 × 10^0
f7 | Mean | 2.1405 × 10^9 | 9.3784 × 10^8 | 1.1783 × 10^9 | 1.8738 × 10^−26 | 2.2797 × 10^6 | 0.0000 × 10^0
f7 | SD | 4.0627 × 10^8 | 2.2276 × 10^8 | 3.2041 × 10^8 | 1.0147 × 10^−25 | 5.6454 × 10^5 | 0.0000 × 10^0
f8 | Minimum | 1.3589 × 10^1 | 1.3364 × 10^1 | 1.3099 × 10^1 | 3.9968 × 10^−14 | 8.2476 × 10^−2 | 8.8818 × 10^−16
f8 | Mean | 1.4842 × 10^1 | 1.4248 × 10^1 | 1.4017 × 10^1 | 5.9419 × 10^−13 | 1.0592 × 10^−1 | 8.8818 × 10^−16
f8 | SD | 3.9600 × 10^−1 | 3.4765 × 10^−1 | 5.4625 × 10^−1 | 7.6016 × 10^−13 | 2.2673 × 10^−2 | 0.0000 × 10^0
f9 | Minimum | 3.6609 × 10^2 | 2.9137 × 10^2 | 4.1791 × 10^2 | 0.0000 × 10^0 | 8.0503 × 10^−1 | 0.0000 × 10^0
f9 | Mean | 4.9299 × 10^2 | 3.9658 × 10^2 | 5.2689 × 10^2 | 1.1842 × 10^−16 | 8.8830 × 10^−1 | 0.0000 × 10^0
f9 | SD | 4.4594 × 10^1 | 5.4337 × 10^1 | 5.6726 × 10^1 | 4.3921 × 10^−16 | 3.5043 × 10^−2 | 0.0000 × 10^0
f10 | Minimum | 1.6903 × 10^3 | 1.4103 × 10^3 | 1.5238 × 10^3 | 5.0781 × 10^−11 | 4.1769 × 10^2 | 0.0000 × 10^0
f10 | Mean | 1.8090 × 10^3 | 1.5435 × 10^3 | 1.7412 × 10^3 | 1.1622 × 10^−8 | 5.8503 × 10^2 | 0.0000 × 10^0
f10 | SD | 5.9645 × 10^1 | 6.9431 × 10^1 | 1.1355 × 10^2 | 3.0609 × 10^−8 | 1.1632 × 10^2 | 0.0000 × 10^0
f11 | Minimum | 2.2844 × 10^5 | 1.9806 × 10^5 | 1.7804 × 10^5 | 0.0000 × 10^0 | 6.9212 × 10^−1 | 0.0000 × 10^0
f11 | Mean | 1.4402 × 10^6 | 4.4532 × 10^5 | 5.1852 × 10^5 | 6.5976 × 10^4 | 1.0449 × 10^0 | 0.0000 × 10^0
f11 | SD | 3.8851 × 10^6 | 1.5319 × 10^5 | 4.7123 × 10^5 | 1.3997 × 10^5 | 1.7616 × 10^−1 | 0.0000 × 10^0
f12 | Minimum | 8.6170 × 10^4 | 3.7784 × 10^4 | 3.4288 × 10^4 | 0.0000 × 10^0 | 3.0973 × 10^0 | 0.0000 × 10^0
f12 | Mean | 1.5802 × 10^5 | 5.6198 × 10^4 | 4.6781 × 10^4 | 5.7643 × 10^−27 | 3.9569 × 10^0 | 0.0000 × 10^0
f12 | SD | 6.0023 × 10^4 | 1.3090 × 10^4 | 6.9351 × 10^3 | 1.0145 × 10^−26 | 6.4178 × 10^−1 | 0.0000 × 10^0
f13 | Minimum | 1.6157 × 10^3 | 1.4827 × 10^3 | 1.6058 × 10^3 | 1.5640 × 10^−5 | 1.0447 × 10^0 | 3.8320 × 10^−6
f13 | Mean | 1.8149 × 10^3 | 1.7051 × 10^3 | 1.8952 × 10^3 | 6.4201 × 10^−3 | 1.9009 × 10^0 | 1.3925 × 10^−4
f13 | SD | 1.0964 × 10^2 | 1.1592 × 10^2 | 1.6404 × 10^2 | 1.1432 × 10^−2 | 6.1636 × 10^−1 | 1.2488 × 10^−4
f14 | Minimum | 6.7192 × 10^2 | 5.1453 × 10^2 | 5.0537 × 10^2 | 1.0267 × 10^−1 | 6.9890 × 10^1 | 0.0000 × 10^0
f14 | Mean | 7.7649 × 10^2 | 6.1534 × 10^2 | 6.0423 × 10^2 | 4.3151 × 10^1 | 1.8248 × 10^2 | 0.0000 × 10^0
f14 | SD | 5.1693 × 10^1 | 5.2348 × 10^1 | 5.4742 × 10^1 | 4.6386 × 10^1 | 6.0666 × 10^1 | 0.0000 × 10^0
f15 | Minimum | 3.0444 × 10^9 | 1.9030 × 10^9 | 2.4388 × 10^9 | 4.8538 × 10^−1 | 5.2777 × 10^2 | 3.4018 × 10^−4
f15 | Mean | 3.3629 × 10^9 | 2.4991 × 10^9 | 2.9446 × 10^9 | 4.3206 × 10^1 | 1.0541 × 10^3 | 4.5509 × 10^−2
f15 | SD | 1.8523 × 10^8 | 1.8840 × 10^8 | 1.8971 × 10^8 | 4.5285 × 10^1 | 9.1910 × 10^2 | 4.9955 × 10^−2
Table A2. Experimental results in dimension 500.
Function | Indicator | ICPSO | IIWDSPSO | IWQPSO | HRPSO | MPSO | HPLPSO
f1 | Minimum | 1.3617 × 10^5 | 1.1899 × 10^5 | 1.0013 × 10^5 | 0.0000 × 10^0 | 2.2822 × 10^1 | 0.0000 × 10^0
f1 | Mean | 1.5822 × 10^5 | 1.4888 × 10^5 | 1.1800 × 10^5 | 2.8969 × 10^−26 | 3.1911 × 10^1 | 0.0000 × 10^0
f1 | SD | 1.1215 × 10^4 | 1.1287 × 10^4 | 7.9957 × 10^3 | 4.4345 × 10^−26 | 4.3221 × 10^0 | 0.0000 × 10^0
f2 | Minimum | 3.3451 × 10^5 | 4.8581 × 10^5 | 4.9173 × 10^5 | 0.0000 × 10^0 | 3.6430 × 10^2 | 0.0000 × 10^0
f2 | Mean | 3.9215 × 10^5 | 6.0097 × 10^5 | 5.6835 × 10^5 | 8.7055 × 10^−26 | 6.8689 × 10^2 | 0.0000 × 10^0
f2 | SD | 3.3099 × 10^4 | 5.1591 × 10^4 | 5.1014 × 10^4 | 1.4838 × 10^−25 | 2.5284 × 10^2 | 0.0000 × 10^0
f3 | Minimum | 3.0330 × 10^−5 | 1.4853 × 10^−7 | 7.7417 × 10^−7 | 3.4918 × 10^−251 | 3.0304 × 10^−42 | 0.0000 × 10^0
f3 | Mean | 1.7133 × 10^−3 | 5.8049 × 10^−6 | 1.7049 × 10^−5 | 1.0381 × 10^−60 | 1.0482 × 10^−35 | 0.0000 × 10^0
f3 | SD | 2.4376 × 10^−3 | 6.2711 × 10^−6 | 2.0276 × 10^−5 | 5.6852 × 10^−60 | 3.9829 × 10^−35 | 0.0000 × 10^0
f4 | Minimum | 6.6340 × 10^2 | 9.4230 × 10^2 | 8.3737 × 10^2 | 0.0000 × 10^0 | 7.3172 × 10^0 | 0.0000 × 10^0
f4 | Mean | 7.6032 × 10^2 | 1.0430 × 10^3 | 9.4841 × 10^2 | 3.6619 × 10^−14 | 2.0643 × 10^1 | 0.0000 × 10^0
f4 | SD | 5.9861 × 10^1 | 5.2136 × 10^1 | 4.4583 × 10^1 | 3.6220 × 10^−14 | 7.7742 × 10^0 | 0.0000 × 10^0
f5 | Minimum | 1.3558 × 10^5 | 1.2192 × 10^5 | 9.5723 × 10^4 | 2.4679 × 10^−3 | 2.7850 × 10^1 | 2.0168 × 10^−6
f5 | Mean | 1.5730 × 10^5 | 1.4899 × 10^5 | 1.1785 × 10^5 | 2.5039 × 10^−1 | 3.4414 × 10^1 | 7.6894 × 10^−3
f5 | SD | 1.1919 × 10^4 | 1.5402 × 10^4 | 8.7414 × 10^3 | 4.3068 × 10^−1 | 3.8462 × 10^0 | 1.1080 × 10^−2
f6 | Minimum | 2.2392 × 10^−255 | 3.8412 × 10^−261 | 5.8765 × 10^−117 | 0.0000 × 10^0 | 2.4862 × 10^−159 | 0.0000 × 10^0
f6 | Mean | 1.2916 × 10^−8 | 1.3510 × 10^−123 | 3.9903 × 10^−110 | 0.0000 × 10^0 | 2.2540 × 10^−50 | 0.0000 × 10^0
f6 | SD | 4.2527 × 10^−8 | 7.3998 × 10^−123 | 1.5012 × 10^−109 | 0.0000 × 10^0 | 1.2346 × 10^−49 | 0.0000 × 10^0
f7 | Minimum | 6.2242 × 10^9 | 3.8444 × 10^9 | 3.8482 × 10^9 | 0.0000 × 10^0 | 8.4643 × 10^6 | 0.0000 × 10^0
f7 | Mean | 8.1124 × 10^9 | 5.2923 × 10^9 | 5.3786 × 10^9 | 6.4970 × 10^−22 | 1.3214 × 10^7 | 0.0000 × 10^0
f7 | SD | 1.0889 × 10^9 | 1.0024 × 10^9 | 1.1629 × 10^9 | 2.0035 × 10^−21 | 2.8302 × 10^6 | 0.0000 × 10^0
f8 | Minimum | 1.4786 × 10^1 | 1.4687 × 10^1 | 1.4200 × 10^1 | 6.8390 × 10^−14 | 9.6859 × 10^−1 | 8.8818 × 10^−16
f8 | Mean | 1.5264 × 10^1 | 1.5383 × 10^1 | 1.4787 × 10^1 | 2.6649 × 10^−12 | 1.2315 × 10^0 | 8.8818 × 10^−16
f8 | SD | 2.3126 × 10^−1 | 3.5291 × 10^−1 | 3.1416 × 10^−1 | 4.7465 × 10^−12 | 1.3344 × 10^−1 | 0.0000 × 10^0
f9 | Minimum | 1.2297 × 10^3 | 1.2043 × 10^3 | 1.2733 × 10^3 | 0.0000 × 10^0 | 1.2225 × 10^0 | 0.0000 × 10^0
f9 | Mean | 1.4296 × 10^3 | 1.3412 × 10^3 | 1.5069 × 10^3 | 1.3053 × 10^−12 | 1.3052 × 10^0 | 0.0000 × 10^0
f9 | SD | 1.0570 × 10^2 | 7.8702 × 10^1 | 1.1109 × 10^2 | 7.1491 × 10^−12 | 3.8217 × 10^−2 | 0.0000 × 10^0
f10 | Minimum | 4.6972 × 10^3 | 4.5338 × 10^3 | 4.7173 × 10^3 | 3.9861 × 10^−11 | 1.3313 × 10^3 | 0.0000 × 10^0
f10 | Mean | 4.9449 × 10^3 | 4.8262 × 10^3 | 5.0656 × 10^3 | 2.7767 × 10^−9 | 1.7664 × 10^3 | 0.0000 × 10^0
f10 | SD | 9.7280 × 10^1 | 1.2618 × 10^2 | 1.7939 × 10^2 | 6.8560 × 10^−9 | 2.6174 × 10^2 | 0.0000 × 10^0
f11 | Minimum | 5.7039 × 10^5 | 6.3400 × 10^5 | 6.7734 × 10^5 | 3.4354 × 10^−6 | 3.0891 × 10^1 | 0.0000 × 10^0
f11 | Mean | 2.3060 × 10^6 | 1.1151 × 10^6 | 1.0122 × 10^6 | 3.0162 × 10^5 | 3.9351 × 10^1 | 0.0000 × 10^0
f11 | SD | 6.8304 × 10^6 | 2.7614 × 10^5 | 2.0724 × 10^5 | 7.3832 × 10^5 | 6.0538 × 10^0 | 0.0000 × 10^0
f12 | Minimum | 2.1392 × 10^5 | 1.2655 × 10^5 | 9.2178 × 10^4 | 0.0000 × 10^0 | 3.1342 × 10^2 | 0.0000 × 10^0
f12 | Mean | 4.4722 × 10^5 | 1.8216 × 10^5 | 1.2679 × 10^5 | 2.8259 × 10^−26 | 4.2410 × 10^2 | 0.0000 × 10^0
f12 | SD | 1.7742 × 10^5 | 3.2732 × 10^4 | 1.9701 × 10^4 | 4.1808 × 10^−26 | 6.2363 × 10^1 | 0.0000 × 10^0
f13 | Minimum | 4.5471 × 10^3 | 5.1603 × 10^3 | 5.2854 × 10^3 | 1.2334 × 10^−5 | 1.0824 × 10^1 | 5.2003 × 10^−6
f13 | Mean | 5.2788 × 10^3 | 5.5692 × 10^3 | 5.7743 × 10^3 | 4.3901 × 10^−2 | 1.3620 × 10^1 | 3.5655 × 10^−4
f13 | SD | 2.9997 × 10^2 | 2.4923 × 10^2 | 2.7680 × 10^2 | 1.3626 × 10^−1 | 1.3281 × 10^0 | 3.9324 × 10^−4
f14 | Minimum | 2.0727 × 10^3 | 1.7952 × 10^3 | 1.6142 × 10^3 | 1.5114 × 10^−2 | 3.9472 × 10^2 | 0.0000 × 10^0
f14 | Mean | 2.2232 × 10^3 | 2.0123 × 10^3 | 1.8277 × 10^3 | 1.5057 × 10^2 | 8.3830 × 10^2 | 0.0000 × 10^0
f14 | SD | 8.4496 × 10^1 | 1.3297 × 10^2 | 1.1079 × 10^2 | 1.7094 × 10^2 | 2.7119 × 10^2 | 0.0000 × 10^0
f15 | Minimum | 8.9720 × 10^9 | 8.1270 × 10^9 | 8.4956 × 10^9 | 2.9754 × 10^−3 | 4.4331 × 10^3 | 1.3367 × 10^−3
f15 | Mean | 9.6603 × 10^9 | 8.9428 × 10^9 | 9.1331 × 10^9 | 8.6219 × 10^1 | 6.1594 × 10^3 | 1.0665 × 10^−1
f15 | SD | 3.7057 × 10^8 | 4.3379 × 10^8 | 4.0604 × 10^8 | 1.0650 × 10^2 | 1.1364 × 10^3 | 1.0014 × 10^−1
Table A3. Experimental results in dimension 1000.
Function | Indicator | ICPSO | IIWDSPSO | IWQPSO | HRPSO | MPSO | HPLPSO
f1 | Minimum | 2.9540 × 10^5 | 2.8271 × 10^5 | 2.0454 × 10^5 | 0.0000 × 10^0 | 8.1192 × 10^2 | 0.0000 × 10^0
f1 | Mean | 3.2795 × 10^5 | 3.2529 × 10^5 | 2.5525 × 10^5 | 9.7907 × 10^−26 | 9.4759 × 10^2 | 0.0000 × 10^0
f1 | SD | 1.8229 × 10^4 | 2.1050 × 10^4 | 1.8248 × 10^4 | 1.8538 × 10^−25 | 8.9128 × 10^1 | 0.0000 × 10^0
f2 | Minimum | 1.4255 × 10^6 | 2.3546 × 10^6 | 2.3831 × 10^6 | 0.0000 × 10^0 | 1.1904 × 10^4 | 0.0000 × 10^0
f2 | Mean | 1.6509 × 10^6 | 2.6763 × 10^6 | 2.6007 × 10^6 | 6.0668 × 10^−25 | 1.8470 × 10^4 | 0.0000 × 10^0
f2 | SD | 1.0542 × 10^5 | 1.3375 × 10^5 | 1.3208 × 10^5 | 9.7629 × 10^−25 | 3.2878 × 10^3 | 0.0000 × 10^0
f3 | Minimum | 3.6544 × 10^−5 | 5.2990 × 10^−7 | 1.0619 × 10^−6 | 5.1529 × 10^−231 | 4.1656 × 10^−39 | 0.0000 × 10^0
f3 | Mean | 1.2835 × 10^−3 | 3.4224 × 10^−5 | 1.4559 × 10^−5 | 3.7464 × 10^−48 | 6.1014 × 10^−34 | 0.0000 × 10^0
f3 | SD | 1.3303 × 10^−3 | 7.6936 × 10^−5 | 2.7172 × 10^−5 | 2.0520 × 10^−47 | 1.9867 × 10^−33 | 0.0000 × 10^0
f4 | Minimum | 1.2250 × 10^3 | 1.3708 × 10^3 | 8.3737 × 10^2 | 0.0000 × 10^0 | 1.2057 × 10^2 | 0.0000 × 10^0
f4 | Mean | 9.0277 × 10^21 | 4.2054 × 10^4 | 9.4841 × 10^2 | 4.1506 × 10^4 | 3.9819 × 10^4 | 0.0000 × 10^0
f4 | SD | 4.9447 × 10^22 | 3.1388 × 10^4 | 4.4583 × 10^1 | 3.2121 × 10^4 | 3.2034 × 10^4 | 0.0000 × 10^0
f5 | Minimum | 2.9657 × 10^5 | 2.7302 × 10^5 | 2.3251 × 10^5 | 1.1387 × 10^−3 | 7.9344 × 10^2 | 2.1923 × 10^−4
f5 | Mean | 3.3000 × 10^5 | 3.2513 × 10^5 | 2.5833 × 10^5 | 6.5771 × 10^−1 | 9.3025 × 10^2 | 1.3942 × 10^−2
f5 | SD | 1.7559 × 10^4 | 2.2347 × 10^4 | 1.5832 × 10^4 | 1.0386 × 10^0 | 8.2377 × 10^1 | 1.9242 × 10^−2
f6 | Minimum | 1.3405 × 10^−80 | 1.8197 × 10^−224 | 5.1048 × 10^−116 | 0.0000 × 10^0 | 8.5865 × 10^−186 | 0.0000 × 10^0
f6 | Mean | 8.8966 × 10^−9 | 8.7109 × 10^−114 | 1.3882 × 10^−106 | 0.0000 × 10^0 | 3.0443 × 10^−52 | 0.0000 × 10^0
f6 | SD | 1.6367 × 10^−8 | 4.7712 × 10^−113 | 7.4572 × 10^−106 | 0.0000 × 10^0 | 1.5938 × 10^−51 | 0.0000 × 10^0
f7 | Minimum | 1.5369 × 10^10 | 1.2945 × 10^10 | 1.0927 × 10^10 | 0.0000 × 10^0 | 5.4319 × 10^7 | 0.0000 × 10^0
f7 | Mean | 1.9001 × 10^10 | 1.5679 × 10^10 | 1.3545 × 10^10 | 3.2089 × 10^−21 | 7.4270 × 10^7 | 0.0000 × 10^0
f7 | SD | 2.0619 × 10^9 | 1.6789 × 10^9 | 1.6399 × 10^9 | 5.3257 × 10^−21 | 1.2995 × 10^7 | 0.0000 × 10^0
f8 | Minimum | 1.4853 × 10^1 | 1.5349 × 10^1 | 1.4472 × 10^1 | 1.4300 × 10^−13 | 3.0835 × 10^0 | 8.8818 × 10^−16
f8 | Mean | 1.5391 × 10^1 | 1.5644 × 10^1 | 1.4941 × 10^1 | 1.2656 × 10^−8 | 3.2837 × 10^0 | 8.8818 × 10^−16
f8 | SD | 2.2673 × 10^−1 | 1.9947 × 10^−1 | 2.7326 × 10^−1 | 6.9052 × 10^−8 | 1.0852 × 10^−1 | 0.0000 × 10^0
f9 | Minimum | 2.6654 × 10^3 | 2.5926 × 10^3 | 2.9147 × 10^3 | 0.0000 × 10^0 | 7.7545 × 10^0 | 0.0000 × 10^0
f9 | Mean | 2.9548 × 10^3 | 2.8508 × 10^3 | 3.3837 × 10^3 | 1.4803 × 10^−17 | 9.5230 × 10^0 | 0.0000 × 10^0
f9 | SD | 1.6381 × 10^2 | 1.7000 × 10^2 | 2.2212 × 10^2 | 6.3432 × 10^−17 | 7.3750 × 10^−1 | 0.0000 × 10^0
f10 | Minimum | 1.0075 × 10^4 | 1.0139 × 10^4 | 1.0483 × 10^4 | 2.3130 × 10^−11 | 3.5558 × 10^3 | 0.0000 × 10^0
f10 | Mean | 1.0379 × 10^4 | 1.0831 × 10^4 | 1.1000 × 10^4 | 4.8138 × 10^−9 | 4.3903 × 10^3 | 0.0000 × 10^0
f10 | SD | 1.6316 × 10^2 | 3.1183 × 10^2 | 2.9411 × 10^2 | 1.0488 × 10^−8 | 7.7341 × 10^2 | 0.0000 × 10^0
f11 | Minimum | 1.0170 × 10^6 | 1.2346 × 10^6 | 1.1123 × 10^6 | 0.0000 × 10^0 | 1.1826 × 10^3 | 0.0000 × 10^0
f11 | Mean | 4.6637 × 10^6 | 2.1380 × 10^6 | 2.0596 × 10^6 | 8.5539 × 10^4 | 6.1428 × 10^3 | 0.0000 × 10^0
f11 | SD | 9.4867 × 10^6 | 4.8288 × 10^5 | 4.6796 × 10^5 | 2.1970 × 10^5 | 1.0237 × 10^4 | 0.0000 × 10^0
f12 | Minimum | 4.7922 × 10^5 | 2.6576 × 10^5 | 1.5585 × 10^5 | 0.0000 × 10^0 | 1.8221 × 10^4 | 0.0000 × 10^0
f12 | Mean | 8.9141 × 10^5 | 4.2382 × 10^5 | 2.4689 × 10^5 | 1.2397 × 10^−25 | 3.0788 × 10^4 | 0.0000 × 10^0
f12 | SD | 2.2368 × 10^5 | 9.8986 × 10^4 | 4.0934 × 10^4 | 1.5350 × 10^−25 | 8.6278 × 10^3 | 0.0000 × 10^0
f13 | Minimum | 1.0090 × 10^4 | 1.1497 × 10^4 | 1.1533 × 10^4 | 6.5023 × 10^−4 | 8.1053 × 10^1 | 1.4340 × 10^−5
f13 | Mean | 1.0912 × 10^4 | 1.2656 × 10^4 | 1.2429 × 10^4 | 3.2252 × 10^−2 | 8.5025 × 10^1 | 5.4160 × 10^−4
f13 | SD | 3.7131 × 10^2 | 5.1532 × 10^2 | 4.2731 × 10^2 | 3.2202 × 10^−2 | 3.2350 × 10^0 | 6.2737 × 10^−4
f14 | Minimum | 4.2307 × 10^3 | 4.0227 × 10^3 | 3.9026 × 10^3 | 4.5971 × 10^−1 | 1.3283 × 10^3 | 0.0000 × 10^0
f14 | Mean | 4.6162 × 10^3 | 4.4627 × 10^3 | 4.1006 × 10^3 | 3.1388 × 10^2 | 2.0946 × 10^3 | 0.0000 × 10^0
f14 | SD | 1.4452 × 10^2 | 2.2820 × 10^2 | 1.5627 × 10^2 | 3.2284 × 10^2 | 4.5626 × 10^2 | 0.0000 × 10^0
f15 | Minimum | 1.8803 × 10^10 | 1.6743 × 10^10 | 1.8776 × 10^10 | 2.5459 × 10^−1 | 3.0104 × 10^7 | 1.3149 × 10^−2
f15 | Mean | 2.0190 × 10^10 | 1.9867 × 10^10 | 1.9652 × 10^10 | 2.6315 × 10^2 | 3.8167 × 10^7 | 2.4985 × 10^−1
f15 | SD | 5.2031 × 10^8 | 9.7205 × 10^8 | 6.3024 × 10^8 | 2.9957 × 10^2 | 3.8032 × 10^6 | 2.2250 × 10^−1

References

  1. Bilandi, N.; Verma, H.; Dhir, R. hPSO-SA: Hybrid particle swarm optimization-simulated annealing algorithm for relay node selection in wireless body area networks. Appl. Intell. 2021, 51, 1410–1438.
  2. Song, X.; Zhang, Y.; Gong, D.; Gao, X. A fast hybrid feature selection based on correlation-guided clustering and particle swarm optimization for high-dimensional data. IEEE Trans. Cybern. 2021, 52, 9573–9586.
  3. Liang, H.; Qi, L.; Liu, X. Modeling and optimization of robot welding process parameters based on improved SVM-PSO. Int. J. Adv. Manuf. Technol. 2024, 133, 2595–2605.
  4. Lin, S.; Liu, A.; Wang, J.; Kong, X. An improved fault-tolerant cultural-PSO with probability for multi-AGV path planning. Expert Syst. Appl. 2024, 237, 121510.
  5. Pahnehkolaei, S.; Alfi, A.; Machado, J. Analytical stability analysis of the fractional-order particle swarm optimization algorithm. Chaos Solitons Fractals 2022, 155, 111658.
  6. Zhai, S.; Li, G.; Wu, G.; Hou, M.; Jia, Q. Cooperative task allocation for multi heterogeneous aerial vehicles using particle swarm optimization algorithm and entropy weight method. Appl. Soft Comput. 2023, 148, 110918.
  7. Hou, S.; Gebreyesus, G.D.; Fujimura, S. Day-ahead multi-modal demand side management in microgrid via two-stage improved ring-topology particle swarm optimization. Expert Syst. Appl. 2024, 238, 122135.
  8. Hu, G.; Wang, S.; Zhang, J.; Houssein, E. Particle swarm optimization for hybrid mutant slime mold: An efficient algorithm for solving the hyperparameters of adaptive Grey-Markov modified model. Inf. Sci. 2025, 689, 121417.
  9. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the IEEE International Conference on Evolutionary Computation, Anchorage, AK, USA, 4–9 May 1998; pp. 69–73.
  10. Nobile, M.S.; Cazzaniga, P.; Besozzi, D.; Colombo, R.; Mauri, G.; Pasi, G. Fuzzy Self-Tuning PSO: A settings-free algorithm for global optimization. Swarm Evol. Comput. 2018, 39, 70–85.
  11. Hou, G.; Xu, Z.; Liu, X.; Jin, C. Improved particle swarm optimization for selection of shield tunneling parameter values. Comput. Model. Eng. Sci. 2019, 118, 317–337.
  12. Xu, Z.D.; Hou, G.Y.; Yang, L.; Huang, X.J. Selection of shield construction parameters in specific sections based on e-SVR and improved particle swarm optimization algorithm. China Mech. Eng. 2022, 33, 3007–3014.
  13. Bangyal, W.; Nisar, K.; Soomro, T.; Ibrahim, A.; Mallah, G.; Hassan, N.; Rehman, N. An improved particle swarm optimization algorithm for data classification. Appl. Sci. 2023, 13, 283.
  14. Zong, T.; Li, J.; Lu, G. Parameter estimation of multivariable Wiener nonlinear systems by the improved particle swarm optimization and coupling identification. Inf. Sci. 2024, 661, 120192.
  15. Gao, W.; Liu, S.; Jiao, H.; Qin, C. Particle swarm optimization with search operator of artificial bee colony algorithm. Control. Decis. 2012, 27, 833–838.
  16. Wang, Z.; Sun, J.; Yin, C. A support vector machine based on an improved particle swarm optimization algorithm and its application. J. Harbin Eng. Univ. 2016, 37, 1728–1733.
  17. Chen, Y.; Li, L.; Xiao, J.; Yang, Y.; Liang, J.; Tao, L. Particle swarm optimizer with crossover operation. Eng. Appl. Artif. Intell. 2018, 70, 159–169.
  18. Gao, X.; Yao, S. Research on photovoltaic MPPT based on mutation strategy PSO algorithm. Modul. Mach. Tool Autom. Manuf. Tech. 2022, 2022, 151–153.
  19. Kumar, S.; Muthusamy, P.; Jerald, M. A hybrid framework for improved weighted quantum particle swarm optimization and fast mask recurrent CNN to enhance phishing-URL prediction performance. Int. J. Comput. Intell. Syst. 2024, 17, 251.
  20. Pan, K.; Liang, C.D.; Lu, M. Optimal scheduling of electric vehicle ordered charging and discharging based on improved gravitational search and particle swarm optimization algorithm. Int. J. Electr. Power Energy Syst. 2024, 157, 109766.
  21. Zeng, Y.; Guo, G.; Chen, S.; Qiang, Y.; Liu, J. Energy-efficient data collection from UAV in WSNs based on improved PSO algorithm. IEEE Sens. J. 2024, 24, 35762–35774.
  22. Du, W.; Li, B. Multi-strategy ensemble particle swarm optimization for dynamic optimization. Inf. Sci. 2008, 178, 3096–3109.
  23. Zhang, Q.; Li, P. An adaptive multi-strategy behavior particle swarm optimization algorithm. Control. Decis. 2020, 35, 115–122.
  24. Asghari, K.; Masdari, M.; Gharehchopogh, F.; Saneifard, R. Multi-swarm and chaotic whale-particle swarm optimization algorithm with a selection method based on roulette wheel. Expert Syst. 2021, 38, 12779.
  25. Hu, G.; Guo, Y.; Zhao, W.; Houssein, E. An adaptive snow ablation-inspired particle swarm optimization with its application in geometric optimization. Artif. Intell. Rev. 2024, 57, 332.
  26. Ge, H.; Sun, L.; Tan, G.; Chen, Z.; Chen, C. Cooperative hierarchical PSO with two stage variable interaction reconstruction for large scale optimization. IEEE Trans. Cybern. 2017, 47, 2809–2823.
  27. Liang, J.; Liu, R.; Yu, K.; Qu, B. Dynamic multi-swarm particle swarm optimization with cooperative coevolution for large scale global optimization. J. Softw. 2018, 29, 2595–2605.
  28. Wang, Z.; Zhan, Z.; Yu, W.; Lin, Y.; Zhang, J.; Gu, T.; Zhang, J. Dynamic group learning distributed particle swarm optimization for large-scale optimization and its application in cloud workflow scheduling. IEEE Trans. Cybern. 2020, 50, 2715–2729.
  29. Mousavirad, S.; Rahnamayan, S. CenPSO: A novel center-based particle swarm optimization algorithm for large-scale optimization. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; pp. 2066–2071.
  30. Wang, Z.; Zhan, Z.; Kwong, S.; Jin, H.; Zhang, J. Adaptive granularity learning distributed particle swarm optimization for large-scale optimization. IEEE Trans. Cybern. 2021, 51, 1175–1188.
  31. Wang, R.; Hao, K.; Huang, B.; Zhu, X. Adaptive niching particle swarm optimization with local search for multimodal optimization. Appl. Soft Comput. 2023, 133, 109923.
  32. Li, J. Oversampling framework based on sample subspace optimization with accelerated binary particle swarm optimization for imbalanced classification. Appl. Soft Comput. 2024, 162, 111708.
  33. Gu, X.; Huang, M.; Liang, X.; Jiao, X. An improved chaotic particle swarm optimization algorithm with improved inertia weight. J. Dalian Jiaotong Univ. 2020, 41, 102–106.
  34. Geng, Z.; Li, M.; Cao, S.; Liu, C. A whale optimization algorithm based on hybrid reverse learning strategy. Comput. Eng. Sci. 2022, 44, 355–363.
  35. Zhang, Y.; Liu, X.; Bao, F.; Chi, J.; Zhang, C.; Liu, P. Particle swarm optimization with adaptive learning strategy. Knowl. Based Syst. 2020, 196, 105789.
  36. Zhang, X.; Wang, X.; Kang, Q.; Cheng, J. Differential mutation and novel social learning particle swarm optimization algorithm. Inf. Sci. 2018, 480, 109–129.
  37. Zhao, L. Cloud computing resource scheduling based on improved quantum particle swarm optimization algorithm. J. Nanjing Univ. Sci. Technol. 2016, 40, 223–228.
  38. Zhou, C.; Ding, L.; He, R. PSO-based Elman neural network model for predictive control of air chamber pressure in slurry shield tunneling under Yangtze River. Autom. Constr. 2013, 36, 208–217.
Figure 1. Framework of elite individuals to guide population updates.
Figure 2. The iterative process of six algorithms in dimension 100.
Figure 3. The number of times HRPSO and HPLPSO obtain the theoretical optimal value.
Figure 4. Mean variation in convergence values of five algorithms from dimension 100 to 1000.
Figure 5. Signal coverage area based on the schemes optimized by HRPSO and HPLPSO.
Table 1. The test functions.
Number | Function | Search Space | Dimension
1 | $f_1(x)=\sum_{i=1}^{n} x_i^2$ | [−100, 100] | 100/200/500/1000
2 | $f_2(x)=\sum_{i=1}^{n} i x_i^2$ | [−10, 10] | 100/200/500/1000
3 | $f_3(x)=\sum_{i=1}^{n} |x_i|^{\,i+1}$ | [−1, 1] | 100/200/500/1000
4 | $f_4(x)=\sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | [−10, 10] | 100/200/500/1000
5 | $f_5(x)=\sum_{i=1}^{n} (x_i + 0.5)^2$ | [−100, 100] | 100/200/500/1000
6 | $f_6(x)=\min\{|x_i|,\ 1 \le i \le n\}$ | [−100, 100] | 100/200/500/1000
7 | $f_7(x)=\sum_{i=1}^{n} (10^6)^{(i-1)/(n-1)} x_i^2$ | [−100, 100] | 100/200/500/1000
8 | $f_8(x)=-20\exp\!\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}\right)-\exp\!\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | [−32, 32] | 100/200/500/1000
9 | $f_9(x)=\tfrac{1}{4000}\sum_{i=1}^{n} x_i^2-\prod_{i=1}^{n}\cos\!\left(x_i/\sqrt{i}\right)+1$ | [−600, 600] | 100/200/500/1000
10 | $f_{10}(x)=\sum_{i=1}^{n}\left(x_i^2-10\cos(2\pi x_i)+10\right)$ | [−5.12, 5.12] | 100/200/500/1000
11 | $f_{11}(x)=10^6 x_1^2+\sum_{i=2}^{n} x_i^2$ | [−100, 100] | 100/200/500/1000
12 | $f_{12}(x)=\sum_{i=1}^{n} x_i^2+\left(\sum_{i=1}^{n} 0.5 x_i\right)^2+\left(\sum_{i=1}^{n} 0.5 x_i\right)^4$ | [−100, 100] | 100/200/500/1000
13 | $f_{13}(x)=0.1n-\left(0.1\sum_{i=1}^{n}\cos(5\pi x_i)-\sum_{i=1}^{n} x_i^2\right)$ | [−10, 15] | 100/200/500/1000
14 | $f_{14}(x)=\sum_{i=1}^{n}\left|x_i\sin(x_i)+0.1 x_i\right|$ | [−50, 50] | 100/200/500/1000
15 | $f_{15}(x)=\sum_{i=1}^{n-1}\left[100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2\right]$ | [−10, 50] | 100/200/500/1000
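A few of the benchmarks in Table 1 written out directly (sphere f1, Ackley f8, Rastrigin f10): all three have theoretical optimum 0 at the origin, which is the value the algorithms in the experiments try to reach.

```python
import math

def f1_sphere(x):
    return sum(v * v for v in x)

def f8_ackley(x):
    n = len(x)
    return (-20.0 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
            - math.exp(sum(math.cos(2 * math.pi * v) for v in x) / n)
            + 20.0 + math.e)

def f10_rastrigin(x):
    return sum(v * v - 10.0 * math.cos(2 * math.pi * v) + 10.0 for v in x)

x0 = [0.0] * 100                            # the global optimum of all three
print(f1_sphere(x0), f10_rastrigin(x0))     # 0.0 0.0
print(abs(f8_ackley(x0)) < 1e-12)           # True
```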
Table 2. Parameters and their values.
Algorithm | Parameters and Values
ICPSO | T = 5000, M = 50, c1 = c2 = 0.1, variance critical value ε = 0.0001, ωmax = 0.9, ωmin = 0.4
IIWDSPSO | T = 5000, M = 50, c1 = c2 = 0.1, ωmax = 0.9, ωmin = 0.4
HRPSO | T = 5000, M = 50, c1 = c2 = 0.1, ωmax = 0.9, ωmin = 0.4
IWQPSO | T = 5000, M = 50, c1 = c2 = 0.1, α = 0.001, ω = 0.9
MPSO | T = 5000, M = 50, c1 = c2 = 0.1, ωmax = 0.9, ωmin = 0.4, mutation probability = 0.6
HPLPSO | T = 5000, M = 50, c1 = c2 = 0.1, j = 10 (according to T), ωmax = 0.9, ωmin = 0.4
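The ωmax/ωmin pairs in Table 2 imply a decreasing inertia weight over the T iterations. A common choice, assumed here purely for illustration since each variant may use its own schedule, is a linear ramp from ωmax at iteration 0 down to ωmin at iteration T.

```python
# Linearly decreasing inertia weight (assumed schedule, not necessarily the
# exact one used by every compared variant).
def inertia(t, T=5000, w_max=0.9, w_min=0.4):
    return w_max - (w_max - w_min) * t / T

print(inertia(0), inertia(2500), inertia(5000))   # 0.9 0.65 0.4
```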
Table 3. Experimental results in dimension 100.
FunctionIndicatorICPSOIIWDSPSOIWQPSOHRPSOMPSOHPLPSO
f1Minimum1.7513 × 1049.9533 × 1038.7169 × 1030.0000 × 1002.7714 × 10−20.0000 × 100
Mean2.3555 × 1041.4021 × 1041.4443 × 1041.1326 × 10−276.7620 × 10−20.0000 × 100
SD2.9355 × 1032.7239 × 1032.6879 × 1032.2021 × 10−271.9948 × 10−20.0000 × 100
f2Minimum7.5088 × 1035.4491 × 1031.1223 × 1040.0000 × 1004.3063 × 10−20.0000 × 100
Mean1.0999 × 1041.0253 × 1041.3934 × 1049.3992 × 10287.8452 × 10−10.0000 × 100
SD1.3028 × 1031.9945 × 1032.0602 × 1032.0780 × 10−271.3252 × 1000.0000 × 100
f3Minimum5.0922 × 10−51.2775 × 10−76.9402 × 10−73.3456 × 10−2826.4938 × 10−280.0000 × 100
Mean1.4100 × 10−33.0676 × 10−69.9950 × 10−62.4307 × 10−574.7819 × 10−260.0000 × 100
SD2.1920 × 10−32.5330 × 10−61.1879 × 10−51.3309 × 10−567.8184 × 10−260.0000 × 100
f4Minimum0.0000 × 1009.6208 × 1011.0526 × 1020.0000 × 1001.2973 × 10−10.0000 × 100
Mean0.0000 × 1001.3155 × 1021.4467 × 1024.5019 × 10−151.8028 × 10−10.0000 × 100
SD0.0000 × 1002.8290 × 1013.1411 × 1017.2779 × 10−152.2002 × 10−20.0000 × 100
f5Minimum1.7697 × 1041.0084 × 1049.6906 × 1033.8796 × 10−53.8005 × 10−21.3167 × 10−6
Mean2.3523 × 1041.4345 × 1041.4716 × 1041.7458 × 10−16.5010 × 10−28.7372 × 10−4
SD2.9547 × 1032.6668 × 1032.7361 × 1037.2496 × 10−12.2280 × 10−21.1115 × 10−3
| Function | Metric | Alg. 1 | Alg. 2 | Alg. 3 | HRPSO | Alg. 5 | HPLPSO |
| --- | --- | --- | --- | --- | --- | --- | --- |
| f6 | Minimum | 0.0000 × 10^0 | 1.9361 × 10^−262 | 2.4334 × 10^−117 | 0.0000 × 10^0 | 2.2998 × 10^−76 | 0.0000 × 10^0 |
| f6 | Mean | 2.5030 × 10^−7 | 9.4984 × 10^−133 | 3.7416 × 10^−112 | 0.0000 × 10^0 | 2.6175 × 10^−39 | 0.0000 × 10^0 |
| f6 | SD | 9.5818 × 10^−7 | 5.2025 × 10^−132 | 1.5907 × 10^−111 | 0.0000 × 10^0 | 1.0923 × 10^−38 | 0.0000 × 10^0 |
| f7 | Minimum | 4.5410 × 10^8 | 7.7807 × 10^7 | 1.2661 × 10^8 | 0.0000 × 10^0 | 1.6507 × 10^5 | 0.0000 × 10^0 |
| f7 | Mean | 8.6958 × 10^8 | 1.9058 × 10^8 | 2.6365 × 10^8 | 1.9936 × 10^−27 | 6.2627 × 10^5 | 0.0000 × 10^0 |
| f7 | SD | 3.0020 × 10^8 | 6.6627 × 10^7 | 9.9451 × 10^7 | 1.0130 × 10^−26 | 2.5102 × 10^5 | 0.0000 × 10^0 |
| f8 | Minimum | 1.2998 × 10^1 | 1.2534 × 10^1 | 1.1433 × 10^1 | 2.2204 × 10^−14 | 2.8085 × 10^−2 | 8.8818 × 10^−16 |
| f8 | Mean | 1.4219 × 10^1 | 1.3323 × 10^1 | 1.2886 × 10^1 | 1.8527 × 10^−13 | 3.5691 × 10^−2 | 8.8818 × 10^−16 |
| f8 | SD | 4.4341 × 10^−1 | 5.5417 × 10^−1 | 8.0146 × 10^−1 | 1.1604 × 10^−13 | 4.6858 × 10^−3 | 0.0000 × 10^0 |
| f9 | Minimum | 1.5965 × 10^2 | 8.8168 × 10^1 | 1.2588 × 10^2 | 0.0000 × 10^0 | 2.8456 × 10^−1 | 0.0000 × 10^0 |
| f9 | Mean | 2.1060 × 10^2 | 1.2544 × 10^2 | 1.9250 × 10^2 | 2.5905 × 10^−17 | 4.1438 × 10^−1 | 0.0000 × 10^0 |
| f9 | SD | 2.8763 × 10^1 | 2.4017 × 10^1 | 3.2640 × 10^1 | 7.5374 × 10^−17 | 8.0106 × 10^−2 | 0.0000 × 10^0 |
| f10 | Minimum | 6.8354 × 10^2 | 4.7656 × 10^2 | 5.5833 × 10^2 | 5.8842 × 10^−11 | 1.8873 × 10^2 | 0.0000 × 10^0 |
| f10 | Mean | 8.1287 × 10^2 | 5.5010 × 10^2 | 6.6587 × 10^2 | 1.1844 × 10^−8 | 2.6274 × 10^2 | 0.0000 × 10^0 |
| f10 | SD | 4.7896 × 10^1 | 4.8178 × 10^1 | 6.1698 × 10^1 | 2.6709 × 10^−8 | 3.4349 × 10^1 | 0.0000 × 10^0 |
| f11 | Minimum | 1.0310 × 10^5 | 8.6993 × 10^4 | 9.2270 × 10^4 | 0.0000 × 10^0 | 4.9837 × 10^−2 | 0.0000 × 10^0 |
| f11 | Mean | 1.3315 × 10^6 | 1.9725 × 10^5 | 2.0374 × 10^5 | 1.7267 × 10^4 | 9.3048 × 10^−2 | 0.0000 × 10^0 |
| f11 | SD | 3.5984 × 10^6 | 6.4806 × 10^4 | 5.2324 × 10^4 | 4.4518 × 10^4 | 2.9254 × 10^−2 | 0.0000 × 10^0 |
| f12 | Minimum | 3.4347 × 10^4 | 1.4216 × 10^4 | 1.3585 × 10^4 | 0.0000 × 10^0 | 1.7687 × 10^−1 | 0.0000 × 10^0 |
| f12 | Mean | 6.2876 × 10^4 | 2.0836 × 10^4 | 1.7919 × 10^4 | 1.1084 × 10^−27 | 2.5605 × 10^−1 | 0.0000 × 10^0 |
| f12 | SD | 2.5469 × 10^4 | 4.7079 × 10^3 | 3.3004 × 10^3 | 2.0802 × 10^−27 | 4.7403 × 10^−2 | 0.0000 × 10^0 |
| f13 | Minimum | 5.8170 × 10^2 | 4.2538 × 10^2 | 5.2337 × 10^2 | 6.8061 × 10^−6 | 9.9968 × 10^−3 | 1.4714 × 10^−7 |
| f13 | Mean | 7.6860 × 10^2 | 5.1784 × 10^2 | 7.0418 × 10^2 | 5.8565 × 10^−1 | 4.2782 × 10^−1 | 5.7884 × 10^−5 |
| f13 | SD | 8.8485 × 10^1 | 5.3366 × 10^1 | 1.0012 × 10^2 | 2.2350 × 10^0 | 2.6240 × 10^−1 | 5.5313 × 10^−5 |
| f14 | Minimum | 2.8651 × 10^2 | 1.7279 × 10^2 | 1.8432 × 10^2 | 2.4167 × 10^−2 | 1.8294 × 10^1 | 0.0000 × 10^0 |
| f14 | Mean | 3.4265 × 10^2 | 2.2056 × 10^2 | 2.4579 × 10^2 | 1.6991 × 10^1 | 5.5923 × 10^1 | 0.0000 × 10^0 |
| f14 | SD | 3.1232 × 10^1 | 2.8994 × 10^1 | 3.0792 × 10^1 | 1.9610 × 10^1 | 2.6074 × 10^1 | 0.0000 × 10^0 |
| f15 | Minimum | 1.1903 × 10^9 | 4.5588 × 10^8 | 8.5860 × 10^8 | 2.0981 × 10^−2 | 9.7967 × 10^1 | 8.0217 × 10^−5 |
| f15 | Mean | 1.4696 × 10^9 | 7.2627 × 10^8 | 1.0713 × 10^9 | 2.4299 × 10^1 | 4.1217 × 10^2 | 2.7046 × 10^−2 |
| f15 | SD | 1.6924 × 10^8 | 1.5221 × 10^8 | 1.3442 × 10^8 | 2.9723 × 10^1 | 2.6843 × 10^2 | 3.3449 × 10^−2 |

(Alg. 1–3 and Alg. 5 denote the comparison algorithms listed in the table header; the fourth and sixth value columns are HRPSO and HPLPSO, matching the means reported in Tables 4 and 5.)
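Each function's Minimum, Mean and SD row summarizes the best fitness value found in the 30 independent runs. A minimal sketch of how such per-function statistics can be produced from the run results (the helper name `run_statistics` is an assumption for illustration, not from the paper):

```python
import statistics

def run_statistics(best_values):
    """Minimum, mean, and standard deviation of the best fitness
    reached in a set of independent runs (30 runs in the paper)."""
    return {
        "Minimum": min(best_values),
        "Mean": statistics.fmean(best_values),
        "SD": statistics.stdev(best_values),  # sample standard deviation
    }
```

A run set that converges to the same value in every run, as HPLPSO does on 11 functions, yields an SD of exactly zero, which is how the 0.0000 × 10^0 entries arise.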
Table 4. Analysis of significant difference in dimension 100.

| Function | Mean of HRPSO | Mean of HPLPSO |
| --- | --- | --- |
| f1 | 1.1326 × 10^−27 | 0.0000 × 10^0 |
| f2 | 9.3992 × 10^−28 | 0.0000 × 10^0 |
| f3 | 2.4307 × 10^−57 | 0.0000 × 10^0 |
| f4 | 4.5019 × 10^−15 | 0.0000 × 10^0 |
| f5 | 1.7458 × 10^−1 | 8.7372 × 10^−4 |
| f6 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| f7 | 1.9936 × 10^−27 | 0.0000 × 10^0 |
| f8 | 1.8527 × 10^−13 | 8.8818 × 10^−16 |
| f9 | 2.5905 × 10^−17 | 0.0000 × 10^0 |
| f10 | 1.1844 × 10^−8 | 0.0000 × 10^0 |
| f11 | 1.7267 × 10^4 | 0.0000 × 10^0 |
| f12 | 1.1084 × 10^−27 | 0.0000 × 10^0 |
| f13 | 5.8565 × 10^−1 | 5.7884 × 10^−5 |
| f14 | 1.6991 × 10^1 | 0.0000 × 10^0 |
| f15 | 2.4299 × 10^1 | 2.7046 × 10^−2 |
| Friedman test | Chi-sq = 14, p = 0.0002 < 0.05 | |
| Wilcoxon test | p = 0.0029 < 0.05, h = 1 | |
Table 5. Analysis of significant difference in dimensions 200, 500 and 1000.

| Function | Mean of HRPSO (200) | Mean of HPLPSO (200) | Mean of HRPSO (500) | Mean of HPLPSO (500) | Mean of HRPSO (1000) | Mean of HPLPSO (1000) |
| --- | --- | --- | --- | --- | --- | --- |
| f1 | 6.1508 × 10^−27 | 0.0000 × 10^0 | 2.8969 × 10^−26 | 0.0000 × 10^0 | 9.7907 × 10^−26 | 0.0000 × 10^0 |
| f2 | 1.0074 × 10^−26 | 0.0000 × 10^0 | 8.7055 × 10^−26 | 0.0000 × 10^0 | 6.0668 × 10^−25 | 0.0000 × 10^0 |
| f3 | 1.2700 × 10^−49 | 0.0000 × 10^0 | 1.0381 × 10^−60 | 0.0000 × 10^0 | 3.7464 × 10^−48 | 0.0000 × 10^0 |
| f4 | 6.0975 × 10^−15 | 0.0000 × 10^0 | 3.6619 × 10^−14 | 0.0000 × 10^0 | 4.1506 × 10^4 | 0.0000 × 10^0 |
| f5 | 1.8719 × 10^−1 | 2.7297 × 10^−3 | 2.5039 × 10^−1 | 7.6894 × 10^−3 | 6.5771 × 10^−1 | 1.3942 × 10^−2 |
| f6 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| f7 | 1.8738 × 10^−26 | 0.0000 × 10^0 | 6.4970 × 10^−22 | 0.0000 × 10^0 | 3.2089 × 10^−21 | 0.0000 × 10^0 |
| f8 | 5.9419 × 10^−13 | 8.8818 × 10^−16 | 2.6649 × 10^−12 | 8.8818 × 10^−16 | 1.2656 × 10^−8 | 8.8818 × 10^−16 |
| f9 | 1.1842 × 10^−16 | 0.0000 × 10^0 | 1.3053 × 10^−12 | 0.0000 × 10^0 | 1.4803 × 10^−17 | 0.0000 × 10^0 |
| f10 | 1.1622 × 10^−8 | 0.0000 × 10^0 | 2.7767 × 10^−9 | 0.0000 × 10^0 | 4.8138 × 10^−9 | 0.0000 × 10^0 |
| f11 | 6.5976 × 10^4 | 0.0000 × 10^0 | 3.0162 × 10^5 | 0.0000 × 10^0 | 8.5539 × 10^4 | 0.0000 × 10^0 |
| f12 | 5.7643 × 10^−27 | 0.0000 × 10^0 | 2.8259 × 10^−26 | 0.0000 × 10^0 | 1.2397 × 10^−25 | 0.0000 × 10^0 |
| f13 | 6.4201 × 10^−3 | 1.3925 × 10^−4 | 5.7535 × 10^−2 | 9.1016 × 10^−3 | 3.2252 × 10^−2 | 5.4160 × 10^−4 |
| f14 | 4.3151 × 10^1 | 0.0000 × 10^0 | 1.5057 × 10^2 | 0.0000 × 10^0 | 3.1388 × 10^2 | 0.0000 × 10^0 |
| f15 | 4.3206 × 10^1 | 4.5509 × 10^−2 | 8.6219 × 10^1 | 1.0665 × 10^−1 | 2.6315 × 10^2 | 2.4985 × 10^−1 |
| Friedman test | Chi-sq = 14, p = 0.0002 < 0.05 | | Chi-sq = 14, p = 0.0002 < 0.05 | | Chi-sq = 14, p = 0.0002 < 0.05 | |
| Wilcoxon test | p = 0.0033 < 0.05, h = 1 | | p = 0.0029 < 0.05, h = 1 | | p = 0.0022 < 0.05, h = 1 | |
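The Friedman statistics in Tables 4 and 5 compare the two algorithms by ranking their mean results on each of the 15 functions and testing whether the mean ranks differ. A minimal pure-Python sketch of the rank-based statistic (the helper name `friedman_chi_square` and the matrix layout are assumptions for illustration):

```python
def friedman_chi_square(results):
    """Friedman rank statistic for an n x k matrix of results
    (rows = test functions, columns = algorithms; lower is better).
    Ties within a row receive their average rank."""
    n, k = len(results), len(results[0])
    rank_sums = [0.0] * k
    for row in results:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            # extend over a run of tied values
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    mean_ranks = [s / n for s in rank_sums]
    return 12 * n / (k * (k + 1)) * sum(
        (r - (k + 1) / 2) ** 2 for r in mean_ranks)
```

With k = 2 algorithms over n = 15 functions, a clean sweep by one algorithm gives 12·15/(2·3) · ((2 − 1.5)² + (1 − 1.5)²) = 15; ties such as f6 pull the statistic downward, toward values like the reported Chi-sq = 14.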
Table 6. The deployment and grid division parameter values.

| Parameter | t | L1 | L2 | n1 | n2 | R |
| --- | --- | --- | --- | --- | --- | --- |
| Value | 50 | 2100 | 2100 | 300 | 300 | 250 |
Table 7. The maximum number of grid points covered and coverage proportion.

| Algorithm | HRPSO | HPLPSO |
| --- | --- | --- |
| Maximum number of grid points covered | 88,622 | 89,730 |
| Coverage proportion | 97.82% | 99.04% |
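The coverage proportion in Table 7 is the fraction of grid points that lie within sensing radius R of at least one of the t deployed sensors. A minimal sketch under the Table 6 parameters, assuming grid points on both boundaries, i.e., (n1 + 1) × (n2 + 1) = 301 × 301 = 90,601 points, which is consistent with Table 7 (88,622/90,601 ≈ 97.82% and 89,730/90,601 ≈ 99.04%); the function name and the binary disc sensing model are assumptions for illustration:

```python
def coverage_proportion(sensors, L1=2100, L2=2100, n1=300, n2=300, R=250):
    """Fraction of grid points covered by at least one sensor under a
    binary disc model. The grid has (n1 + 1) x (n2 + 1) points spread
    over an L1 x L2 area (parameter values from Table 6)."""
    dx, dy = L1 / n1, L2 / n2
    total = (n1 + 1) * (n2 + 1)  # 301 * 301 = 90,601 grid points
    covered = 0
    for i in range(n1 + 1):
        for j in range(n2 + 1):
            x, y = i * dx, j * dy
            # covered if any sensor is within Euclidean distance R
            if any((x - sx) ** 2 + (y - sy) ** 2 <= R * R
                   for sx, sy in sensors):
                covered += 1
    return covered / total
```

In the deployment problem, each candidate solution encodes the 2t sensor coordinates, and the optimizer maximizes this proportion (equivalently, the covered grid-point count in Table 7).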

Share and Cite

Xu, Z.; Guo, F. A High-Performance Learning Particle Swarm Optimization Based on the Knowledge of Individuals for Large-Scale Problems. Symmetry 2025, 17, 2103. https://doi.org/10.3390/sym17122103