Mathematics
  • Article
  • Open Access

9 May 2019

An Entropy-Assisted Particle Swarm Optimizer for Large-Scale Optimization Problem

1 Key Laboratory of Intelligent Computing & Signal Processing (Ministry of Education), Anhui University, Hefei 230039, China
2 Sino-German College of Applied Sciences, Tongji University, Shanghai 201804, China
3 Key Lab of Information Network Security, Ministry of Public Security, Shanghai 201112, China
4 School of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
This article belongs to the Special Issue Evolutionary Computation

Abstract

Diversity maintenance is crucial to the performance of a particle swarm optimizer (PSO). However, the update mechanism of the conventional PSO maintains diversity poorly, which usually results in premature convergence or stagnation of exploration in the search space. To help PSO enhance its ability in diversity maintenance, many works have proposed adjusting the distances among particles. However, such operators lead to a situation where diversity maintenance and fitness evaluation are conducted in the same distance-based space, which brings a new challenge in trading off convergence speed against diversity preservation. In this paper, a novel PSO is proposed that employs a competitive strategy and an entropy measurement to manage the convergence operator and diversity maintenance, respectively. The proposed algorithm was applied to the CEC 2013 large-scale optimization benchmark suite, and the results demonstrate that it is feasible and competitive for addressing large-scale optimization problems.

1. Introduction

Swarm intelligence plays a very active role in optimization. As a powerful tool among swarm optimizers, the particle swarm optimizer (PSO) has been widely and successfully applied to many different areas, including electronics [], communication techniques [], energy forecasting [], job-shop scheduling [], economic dispatch problems [], and many others []. In the design of PSO, each particle has two properties, velocity and position. In each generation of the algorithm, the particles' properties are updated according to the mechanisms presented in Equations (1) and (2).
$V_i(t+1) = \omega V_i(t) + c_1 R_1 \big(P_{i,pbest}(t) - P_i(t)\big) + c_2 R_2 \big(P_{gbest}(t) - P_i(t)\big)$ (1)
$P_i(t+1) = P_i(t) + V_i(t+1)$ (2)
where $V_i(t)$ and $P_i(t)$ represent the velocity and position of the $i$th particle in the $t$th generation, $\omega \in [0,1]$ is an inertia weight, and $c_1, c_2 \in [0,1]$ are acceleration coefficients. $R_1, R_2 \in [0,1]^n$ are two random vectors, where $n$ is the dimension of the problem. $P_{i,pbest}(t)$ is the best position the $i$th particle has ever reached, while $P_{gbest}$ is the current global best position found by the whole swarm so far.
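To make the update concrete, the following is a minimal NumPy sketch of the canonical step in Equations (1) and (2); the function name pso_update and the default parameter values are illustrative choices, not part of the original algorithm.

```python
import numpy as np

def pso_update(V, P, P_pbest, P_gbest, omega=0.7, c1=0.5, c2=0.5):
    """One canonical PSO step following Equations (1) and (2).

    V, P    : (swarm_size, n) arrays of current velocities and positions
    P_pbest : (swarm_size, n) array of personal best positions
    P_gbest : (n,) array, global best position found so far
    """
    swarm_size, n = P.shape
    R1 = np.random.rand(swarm_size, n)  # random vectors in [0, 1]^n
    R2 = np.random.rand(swarm_size, n)
    V_new = omega * V + c1 * R1 * (P_pbest - P) + c2 * R2 * (P_gbest - P)
    P_new = P + V_new
    return V_new, P_new
```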
According to the update mechanism of PSO, the current global best particle $P_{gbest}$ attracts the whole swarm. However, if $P_{gbest}$ is a local optimum, it is very difficult for the swarm to escape from it. Therefore, PSO is notorious for its weak ability in diversity maintenance, which usually causes premature convergence or stagnation. To overcome this issue, many works have been proposed in recent decades, which are presented in detail in Section 2. However, since diversity maintenance and fitness evaluation are conducted in the same distance-based space, it is difficult to distinguish the role of an operator in exploration and exploitation, respectively, and it is also a big challenge to explicitly balance the two abilities. Hence, the methods proposed in current research usually encounter problems such as structure design and parameter tuning. To overcome this problem, in this paper, on the one hand, we propose a novel method to maintain swarm diversity by an entropy measurement, while, on the other hand, a competitive strategy is employed for swarm convergence. Since entropy is a frequency-based measurement, while the competitive strategy is based on the Euclidean space, the proposed method eliminates the coupling of the traditional way of balancing exploration and exploitation.
The rest of this paper is organized as follows. Section 2 introduces related work on enhancing PSO's ability in diversity maintenance. Section 3 proposes a novel algorithm named entropy-assisted PSO, which considers convergence and diversity maintenance simultaneously and independently. The experiments on the proposed algorithm are presented in Section 4, where several peer algorithms are selected for comparison to validate the optimization ability. The conclusions and future work are given in Section 5.

3. Algorithm

In traditional PSO, both the diversity maintenance operator and the fitness evaluation operator are conducted in a distance-based measurement space. This results in a heavy coupling between the particles' updates for exploitation and exploration, which makes it very challenging to balance the weights of the two abilities. To overcome this problem, in this paper, we propose a novel idea to improve PSO, termed entropy-assisted particle swarm optimization (EA-PSO). In the proposed algorithm, we consider diversity maintenance and fitness evaluation both independently and simultaneously: diversity maintenance is conducted in a frequency-based space, while fitness evaluation is conducted in a distance-based space. To reduce the computational load in large-scale optimization problems, we only consider phenotypic diversity, which is measured in the fitness domain, rather than genotypic diversity. In each generation, the fitness domain is divided into several segments and we count the number of particles in each segment, as shown in Figure 1.
Figure 1. The illustration for the entropy diversity measurement.
The maximum and minimum fitness values are set as the boundaries of the fitness landscape. The landscape is uniformly divided into several segments, and for each segment we count the number of particles, namely the number of fitness values, that fall into it. The entropy is then calculated with the following formula, which is inspired by the Shannon entropy.
$H = -\sum_{i} p_i \log p_i$ (3)
where $H$ denotes the entropy of the swarm, the sum runs over all segments, and $p_i$ is the probability that a fitness value lies in the $i$th segment, which can be obtained by Equation (4).
$p_i = \dfrac{num_i}{m}$ (4)
where $m$ is the swarm size and $num_i$ is the number of fitness values that fall in the $i$th segment. Following the maximum entropy principle, the value of $H$ is maximized if and only if $p_i = p_j$ for all segments $i, j$. Hence, to obtain a large value of entropy, the fitness values should be distributed uniformly across all segments. To pursue this goal, we define a novel measurement for selecting the global best particle, which considers fitness and entropy simultaneously. All particles are evaluated by Equation (5).
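As an illustration, the following sketch computes the segment counts of Equation (4) and the entropy of Equation (3) with NumPy; the helper name fitness_entropy and the handling of segment boundaries are our own assumptions, not taken from the paper.

```python
import numpy as np

def fitness_entropy(fitness, num_segments=30):
    """Entropy of the swarm's fitness distribution, Equations (3) and (4).

    fitness      : 1-D array with one fitness value per particle
    num_segments : number of uniform segments between f_min and f_max
    Returns the entropy H, the per-segment counts, and each particle's segment index.
    """
    f_min, f_max = fitness.min(), fitness.max()
    counts, edges = np.histogram(fitness, bins=num_segments, range=(f_min, f_max))
    p = counts / fitness.size                    # Equation (4): p_i = num_i / m
    p_nonzero = p[p > 0]                         # empty segments contribute nothing
    H = -np.sum(p_nonzero * np.log(p_nonzero))   # Equation (3)
    seg = np.digitize(fitness, edges[1:-1])      # segment index 0 .. num_segments-1
    return H, counts, seg
```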
$Q_i = \alpha \cdot fitness_{rank} + \beta \cdot entropy_{rank}$ (5)
where $fitness_{rank}$ is the fitness rank of a particle, $entropy_{rank}$ is its entropy rank, and $\alpha$ and $\beta$ are employed to manage the weights of the two ranks. However, in real applications, tuning two parameters increases the difficulty. Considering that the two parameters adjust the weights of exploration and exploitation, respectively, we fix one of them and tune the other. In this paper, we set $\beta$ to 1, so that the balance between exploration and exploitation is adjusted by regulating the value of $\alpha$ alone. To calculate $fitness_{rank}$, all particles are ranked according to their fitness values. A particle's $entropy_{rank}$ is defined as the rank of the segment in which the particle lies: a segment ranks high if it contains few particles and ranks low if it is crowded. According to Equation (5), a small value of $Q_i$ indicates a good performance of particle $i$.
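A small sketch of Equation (5) follows; the double argsort used to turn values into ranks and the tie handling are implementation details we assume, not prescribed by the paper.

```python
import numpy as np

def particle_quality(fitness, seg, counts, alpha=0.1, beta=1.0):
    """Quality Q of every particle, Equation (5); smaller Q is better.

    fitness : fitness value of each particle (minimization is assumed)
    seg     : segment index of each particle (e.g. from fitness_entropy above)
    counts  : number of particles in each segment
    """
    # Rank 0 goes to the best (smallest) fitness value.
    fitness_rank = np.argsort(np.argsort(fitness))
    # Rank 0 goes to the least crowded segment; a particle inherits its segment's rank.
    segment_rank = np.argsort(np.argsort(counts))
    entropy_rank = segment_rank[seg]
    return alpha * fitness_rank + beta * entropy_rank
```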
In the proposed algorithm, we propose a novel learning mechanism, as shown in Equation (6). We randomly and uniformly divide the swarm into several groups, so that the numbers of particles in each group are equal. Within a group, the particle with the best quality $Q$ is taken as an exemplar, which means that the exemplars are selected according to both fitness evaluation and entropy. In this paper, we abandon historical information and only use the information of the current swarm, which reduces the space complexity of the algorithm. The update mechanism of the proposed algorithm is given in Equation (6).
$V_i(t+1) = \omega V_i(t) + r_1 c_1 \big(P_{lw}(t) - P_{ll}(t)\big) + r_2 c_2 \big(P_g - P_{ll}(t)\big)$ (6)
$P_{ll}(t+1) = P_{ll}(t) + V_i(t+1)$
where $V_i$ is the velocity of the $i$th particle (a local loser), $\omega$ is the inertia parameter, $P_{lw}$ is the position of the local winner in a group, $P_{ll}$ is the position of a local loser in the same group, $P_g$ is the current best position found, $c_1$ and $c_2$ are the cognitive and social coefficients, respectively, $r_1$ and $r_2$ are random values in $[0,1]$, and $t$ denotes the generation index. On the one hand, fitness is evaluated according to the objective function; on the other hand, the diversity contribution of a particle is evaluated by the entropy measurement. By assigning weights to diversity and convergence, the update mechanism involves both exploration and exploitation. The pseudo-code of the proposed algorithm is given in Algorithm 1.
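The group-based update of Equation (6) could be sketched as follows; treating every non-winner in a group as a local loser that learns from the group winner and the global best is our reading of the text, and the function name group_update is hypothetical.

```python
import numpy as np

def group_update(P, V, Q, P_g, group_size=20, omega=0.7, c1=1.0, c2=1.0):
    """One swarm update following Equation (6).

    The swarm is shuffled into equally sized groups; in each group the particle
    with the smallest Q is the local winner (exemplar), and the remaining
    members (local losers) move toward the winner and the global best P_g.
    """
    swarm_size, n = P.shape
    order = np.random.permutation(swarm_size)
    for start in range(0, swarm_size, group_size):
        group = order[start:start + group_size]
        winner = group[np.argmin(Q[group])]
        losers = group[group != winner]
        r1 = np.random.rand(losers.size, n)
        r2 = np.random.rand(losers.size, n)
        V[losers] = (omega * V[losers]
                     + r1 * c1 * (P[winner] - P[losers])
                     + r2 * c2 * (P_g - P[losers]))
        P[losers] = P[losers] + V[losers]
    return P, V
```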
Algorithm 1: Pseudo-codes of entropy-assisted PSO.
  Input: Swarm size n, Group size g, Number of segments m, Weight value α
  Output: The current global best particle
  1. Loop 1: Evaluate the fitness of all particles; for the ith particle, its fitness is $f_i$;
  2. Set the maximum and minimum fitness values as $f_{max}$ and $f_{min}$, respectively;
  3. Divide the interval $[f_{min}, f_{max}]$ into m segments;
  4. Count the number of fitness values in each segment; for the ith segment, the count is recorded as $num_i$;
  5. Sort the particles by fitness value and record the fitness rank $fr_i$ of each particle;
  6. Sort the segments by the number of fitness values they contain and record the segment rank $sr_i$ of each particle;
  7. Evaluate each particle's quality Q by Equation (5);
  8. Divide the swarm into g groups and compare the particles within each group by their quality Q;
  9. Select the global best particle according to Q;
  10. Update the particles according to Equation (6);
  11. If the termination condition is satisfied, output the current global best particle; otherwise, go to Loop 1.
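For completeness, the pieces sketched above can be assembled into a minimal end-to-end version of Algorithm 1; the function name ea_pso, the box-constraint clipping, and the way the evaluation budget is counted are our assumptions, and the code is a sketch rather than the authors' implementation.

```python
import numpy as np

def ea_pso(objective, dim, bounds, swarm_size=1200, group_size=20,
           num_segments=30, alpha=0.1, max_fes=3_000_000):
    """Minimal sketch of Algorithm 1 built from the helpers sketched above.

    objective : function mapping a (swarm_size, dim) array to a fitness vector
    bounds    : (low, high) box constraints of the search space
    """
    low, high = bounds
    P = np.random.uniform(low, high, size=(swarm_size, dim))
    V = np.zeros((swarm_size, dim))
    best_pos, best_fit = None, np.inf
    fes = 0
    while fes < max_fes:
        fitness = objective(P)                    # step 1: evaluate the swarm
        fes += swarm_size
        i = np.argmin(fitness)                    # keep the best solution ever found
        if fitness[i] < best_fit:
            best_fit, best_pos = fitness[i], P[i].copy()
        _, counts, seg = fitness_entropy(fitness, num_segments)   # steps 2-4
        Q = particle_quality(fitness, seg, counts, alpha=alpha)   # steps 5-7
        gbest = P[np.argmin(Q)].copy()            # step 9: global best chosen by Q
        P, V = group_update(P, V, Q, gbest, group_size)           # steps 8 and 10
        np.clip(P, low, high, out=P)              # keep particles inside the box
    return best_pos, best_fit
```

As a quick smoke test under these assumptions, ea_pso(lambda X: np.sum(X**2, axis=1), dim=50, bounds=(-100, 100), swarm_size=100, group_size=10, max_fes=50_000) should drive the sphere function toward zero.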
In the proposed algorithm, each particle has two exemplars: the local best exemplar of its group and the global best particle. The ability to maintain diversity is improved in two ways. First, when evaluating particles, we consider both fitness and diversity, through the objective function and the entropy, respectively. Second, we divide the swarm into several sub-swarms, so that the number of local best exemplars equals the number of sub-swarms; in this way, even if some exemplars are located at local optima, they will not affect the whole swarm, and the diversity of exemplars is maintained. Finally, the value of $\alpha$ in Equation (5) provides an explicit way to manage the weights of exploration and exploitation and therefore eliminates the coupling between the two abilities.

4. Experiments and Discussions

We applied the proposed algorithm to large-scale optimization problems (LSOPs). In general, LSOPs have hundreds or thousands of variables. Meta-heuristic algorithms usually suffer from "the curse of dimensionality", which means that their performance deteriorates dramatically as the problem dimension increases []. Due to the large number of variables, the search space is complex, which brings challenges for meta-heuristic algorithms. First, the search space is huge, which demands a high search efficiency [,]. Second, the large scale enlarges the attraction basins of local optima and exacerbates the difficulty for algorithms to escape local optimal positions []. Hence, in the optimization process, both the convergence ability and the diversity maintenance of a swarm are crucial to an algorithm's performance. We employed the LSOPs of CEC 2013 as the benchmark suite to test the proposed algorithm; the details of the benchmarks are listed in []. For comparison, several peer algorithms were selected, including DECC-DG (Cooperative Co-Evolution with Differential Grouping), MMO-CC (Multimodal Optimization enhanced Cooperative Coevolution), SLPSO (Social Learning Particle Swarm Optimization), and CSO (Competitive Swarm Optimizer). DECC-DG is an improved version of DECC, reported in []. CSO was proposed by Cheng and Jin and exhibits a powerful ability in dealing with the large-scale optimization problems of IEEE CEC 2008 []. SLPSO was proposed by the same authors in [], where a social learning concept is employed. MMO-CC was recently proposed by Peng et al. and adopts the cooperative coevolution framework together with multimodal optimization techniques [].
For each algorithm, we report the mean performance over 25 independent runs. The termination condition was a maximum number of fitness evaluations (FEs) of $3 \times 10^6$, as recommended in []. For EA-PSO, the population size was 1200. The reasons for employing such a large population are as follows. First, a large population enhances the parallel computation ability of the algorithm. Second, the grouping strategy is more efficient with a large population: if the population is too small, the groups are also small and the learning efficiency within each group decreases. Third, in EA-PSO the diversity management is conducted by entropy control, which is a frequency-based approach; as mentioned in [], a large population size is recommended when using frequency-based measures (FBMs). Fourth, a large population helps to avoid empty segments. Although a large population was employed, the number of fitness evaluations was used to limit the computational resources and guarantee a fair comparison. The number of segments (m) was 30, and the group size and $\alpha$ were set to 20 and 0.1, respectively. The experimental results are presented in Table 1, where the best mean performance for each benchmark function is marked in bold. To provide a statistical analysis, p values were obtained by the Wilcoxon signed-rank test. Most of the p values were smaller than 0.05, which demonstrates that the differences are significant. However, for benchmark F6, the p values of the comparisons "EA-PSO vs. CSO" and "EA-PSO vs. DECC-DG" were larger than 0.05, which means that there was no significant difference between the algorithms' performances on that benchmark. The same holds for "EA-PSO vs. MMO-CC" on benchmark F8 and "EA-PSO vs. SLPSO" on benchmark F12.
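The statistical comparison described above can be reproduced with SciPy's paired Wilcoxon signed-rank test; the data below are illustrative placeholders, not the results reported in the paper.

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative placeholders for the 25 independent final errors of two algorithms.
rng = np.random.default_rng(0)
ea_pso_runs = rng.normal(1.0e3, 1.0e2, size=25)   # stand-in for EA-PSO results
cso_runs = rng.normal(1.2e3, 1.0e2, size=25)      # stand-in for CSO results

stat, p_value = wilcoxon(ea_pso_runs, cso_runs)   # paired, non-parametric test
print(f"p = {p_value:.4f}, significant at the 0.05 level: {p_value < 0.05}")
```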
Table 1. Experimental results on the 1000-dimensional IEEE CEC 2013 benchmark functions with $3 \times 10^6$ fitness evaluations. The best performance in each row is marked in bold.
According to Table 1, EA-PSO outperformed the other algorithms on 10 benchmark functions and ranked second or third on F2, F4, F12, and F13. The comparison results demonstrate that the proposed algorithm is very competitive in addressing large-scale optimization problems. The convergence profiles of the different algorithms are presented in Figure 2.
Figure 2. Convergence profiles of different algorithms obtained on the CEC’2013 test suite with 1000 dimensions.
In this study, the value of $\alpha$ is used to balance exploration and exploitation. Hence, we investigated the influence of $\alpha$ on the algorithm's performance. In this test, we set $\alpha$ to 0.2, 0.3, and 0.4, keeping the other parameters the same as in Table 1. For each value of $\alpha$, we ran the algorithm 25 times and report the mean optimization results in Table 2. According to the results, there is no significant difference in the order of magnitude. Among the four values, $\alpha = 0.1$ and $\alpha = 0.2$ each won six times, which suggests that a small value of $\alpha$ helps the algorithm achieve a more competitive optimization performance. The convergence profiles for different values of $\alpha$ are presented in Figure 3.
Table 2. The influence of different values of $\alpha$ on EA-PSO's performance on the IEEE CEC 2013 large-scale optimization problems with 1000 dimensions (fitness evaluations = $3 \times 10^6$).
Figure 3. Convergence profiles of EA-PSO with different values of $\alpha$ obtained on the CEC'2013 test suite with 1000 dimensions.
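The $\alpha$ sensitivity study could be reproduced in spirit with a simple sweep over the ea_pso sketch given in Section 3; the reduced budget, the sphere objective, and the five repetitions below are illustrative simplifications of the actual protocol (25 runs, $3 \times 10^6$ evaluations, CEC 2013 functions).

```python
import numpy as np

sphere = lambda X: np.sum(X ** 2, axis=1)   # toy objective standing in for the benchmarks
for alpha in (0.1, 0.2, 0.3, 0.4):
    results = [ea_pso(sphere, dim=50, bounds=(-100, 100), swarm_size=100,
                      group_size=10, alpha=alpha, max_fes=50_000)[1]
               for _ in range(5)]
    print(f"alpha = {alpha}: mean best fitness = {np.mean(results):.3e}")
```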

5. Conclusions

In this paper, a novel particle swarm optimizer named entropy-assisted PSO was proposed. All particles are evaluated by fitness and diversity simultaneously, and historical information about the particles is no longer needed in the particle update. The optimization experiments were conducted on the CEC 2013 benchmark suite for large-scale optimization problems. The comparison results demonstrate that the proposed structure enhances the ability of PSO in addressing large-scale optimization and that the proposed algorithm EA-PSO achieves competitive performance. Moreover, since exploration and exploitation are conducted independently and simultaneously in the proposed structure, the algorithm is flexible enough to be adapted to many different kinds of optimization problems.
In the future, the mathematical mechanism of the proposed algorithm will be further investigated and discussed. Considering that population diversity is also crucial to algorithms' performance in many other kinds of optimization problems, such as multimodal optimization, dynamic optimization, and multi-objective optimization, we will apply the entropy idea to such problems and investigate its role in diversity maintenance.

Author Contributions

Conceptualization, W.G., L.W. and Q.W.; Methodology, W.G.; Software, W.G. and L.Z.; Validation, W.G. and L.Z.; Formal analysis, W.G. and L.W.; Investigation, W.G. and F.K.; Resources, W.G. and L.Z.; Data curation, W.G.; Writing original draft preparation, W.G. and F.K.; Writing review and editing, W.G., L.W. and Q.W.; Visualization, W.G. and L.Z.; Supervision, L.W. and Q.W.; Project administration, L.W. and Q.W.; Funding acquisition, W.G. and L.Z.

Funding

This work was sponsored by the National Natural Science Foundation of China under Grant Nos. 71771176 and 61503287, and supported by Key Lab of Information Network Security, Ministry of Public Security and Key Laboratory of Intelligent Computing & Signal Processing, Ministry of Education.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shi, H.; Wen, H.; Hu, Y.; Jiang, L. Reactive Power Minimization in Bidirectional DC-DC Converters Using a Unified-Phasor-Based Particle Swarm Optimization. IEEE Trans. Power Electron. 2018, 33, 10990–11006. [Google Scholar] [CrossRef]
  2. Bera, R.; Mandal, D.; Kar, R.; Ghoshal, S.P. Non-uniform single-ring antenna array design using wavelet mutation based novel particle swarm optimization technique. Comput. Electr. Eng. 2017, 61, 151–172. [Google Scholar] [CrossRef]
  3. Osorio, G.J.; Matias, J.C.O.; Catalao, J.P.S. Short-term wind power forecasting using adaptive neuro-fuzzy inference system combined with evolutionary particle swarm optimization, wavelet transform and mutual information. Renew. Energy 2015, 75, 301–307. [Google Scholar] [CrossRef]
  4. Nouiri, M.; Bekrar, A.; Jemai, A.; Niar, S.; Ammari, A.C. An effective and distributed particle swarm optimization algorithm for flexible job-shop scheduling problem. J. Intell. Manuf. 2018, 29, 603–615. [Google Scholar] [CrossRef]
  5. Aliyari, H.; Effatnejad, R.; Izadi, M.; Hosseinian, S.H. Economic Dispatch with Particle Swarm Optimization for Large Scale System with Non-smooth Cost Functions Combine with Genetic Algorithm. J. Appl. Sci. Eng. 2017, 20, 141–148. [Google Scholar] [CrossRef]
  6. Bonyadi, M.R.; Michalewicz, Z. Particle swarm optimization for single objective continuous space problems: A review. Evol. Comput. 2017, 25, 1–54. [Google Scholar] [CrossRef]
  7. Higashi, N.; Iba, H. Particle swarm optimization with gaussian mutation. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium, Indianapolis, IN, USA, 26 April 2003; pp. 72–79. [Google Scholar]
  8. Ling, S.H.; Iu, H.H.C.; Chan, K.Y.; Lam, H.K.; Yeung, B.C.W.; Leung, F.H. Hybrid particle swarm optimization with wavelet mutation and its industrial applications. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2008, 38, 743–763. [Google Scholar] [CrossRef]
  9. Wang, H.; Sun, H.; Li, C.; Rahnamayan, S.; Pan, J.-S. Diversity enhanced particle swarm optimization with neighborhood search. Inf. Sci. 2013, 223, 119–135. [Google Scholar] [CrossRef]
  10. Sun, J.; Xu, W.; Fang, W. A diversity guided quantum behaved particle swarm optimization algorithm. In Simulated Evolution and Learning; Wang, T.D., Li, X., Chen, S.H., Wang, X., Abbass, H., Iba, H., Chen, G., Yao, X., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4247, pp. 497–504. [Google Scholar]
  11. Jin, Y.; Branke, J. Evolutionary optimization in uncertain environments—A survey. IEEE Trans. Evol. Comput. 2005, 9, 303–317. [Google Scholar] [CrossRef]
  12. Lovbjerg, M.; Krink, T. Extending particle swarm optimisers with self-organized criticality. In Proceedings of the 2002 Congress on Evolutionary Computation, Honolulu, HI, USA, 12–17 May 2002; Volume 2, pp. 1588–1593. [Google Scholar]
  13. Blackwell, T.M.; Bentley, P.J. Dynamic search with charged swarms. In Proceedings of the Genetic and Evolutionary Computation Conference, New York, NY, USA, 9–13 July 2002; pp. 19–26. [Google Scholar]
  14. Blackwell, T. Particle swarms and population diversity. Soft Comput. 2005, 9, 793–802. [Google Scholar] [CrossRef]
  15. Zhan, Z.; Zhang, J.; Li, Y.; Chung, H.S. Adaptive particle swarm optimization. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2009, 39, 1362–1381. [Google Scholar] [CrossRef] [PubMed]
  16. Hu, M.; Wu, T.; Weir, J.D. An adaptive particle swarm optimization with multiple adaptive methods. IEEE Trans. Evol. Comput. 2013, 17, 705–720. [Google Scholar] [CrossRef]
  17. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  18. Li, X. Niching without niching parameters: Particle swarm optimization using a ring topology. IEEE Trans. Evol. Comput. 2010, 14, 150–169. [Google Scholar] [CrossRef]
  19. Siarry, P.; Pétrowski, A.; Bessaou, M. A multipopulation genetic algorithm aimed at multimodal optimization. Adv. Eng. Softw. 2002, 33, 207–213. [Google Scholar] [CrossRef]
  20. Yang, Z.; Tanga, K.; Yao, X. Large scale evolutionary optimization using cooperative coevolution. Inf. Sci. 2008, 178, 2985–2999. [Google Scholar] [CrossRef]
  21. Cheng, R.; Jin, Y. A competitive swarm optimizer for large scale optimization. IEEE Trans. Cybern. 2015, 45, 191–204. [Google Scholar] [CrossRef]
  22. Cheng, R.; Jin, Y. A social learning particle swarm optimization algorithm for scalable optimization. Inf. Sci. 2015, 291, 43–60. [Google Scholar] [CrossRef]
  23. Yang, Q.; Chen, W.-N.; Deng, J.D.; Li, Y.; Gu, T.; Zhang, J. A Level-Based Learning Swarm Optimizer for Large-Scale Optimization. IEEE Trans. Evol. Comput. 2018, 22, 578–594. [Google Scholar] [CrossRef]
  24. Li, X.; Tang, K.; Omidvar, M.N.; Yang, Z.; Qin, K. Benchmark Functions for the CEC 2013 Special Session And Competition on Large-Scale Global Optimization; Tech. Rep.; School of Computer Science and Information Technology, RMIT University: Melbourne, Australia, 2013. [Google Scholar]
  25. Omidvar, M.N.; Li, X.; Mei, Y.; Yao, X. Cooperative Co-Evolution with Differential Grouping for Large Scale Optimization. IEEE Trans. Evol. Comput. 2014, 18, 378–393. [Google Scholar] [CrossRef]
  26. Peng, X.; Jin, Y.; Wang, H. Multimodal optimization enhanced cooperative coevolution for large-scale optimization. IEEE Trans. Cybern. 2018. [Google Scholar] [CrossRef]
  27. Corriveau, G.; Guilbault, R.; Tahan, A.; Sabourin, R. Review and Study of Genotypic Diversity Measures for Real-Coded Representations. IEEE Trans. Evol. Comput. 2012, 16, 695–710. [Google Scholar] [CrossRef]
