Mathematics
  • Article
  • Open Access

6 June 2019

An Adaptive Multi-Swarm Competition Particle Swarm Optimizer for Large-Scale Optimization

1 School of Software Engineering, Tongji University, Shanghai 201804, China
2 Shanghai Development Center of Computer Software Technology, Shanghai 201112, China
3 Shanghai Industrial Technology Institute, Shanghai 201206, China
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Evolutionary Computation

Abstract

As a powerful tool in optimization, particle swarm optimizers have been widely applied to many different optimization areas and have drawn much attention. However, for large-scale optimization problems, these algorithms often fail to reach satisfactory results because they lack the ability to maintain diversity. In this paper, an adaptive multi-swarm particle swarm optimizer is proposed, which adaptively divides the swarm into several sub-swarms and employs a competition mechanism to select exemplars. In this way, on the one hand, the diversity of exemplars increases, which helps the swarm preserve its exploration ability. On the other hand, the number of sub-swarms adaptively decreases from a large value to a small one, which helps the algorithm strike a suitable balance between exploration and exploitation. We validated the proposed algorithm against several peer algorithms on the CEC 2013 large-scale optimization benchmark suite. The experimental results demonstrate that the proposed algorithm is effective and competitive in addressing large-scale optimization problems.

1. Introduction

Particle swarm optimization (PSO), as an active tool for dealing with optimization problems, has been widely applied to various kinds of optimization problems [1,2] in areas such as industry, transportation, and economics. In recent years, especially, with the rapid development of industrial networks, many optimization problems with complex properties such as nonlinearity, multi-modality, and large scale have emerged. PSO exhibits powerful abilities to meet the requirements of such practical demands and has thus gained much attention in research and applications [3,4]. However, according to current research, PSO suffers from a notorious problem: the algorithm lacks the ability to maintain diversity, and therefore its solutions tend toward local optima, especially for complex optimization problems. In algorithm design, a large weight on diversity maintenance will hamper the convergence process, while a small weight will yield a poor ability to escape local optima. Therefore, as a crucial aspect of PSO design, population diversity maintenance has been a hot issue and has drawn much attention globally.
To improve PSO’s ability to maintain population diversity, many works have been conducted, which can generally be categorized into the following aspects: (1) mutation strategies [5,6,7,8]; (2) distance-based indicators [9,10,11]; (3) adaptive control strategies [12,13,14]; (4) multi-swarm strategies [15,16,17,18,19,20,21]; and (5) hybridization techniques [22,23,24]. However, across these five categories, many parameters and rules are involved in the design of PSO, and improper choices will negatively affect an algorithm’s performance. To provide an easier tool for designing PSO, Spolaor [25] and Nobile [26] proposed reboot-strategy-based PSO and fuzzy self-tuning PSO, respectively, offering simple ways to tune PSO parameters. Nevertheless, large-scale optimization remains a challenge for PSO. To address this issue, in this paper, we propose a novel algorithm named adaptive multi-swarm competition PSO (AMCPSO), which randomly divides the whole swarm into several sub-swarms. In each sub-swarm, a competition mechanism is adopted to select a winner particle that attracts the poorer particles in the same sub-swarm. The number of winners, namely exemplars, is equal to the number of sub-swarms, with one exemplar selected from each sub-swarm. In this way, the diversity of exemplars increases, which helps the algorithm escape the effects of local optima.
The contributions of this paper are as follows. First, in the proposed algorithm, the exemplars are all selected from the current swarm rather than from personal historical best positions, so no historical information is needed in the design of the algorithm. Second, we consider the different stages of optimization and propose a rule to adaptively adjust the number of sub-swarms. At the beginning of an optimization process, exploration ability is crucial for exploring the search space; to enhance this ability, we set a large number of sub-swarms, which yields a large number of exemplars. In the late stage, by contrast, algorithms should pay more attention to exploitation to enhance the accuracy of the results, so a small number of exemplars is preferred. Third, to increase the ability of diversity maintenance, we do not employ the global best solution but the mean position of the swarm, inspired by Arabas and Biedrzycki [27]. According to Arabas and Biedrzycki [27], the mean position has a higher probability of being better than the global best solution. In addition, because the mean position is calculated over the whole swarm, it has a high probability of being updated every generation, unlike the global best solution, which may remain unchanged for several generations.
The rest of this paper is organized as follows. Section 2 introduces related work on diversity maintenance for PSO, together with comments and discussion. In Section 3, we design the adaptive multi-swarm competition PSO and present its structure and pseudo-code. In Section 4, we compare the proposed algorithm with several state-of-the-art algorithms on large-scale optimization problems and analyze its performance. Finally, Section 5 concludes the paper and presents our future work.

3. Adaptive Multi-Swarm Competition PSO

As mentioned in Section 1, the global best position attracts all particles until it is updated. When the global best position gets trapped in a local optimum, it is very difficult for the whole swarm to escape it; hence, the optimization progress stagnates or premature convergence occurs. To overcome this problem, we employ an adaptive multi-swarm strategy and propose a novel swarm optimization algorithm termed adaptive multi-swarm competition PSO (AMCPSO). Throughout the optimization process, we uniformly and randomly divide the whole swarm into several sub-swarms. In each sub-swarm, the particles compete with each other: the losers learn from the winner of the same sub-swarm, while the winners remain unchanged in that generation. The number of winners is equal to the number of sub-swarms, and this number adapts to the optimization process. In the early stages, a large number of sub-swarms helps the algorithm increase the diversity of exemplars and explore the search space; as optimization proceeds, the number of sub-swarms decreases to favor exploitation. On the one hand, in each generation there are several exemplars, i.e., winners, for the losers to learn from, so even if some exemplars lie at locally optimal positions, they do not affect the whole swarm. On the other hand, since the swarm is divided randomly, both winners and losers may differ from generation to generation, which means the exemplars are not fixed over many generations. In this way, the whole swarm’s ability to maintain diversity is improved. Compared with standard PSO, we abandon historical information, e.g., the personal historical best position, which makes implementation easier for users. Inspired by Arabas and Biedrzycki [27], who showed that the mean position has a higher probability of being better than the global best solution, we employ the mean position of the whole swarm to attract all particles.
On the one hand, global information is thereby incorporated into the particle updates. On the other hand, the mean position generally changes every generation, which gives it a high probability of avoiding local optima. For the proposed algorithm, the velocity and position update mechanisms are given in Equation (3).
v_{l_i}(t+1) = \omega v_{l_i}(t) + r_1 c_1 (p_w(t) - p_{l_i}(t)) + r_2 c_2 (p_{mean} - p_{l_i}(t))   (3)
p_{l_i}(t+1) = p_{l_i}(t) + v_{l_i}(t+1)
where t is the generation index; i is the index of a loser in its sub-swarm; and v and p are the velocity and position, respectively, of each particle. The subscript l denotes a loser particle, while the subscript w denotes the winner particle of the same sub-swarm. p_{mean} is the mean position of the whole swarm, employed as global information. \omega, called the inertia coefficient, controls the weight of a particle’s own velocity. c_1 and c_2 are the cognitive and social coefficients, respectively. r_1, r_2 \in [0, 1] are random parameters that control the weights of the cognitive and social components, respectively. The proposed algorithm involves a number of sub-swarms, which also defines the number of exemplars in advance. To help the algorithm shift its focus from exploration to exploitation, we employ an adaptive division strategy: the number of sub-swarms is not fixed but decreases from a large value to a small one. The number of sub-swarms is set according to experimental experience as follows.
m = \mathrm{round}\left( gs_{ini} - 0.9\, gs_{ini} \left( 1 - \exp\left( -\frac{FEs}{Max\_Gen \cdot Dr} \right) \right) \right)   (5)
where round(·) rounds a real value to the nearest integer, gs_{ini} is the initial number of sub-swarms, Max_Gen is the maximum generation limit, and Dr controls the decreasing rate: a small Dr causes a sharp decrease in the number of sub-swarms, while a large Dr gives a gentle slope. Under Equation (5), the population size of each sub-swarm increases from 5‰ to 5%. Equation (5) thus provides an adaptive way to set the number of sub-swarms, namely m, and the number of exemplars is adaptive as well. Considering that, in some generations, the whole population size is not exactly divisible by m, we randomly select a number of particles equal to the remainder and leave them unchanged in that generation, which means at most 5% of the particles are not involved in the update. In the algorithm’s implementation, we use a maximum number of fitness evaluations as the termination condition to guarantee adequate runs for the algorithm. The pseudo-code of AMCPSO is given in Algorithm 1.
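To illustrate how such a schedule behaves, a minimal sketch follows. The value of Dr is an assumption here (it is not stated in this section), and the schedule is driven by the fraction of the evaluation budget already spent:

```python
import math

def num_subswarms(fes, budget, gs_ini=20, dr=0.2):
    """Sketch of Equation (5): the sub-swarm count decays from gs_ini toward
    0.1 * gs_ini as fitness evaluations are spent. dr (assumed 0.2 here)
    controls the decay rate: a smaller dr gives a sharper drop."""
    decay = 1.0 - math.exp(-fes / (budget * dr))
    return max(2, round(gs_ini - 0.9 * gs_ini * decay))
```

With gs_ini = 20, the count starts at 20 and settles near 2, so each sub-swarm grows by roughly a factor of ten, consistent with the 5‰-to-5% range quoted above.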
Algorithm 1: Pseudo-code of adaptive multi-swarm competition particle swarm optimization.
   Input: Number of particles ps; parameters \omega, c_1, c_2; maximum number of fitness evaluations Max_FEs
   Output: The current global best particle
1 Randomly generate a swarm P and evaluate its fitness f;
2 Loop: Calculate the mean position of the whole swarm;
3 Calculate m (the number of sub-swarms) by Equation (5);
4 Randomly select ps − mod(ps, m) particles for update;
5 In each sub-swarm, compare the particles according to their fitness, select the winner, and update the losers by Equation (3);
6 Update FEs;
7 If FEs ≥ Max_FEs, output the current global best particle; otherwise, go to Loop;
The time-complexity analysis of the proposed algorithm is as follows. According to Algorithm 1, the sub-swarm division costs O(ps), as does the calculation of the mean position, where ps is the swarm size. Since these two operations do not add significantly to the computational burden, the main cost of the proposed algorithm is the update of each particle, which is O(ps · n), where n is the search dimensionality. This is an unavoidable cost in most swarm-based evolutionary optimizers. Therefore, the overall time cost of the proposed algorithm is O(ps · n).
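Putting the update rule of Equation (3), the schedule of Equation (5), and the steps of Algorithm 1 together, a compact sketch might look as follows. The parameter defaults other than gs_ini, c_1, and c_2 (e.g., the inertia weight, Dr, the swarm size, and the evaluation budget) are illustrative assumptions, not the paper’s settings:

```python
import math
import numpy as np

def amcpso(f, dim, bounds, ps=100, max_fes=20000, gs_ini=20, dr=0.2,
           omega=0.7, c1=1.0, c2=0.001, seed=0):
    """Compact sketch of Algorithm 1 for minimization of f over [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (ps, dim))          # step 1: random swarm
    vel = np.zeros((ps, dim))
    fit = np.array([f(x) for x in pos])
    fes = ps
    best_i = int(np.argmin(fit))
    best, best_fit = pos[best_i].copy(), float(fit[best_i])
    while fes < max_fes:                          # step 7: FE-based termination
        p_mean = pos.mean(axis=0)                 # step 2: mean position
        decay = 1.0 - math.exp(-fes / (max_fes * dr))
        m = max(2, round(gs_ini - 0.9 * gs_ini * decay))  # step 3: Equation (5)
        sub = ps // m
        # step 4: leftover mod(ps, m) particles sit out this generation
        groups = rng.permutation(ps)[: sub * m].reshape(m, sub)
        for g in groups:                          # step 5: compete, update losers
            w = g[np.argmin(fit[g])]
            for l in g[g != w]:
                r1, r2 = rng.random(dim), rng.random(dim)
                vel[l] = (omega * vel[l] + r1 * c1 * (pos[w] - pos[l])
                          + r2 * c2 * (p_mean - pos[l]))
                pos[l] = np.clip(pos[l] + vel[l], lo, hi)
                fit[l] = f(pos[l])
                fes += 1                          # step 6
        gen_best = int(np.argmin(fit))
        if fit[gen_best] < best_fit:
            best, best_fit = pos[gen_best].copy(), float(fit[gen_best])
    return best, best_fit
```

Note that, as in the paper, winners never move within a generation; only losers are updated, toward their winner and the swarm mean.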

4. Experiments and Discussions

Experimental Settings

As explained above, large-scale optimization problems demand of an algorithm a careful balance between diversity maintenance and convergence. To validate the performance of the proposed algorithm, we employed the CEC 2013 benchmark suite on large-scale optimization problems [33]. For the performance comparisons, four popular algorithms were adopted: CBCC3 [30], DECC-dg [24], DECC-dg2 [29], and SL-PSO [34], all of which were proposed to address large-scale optimization problems. DECC-dg and DECC-dg2 are two improved DECC algorithms based on differential evolution [24,29]. SL-PSO was proposed by Cheng [34] to address large-scale optimization problems and achieves competitive performance. For the comparison, we ran each algorithm 25 times to obtain average performance. The termination condition for each run was a maximum number of fitness evaluations (FEs), predefined as 3 × 10^6. The parameters of the peer algorithms were set as reported in their original papers. For the proposed algorithm AMCPSO, we set the parameters as follows: gs_{ini} was set to 20, which means the number of sub-swarms decreases from 20 to 2; c_1 and c_2 were set to 1 and 0.001, respectively. The performances of the five algorithms are presented in Table 1.
Table 1. The experimental results of 1000-dimensional IEEE CEC’2013 benchmark functions with fitness evaluations of 3 × 10^6.
In Table 1, “Mean” is the mean performance over 25 runs for each algorithm, while “Std” is the standard deviation. The best “Mean” for each function is marked in bold font. Of the 15 benchmark functions, AMCPSO won eight. On the remaining benchmark functions, AMCPSO also achieved very competitive performance. Therefore, according to the comparison results, the proposed AMCPSO exhibits a powerful ability to address large-scale optimization problems, which also demonstrates that the proposed strategy is feasible and effective in helping PSO balance diversity maintenance and convergence. The convergence profiles are presented in Figure 1.
Figure 1. Convergence profiles of different algorithms obtained on the CEC’2013 test suite with 1000 dimensions.
According to the figures, for many benchmark functions, the final performance of AMCPSO was better than that of the other algorithms, which demonstrates that the proposed algorithm is competitive for addressing large-scale optimization problems. For some benchmarks, such as F_1, F_12, and F_15, even though AMCPSO was not the best, its performance kept improving as FEs increased, as shown in Figure 1a,m,p, which demonstrates the algorithm’s competitive optimization behavior.
As shown in Equation (3), c_1 and c_2 are employed to balance exploration and exploitation. In this study, we fixed c_1 at 1 and tuned c_2 to investigate its sensitivity, testing the values 0.001, 0.002, 0.005, 0.008, and 0.01; the results are presented in Table 2, where the first row lists c_2 decreasing from 0.01 to 0.001. All other parameter settings were the same as for Table 1. According to the results, there were no significant differences in performance, which demonstrates that the algorithm’s performance is not very sensitive to the value of c_2.
Table 2. The sensitivity of c 2 to AMCPSO’s performance.
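The 25-run averaging protocol used throughout this section can be mirrored with a small harness; the function names here are hypothetical:

```python
import statistics

def repeat_runs(one_run, runs=25):
    """Mirror the protocol above: `one_run` is any callable mapping a seed to
    the best fitness of one independent run; returns (mean, std) over runs."""
    results = [one_run(seed) for seed in range(runs)]
    return statistics.mean(results), statistics.stdev(results)
```

Note that `statistics.stdev` is the sample standard deviation, which is the usual choice when reporting “Std” over a finite set of independent runs.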

5. Conclusions and Future Work

In this paper, to enhance PSO’s ability to deal with large-scale optimization problems, we propose a novel PSO named adaptive multi-swarm competition PSO (AMCPSO), which adaptively divides the swarm into several sub-swarms. In each sub-swarm, a local winner is selected, and the local losers learn from it. In this way, more than one position is selected to attract the rest of the swarm, and the diversity of exemplars therefore increases. Moreover, the number of sub-swarms decreases over the course of optimization, which helps the algorithm adaptively shift its focus from exploration at the beginning, when many sub-swarms are adopted, to exploitation in the later stages. The performance of AMCPSO was validated on the CEC 2013 benchmark functions, and the comparison results demonstrate that it is competitive in dealing with large-scale optimization problems.
In our future work, on the one hand, the proposed algorithm’s structure will be investigated on several different kinds of algorithms [26,35,36,37] to improve their performance on large-scale optimization problems. On the other hand, we will apply the proposed algorithm to real-world applications, such as optimization problems in industrial networks, traffic networks, location problems, and so forth [38,39].

Author Contributions

Conceptualization, F.K.; methodology, F.K.; software, F.K.; validation, F.K.; formal analysis, F.K.; investigation, F.K.; resources, J.J.; data curation, Y.H.; writing—original draft preparation, F.K.; writing—review and editing, F.K.; visualization, F.K.; supervision, F.K.; project administration, F.K.; funding acquisition, F.K. and J.J.

Funding

This work was funded by National Key Research and Development Program of China (2018YFB1702300), the National Natural Science Foundation of China under Grant Nos. 61432017 and 61772199.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation Proceedings, IEEE World Congress on Computational Intelligence, Anchorage, AK, USA, 4–9 May 1998; pp. 69–73. [Google Scholar]
  2. Shi, Y.; Eberhart, R. Fuzzy adaptive particle swarm optimization. In Proceedings of the 2001 Congress on Evolutionary Computation, Seoul, Korea, 27–30 May 2001; Volume 1, pp. 101–106. [Google Scholar]
  3. Cao, B.; Zhao, J.; Lv, Z.; Liu, X.; Kang, X.; Yang, S. Deployment optimization for 3D industrial wireless sensor networks based on particle swarm optimizers with distributed parallelism. J. Netw. Comput. Appl. 2018, 103, 225–238. [Google Scholar] [CrossRef]
  4. Wang, L.; Ye, W.; Wang, H.; Fu, X.; Fei, M.; Menhas, M.I. Optimal Node Placement of Industrial Wireless Sensor Networks Based on Adaptive Mutation Probability Binary Particle Swarm Optimization Algorithm. Comput. Sci. Inf. Syst. 2012, 9, 1553–1576. [Google Scholar] [CrossRef]
  5. Sun, J.; Xu, W.; Fang, W. A diversity guided quantum behaved particle swarm optimization algorithm. In Simulated Evolution and Learning; Wang, T.D., Li, X., Chen, S.H., Wang, X., Abbass, H., Iba, H., Chen, G., Yao, X., Eds.; Volume 4247 of Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; pp. 497–504. [Google Scholar]
  6. Higashi, N.; Iba, H. Particle swarm optimization with Gaussian mutation. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium, SIS ’03, Indianapolis, IN, USA, 26 April 2003; pp. 72–79. [Google Scholar]
  7. Ling, S.H.; Iu, H.H.C.; Chan, K.Y.; Lam, H.K.; Yeung, B.C.W.; Leung, F.H. Hybrid Particle Swarm Optimization with Wavelet Mutation and Its Industrial Applications. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2008, 38, 743–763. [Google Scholar] [CrossRef] [PubMed]
  8. Wang, H.; Sun, H.; Li, C.; Rahnamayan, S.; Pan, J.S. Diversity enhanced particle swarm optimization with neighborhood search. Inf. Sci. 2013, 223, 119–135. [Google Scholar] [CrossRef]
  9. Lovbjerg, M.; Krink, T. Extending particle swarm optimisers with self-organized criticality. In Proceedings of the 2002 Congress on Evolutionary Computation, CEC ’02, Honolulu, HI, USA, 12–17 May 2002; Volume 2, pp. 1588–1593. [Google Scholar]
  10. Blackwell, T.M.; Bentley, P.J. Dynamic Search With Charged Swarms. In Proceedings of the Genetic and Evolutionary Computation Conference, New York, NY, USA, 9–13 July 2002; pp. 19–26. [Google Scholar]
  11. Blackwell, T. Particle swarms and population diversity. Soft Comput. 2005, 9, 793–802. [Google Scholar] [CrossRef]
  12. Zhan, Z.H.; Zhang, J.; Li, Y.; Chung, H.S.H. Adaptive Particle Swarm Optimization. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2009, 39, 1362–1381. [Google Scholar] [CrossRef] [PubMed]
  13. Hu, M.; Wu, T.; Weir, J.D. An Adaptive Particle Swarm Optimization With Multiple Adaptive Methods. IEEE Trans. Evol. Comput. 2013, 17, 705–720. [Google Scholar] [CrossRef]
  14. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  15. Li, X. Niching Without Niching Parameters: Particle Swarm Optimization Using a Ring Topology. IEEE Trans. Evol. Comput. 2010, 14, 150–169. [Google Scholar] [CrossRef]
  16. Cioppa, A.D.; Stefano, C.D.; Marcelli, A. Where Are the Niches? Dynamic Fitness Sharing. IEEE Trans. Evol. Comput. 2007, 11, 453–465. [Google Scholar] [CrossRef]
  17. Bird, S.; Li, X. Adaptively Choosing Niching Parameters in a PSO. In Proceedings of the Annual Conference on Genetic and Evolutionary Computation, Seattle, WA, USA, 8–12 July 2006; pp. 3–10. [Google Scholar]
  18. Li, C.; Yang, S.; Yang, M. An Adaptive Multi-Swarm Optimizer for Dynamic Optimization Problems. Evol. Comput. 2014, 22, 559–594. [Google Scholar] [CrossRef]
  19. Cheng, R.; Jin, Y. A Competitive Swarm Optimizer for Large Scale Optimization. IEEE Trans. Cybern. 2015, 45, 191–204. [Google Scholar] [CrossRef] [PubMed]
  20. Siarry, P.; Pétrowski, A.; Bessaou, M. A multipopulation genetic algorithm aimed at multimodal optimization. Adv. Eng. Softw. 2002, 33, 207–213. [Google Scholar] [CrossRef]
  21. Zhao, S.; Liang, J.; Suganthan, P.; Tasgetiren, M. Dynamic multi-swarm particle swarm optimizer with local search for Large Scale Global Optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2008 (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 3845–3852. [Google Scholar]
  22. Parsopoulos, K.; Vrahatis, M. On the computation of all global minimizers through particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 211–224. [Google Scholar] [CrossRef]
  23. van den Bergh, F.; Engelbrecht, A. A cooperative approach to particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 225–239. [Google Scholar] [CrossRef]
  24. Omidvar, M.N.; Li, X.; Mei, Y.; Yao, X. Cooperative Co-Evolution With Differential Grouping for Large Scale Optimization. IEEE Trans. Evol. Comput. 2014, 18, 378–393. [Google Scholar] [CrossRef]
  25. Spolaor, S.; Tangherloni, A.; Rundo, L.; Nobile, M.S.; Cazzaniga, P. Reboot strategies in particle swarm optimization and their impact on parameter estimation of biochemical systems. In Proceedings of the 2017 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), Manchester, UK, 23–25 August 2017; pp. 1–8. [Google Scholar]
  26. Nobile, M.S.; Cazzaniga, P.; Besozzi, D.; Colombo, R.; Mauri, G.; Pasi, G. Fuzzy Self-Tuning PSO: A settings-free algorithm for global optimization. Swarm Evol. Comput. 2018, 39, 70–85. [Google Scholar] [CrossRef]
  27. Arabas, J.; Biedrzycki, R. Improving Evolutionary Algorithms in a Continuous Domain by Monitoring the Population Midpoint. IEEE Trans. Evol. Comput. 2017, 21, 807–812. [Google Scholar] [CrossRef]
  28. Jin, Y.; Branke, J. Evolutionary Optimization in Uncertain Environments—A Survey. IEEE Trans. Evol. Comput. 2005, 9, 303–317. [Google Scholar] [CrossRef]
  29. Omidvar, M.N.; Yang, M.; Mei, Y.; Li, X.; Yao, X. DG2: A Faster and More Accurate Differential Grouping for Large-Scale Black-Box Optimization. IEEE Trans. Evol. Comput. 2017, 21, 929–942. [Google Scholar] [CrossRef]
  30. Omidvar, M.N.; Kazimipour, B.; Li, X.; Yao, X. CBCC3—A Contribution-Based Cooperative Co-evolutionary Algorithm with Improved Exploration/Exploitation Balance. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC) Held as part of IEEE World Congress on Computational Intelligence (IEEE WCCI), Vancouver, BC, Canada, 24–29 July 2016; pp. 3541–3548. [Google Scholar]
  31. Li, X.; Yao, X. Cooperatively Coevolving Particle Swarms for Large Scale Optimization. IEEE Trans. Evol. Comput. 2012, 16, 210–224. [Google Scholar]
  32. Cheng, J.; Zhang, G.; Neri, F. Enhancing distributed differential evolution with multicultural migration for global numerical optimization. Inf. Sci. 2013, 247, 72–93. [Google Scholar] [CrossRef]
  33. Li, X.; Tang, K.; Omidvar, M.N.; Yang, Z.; Qin, K. Benchmark Functions for the CEC’2013 Special Session and Competition on Large-Scale Global Optimization; Technical Report; School of Computer Science and Information Technology, RMIT University: Melbourne, Australia, 2013. [Google Scholar]
  34. Cheng, R.; Jin, Y. A social learning particle swarm optimization algorithm for scalable optimization. Inf. Sci. 2015, 291, 43–60. [Google Scholar] [CrossRef]
  35. Wang, Y.; Wang, P.; Zhang, J.; Cui, Z.; Cai, X.; Zhang, W.; Chen, J. A Novel Bat Algorithm with Multiple Strategies Coupling for Numerical Optimization. Mathematics 2019, 7, 135. [Google Scholar] [CrossRef]
  36. Cui, Z.; Zhang, J.; Wang, Y.; Cao, Y.; Cai, X.; Zhang, W.; Chen, J. A pigeon-inspired optimization algorithm for many-objective optimization problems. Sci. China Inf. Sci. 2019, 62, 070212. [Google Scholar] [CrossRef]
  37. Cai, X.; Gao, X.Z.; Xue, Y. Improved bat algorithm with optimal forage strategy and random disturbance strategy. Int. J. Bio-Inspired Comput. 2016, 8, 205–214. [Google Scholar] [CrossRef]
  38. Cui, Z.; Du, L.; Wang, P.; Cai, X.; Zhang, W. Malicious code detection based on CNNs and multi-objective algorithm. J. Parallel Distrib. Comput. 2019, 129, 50–58. [Google Scholar] [CrossRef]
  39. Cui, Z.; Xue, F.; Cai, X.; Cao, Y.; Wang, G.; Chen, J. Detection of Malicious Code Variants Based on Deep Learning. IEEE Trans. Ind. Inform. 2018, 14, 3187–3196. [Google Scholar] [CrossRef]
