Applied Sciences
  • Article
  • Open Access

1 March 2023

A Quantum-Based Beetle Swarm Optimization Algorithm for Numerical Optimization

1 School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China
2 Institute 706, The Second Academy, China Aerospace Science & Industry Corp., Beijing 100854, China
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Intelligent Control Using Machine Learning

Abstract

The beetle antennae search (BAS) algorithm is an outstanding representative of swarm intelligence algorithms. However, the BAS algorithm still suffers from the deficiency of not being able to handle high-dimensional variables. A quantum-based beetle swarm optimization (QBSO) algorithm is proposed herein to address this deficiency. To maintain population diversity and reduce the risk of falling into local optimal solutions, a novel position-updating strategy based on a quantum representation is designed: the current best solution is regarded as a linear superposition of two probabilistic states, positive and deceptive, and a quantum rotation gate either increases or resets the probability of the positive state to balance local and global search ability. Finally, a variable search step strategy is adopted to accelerate convergence. The QBSO algorithm is verified against several swarm intelligence optimization algorithms, and the results show that the QBSO algorithm retains satisfactory performance even at a very small population size.

1. Introduction

Population-based intelligence algorithms have been widely used in many fields, such as UAV path planning [1,2,3], combinatorial optimization [4,5], and community detection [6,7], because of their simple principles, easy implementation, strong scalability, and high optimization efficiency. With the increase in the speed of intelligent computing and the development of artificial intelligence, many excellent intelligent algorithms have been proposed, such as the seagull optimization algorithm (SOA) [8], artificial bee colony (ABC) algorithm [9], and gray wolf optimization (GWO) algorithm [10]. In addition, there are several intelligent algorithms that were proposed earlier and are relatively well developed, such as the particle swarm optimization (PSO) algorithm [11], genetic algorithm (GA) [12], ant colony optimization (ACO) algorithm [13], starling murmuration optimizer (SMO) [14], and simulated annealing (SA) algorithm [15].
In 2017, the BAS algorithm was proposed by Jiang [16]. The largest difference between the BAS algorithm and other intelligent algorithms is that BAS needs only one beetle to search. Owing to its simple principle, few parameters, and low computational cost, it has been successfully applied in the following optimization fields. Khan et al. proposed an enhanced BAS with zeroing neural networks for solving constrained optimization problems online [17]. Sabahat et al. addressed the low positioning accuracy of sensors in Internet of Things applications using the BAS algorithm [18]. Khan et al. optimized the trajectory of a five-link biped robot based on the longhorn BAS algorithm [19]. Jiang et al. implemented a dynamic attitude configuration of a wearable wireless body sensor network through a BAS strategy [20]. Khan et al. proposed a BAS-based strategy to search for the optimal control parameters of a complex nonlinear system [21].
Although the BAS algorithm exhibits unique advantages in terms of computational cost and principle, its optimization performance degrades drastically, and the search even fails with high probability, when dealing with multidimensional (more than three-dimensional) problems. The reason is that BAS is a single-individual search algorithm: during the search process, the individual can only move towards one extreme point, while multidimensional problems often have more than one extreme point, so the search is likely to fall into a local one. Moreover, the exploration step size of the beetle decreases exponentially, which means that the beetle may not be able to jump out of local optima. For these reasons, the BAS algorithm is not equipped to handle complex problems with three or more dimensions.
To remedy the BAS algorithm's inability to handle high-dimensional problems, a quantum-based beetle swarm optimization algorithm inspired by quantum evolution [22] is proposed in this paper. On the one hand, quantum bits are used to represent the current best solution as a linear superposition of the probability states "0" and "1" to improve the early exploration capability of the QBSO algorithm. On the other hand, the individual search is replaced with a swarm search, and a dynamic step adjustment strategy is introduced to improve the exploitation capability of the beetles. Our work has two main contributions:
  • We solved the shortcoming of the BAS algorithm in that it cannot handle high-dimensional optimization problems, and the designed QBSO algorithm has an excellent performance in solving 30-dimensional CEC benchmark functions.
  • We used quantum representation to deal well with the balance between the population size in terms of the exploratory power and the algorithmic speed, using fewer individuals to represent more information about the population.
The structure of this article is as follows. Section 3 briefly describes the principle of the BAS algorithm, including the meaning of its parameters and its procedure, and presents the innovations of the proposed algorithm, namely the quantum representation (QR) and the quantum rotation gate (QRG). Section 4 presents a series of simulation tests, in which the optimization performance of the QBSO algorithm is evaluated by solving benchmark functions against comparison algorithms under different population sizes. Section 5 discusses the results, and Section 6 concludes the paper.

3. Algorithm

3.1. Principle of the BAS Algorithm

The BAS algorithm is inspired by the foraging behavior of beetles in nature (see Figure 1). Beetles have left and right antennae, which sense the intensity of food odors in the environment. A beetle moves toward food according to the difference in odor strength perceived by the two antennae: when the intensity perceived by the left antenna is greater than that perceived by the right, the beetle moves toward the left; otherwise, it moves toward the right. The smell of food can be regarded as an objective function: the higher its value, the closer the beetle is to the food. The BAS algorithm simulates this behavioral characteristic of beetles to carry out an efficient search.
Figure 1. Feeding behavior of beetles.
Similar to other intelligent optimization algorithms, the position of an individual beetle in the $D$-dimensional solution space is $X = (X_1, X_2, \ldots, X_D)$. The positions of the left and right antennae of the beetle are defined as follows:
$$\begin{cases} X_r = X + l\,d \\ X_l = X - l\,d \end{cases}$$
where $l$ denotes the distance between the beetle's center of mass and each antenna, and $d$ is a random direction vector normalized to unit length:
$$d = \frac{\mathrm{rands}(D,1)}{\left\| \mathrm{rands}(D,1) \right\|_2}$$
Based on the comparison of the intensity of an odor by the left and right antennae, the updated adjustment strategy for the next exploration location of the beetle is as follows:
$$X_{t+1} = X_t + \delta_t \, d \, \mathrm{sign}\left[ f(X_r) - f(X_l) \right]$$
where $t$ is the current iteration number of the algorithm; $f(\cdot)$ is the fitness function; $\delta_t$ is the exploration step at the $t$-th iteration; $\varepsilon$ is the step decay factor, usually set to 0.95; and $\mathrm{sign}(\cdot)$ is the sign function. The step update and the sign function are defined as follows:
$$\delta_{t+1} = \delta_t \times \varepsilon$$
$$\mathrm{sign}(x) = \begin{cases} 1, & \text{if } x > 0, \\ 0, & \text{if } x = 0, \\ -1, & \text{otherwise} \end{cases}$$
The basic flow of the BAS algorithm is as follows in Figure 2:
Figure 2. BAS algorithm flow chart.
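The update rules above can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation: the function and parameter names (`bas_minimize`, `antenna_len`, `step`, `eps`) are our own, and the sign convention is flipped to perform minimization, whereas the paper's narrative describes odor maximization.

```python
import numpy as np

def bas_minimize(f, x0, antenna_len=1.0, step=1.0, eps=0.95, iters=100):
    """Sketch of basic beetle antennae search (BAS) for minimization.

    f: objective function; x0: initial position (1-D array).
    Parameter names are illustrative, not taken from the paper.
    """
    x = np.asarray(x0, dtype=float).copy()
    best_x, best_f = x.copy(), f(x)
    for _ in range(iters):
        # Random unit direction d = rands(D, 1) / ||rands(D, 1)||_2
        d = np.random.randn(x.size)
        d /= np.linalg.norm(d)
        # Left/right antenna positions
        x_r = x + antenna_len * d
        x_l = x - antenna_len * d
        # Move toward the antenna sensing the smaller (better) value
        x = x - step * d * np.sign(f(x_r) - f(x_l))
        step *= eps  # exponential step decay
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```

Running this on a simple sphere function illustrates the single-individual search: there is only one beetle, and once `step` has decayed, the search can no longer escape whichever basin it occupies.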

3.2. Principle of the QBSO Algorithm

The BAS algorithm is limited by its single-individual search and performs poorly on multidimensional complex optimization problems. The QBSO algorithm was designed in this study to overcome this shortcoming.

3.2.1. Quantum Representation

The exploration strategy of the BAS algorithm is similar to that of other intelligent optimization algorithms, in which the balance between exploration and exploitation is achieved by controlling the step size. However, this balancing effect is weak, and premature convergence originates from a loss of diversity. Herein, we introduce an alternative approach for preserving population diversity, based on a new interpretation of the optimal solution: the current optimal solution is considered a linear superposition of two probabilistic states, the "0" state and the "1" state. A quantum bit string of length $n$ can be defined as follows:
$$\begin{bmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_n \\ \beta_1 & \beta_2 & \cdots & \beta_n \end{bmatrix}$$
where $\alpha_i \in [0,1]$, $\beta_i \in [0,1]$, and $\alpha_i^2 + \beta_i^2 = 1$ $(i = 1, 2, \ldots, n)$; $\alpha_i^2$ is the probability of observing the "1" state and $\beta_i^2$ is the probability of observing the "0" state. The quantum representation of the current global optimal candidate solution can then be written as follows:
$$x_g^T = \begin{bmatrix} x_{g,1} & x_{g,2} & \cdots & x_{g,n} \\ \alpha_1 & \alpha_2 & \cdots & \alpha_n \end{bmatrix}$$
To compute the QR observations, a complex-valued wave function $\omega(x)$ is introduced here; $|\omega(x)|^2$ is the probability density, which gives the probability of a quantum state occurring at the corresponding point in space and time.
$$|\omega(x_i)|^2 = \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\left( -\frac{(x_i - \mu_i)^2}{2\sigma_i^2} \right), \quad i = 1, 2, \ldots, n$$
where $\mu_i$ is the expectation and $\sigma_i$ the standard deviation of the density. The observed value of the current global optimal solution is calculated as follows:
$$\hat{x}_{g,i} = \mathrm{rand} \times |\omega(x_i)|^2 \times (x_{i,\max} - x_{i,\min})$$
where the expectation of the wave function is taken as $\mu_i = x_{g,i}$ and the variance as $\sigma_i^2(|\psi_i\rangle)$:
$$\sigma_i^2(|\psi_i\rangle) = \begin{cases} 1 - |\alpha_i|^2, & \text{if } |\psi_i\rangle = |0\rangle, \\ |\alpha_i|^2, & \text{if } |\psi_i\rangle = |1\rangle \end{cases}$$
The state $|\psi_i\rangle$ is observed through a stochastic process:
$$|\psi_i\rangle = \begin{cases} |0\rangle, & \text{if } \mathrm{rand} \le \alpha_i^2, \\ |1\rangle, & \text{if } \mathrm{rand} > \alpha_i^2 \end{cases}$$
The convergence direction of each beetle is determined by the observation of the current global optimal solution:
$$d_{j,c} = \hat{x}_g - X_t$$
$$X_{t+1} = X_t + \delta_t \, d \, \mathrm{sign}\left[ f(X_r) - f(X_l) \right] + d_{j,c}$$
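A hypothetical sketch of the observation step described above: each qubit is collapsed to $|0\rangle$ or $|1\rangle$ by comparing a random number with $\alpha_i^2$, the variance is chosen according to the observed state, and the Gaussian density evaluated at the current best coordinate is scaled into the search range. Function and variable names here are our own assumptions, not the authors' code.

```python
import numpy as np

def observe_best(x_g, alpha, x_min, x_max):
    """Sketch of the QR observation of the current global best.

    x_g: current global best (n,); alpha: qubit amplitudes (n,);
    x_min / x_max: per-dimension search bounds (n,).
    """
    n = x_g.size
    # Collapse each qubit: |0> if rand <= alpha^2, else |1>
    collapsed_zero = np.random.rand(n) <= alpha**2
    # Variance depends on the observed basis state
    var = np.where(collapsed_zero, 1.0 - alpha**2, alpha**2)
    sigma = np.sqrt(np.maximum(var, 1e-12))
    mu = x_g  # expectation taken at the current best coordinate
    # Gaussian probability density |omega(x_i)|^2 evaluated at x_g
    dens = np.exp(-(x_g - mu) ** 2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    # Observed value scaled into the search range
    return np.random.rand(n) * dens * (x_max - x_min)
```

The returned observation $\hat{x}_g$ would then feed the convergence direction $d_{j,c} = \hat{x}_g - X_t$ in the position update.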

3.2.2. Quantum Rotation Gate

In the quantum genetic algorithm, since chromosomes under quantum coding are no longer in a single state, the traditional selection, crossover, and mutation operations can no longer be applied. Instead, a QRG acts on the fundamental states of the quantum chromosome, making them interfere with each other and shift in phase, thus changing the distribution of $\alpha_i$.
Here, the QRG is also used to update the probability amplitudes of the optimal solution. Increasing the rotation angle increases the amplitude $\alpha_i$, which accelerates the convergence of individuals toward the global optimal solution. At the beginning of the algorithm, the probability amplitudes $\alpha_i$ and $\beta_i$ are both set to $\sqrt{2}/2$. If the global optimal solution remains unchanged at the end of an iteration, $\alpha_i$ is increased by the QRG; otherwise, all probability amplitudes are reset to their initial value to prevent the algorithm from falling into a local optimum. The update strategy of the QRG is as follows:
$$\alpha_i(t+1) = \begin{bmatrix} \cos(\Delta\theta) & -\sin(\Delta\theta) \end{bmatrix} \begin{bmatrix} \alpha_i(t) \\ \sqrt{1 - [\alpha_i(t)]^2} \end{bmatrix}$$
$$\alpha_i(t+1) = \begin{cases} \eta, & \text{if } \alpha_i(t+1) < \eta, \\ \alpha_i(t+1), & \text{if } \eta \le \alpha_i(t+1) \le 1 - \eta, \\ 1 - \eta, & \text{if } \alpha_i(t+1) > 1 - \eta \end{cases}$$
where $\eta \in [0,1]$ is usually a constant and $\Delta\theta$ is the rotation angle of the QRG, which acts as a step size defining the convergence rate toward the current best solution. In brief, the QRG serves as a variation operator here to enhance the probability of obtaining a positive optimal solution. If the current optimal solution persists over successive iterations, $\alpha$ is increased by the QRG, indicating a growing probability that the current optimum is the global optimum. Otherwise, $\alpha$ is reset to guard against falling into a local optimum.
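A minimal sketch of the QRG update with clamping, assuming the standard rotation-gate form $\alpha' = \cos(\Delta\theta)\,\alpha - \sin(\Delta\theta)\,\beta$ with $\beta = \sqrt{1-\alpha^2}$; the clamping constant `eta` and the function name are illustrative assumptions.

```python
import numpy as np

def qrg_update(alpha, delta_theta=np.deg2rad(-11.0), eta=0.05):
    """Quantum rotation gate update of the amplitudes alpha.

    delta_theta defaults to the -11 degrees used in the experiments;
    eta is an assumed clamping constant in [0, 1].
    """
    beta = np.sqrt(1.0 - alpha**2)
    # alpha' = cos(dtheta)*alpha - sin(dtheta)*beta; with dtheta < 0,
    # -sin(dtheta) > 0, so alpha grows toward 1
    new_alpha = np.cos(delta_theta) * alpha - np.sin(delta_theta) * beta
    # Clamp to [eta, 1 - eta] so both basis states stay reachable
    return np.clip(new_alpha, eta, 1.0 - eta)
```

Starting from the initial amplitude $\sqrt{2}/2$, repeated applications push $\alpha$ toward the upper clamp, i.e., toward certainty in the "1" (positive) state, until a reset restores the balanced superposition.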
In addition, the search step size also affects the convergence rate of the algorithm. If the step size is too large, the convergence rate of the QBSO algorithm is reduced; if it is too small, the search may fail. Therefore, this study changed the step size updating strategy: when the global optimal solution changes, the step size decays by the usual factor $\varepsilon$; otherwise, the decay accelerates to the factor $\varepsilon_{\min}$. So as not to affect the search accuracy, $\varepsilon_{\min}$ is set to 0.8, following the original study. The flow of the QBSO algorithm is shown in Figure 3.
$$\delta_{t+1} = \begin{cases} \delta_t \times \varepsilon, & \text{if } \hat{x}_g \text{ changed}, \\ \delta_t \times \varepsilon_{\min}, & \text{if } \hat{x}_g \text{ not changed} \end{cases}$$
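The feedback step decay described in the text can be written as a one-liner; the function name and the default factors (0.95 and 0.8, as given in the text) are the only assumptions.

```python
def update_step(step, best_changed, eps=0.95, eps_min=0.8):
    """Feedback step decay: decay normally while the best solution keeps
    improving, and decay faster when it stagnates."""
    return step * (eps if best_changed else eps_min)
```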
Figure 3. Procedure of the QBSO algorithm.

3.3. Computational Complexity Analysis

The main time complexity of the QBSO algorithm lies in the while loop. Let $n$ denote the population size and $D$ the number of decision variables. Calculating the convergence direction $d_{j,c}$ costs O($Dn$); updating the location information $x$ costs O($n$); the quantum rotation gate costs O($n^2$). For large-scale optimization problems, $D \gg n$, and by the rules of O-notation the worst-case time complexity of the QBSO algorithm simplifies to O($TD$). For non-large-scale problems, $D \ll n$, and the worst-case time complexity simplifies to O($Tn(D+n)$).

4. Experiment

Since the BAS algorithm cannot solve high-dimensional complex optimization problems, it cannot be used for simulation comparison experiments with the QBSO algorithm. Therefore, the pigeon-inspired optimization algorithm (PIO), seagull optimization algorithm (SOA), gray wolf optimization algorithm (GWO), and beetle swarm optimization (BSO) algorithm [41] were chosen as the comparison objects. To ensure the validity of the experimental results, the common parameter settings were identical in all algorithms, where the rotation angle in the QBSO was −11° [22]. The other algorithm parameters remained the same as in the original literature. We used trial and error to select the number of iterations. In the context of a population size of 30, the Griewank function was optimized with different numbers of iterations (see Figure 4).
Figure 4. Convergence of the optimal solution of the Griewank function at different iterations. (a) Optimal solution in natural number unit; (b) optimal solution in logarithmic unit.
When the number of iterations reaches 100, all algorithms have essentially converged to near the global optimal solution, making them comparable. Therefore, we set the number of iterations to 100.
To ensure that the PIO, SOA, and GWO algorithms are well explored and exploited, researchers usually keep the population size between 30 and 100. Too small a population impairs the searching and convergence abilities of the algorithm; too large a population wastes population resources and increases the search time. To verify that the quantum representation can express richer population information with fewer individuals, comparison experiments were performed with population sizes of 8 and 30.
We conducted multiple comparison experiments on both the unimodal unconstrained optimization problem and multimodal unconstrained optimization problem. The unimodal benchmark function has only one optimal solution and can be used to detect how quickly the algorithm converges to the vicinity of the optimal solution. The multimodal benchmark function has multiple optimal solutions and is used to detect the ability of the algorithm to jump out of the local optimum.

4.1. Unimodal Unconstrained Optimization

A unimodal function has only a single optimal solution; the benchmark problems are listed in Table 1 [42], where the decision variables of $F_1$ and $F_3$ are 2-dimensional and those of the other functions are 30-dimensional. The formulation of the functions $f(y)$, their global minima $f(y)_{\min}$, and the values of the estimated variables $y(t)$ are shown in Table 1.
Table 1. Unimodal benchmark functions.
To demonstrate that the QBSO algorithm can exhibit an excellent optimization performance at a relatively small population size, we conducted comparative experiments with population sizes of 8 and 30 under unimodal optimization problems. Each algorithm was run independently 100 times. The best, worst, average, and variance of the results obtained by each algorithm were collected and used to verify the performance of the algorithm. The optimization results of the unimodal benchmark functions are shown in Table 2 and Table 3.
Table 2. Results of the unimodal benchmark function experiments (population size = 30).
Table 3. Results of the unimodal benchmark function experiments (population size = 8).
We randomly chose 1 of the 100 independent runs and plotted the optimization iteration process, as shown in Figure 5. When the population size was eight, the optimization results of the PIO algorithm, SOA, and GWO algorithm differed from those of the QBSO so greatly that the QBSO curve was compressed into an approximately horizontal line; we therefore omit the iterative curve plot for the population size of eight.

4.2. Multimodal Unconstrained Optimization

Multimodal functions contain more than one optimal solution, which means that an algorithm is more likely to fall into a local optimum when optimizing them. Population-based intelligence optimization algorithms have the upper hand on such functions, and this is the idea on which our improvement builds: collaborative search among multiple individuals is less likely to fall into local optima than single-individual algorithms such as the BAS algorithm. We evaluated these multimodal benchmark functions with the solution space dimension set to 30. The formulation of the functions $f(y)$, their global minima $f(y)_{\min}$, and the values of the estimated variables $y(t)$ are shown in Table 4.
Table 4. Multimodal benchmark functions.
Each algorithm was run independently 100 times. The optimization results of the multimodal benchmark functions with the population sizes of 30 and 8 are shown in Table 5 and Table 6.
Table 5. Results of the multimodal benchmark function experiments (population size = 30).
Table 6. Results of the multimodal benchmark function experiments (population size = 8).
Similarly, we randomly selected 1 result from the 100 independent runs of the multimodal optimization problem. The iterative process is presented in Figure 6. For the reasons mentioned above, the iteration curves for a population size of eight are not shown.
Figure 5. The iteration curves when solving unimodal benchmark functions with four algorithms.
For algorithm designers, the accuracy and time consumption of an algorithm are difficult to balance. For population-based optimization algorithms, the larger the dimensionality, the larger the population that must be consumed, and the computational complexity analysis makes clear that the time required to maintain accurate optimization results grows rapidly with dimension. Our goal is to trade off dimensionality and time for the decision maker and to handle high-dimensional optimization problems with the smallest possible population. We compared the performance of the algorithms in different dimensions on the Rastrigin function; the results are shown in Table 7.
Table 7. Comparison results of the impact of dimensions on algorithm performance.

4.3. Population Diversity Study

We introduce a population diversity metric to evaluate the diversity of the QBSO population. The population diversity is defined as follows:
$$D_P(t) = \frac{1}{N(t)} \sum_{j=1}^{N(t)} \left\| x_j(t) - \bar{x}(t) \right\|_2$$
where $\bar{x}(t)$ is the mean of the individuals in the current generation. Since the BAS is a single-individual search algorithm and cannot constitute a population, we chose the BSO algorithm as the diversity comparison algorithm.
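The diversity metric can be computed directly from the population matrix; this sketch assumes one individual per row, and the function name is illustrative.

```python
import numpy as np

def population_diversity(pop):
    """Mean Euclidean distance of individuals to the population centroid.

    pop: array of shape (N, D), one individual per row.
    """
    centroid = pop.mean(axis=0)
    return float(np.mean(np.linalg.norm(pop - centroid, axis=1)))
```

For example, a two-individual population at (0, 0) and (2, 0) has centroid (1, 0), and each individual lies at distance 1 from it, giving a diversity of 1.0.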

5. Discussion

It can be observed from Table 2 and Table 5 that the QBSO algorithm showed relatively excellent performance in handling both unimodal and multimodal optimization problems. This is because the quantum representation carries more population information and prevents the loss of diversity, while the quantum rotation gate, acting as a variation operator, helps the algorithm jump out of local optimal solutions. The SOA did not perform well because its attack radius does not decrease with iteration; this improves the probability of the SOA jumping out of a local optimum but sacrifices fast convergence. Therefore, the iteration budget appropriate for the QBSO algorithm may not be suitable for the SOA.
As shown by the data in Table 3 and Table 6, the PIO algorithm, SOA, GWO algorithm, and BSO algorithm cannot converge to the optimal solution when the population size is eight. On the contrary, the QBSO algorithm continued to perform well. However, there are still several flaws in the QBSO algorithm. From the curves shown in Figure 5 and Figure 6, it can be seen that the QBSO algorithm seems unable to trade off accuracy against convergence speed. There are two reasons for this: first, the feedback-based step size adjustment strategy slows the convergence of the algorithm; second, the variational operation of the quantum rotation gate maintains diversity but slightly sacrifices convergence speed. This will be the focus of the next phase of our research.
Figure 6. The iteration curves when solving the multimodal benchmark functions with four algorithms.
Our design attempted to handle the high-dimensional optimization problem with a minimal population. For further validation, we measured the population diversity and the effect of dimensionality on the performance of the algorithms. Figure 7 shows that the QBSO algorithm had a significant advantage in maintaining population diversity when the number of iterations was less than 40. This was due to the quantum representation of the QBSO algorithm that enriched the population information and the quantum rotation gate as a variational operator that improved the population variability. Table 7 illustrates that, with increasing dimensionality and unchanged population size, the QBSO algorithm shows the best adaptability, verifying the feasibility of the QBSO for high-dimensional optimization problems.
Figure 7. Population diversity at different iterations when optimizing the Griewank function with the population size = 30.

6. Conclusions

In this paper, we proposed the QBSO algorithm to address the inability of the BAS algorithm to handle high-dimensional optimization problems. Quantum representation was introduced into the algorithm, allowing it to carry more population information with small-scale populations. To compare its performance with the PIO, SOA, GWO, and BSO algorithms, multiple comparison experiments with population sizes of 8 and 30 were conducted on unimodal and multimodal benchmark functions. The experimental results show that the QBSO algorithm retained satisfactory optimization capability at a population size of eight, verifying the global convergence ability of the algorithm and the feasibility of the quantum representation. The designed QBSO algorithm can handle high-dimensional optimization problems with small population sizes while maintaining excellent optimization performance.

Author Contributions

Writing—original draft, L.Y.; revising and critically reviewing for important intellectual content, J.R.; writing—review and editing, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The datasets are available at: https://github.com/P-N-Suganthan (accessed on 27 January 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wu, Q.; Shen, X.; Jin, Y.; Chen, Z.; Li, S.; Khan, A.H.; Chen, D. Intelligent beetle antennae search for uav sensing and avoidance of obstacles. Sensors 2019, 19, 1758. [Google Scholar] [CrossRef] [PubMed]
  2. Wu, Q.; Lin, H.; Jin, Y.; Chen, Z.; Li, S.; Chen, D. A new fallback beetle antennae search algorithm for path planning of mobile robots with collision-free capability. Soft Comput. 2019, 24, 2369–2380. [Google Scholar] [CrossRef]
  3. Jiang, X.; Lin, Z.; He, T.; Ma, X.; Ma, S.; Li, S. Optimal path finding with beetle antennae search algorithm by using ant colony optimization initialization and different searching strategies. IEEE Access 2020, 8, 15459–15471. [Google Scholar] [CrossRef]
  4. Zhu, Z.; Zhang, Z.; Man, W.; Tong, X.; Qiu, J.; Li, F. A new beetle antennae search algorithm for multi-objective energy management in microgrid. In Proceedings of the 2018 13th IEEE Conference on Industrial Electronics and Applications (ICIEA), Wuhan, China, 31 May–2 June 2018; pp. 1599–1603. [Google Scholar]
  5. Jiang, X.Y.; Li, S. Beetle Antennae Search without Parameter Tuning (BAS-WPT) for Multi-objective Optimization. FILOMAT 2020, 34, 5113–5119. [Google Scholar] [CrossRef]
  6. Zhao, Y.X.; Li, S.H.; Jin, F. Overlapping community detection in complex networks using multi-objective evolutionary algorithm. Comput. Appl. Math. 2017, 36, 749–768. [Google Scholar]
  7. Pizzuti, C. A multiobjective genetic algorithm to find communities in complex networks. IEEE Trans. Evol. Comput. 2012, 16, 418–430. [Google Scholar] [CrossRef]
  8. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl. Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
  9. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report TR06; Erciyes University: Kayseri, Türkiye, 2005. [Google Scholar]
  10. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  11. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  12. Holland, J. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975. [Google Scholar]
  13. Dorigo, M.; Gambardella, L.M. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1997, 1, 53–66. [Google Scholar] [CrossRef]
  14. Zamani, H.; Nadimi-Shahraki, M.; Gandomi, A. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng. 2022, 392, 114616. [Google Scholar] [CrossRef]
  15. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  16. Jiang, X.; Li, S. BAS: Beetle Antennae Search Algorithm for Optimization Problems. Available online: https://arxiv.org/abs/1710.10724 (accessed on 30 October 2017).
  17. Khan, A.T.; Cao, X.W.; Li, S. Enhanced Beetle Antennae Search with Zeroing Neural Network for online solution of constrained optimization. Neurocomputing 2021, 447, 294–306. [Google Scholar] [CrossRef]
  18. Sabahat, E.; Eslaminejad, M.; Ashoormahani, E. A new localization method in internet of things by improving beetle antenna search algorithm. Wirel. Netw. 2022, 28, 1067–1078. [Google Scholar] [CrossRef]
  19. Khan, A.; Li, S.; Zhou, X. Trajectory optimization of 5-link biped robot using beetle antennae search. IEEE Trans. Circuits Syst. II-Express Briefs 2021, 68, 3276–3280. [Google Scholar] [CrossRef]
  20. Jiang, X.; Lin, Z.; Li, S. Dynamical attitude configuration with wearable wireless body sensor networks through beetle antennae search strategy. Measurement 2020, 167, 108–128. [Google Scholar] [CrossRef]
  21. Khan, A.H.; Cao, X.; Xu, B.; Li, S. A model-free approach for online optimization of nonlinear systems. IEEE Trans. Circuits Syst. II: Express Briefs 2022, 69, 109–113. [Google Scholar] [CrossRef]
  22. Han, K.H.; Kim, J. Quantum-inspired evolutionary algorithm for a class of combinatorial optimization. IEEE Trans. Evol. Comput. 2002, 6, 580–593. [Google Scholar]
  23. Wang, J.; Chen, H. BSAS: Beetle Swarm Antennae Search Algorithm for Optimization Problems. 2018. Available online: https://arxiv.org/abs/1807.10470 (accessed on 27 July 2018).
  24. Khan, A.H.; Cao, X.; Li, S.; Katsikis, V.N.; Liao, L. BAS-ADAM: An ADAM based approach to improve the performance of beetle antennae search optimizer. IEEE/CAA J. Autom. Sin. 2020, 7, 461–471. [Google Scholar] [CrossRef]
  25. Lin, M.; Li, Q.; Wang, F.; Chen, D. An improved beetle antennae search algorithm and its application on economic load distribution of power system. IEEE Access 2020, 8, 99624–99632. [Google Scholar] [CrossRef]
  26. Zhou, T.J.; Qian, Q.; Fu, Y. An Improved Beetle Antennae Search Algorithm. Recent Dev. Mechatron. Intell. Robot. Proc. ICMIR 2019, 2019, 699–706. [Google Scholar]
  27. Shao, X.; Fan, Y. An Improved Beetle Antennae Search Algorithm Based on the Elite Selection Mechanism and the Neighbor Mobility Strategy for Global Optimization Problems. IEEE Access 2021, 9, 137524–137542. [Google Scholar] [CrossRef]
  28. Yu, X.W.; Huang, L.P.; Liu, Y.; Zhang, K.; Li, P.; Li, Y. WSN node location based on beetle antennae search to improve the gray wolf algorithm. Wirel. Netw. 2022, 28, 539–549. [Google Scholar] [CrossRef]
  29. Lin, M.; Li, Q.; Wang, F.; Chen, D. An improved beetle antennae search algorithm with mutation crossover in TSP and engineering application. Appl. Res. Comput. 2021, 38, 3662–3666. [Google Scholar]
  30. An, J.; Liu, X.; Song, H. Survey of Quantum Swarm Intelligence Optimization Algorithm. Comput. Eng. Appl. 2022, 7, 31–42. [Google Scholar]
  31. Kundra, H.; Khan, W.; Malik, M. Quantum-inspired firefly algorithm integrated with cuckoo search for optimal path planning. Mod. Phys. C 2022, 33, 2250018. [Google Scholar] [CrossRef]
  32. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. QANA: Quantum-based avian navigation optimizer algorithm. Eng. Appl. Artif. Intell. 2021, 104, 104314. [Google Scholar] [CrossRef]
  33. Nadimi-Shahraki, M.H.; Fatahi, A.; Zamani, H.; Mirjalili, S. Binary Approaches of Quantum-Based Avian Navigation Optimizer to Select Effective Features from High-Dimensional Medical Data. Mathematics 2022, 10, 2770. [Google Scholar] [CrossRef]
  34. Zhou, N.-R.; Xia, S.-H.; Ma, Y.; Zhang, Y. Quantum particle swarm optimization algorithm with the truncated mean stabilization strategy. Quantum Inf. Process. 2022, 21, 21–42. [Google Scholar] [CrossRef]
  35. Sun, J.; Feng, B.; Xu, W. Particle swarm optimization with particles having quantum behavior. In Proceedings of the 2004 Congress on Evolutionary Computation, Portland, OR, USA, 19–23 June 2004; pp. 325–331. [Google Scholar]
  36. Hao, T.; Huang, X.; Jia, C.; Peng, C. A quantum-inspired tensor network algorithm for constrained combinatorial optimization problems. Frontiers 2022, 10, 1–8. [Google Scholar] [CrossRef]
  37. Amaro, D.; Modica, C.; Rosenkranz, M.; Fiorentini, M.; Benedetti, M.; Lubasch, M. Filtering variational quantum algorithms for combinatorial optimization. Quantum Sci. Technol. 2022, 7, 015021. [Google Scholar] [CrossRef]
  38. Fallahi, S.; Taghadosi, M. Quantum-behaved particle swarm optimization based on solitons. Sci. Rep. 2022, 12, 13977. [Google Scholar] [CrossRef] [PubMed]
  39. Soloviev, V.; Bielza, C.; Larrañaga, P. Quantum Approximate Optimization Algorithm for Bayesian network structure learning. Quantum Inf. Process. 2022, 22, 19. [Google Scholar] [CrossRef]
  40. Li, M.W.; Wang, Y.T.; Geng, J.; Hong, W.C. Chaos cloud quantum bat hybrid optimization algorithm. Nonlinear Dyn. 2021, 103, 1167–1193. [Google Scholar] [CrossRef]
  41. Wang, T.; Yang, L.; Liu, Q. Beetle Swarm Optimization Algorithm: Theory and Application. Filomat 2020, 34, 5121–5137. [Google Scholar] [CrossRef]
  42. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
