Article

A Diversity Model Based on Dimension Entropy and Its Application to Swarm Intelligence Algorithm

School of Software, Yunnan University, Kunming 650000, China
*
Author to whom correspondence should be addressed.
Entropy 2021, 23(4), 397; https://doi.org/10.3390/e23040397
Submission received: 13 February 2021 / Revised: 14 March 2021 / Accepted: 19 March 2021 / Published: 27 March 2021

Abstract

The swarm intelligence algorithm has become an important method for solving optimization problems because of its excellent self-organization, self-adaptation, and self-learning characteristics. However, when a traditional swarm intelligence algorithm faces high-dimensional, complex multimodal problems, population diversity is quickly lost, which leads to premature convergence. To solve this problem, dimension entropy is proposed as a measure of population diversity, and a diversity control mechanism is proposed to guide the updating of the swarm intelligence algorithm. It maintains the diversity of the algorithm in the early stage and ensures convergence in the later stage. Experimental results show that the performance of the improved algorithm is better than that of the original algorithm.

1. Introduction

1.1. Optimization Problems and Swarm Intelligence Algorithms

Optimization problems have a long history. Given multiple alternative solutions to a problem, an optimization problem consists of determining a performance index and selecting the solution that maximizes or minimizes it [1]. In real life, optimization problems exist widely in engineering design [2], image segmentation [3], power systems [4], and other fields. In order to solve them better, evolutionary algorithms that simulate the process and mechanism of biological evolution, and swarm intelligence algorithms that simulate the foraging behavior of biological populations, have become a research hotspot in recent years.
Swarm intelligence refers to the cooperative behavior and collective intelligence exhibited by groups of many simple individuals in nature [5]. It is a population-based computing paradigm with self-organization, self-adaptation, and self-learning characteristics, developed by borrowing mechanisms from natural phenomena and organisms. After years of development, a large number of swarm intelligence optimization algorithms have emerged, among which the classics include the artificial bee colony algorithm [6], the ant colony algorithm [7], and the particle swarm optimization algorithm [8].

1.2. An Overview of the Diversity of Swarm Intelligence Algorithms

A major problem in swarm intelligence algorithms is premature convergence [9,10,11,12]; i.e., algorithms lose diversity prematurely. Its root cause is the imbalance between local exploitation and global exploration [13]. Too much local exploitation makes the algorithm converge prematurely to a local optimum, while too much global exploration makes the algorithm imprecise and difficult to converge [14].
To some extent, exploration and exploitation can be seen as a pair of contradictory concepts: strengthening one inevitably weakens the other [15]. Population size, search strategy, and restart strategy are all effective levers for controlling exploration and exploitation, but balancing them scientifically is the key to excellent results with a swarm intelligence algorithm.
Evaluating the diversity of an algorithm makes its exploration and exploitation fully observable. As an important index of the swarm intelligence algorithm [16], diversity measures the richness of particle position, cognition, direction, and other properties. Existing studies on diversity and entropy include the following: Folino et al. proposed a method for evaluating swarm intelligence algorithms using entropy values [17]; González et al. collected nature-inspired cooperative strategies for optimization [18]; Muhammad et al. designed a fractional swarm intelligent computing scheme with entropy evolution for the optimal power flow problem [19]; Da Ronco and Benini proposed a simplex-crossover evolutionary algorithm that includes genetic diversity as an objective [21].
Depending on the benchmark and representation adopted, many diversity models are possible, so the scope of this paper must be fixed first. The diversity studied here is a measure defined on the positions of the individuals in a population. We believe that a model that measures the true diversity of a population should, besides being valid and effective, have the following properties:
(1) It is robust to parameters such as population size and problem dimension.
(2) It is repeatable across different populations.
(3) It gives direct feedback on changes in the population.
For this reason, we designed a new diversity model. The model calculates population entropy based on particle position and is named "dimension entropy". Compared with other methods, dimension entropy defines the diversity of a population clearly and intuitively and can thus be used to control the iteration of the algorithm. This paper is organized as follows: the rest of this section introduces some basic knowledge about the swarm intelligence algorithm and its diversity; Section 2 describes a variety of diversity models and discusses the dimension entropy model; Section 3 shows how the dimension entropy model guides the updating of the swarm intelligence algorithm; Section 4 shows the results; Section 5 presents our conclusions.

2. Diversity Model Based On Dimension Entropy

2.1. General Concept

Generally speaking, diversity [20,22] can be defined as the degree of heterogeneity among the individuals of a population [23]. In swarm intelligence algorithms, the many diversity evaluation models can be divided into two categories. The first type is based on the distance between particles [24,25]: the distance to a central particle [26], the maximum distance between two particles in space [27,28], or the average distance between particles [29]. Euclidean distance is the common choice, because Euclidean space extends two- and three-dimensional space to any number of dimensions, so populations remain well defined in Euclidean space even in high dimensions.
The second type is based on an entropy measure. Entropy originated as a concept in thermodynamics; in 1948, Shannon carried it over to information theory [30]. The idea is that, given several independent random discrete events, the entropy of their probabilities measures how uncertain the information system is. When only one event can occur, the information entropy is minimal; when multiple discrete events occur with equal probability, the information entropy is maximal.
When applied to the swarm intelligence algorithm, because the population itself is a continuous concept, the first step is to segment the population, i.e., to discretize it. Each segmented interval is abstracted as a random event, and the ratio of the number of particles contained in each interval to the total population is the probability of this "random event" occurring. Therefore, how to discretize, and by which standard, becomes a key difficulty in applying the entropy criterion to the swarm intelligence algorithm. At the same time, the number of intervals produced by the discretization directly affects the estimate of diversity. When the population size is too small, the entropy method cannot divide enough intervals; conversely, in the high-dimensional case, it is difficult to account for all dimensions when dividing the intervals.
In addition, how to combine the entropy values of all particles must be considered. For example, Gouvea Jr. and Araujo used a representative particle to stand for population diversity [28]; i.e., the individual characteristics of one particle represent the population, and they stressed that the chosen particle must be important. Collins and Jefferson used average entropy values to determine population diversity [31].

2.2. Research Status of Diversity

To save space, a summary of the symbols used in this section is first given in Table 1 below.
The most basic range-based diversity measure only considers the diameter of the population, i.e., the distance between the two farthest particles in the population [30]. The formula is shown in Equation (1):
$$D_d = \max_{i,j} \sqrt{\sum_{k=1}^{n} \left( x_{i,k} - x_{j,k} \right)^2} \tag{1}$$
The second method [32] can be obtained by changing the farthest distance to the radius of the population and calculating the distance between the farthest particle and the average position of the population. The formula is shown in Equation (2):
$$D_r = \max_{i} \sqrt{\sum_{k=1}^{n} \left( x_{i,k} - \bar{x}_k \right)^2} \tag{2}$$
There are other extended methods of this method, such as calculating the average radius, which will not be detailed here.
The third method was proposed by Olorunda and Engelbrecht. The idea is to take the mean, over all particles, of each particle's average distance to the rest of the population [27], as shown in Equation (3):
$$D_{all} = \frac{1}{N} \sum_{i=1}^{N} \left( \frac{1}{N} \sum_{j=1}^{N} \sqrt{\sum_{k=1}^{n} \left( x_{i,k} - x_{j,k} \right)^2} \right) \tag{3}$$
This method has a huge calculation cost. To save it, Wineberg and Oppacher proposed a measure named "true diversity" [33], which represents the mean standard deviation of the population over the dimensions, as shown in Equation (4):
$$D_{td} = \frac{1}{n} \sum_{k=1}^{n} \sqrt{\overline{x_k^2} - \bar{x}_k^2} \tag{4}$$
where:
$$\overline{x_k^2} = \frac{1}{N} \sum_{i=1}^{N} x_{i,k}^2$$
The last diversity based on distance measure compared in this paper was proposed by Herrera and Lozano [26]. This diversity measure requires pre-determination of the most suitable particle in the population, because it uses this particle as a reference to measure the distance with other particles, as shown in Equation (5):
$$D_e = \frac{\bar{d} - d_{min}}{d_{max} - d_{min}} \tag{5}$$
where:
$$\bar{d} = \frac{1}{N} \sum_{i=1}^{N} \sqrt{\sum_{k=1}^{n} \left( x_{i,k} - x_{best,k} \right)^2}, \quad d_{max} = \max_{i \in \{1,2,\ldots,N\}} \sqrt{\sum_{k=1}^{n} \left( x_{i,k} - x_{best,k} \right)^2}, \quad d_{min} = \min_{i \in \{1,2,\ldots,N\},\, i \neq best} \sqrt{\sum_{k=1}^{n} \left( x_{i,k} - x_{best,k} \right)^2}$$
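For concreteness, the following NumPy sketch implements Equations (1) through (5); the population matrix `X` (one row per particle) and the `best` index are illustrative assumptions, not code from the original paper.

```python
import numpy as np

def diameter(X):
    # Eq. (1): distance between the two farthest particles.
    diff = X[:, None, :] - X[None, :, :]              # all pairwise differences
    return np.sqrt((diff ** 2).sum(-1)).max()

def radius(X):
    # Eq. (2): distance from the population mean to the farthest particle.
    return np.sqrt(((X - X.mean(0)) ** 2).sum(1)).max()

def avg_distance(X):
    # Eq. (3): mean of the average pairwise distances (O(N^2) cost).
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1)).mean()

def true_diversity(X):
    # Eq. (4): mean per-dimension standard deviation; the inner maximum
    # guards against tiny negative values caused by rounding.
    var = np.maximum((X ** 2).mean(0) - X.mean(0) ** 2, 0.0)
    return np.sqrt(var).mean()

def best_relative_diversity(X, best):
    # Eq. (5): mean distance to the best particle, min-max normalized
    # (the minimum is taken over particles other than the best one).
    d = np.sqrt(((X - X[best]) ** 2).sum(1))
    d_other = np.delete(d, best)
    return (d.mean() - d_other.min()) / (d_other.max() - d_other.min())

X = np.random.uniform(-100, 100, size=(20, 2))        # toy population
print(diameter(X), radius(X), avg_distance(X), true_diversity(X))
```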
In terms of entropy measurement, Shannon entropy is the most basic method [34]; it measures the degree of disorder of the population [35]. Its definition is shown in Equation (6):
$$E = -\sum_{m=1}^{M} p_m \log p_m \tag{6}$$
If we want to measure the entropy value of a swarm intelligence algorithm, a discrete measurement model must be established. Chen et al. put forward an entropy calculation based on fitness [36]. The idea is to examine the historically best positions: the current range of particle fitness values is divided into $M$ equal intervals, so that the distribution of the current fitness can be calculated [37]. The formula is shown in Equation (7):
$$E = -\sum_{m=1}^{M} p_m \log p_m \tag{7}$$
where
$$p_m = \frac{k_m}{M}$$
where $k_m$ represents the number of particles contained in interval $m$.
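As a minimal sketch of Equation (7), the fitness range can be discretized with a histogram; the equal-width binning below is our reading of the interval division and should be treated as an assumption.

```python
import numpy as np

def fitness_entropy(fitness, M):
    # Eq. (7): split the current fitness range into M equal intervals and
    # compute the Shannon entropy of the particle counts per interval.
    counts, _ = np.histogram(fitness, bins=M)
    p = counts / counts.sum()
    p = p[p > 0]                       # convention: 0 * log(0) = 0
    return -(p * np.log(p)).sum()

fitness = np.random.uniform(0.0, 10.0, 100)
print(fitness_entropy(fitness, M=100))
```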
Wang and Lei put forward intuitionistic fuzzy population entropy as a measure of diversity [38,39]. In this method, the $PBest$ position of each generation is selected as an aggregation point; the distance from particle $j$ to aggregation point $i$ is $D_{ji}$, and the scope radius of each aggregation point is $R_i = K \cdot \max D_j$, where $K$ is a random number in $(0, 0.5)$. If the distance $D_{ji}$ from particle $j$ to aggregation point $i$ is less than the scope radius $R_i$, then particle $j$ is considered to belong to aggregation point $i$ and the counter $T_i$ is incremented by 1. If particle $j$ does not belong to any scope, it is considered a "lone point" and the global counter $T_x$ is incremented by 1. The formula is shown in Equation (8):
$$E = \frac{1}{M} \sum_{i=1}^{M} \frac{\min(\mu_{ji}, \gamma_{ji}) + \pi_{ji}}{\max(\mu_{ji}, \gamma_{ji}) + \pi_{ji}} \tag{8}$$
where:
$$\mu_{ji} = \frac{T_{ji}}{M}, \quad \pi_{ji} = \frac{T_x}{M}, \quad \gamma_{ji} = 1 - \mu_{ji} - \pi_{ji}$$
The intuitionistic fuzzy population entropy reflects the aggregation degree of particles in the algorithm solution process.

2.3. Some Fundamental Flaws in Current Metrics

First of all, the population diameter is not an ideal diversity measure, because it considers only the two farthest particles and ignores the distribution of the remaining ones, so it cannot reflect the true diversity of a population.
A similar deficiency exists in the population radius, where diversity is based on the position of the particle farthest from the population center. For a fully diversified population this indicator is close to 0.5, whereas a value approaching 1 describes a population mostly clustered in one corner with an outlier near the opposite corner; such readings are also misleading.
Diversity based on the distance to a reference particle requires the reference to be chosen in advance, but a suitable reference particle is in fact difficult to choose. Meanwhile, for a linearly shrinking population, the measured diversity stays unchanged because the numerator and denominator contract simultaneously, so the real decrease in diversity is not captured.
With the entropy-based diversity measures, it is difficult to determine an appropriate segmentation standard. The entropy calculation based on fitness places particles with the same fitness in the same interval. This may work for a unimodal function, but for a multimodal function, particles on different peaks can share the same fitness, so a simple classification by fitness is not rigorous.
Intuitionistic fuzzy population entropy considers the scope division of particles in space, but the difficulty of this division grows greatly with the number of dimensions. In high dimensions, a particle is likely to belong to different scopes in different dimensions, so a clear division is hard to make. Finally, as we will see later, most of the metrics fail to cope with population dynamics.

2.4. Diversity Model Based on Dimension Entropy

Xu and Cui [40] confirmed that, during the iteration of a swarm intelligence algorithm, each dimension changes relatively independently of the others. Inspired by this, this paper puts forward dimension entropy to measure the diversity of the swarm intelligence algorithm. Unlike other entropy methods, we abandon the concept of space and view the entropy of each dimension independently. The relevant definitions follow:
Dimension interval. The maximum and minimum values of each dimension are taken as the upper and lower limits, and the range is divided evenly into $M$ intervals ($M$ is the total number of particles). Each dimension is divided into intervals independently, without interference from the others.
Dimensional entropy. In each dimension, the number of particles falling into each dimension interval is counted independently, and the dimension entropy is calculated, as shown in Equation (9):
$$E_{dim} = -\frac{1}{n} \sum_{k=1}^{n} \sum_{m=1}^{M} p_{m,k} \log p_{m,k} \tag{9}$$
where:
$$p_{m,k} = \frac{k_{m,k}}{M}$$
where $k_{m,k}$ is the number of particles contained in interval $m$ of dimension $k$.
In the worst-case scenario, the population is trapped in the same interval in all dimensions; that is, the population completely converges, and the dimensional entropy is then 0. The larger the dimensional entropy, the greater the diversity of the population.
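A minimal NumPy sketch of Equation (9) and the dimension-interval definition follows; it assumes $M$ equals the population size and that each dimension is binned between its own minimum and maximum.

```python
import numpy as np

def dimension_entropy(X):
    # Eq. (9): Shannon entropy computed per dimension and averaged.
    # Each dimension is divided into M = N equal intervals between its
    # own min and max, independently of the other dimensions.
    N, n = X.shape
    total = 0.0
    for k in range(n):
        counts, _ = np.histogram(X[:, k], bins=N)
        p = counts[counts > 0] / N     # skip empty intervals: 0 * log(0) = 0
        total -= (p * np.log(p)).sum()
    return total / n

X = np.random.uniform(-100, 100, size=(100, 10))
print(dimension_entropy(X))                      # high for a spread-out swarm
print(dimension_entropy(np.zeros((100, 10))))    # fully converged swarm -> 0.0
```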
Compared with previous entropy methods, dimension entropy overcomes their main disadvantages. With a small population, even if some dimensions cannot be divided effectively, the entropy can still be calculated as long as one dimension completes its interval division. At the same time, treating the dimensions independently lets us face the challenges of higher dimensions simply and intuitively.

2.5. A Comparative Study

In order to compare the different diversity measures, the various methods must first be normalized so that they can be evaluated against the same standard. In this study, the maximum value was used as the normalization factor; after normalization, the value range of each model was 0 to 1.
In addition to dimension entropy, we selected six diversity models for comparison: four distance-based measures, namely the maximum distance [32], the population radius [32], the average distance between individuals [27], and the average standard deviation [33], and two entropy-based measures, namely fitness entropy [36] and intuitionistic fuzzy entropy [38]. These are denoted $D_{dp}$, $D_{rp}$, $D_{all}$, $D_{td}$, $E_{fit}$, and $E_{fuzzy}$, respectively.

2.5.1. Population Expansion Experiment

The practical application scenarios of the swarm intelligence algorithm are complex and inevitably involve population expansion or decline. A good diversity model should respond correctly and promptly to changes in population size.
In order to compare the robustness of the various diversity models to population size, we designed the following experiment.
First, we designed a complete population of 20 particles with a range of [−100, 100]. This population has two dimensions and the following characteristics:
(1) All particles in the population are uniformly distributed in all dimensions.
(2) No two particles are identical.
The complete population is shown in Figure 1.
From the complete population of 20 particles, we randomly drew an initial population of 10 particles to simulate population expansion. In each iteration, one of the 10 remaining particles was randomly selected and added to the population; after 10 iterations, the complete population of Figure 1 was restored. The process of population change is shown in Figure 2.
Since every particle in the complete population differs significantly from the others, each new particle added during the expansion shown above differs from the existing ones, so the diversity of the population must keep increasing throughout this process.
We used the diversity models above to evaluate the population diversity during this process. For the entropy method based on population fitness, we used a simple Sphere function ($f(x) = \sum_{i=1}^{k} x_i^2$) to calculate the entropy. Because intuitionistic fuzzy population entropy needs the historical $PBest$ values as aggregation points, it was not used in this experiment.
The normalized results of various models are shown in Figure 3.
According to Figure 3, the population diversity reported by $D_{dp}$ increased only in the fourth iteration, because the particle added then happened to fall outside the existing population, while in the remaining iterations the particles were added inside it; a simple population-diameter calculation cannot perceive changes within the population at all.
Although $D_{rp}$ shows an overall upward trend, its value is always at a high level, which means that, if the new particle is not a significant outlier, $D_{rp}$ cannot respond to it clearly. Across all 10 expansion steps, the diversity value of $D_{rp}$ decreased four times, because the new particles did not expand the maximum radius of the population while the enlarged denominator decreased the value of $D_{rp}$.
$D_{all}$ has a similar problem: if the new particles do not increase the average distance between the particles, $D_{all}$ will not reflect the population change correctly.
$D_{td}$ kept a downward trend during the iterations, because the increase in particles lowered the standard deviation, which shows that $D_{td}$ cannot be used to describe a changing population.
$E_{fit}$ performs better overall, but still occasionally reports reduced diversity. This is because the particle added in the eighth iteration lies symmetric, about the zero point, to an existing particle: the two particles are completely different, yet under our chosen fitness function their fitness values are nearly identical. Evidently, fitness is not a unique attribute of a particle, and choosing a unique, accurate property as the division standard is a key part of any entropy-based method.
Finally, among all the methods, only $E_{dim}$ maintains an upward trend throughout, visible in every iteration, which is sufficient to prove that our proposed method can accurately observe and describe a changing population.

2.5.2. Dimensional Change Experiment

The swarm intelligence algorithm also faces complex dimensionality. We hope that an ideal diversity measure is robust to different dimensions; namely, if the population itself is fully diversified, its diversity value should be stable regardless of the number of dimensions; otherwise, the measure would be difficult to assess or apply in a swarm intelligence algorithm.
To this end, we designed an experiment. First, we built a fully diversified population with 1 dimension and 100 particles, and then continuously expanded the dimensionality on this basis, ensuring that each added dimension was also fully diversified, until 30 dimensions were reached. Throughout this process, we compared the values of the above diversity models across dimensionalities; the results are shown in Figure 4.
As can be seen in Figure 4, with the increase in dimensions, all distance diversity models show an increasing trend of diversity. This is because, with the increase in dimensions, the spatial span calculated by the diversity model based on spatial distance also increases proportionally, which is almost an inevitable defect of the distance diversity model. In contrast, the entropy method has better resistance to dimensional changes, and compared with E f i t , the method we defined is more accurate and stable.

2.5.3. Practical Problem Testing

The purpose of this experiment is to test the performance of each model on an actual test function. We chose the Rastrigin function, which has a single global optimum at the zero point; therefore, while the algorithm runs, almost all particles gradually move toward zero, and in the late stage they cluster near it. The sum of the distances of all particles from the zero point thus partly reflects the diversity of the population on this test function. The formula of the Rastrigin test function is shown in Equation (10):
$$F(x) = \sum_{i=1}^{k} \left[ x_i^2 - 10 \cos(2 \pi x_i) + 10 \right] \tag{10}$$
The standard PSO algorithm was used to carry out the experiment, with 5 dimensions, 50,000 fitness evaluations, and a population size of 100; the diversity models involved were $D_{dp}$, $D_{rp}$, $D_{all}$, $D_{td}$, $E_{fit}$, $E_{fuzzy}$, and $E_{dim}$. The PSO setup is described in the next section.
The Spearman correlation coefficient is a nonparametric index of the dependence between two variables; it evaluates their correlation through a monotone relationship. When the two variables are completely monotone and positively correlated, the coefficient is 1; when they are completely monotone and negatively correlated, it is −1. That is, the closer the coefficient is to ±1, the stronger the monotonic relationship between the two variables.
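For reference, the coefficient used in Table 2 can be computed with SciPy's spearmanr; the two arrays below are placeholder per-iteration records, not the paper's data.

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder per-iteration records: one diversity model's value and the
# sum of all particles' distances to the zero point.
diversity = np.array([0.92, 0.85, 0.77, 0.60, 0.41, 0.20])
distance_sum = np.array([510.0, 430.0, 350.0, 240.0, 150.0, 60.0])

rho, _ = spearmanr(diversity, distance_sum)
print(rho)   # 1.0: the two sequences are perfectly monotonically related
```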
The diversity models above were used to track the changes in the particles' distances to the zero point. After each iteration, we calculated and recorded the sum of the distances of all particles from the zero point, together with the values of the diversity models. At the end of the run, we computed the Spearman correlation coefficient between each diversity model's per-iteration values and the distance sums. The experiment was repeated ten times; the results are shown in Table 2.
In the case of fixed population size and dimension, $D_{all}$ performs best, followed by $E_{dim}$. Among all entropy methods, $E_{dim}$ has a significant advantage.

3. Swarm Intelligence Algorithm Control Method Based on Dimension Entropy

3.1. Introduction to Swarm Intelligence Algorithms

First, the comparison algorithms used in this study are briefly introduced.
PSO is derived from an analogy with bird flight [41]. In PSO, each particle flies in a $D$-dimensional solution space by learning from its own experience and that of its neighbors. The position and velocity of the $i$-th particle are expressed by $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ and $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$, respectively. $X_i$ represents the current position of the $i$-th particle and also a candidate solution of the optimization problem; $V_i$ represents its current velocity, which determines the direction and step size of the particle's movement. Each particle learns from its own and the global historical best positions, represented by $pbest_i = (pbest_{i1}, pbest_{i2}, \ldots, pbest_{iD})$ and $gbest_i = (gbest_{i1}, gbest_{i2}, \ldots, gbest_{iD})$, respectively. The position and velocity of each particle are dynamically adjusted according to Equations (11) and (12):
$$v_{id}^{t+1} = \omega v_{id}^{t} + c_1 \, rand1_{id} \left( pbest_{id}^{t} - x_{id}^{t} \right) + c_2 \, rand2_{id} \left( gbest_{id}^{t} - x_{id}^{t} \right) \tag{11}$$
$$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1} \tag{12}$$
where $\omega$ is the inertia weight, $c_1$ and $c_2$ are the acceleration factors, and $rand1_{id}$ and $rand2_{id}$ are two uniformly distributed random numbers in the range $[0, 1]$.
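Equations (11) and (12) translate directly into a vectorized update; the sketch below assumes pbest/gbest bookkeeping and boundary handling happen elsewhere, and uses the parameter values from Table 6.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w=0.5, c1=2.0, c2=2.0):
    # Eq. (11): velocity update with independent random numbers per dimension.
    r1 = np.random.rand(*X.shape)
    r2 = np.random.rand(*X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    # Eq. (12): position update.
    return X + V, V

X = np.random.uniform(-100, 100, (100, 10))   # toy swarm
V = np.zeros_like(X)
X, V = pso_step(X, V, pbest=X, gbest=X[0])
```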
Bare-bones PSO (BBPSO) eliminates the velocity attribute of particles; the particle position is instead obtained by random sampling from a Gaussian distribution, which can be described mathematically as follows:
The particle searches randomly in the $D$-dimensional space; the position of particle $i$ is $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, and the historical best position of particle $i$ is $pbest_i = (pbest_{i1}, pbest_{i2}, \ldots, pbest_{iD})$. $gbest$ is the global best position of the population, and $N(0,1)$ denotes the standard Gaussian distribution. The position update formula of particle $i$ is shown in Equation (13):
$$X_i^{t+1} = \mu_i^{t} + N(0,1) \, \sigma_i^{t}, \qquad \mu_i^{t} = 0.5 \left( pbest_i^{t} + gbest^{t} \right), \qquad \sigma_i^{t} = \left| pbest_i^{t} - gbest^{t} \right| \tag{13}$$
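Equation (13) reduces to a single Gaussian sampling step; a minimal sketch, again leaving pbest/gbest tracking to the surrounding algorithm.

```python
import numpy as np

def bbpso_step(pbest, gbest):
    # Eq. (13): sample each coordinate from a Gaussian whose mean is the
    # midpoint of pbest and gbest and whose std is their absolute gap.
    mu = 0.5 * (pbest + gbest)
    sigma = np.abs(pbest - gbest)      # sigma = 0 simply returns mu
    return np.random.normal(mu, sigma)
```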

3.2. The Swarm Intelligence Algorithm Control Method Based on Dimension Entropy

The disadvantage of traditional swarm intelligence algorithms is the premature loss of diversity. We therefore want a mechanism that makes diversity decrease slowly under control: maintain high diversity in the early stage and reduce it in the late stage to promote convergence. To this end, we propose a method that calculates and controls population diversity according to dimension entropy. First, the following definition is introduced:
Redundant particle: divide each dimension $d \in \{1, 2, \ldots, n\}$ evenly into $M$ intervals ($M$ is the population size) and count the number of particles in each interval. For each dimension $d$, every particle $i \in \{1, 2, \ldots, M\}$ lying in that dimension's most crowded interval has its counter $count_i$ incremented by 1. The particle with the largest counter in the population is the redundant particle.
It should be noted that a redundant particle is not necessarily a duplicated or useless particle. However, by the definition of dimension entropy, deleting a redundant particle necessarily increases the dimension entropy, while copying one necessarily decreases it.
Based on this, we propose a strategy to control diversity. When the diversity is too high, we copy a redundant particle and delete the particle with the worst fitness to reduce diversity. Conversely, when the diversity is too low, a redundant particle is deleted and a new random particle is added to increase diversity. The pseudocode is shown in Algorithm 1, and a Python sketch follows it:
Algorithm 1 Diversity control
Input: $E_{dim}$: the dimension entropy of the population in this iteration; $E_{standard}$: the expected entropy for this iteration; the population
Output: the new population
1 if $E_{dim}$ < $E_{standard}$ then
2   delete a redundant particle;
3   add a new random particle;
4 else if $E_{dim}$ > $E_{standard}$ then
5   delete the worst particle;
6   copy a redundant particle;
7 end
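A Python sketch of Algorithm 1 and the redundant-particle definition above; `dimension_entropy` is the function sketched in Section 2.4, while the search bounds and the minimization convention are illustrative assumptions.

```python
import numpy as np

def find_redundant(X):
    # For each dimension, find the most crowded of the N intervals and
    # credit every particle inside it; the highest-credit particle is
    # the redundant particle defined above.
    N, n = X.shape
    score = np.zeros(N, dtype=int)
    for k in range(n):
        counts, edges = np.histogram(X[:, k], bins=N)
        crowded = counts.argmax()
        idx = np.clip(np.searchsorted(edges, X[:, k], side='right') - 1, 0, N - 1)
        score[idx == crowded] += 1
    return score.argmax()

def diversity_control(X, fitness, e_dim, e_standard, bounds=(-100.0, 100.0)):
    # Algorithm 1: swap one particle to steer E_dim toward E_standard.
    X = X.copy()
    if e_dim < e_standard:      # diversity too low: replace the redundant particle
        X[find_redundant(X)] = np.random.uniform(*bounds, X.shape[1])
    elif e_dim > e_standard:    # diversity too high: clone the redundant particle
        X[fitness.argmax()] = X[find_redundant(X)]   # worst = largest (minimization)
    return X
```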
We want diversity to decline along a set trajectory. $E_{standard}$ is the entropy value we expect the population to reach in each iteration; it is determined by a base curve.
We experimented with four kinds of base curves: a straight line, a convex curve, a concave curve, and a broken line. These curves are shown in Figure 5, and an illustrative parameterization follows.
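The exact shapes of the curves in Figure 5 are not given analytically, so the schedule below is an illustrative parameterization of the four families (a sketch; $e_0$ is the initial entropy and $T$ the iteration budget).

```python
import numpy as np

def e_standard_schedule(kind, T, e0):
    # Target entropy per iteration, decaying from e0 toward 0.
    t = np.linspace(0.0, 1.0, T)
    if kind == 'line':      # constant decay rate
        return e0 * (1 - t)
    if kind == 'convex':    # stays high early, drops late
        return e0 * (1 - t ** 2)
    if kind == 'concave':   # drops quickly at first
        return e0 * (1 - t) ** 2
    if kind == 'broken':    # two linear segments: slow, then fast
        return np.where(t < 0.5, e0 * (1 - 0.4 * t), e0 * 1.6 * (1 - t))
    raise ValueError(kind)

targets = e_standard_schedule('convex', T=500, e0=1.2)
```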
In order to compare the different convergence criteria, we set up a simple test function set for the experiment. It contains 7 basic test functions, whose information is shown in Table 3.
On these test functions, taking the linear diversity reduction standard of Figure 5 as an example, the PSO algorithm adopts the strategy of Algorithm 1; the resulting diversity changes are shown in Figure 6.
According to the diversity curves, the diversity of the original algorithm drops to its lowest value after about 50 iterations, after which the algorithm stagnates. This premature loss of diversity is one of the reasons why swarm intelligence algorithms cannot achieve better results.
After applying our strategy, diversity declines slowly along the curve we defined. In the early stage there is still a sharp decrease in diversity, caused by the rapid aggregation of particles at the start of the run; in the middle and late stages, our method achieves precise control.
We took these four convergence curves as input to the algorithm, using PSO as the experimental algorithm in 10 dimensions and repeating the experiment 30 times, and compared the optimization performance and population diversity produced by the different curves on the different functions. Advantageous test function results are marked in bold. The results are shown in Table 4.
According to the table, the convex curve attains the highest average diversity, followed by the straight line and then the broken line, while the concave curve has the lowest average diversity.
In terms of the optimization results, the convex curve keeps higher diversity in the early and middle stages, which guarantees wider exploration during that period. This ability helps it discover hidden global optima, which may explain its excellent performance on multimodal functions. For a unimodal function, appropriately accelerating convergence strengthens the algorithm's exploitation ability, so the straight line, whose diversity converges slightly faster, achieves better results. For any function, rapidly losing diversity is not a good strategy, as the concave curve demonstrates.

4. Experiment and Discussion

In this section, we apply the improved strategy to the two algorithms introduced in the previous section. CEC17, a test suite containing 29 functions, is selected as the benchmark. Among them, $f_1$ and $f_3$ are unimodal functions, $f_4$ to $f_{10}$ are simple multimodal functions, $f_{11}$ to $f_{20}$ are hybrid functions, and $f_{21}$ to $f_{30}$ are composition functions. The CEC17 functions are shown in Table 5.
The algorithms before and after the improvement are referred to as PSO and PSOG, and BBPSO and BBPSOG, respectively. The experiments were run in 10 and 30 dimensions and repeated 30 times. The parameter settings of the algorithms are shown in Table 6.
Considering that swarm intelligence algorithms are stochastic, we repeated each experiment 30 times and recorded the best value, the worst value, the mean, and the standard deviation. Advantageous test function results are marked in bold. The statistical results are shown in Table 7, Table 8, Table 9 and Table 10.
In summary, the algorithm using the population diversity control strategy achieves a better optimization effect than the original algorithm on most test functions, which indicates that our strategy of controlling population diversity according to dimension entropy, maintaining diversity in the early stage while promoting convergence in the late stage, is effective. From a vertical perspective, the improved algorithm shows an even more obvious optimization effect in the higher dimension, which indicates that our concept of dimension entropy is robust and adaptable in high dimensions.

5. Conclusions

This paper proposes a population diversity measure based on dimension entropy, which creatively combines dimension learning and entropy. Viewing the entropy of each dimension independently affords strong robustness, while the entropy method itself adapts well, and responds clearly, to changes in population size. On this basis, we proposed an update strategy that controls the population diversity of the swarm intelligence algorithm through the dimension entropy calculation. By manipulating redundant particles, the strategy makes diversity decline slowly under control, remedying the stagnation in local optima caused by the rapid loss of diversity in traditional swarm intelligence algorithms, while the controlled reduction of diversity in the late stage ensures convergence. Experiments on the 29 CEC17 test functions verified the effectiveness of this strategy.

Author Contributions

Conceptualization, H.K.; methodology, H.K.; project administration, F.B. and Y.S.; software, X.S.; validation, X.S. and Q.C.; visualization, F.B. and Q.C.; formal analysis, H.K.; investigation, Q.C.; resources, Y.S.; data curation, F.B.; writing—original draft preparation, F.B.; writing—review and editing, H.K. and X.S.; supervision, X.S.; funding acquisition, Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant numbers 61663046 and 1876166) and by the Open Foundation of the Key Laboratory of Software Engineering of Yunnan Province (grant numbers 2020SE308 and 2020SE309).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jin, Y.; Dexian, Z. Summaries on Some Novel Bionic Optimization Algorithms. Softw. Guide 2019, 18, 49–51.
  2. Limei, B. Application of Particle Swarm Optimization Algorithm in Engineering Optimization Design. Electron. Technol. Softw. Eng. 2016, 17, 157.
  3. Chuntian, S.; Yanyang, Z.; Shouming, H. Summary of the Application of Swarm Intelligence Algorithms in Image Segmentation. Comput. Eng. Appl. 2021, 1–17. Available online: http://kns.cnki.net/kcms/detail/11.2127.TP.20210126.1016.004.html (accessed on 29 January 2021).
  4. Chenbin, W.; Haiming, L.; Dong, L.; Zhengyang, W.; Lei, W. Application of improved particle swarm optimization algorithm to power system economic load dispatch. Power Syst. Prot. Control 2016, 44, 44–48.
  5. Mei, W.; Yunlong, Z.; Xiaoxian, H. A Survey of Swarm Intelligence. Comput. Eng. 2005, 22, 204–206.
  6. Karaboga, D.; Akay, B. A comparative study of Artificial Bee Colony algorithm. Appl. Math. Comput. 2009, 214, 108–132.
  7. Qinghong, W.; Ying, Z.; Zongmin, M. Overview of ant colony algorithms. Microcomput. Inf. 2011, 27, 1–2.
  8. Naigang, Z.; Jingshun, D. A review of particle swarm optimization algorithms. Sci. Technol. Innov. Guide 2015, 12, 216–217.
  9. De Jong, K.A. An Analysis of the Behavior of a Class of Genetic Adaptive Systems. Ph.D. Thesis, University of Michigan, Ann Arbor, MI, USA, 1975.
  10. Mauldin, M.L. Maintaining diversity in genetic search. In Proceedings of the 4th National Conference on Artificial Intelligence, Austin, TX, USA, 6–10 August 1984; pp. 247–250.
  11. Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley: Reading, MA, USA, 1989.
  12. Eshelman, L.J.; Schaffer, J.D. Preventing Premature Convergence in Genetic Algorithms by Preventing Incest. In Proceedings of the 4th International Conference on Genetic Algorithms and Their Applications, San Mateo, CA, USA, 13–16 July 1991; pp. 115–122.
  13. Eiben, A.E.; Schippers, C.A. On evolutionary exploration and exploitation. Fundam. Inform. 1998, 2, 35–50.
  14. Gu, Q.; Wang, Q.; Xiong, N.N.; Jiang, S.; Chen, L. Surrogate-assisted evolutionary algorithm for expensive constrained multi-objective discrete optimization problems. Complex Intell. Syst. 2021, 302, 1–20.
  15. Gupta, A.K.; Smith, K.G.; Shalley, C.E. The interplay between exploration and exploitation. Acad. Manag. J. 2006, 49, 693–706.
  16. Zhiping, T.; Kangshun, L.; Yi, W. Differential evolution with adaptive mutation strategy based on fitness landscape analysis. Inf. Sci. 2021, 549, 142–163.
  17. Folino, G.; Forestiero, A. Using Entropy for Evaluating Swarm Intelligence Algorithms. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), Granada, Spain, 12–14 May 2010.
  18. González, J.R.; Pelta, D.A.; Cruz, C.; Terrazas, G.; Krasnogor, N. (Eds.) Studies in computational intelligence. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2011; Volume 284.
  19. Muhammad, Y.; Khan, R.; Raja, M.A.Z.; Ullah, F.; Chaudhary, N.I.; He, Y. Design of Fractional Swarm Intelligent Computing With Entropy Evolution for Optimal Power Flow Problems. IEEE Access 2020, 8, 111401–111419.
  20. Lieberson, S. Measuring population diversity. Am. Sociol. Rev. 1969, 34, 850–862.
  21. Da Ronco, C.C.; Benini, E. GeDEA-II: A Simplex Crossover Based Evolutionary Algorithm Including the Genetic Diversity as Objective. Eng. Lett. 2013, 21, 23–35.
  22. Patil, G.P.; Taillie, C. Diversity as a concept and its measurement. J. Am. Stat. Assoc. 1982, 77, 548–561.
  23. Lu, A.; Ling, H.; Ding, Z. How Does the Heterogeneity of Members Affect the Evolution of Group Opinions? Discret. Dyn. Nat. Soc. 2021, 2021.
  24. Ursem, R.K. Diversity-guided evolutionary algorithms. In International Conference on Parallel Problem Solving from Nature; Springer: Berlin/Heidelberg, Germany, 2002; Volume 2439, pp. 462–471.
  25. Morrison, R.W.; de Jong, K.A. Measurement of population diversity. In International Conference on Artificial Evolution (Evolution Artificielle); Springer: Berlin/Heidelberg, Germany, 2002; Volume 2310, pp. 31–41.
  26. Herrera, F.; Lozano, M. Adaptation of genetic algorithm parameters based on fuzzy logic controllers. In Genetic Algorithms and Soft Computing, 1st ed.; Herrera, F., Verdegay, J.L., Eds.; Physica-Verlag: Heidelberg, Germany, 1996; Volume 8, pp. 95–125.
  27. Olorunda, O.; Engelbrecht, A.P. Measuring exploration/exploitation in particle swarms using swarm diversity. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 1128–1134.
  28. Barker, A.L.; Martin, W.N. Dynamics of a distance-based population diversity measure. In Proceedings of the 2000 Congress on Evolutionary Computation, CEC00 (Cat. No. 00TH8512), La Jolla, CA, USA, 16–19 July 2000; pp. 1002–1009.
  29. Gouvea, M.M., Jr.; Araujo, A.F.R. Diversity control based on population heterozygosity dynamics. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 3671–3678.
  30. Aliaga, J.; Crespo, G.; Proto, A.N. Squeezed states and Shannon entropy. Phys. Rev. A 1994, 49, 5146–5148.
  31. Collins, R.J.; Jefferson, D.R. Selection in massively parallel genetic algorithms. In Proceedings of the 4th International Conference on Genetic Algorithms and Their Applications, San Mateo, CA, USA, 13–16 July 1991; pp. 249–256.
  32. Corriveau, G.; Guilbault, R.; Tahan, A.; Sabourin, R. Review and Study of Genotypic Diversity Measures for Real-Coded Representations. IEEE Trans. Evol. Comput. 2012, 16, 695–710.
  33. Wineberg, M.; Oppacher, F. The underlying similarity of diversity measures used in evolutionary computation. In Genetic and Evolutionary Computation Conference; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2724, pp. 1493–1504.
  34. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  35. Rosca, J.P. Entropy-driven adaptive representation. In Proceedings of the Workshop on Genetic Programming: From Theory to Real-World Applications, Tahoe City, CA, USA, 9 July 1995; pp. 23–32.
  36. Chen, Z.; He, Z.; Zhang, C. Particle swarm optimization algorithm using dynamic neighborhood adjustment. Pattern Recognit. Artif. Intell. 2010, 23, 586–592.
  37. Zhang, A.; Sun, G.; Ren, J.; Li, X.; Wang, Z.; Jia, X. A Dynamic Neighborhood Learning-Based Gravitational Search Algorithm. IEEE Trans. Cybern. 2018, 48, 436–447.
  38. Wang, Y.; Lei, Y. Adaptive particle swarm optimization algorithm based on intuitionistic fuzzy population entropy. J. Comput. Appl. 2008, 11, 2871–2873.
  39. Baoye, S.; Zidong, W.; Lei, Z. An improved PSO algorithm for smooth path planning of mobile robots using continuous high-degree Bezier curve. Appl. Soft Comput. 2021, 100, 106960.
  40. Xu, G.; Cui, Q.; Shi, X.; Ge, H.; Zhan, Z.H.; Lee, H.P.; Liang, Y.; Tai, R.; Wu, C. Particle swarm optimization based on dimensional learning strategy. Swarm Evol. Comput. 2019, 45, 33–51.
  41. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95 International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995.
Figure 1. The complete population.
Figure 2. Schematic diagram of population change.
Figure 3. Results of population expansion experiments.
Figure 4. Dimensional robustness testing.
Figure 5. Four different kinds of base curves.
Figure 6. Comparison of diversity.
Table 1. Symbol summary.

| Symbol | Definition |
|---|---|
| $i$, $j$ | Particle indices |
| $k$ | Dimension number, $k \in \{1, 2, \ldots, n\}$ |
| $m$ | Interval number |
| $M$ | Total number of intervals |
| $n$ | Total number of dimensions |
| $x_{i,k}$ | Position of the $i$-th particle on the $k$-th dimension |
| $\bar{x}_k$ | Average value of the population on the $k$-th dimension |
| $p_{m,k}$ | Fraction of $N$ that belongs to interval $m$ on dimension $k$ |
Table 2. Rastrigin tests.

| Time | D_dp | D_rp | D_all | D_td | E_fit | E_fuzzy | E_dim |
|---|---|---|---|---|---|---|---|
| 1 | 0.782 | 0.692 | 0.997 | 0.983 | 0.967 | 0.779 | 0.977 |
| 2 | 0.925 | 0.855 | 1.000 | 0.980 | 0.950 | 0.889 | 0.984 |
| 3 | 0.708 | 0.889 | 0.997 | 0.973 | 0.947 | 0.840 | 0.976 |
| 4 | 0.715 | 0.642 | 0.997 | 0.978 | 0.942 | 0.758 | 0.977 |
| 5 | 0.821 | 0.714 | 0.998 | 0.986 | 0.961 | 0.747 | 0.989 |
| 6 | 0.931 | 0.852 | 0.992 | 0.942 | 0.951 | 0.875 | 0.979 |
| 7 | 0.949 | 0.618 | 1.000 | 0.990 | 0.978 | 0.907 | 0.980 |
| 8 | 0.925 | 0.844 | 0.998 | 0.989 | 0.975 | 0.907 | 0.980 |
| 9 | 0.799 | 0.699 | 0.996 | 0.976 | 0.952 | 0.805 | 0.984 |
| 10 | 0.818 | 0.712 | 0.937 | 0.982 | 0.953 | 0.807 | 0.972 |
| Mean | 0.837 | 0.752 | 0.991 | 0.978 | 0.958 | 0.831 | 0.980 |
| Rank | 5 | 7 | 1 | 3 | 4 | 6 | 2 |
Table 3. Test functions. U: unimodal; M: multimodal.

| Num | Function Name | Property | Best Value |
|---|---|---|---|
| 1 | Sphere's Function | U | 0 |
| 2 | Rosenbrock's Function | M | 0 |
| 3 | Rastrigin's Function | M | 0 |
| 4 | Griewank's Function | M | 0 |
| 5 | Ackley's Function | M | 0 |
| 6 | Schwefel's Problem 2.22 | M | 0 |
| 7 | Schwefel's Problem 1.2 | M | 0 |
Table 4. Comparison of the diversity and optimization results of the four curves.

| No. | Metric | Line | Convex Curve | Concave Curve | Broken Line |
|---|---|---|---|---|---|
| 1 | Min | 9.47 × 10^−57 | 3.26 × 10^−57 | 6.86 × 10^−58 | 6.69 × 10^−58 |
| | Max | 3.45 × 10^−51 | 6.43 × 10^−51 | 2.48 × 10^2 | 5.81 × 10^−48 |
| | Mean | 1.93 × 10^−52 | 3.51 × 10^−52 | 8.27 × 10^0 | 1.94 × 10^−49 |
| | DimEnt | 0.820 | 1.039 | 0.574 | 0.756 |
| 2 | Min | 8.83 × 10^−1 | 9.40 × 10^−1 | 9.00 × 10^−1 | 9.45 × 10^−1 |
| | Max | 4.80 × 10^1 | 2.26 × 10^2 | 1.69 × 10^5 | 6.99 × 10^2 |
| | Mean | 8.30 × 10^0 | 2.23 × 10^1 | 5.65 × 10^3 | 5.65 × 10^1 |
| | DimEnt | 0.831 | 1.019 | 0.596 | 0.744 |
| 3 | Min | 8.53 × 10^−14 | 9.95 × 10^−1 | 0.00 × 10^0 | 8.27 × 10^−12 |
| | Max | 6.81 × 10^0 | 6.96 × 10^0 | 2.22 × 10^2 | 4.97 × 10^0 |
| | Mean | 1.94 × 10^0 | 2.89 × 10^0 | 2.00 × 10^1 | 2.04 × 10^0 |
| | DimEnt | 0.834 | 1.039 | 0.626 | 0.771 |
| 4 | Min | 1.23 × 10^−2 | 1.23 × 10^−2 | 1.97 × 10^−2 | 1.23 × 10^−2 |
| | Max | 1.26 × 10^−1 | 1.43 × 10^−1 | 1.65 × 10^−1 | 1.68 × 10^1 |
| | Mean | 6.08 × 10^−2 | 5.83 × 10^−2 | 6.69 × 10^−2 | 6.15 × 10^−1 |
| | DimEnt | 0.792 | 1.012 | 0.540 | 0.750 |
| 5 | Min | 4.44 × 10^−15 | 4.44 × 10^−15 | 8.88 × 10^−16 | 4.44 × 10^−15 |
| | Max | 1.21 × 10^1 | 4.44 × 10^−15 | 1.42 × 10^1 | 4.44 × 10^−15 |
| | Mean | 4.03 × 10^−1 | 4.44 × 10^−15 | 8.08 × 10^−1 | 4.44 × 10^−15 |
| | DimEnt | 0.830 | 1.036 | 0.585 | 0.753 |
| 6 | Min | 3.98 × 10^−33 | 1.33 × 10^−32 | 1.09 × 10^−32 | 3.11 × 10^−32 |
| | Max | 1.20 × 10^−27 | 3.46 × 10^−29 | 1.32 × 10^1 | 4.30 × 10^−29 |
| | Mean | 4.34 × 10^−29 | 3.78 × 10^−30 | 1.71 × 10^0 | 5.57 × 10^−30 |
| | DimEnt | 0.818 | 1.047 | 0.664 | 0.751 |
| 7 | Min | 7.57 × 10^−1 | 6.30 × 10^−1 | 2.19 × 10^−2 | 1.60 × 10^−1 |
| | Max | 2.18 × 10^3 | 3.08 × 10^1 | 5.34 × 10^3 | 1.23 × 10^3 |
| | Mean | 9.04 × 10^1 | 7.58 × 10^0 | 5.60 × 10^2 | 5.27 × 10^1 |
| | DimEnt | 1.003 | 1.188 | 0.886 | 0.964 |
Table 5. CEC17 functions. U: unimodal; M: multimodal; H: hybrid; C: composition.

| Num | Function Name | Property | Best Value |
|---|---|---|---|
| F01 | Shifted and Rotated Bent Cigar Function | U | 100 |
| F03 | Shifted and Rotated Zakharov Function | U | 300 |
| F04 | Shifted and Rotated Rosenbrock's Function | M | 400 |
| F05 | Shifted and Rotated Rastrigin's Function | M | 500 |
| F06 | Shifted and Rotated Expanded Scaffer's F6 Function | M | 600 |
| F07 | Shifted and Rotated Lunacek Bi_Rastrigin Function | M | 700 |
| F08 | Shifted and Rotated Non-Continuous Rastrigin's Function | M | 800 |
| F09 | Shifted and Rotated Levy Function | M | 900 |
| F10 | Shifted and Rotated Schwefel's Function | M | 1000 |
| F11 | Hybrid Function 1 (N = 3) | H | 1100 |
| F12 | Hybrid Function 2 (N = 3) | H | 1200 |
| F13 | Hybrid Function 3 (N = 3) | H | 1300 |
| F14 | Hybrid Function 4 (N = 4) | H | 1400 |
| F15 | Hybrid Function 5 (N = 4) | H | 1500 |
| F16 | Hybrid Function 6 (N = 4) | H | 1600 |
| F17 | Hybrid Function 6 (N = 5) | H | 1700 |
| F18 | Hybrid Function 6 (N = 5) | H | 1800 |
| F19 | Hybrid Function 6 (N = 5) | H | 1900 |
| F20 | Hybrid Function 6 (N = 6) | H | 2000 |
| F21 | Composition Function 1 | C | 2100 |
| F22 | Composition Function 2 | C | 2200 |
| F23 | Composition Function 3 | C | 2300 |
| F24 | Composition Function 4 | C | 2400 |
| F25 | Composition Function 5 | C | 2500 |
| F26 | Composition Function 6 | C | 2600 |
| F27 | Composition Function 7 | C | 2700 |
| F28 | Composition Function 8 | C | 2800 |
| F29 | Composition Function 9 | C | 2900 |
| F30 | Composition Function 10 | C | 3000 |
Table 6. Parameter setting.

| Algorithm | Parameter Setting |
|---|---|
| PSO | ω = 0.5, c1 = c2 = 2 |
Table 7. PSO 10-dimensional improvement comparison.

| fun | PSO min | PSO max | PSO mean | PSO std | PSOG min | PSOG max | PSOG mean | PSOG std |
|---|---|---|---|---|---|---|---|---|
| f1 | 1.02 × 10^2 | 2.54 × 10^3 | 1.16 × 10^3 | 8.87 × 10^2 | 1.01 × 10^2 | 1.53 × 10^3 | 5.89 × 10^2 | 4.20 × 10^2 |
| f3 | 3.00 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 0.00 × 10^0 | 3.00 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 0.00 × 10^0 |
| f4 | 4.00 × 10^2 | 4.35 × 10^2 | 4.26 × 10^2 | 1.46 × 10^1 | 4.00 × 10^2 | 4.35 × 10^2 | 4.09 × 10^2 | 1.36 × 10^1 |
| f5 | 5.07 × 10^2 | 5.34 × 10^2 | 5.18 × 10^2 | 6.35 × 10^0 | 5.04 × 10^2 | 5.17 × 10^2 | 5.13 × 10^2 | 3.95 × 10^0 |
| f6 | 6.00 × 10^2 | 6.07 × 10^2 | 6.00 × 10^2 | 1.22 × 10^0 | 6.00 × 10^2 | 6.00 × 10^2 | 6.00 × 10^2 | 0.00 × 10^0 |
| f7 | 7.13 × 10^2 | 7.38 × 10^2 | 7.21 × 10^2 | 5.54 × 10^0 | 7.12 × 10^2 | 7.21 × 10^2 | 7.18 × 10^2 | 2.12 × 10^0 |
| f8 | 8.06 × 10^2 | 8.36 × 10^2 | 8.16 × 10^2 | 6.94 × 10^0 | 8.07 × 10^2 | 8.21 × 10^2 | 8.14 × 10^2 | 4.62 × 10^0 |
| f9 | 9.00 × 10^2 | 9.00 × 10^2 | 9.00 × 10^2 | 1.63 × 10^−2 | 9.00 × 10^2 | 9.00 × 10^2 | 9.00 × 10^2 | 0.00 × 10^0 |
| f10 | 1.13 × 10^3 | 1.85 × 10^3 | 1.48 × 10^3 | 1.96 × 10^2 | 1.13 × 10^3 | 1.45 × 10^3 | 1.31 × 10^3 | 9.45 × 10^1 |
| f11 | 1.10 × 10^3 | 1.14 × 10^3 | 1.12 × 10^3 | 8.54 × 10^0 | 1.10 × 10^3 | 1.12 × 10^3 | 1.11 × 10^3 | 4.58 × 10^0 |
| f12 | 2.05 × 10^3 | 4.35 × 10^5 | 2.75 × 10^4 | 7.79 × 10^4 | 1.88 × 10^3 | 2.13 × 10^4 | 9.67 × 10^3 | 6.94 × 10^3 |
| f13 | 1.34 × 10^3 | 9.10 × 10^3 | 3.39 × 10^3 | 2.16 × 10^3 | 1.31 × 10^3 | 3.43 × 10^3 | 2.06 × 10^3 | 6.48 × 10^2 |
| f14 | 1.43 × 10^3 | 1.77 × 10^3 | 1.49 × 10^3 | 6.32 × 10^1 | 1.44 × 10^3 | 1.47 × 10^3 | 1.45 × 10^3 | 7.40 × 10^0 |
| f15 | 1.51 × 10^3 | 1.76 × 10^3 | 1.56 × 10^3 | 6.06 × 10^1 | 1.51 × 10^3 | 1.53 × 10^3 | 1.52 × 10^3 | 5.95 × 10^0 |
| f16 | 1.60 × 10^3 | 1.86 × 10^3 | 1.72 × 10^3 | 6.14 × 10^1 | 1.60 × 10^3 | 1.72 × 10^3 | 1.64 × 10^3 | 5.27 × 10^1 |
| f17 | 1.73 × 10^3 | 1.78 × 10^3 | 1.75 × 10^3 | 1.39 × 10^1 | 1.71 × 10^3 | 1.75 × 10^3 | 1.73 × 10^3 | 8.66 × 10^0 |
| f18 | 1.84 × 10^3 | 1.29 × 10^4 | 5.07 × 10^3 | 2.91 × 10^3 | 1.93 × 10^3 | 5.67 × 10^3 | 3.66 × 10^3 | 1.04 × 10^3 |
| f19 | 1.90 × 10^3 | 1.96 × 10^3 | 1.92 × 10^3 | 1.07 × 10^1 | 1.91 × 10^3 | 1.92 × 10^3 | 1.91 × 10^3 | 3.93 × 10^0 |
| f20 | 2.01 × 10^3 | 2.20 × 10^3 | 2.07 × 10^3 | 5.31 × 10^1 | 2.00 × 10^3 | 2.04 × 10^3 | 2.03 × 10^3 | 7.21 × 10^0 |
| f21 | 2.20 × 10^3 | 2.20 × 10^3 | 2.20 × 10^3 | 2.67 × 10^−13 | 2.20 × 10^3 | 2.20 × 10^3 | 2.20 × 10^3 | 2.09 × 10^−13 |
| f22 | 2.30 × 10^3 | 2.30 × 10^3 | 2.30 × 10^3 | 2.39 × 10^−13 | 2.21 × 10^3 | 2.30 × 10^3 | 2.30 × 10^3 | 2.06 × 10^1 |
| f23 | 2.40 × 10^3 | 2.82 × 10^3 | 2.71 × 10^3 | 7.80 × 10^1 | 2.40 × 10^3 | 2.67 × 10^3 | 2.62 × 10^3 | 9.54 × 10^1 |
| f24 | 2.50 × 10^3 | 2.80 × 10^3 | 2.61 × 10^3 | 5.79 × 10^1 | 2.50 × 10^3 | 2.60 × 10^3 | 2.59 × 10^3 | 3.08 × 10^1 |
| f25 | 2.89 × 10^3 | 2.95 × 10^3 | 2.94 × 10^3 | 2.04 × 10^1 | 2.90 × 10^3 | 2.95 × 10^3 | 2.93 × 10^3 | 2.32 × 10^1 |
| f26 | 2.80 × 10^3 | 3.49 × 10^3 | 2.94 × 10^3 | 2.04 × 10^2 | 2.60 × 10^3 | 2.90 × 10^3 | 2.83 × 10^3 | 7.33 × 10^1 |
| f27 | 3.10 × 10^3 | 3.50 × 10^3 | 3.29 × 10^3 | 1.15 × 10^2 | 3.10 × 10^3 | 3.23 × 10^3 | 3.16 × 10^3 | 4.45 × 10^1 |
| f28 | 3.10 × 10^3 | 3.23 × 10^3 | 3.15 × 10^3 | 2.52 × 10^1 | 3.10 × 10^3 | 3.15 × 10^3 | 3.13 × 10^3 | 2.40 × 10^1 |
| f29 | 3.15 × 10^3 | 3.30 × 10^3 | 3.18 × 10^3 | 3.45 × 10^1 | 3.14 × 10^3 | 3.17 × 10^3 | 3.16 × 10^3 | 1.05 × 10^1 |
| f30 | 3.49 × 10^3 | 3.85 × 10^4 | 9.59 × 10^3 | 7.45 × 10^3 | 3.71 × 10^3 | 7.49 × 10^3 | 5.22 × 10^3 | 1.13 × 10^3 |
| count | 0 | | | | 24 | | | |
Table 8. PSO 30-dimensional improvement comparison.

| fun | PSO min | PSO max | PSO mean | PSO std | PSOG min | PSOG max | PSOG mean | PSOG std |
|---|---|---|---|---|---|---|---|---|
| f1 | 1.00 × 10^2 | 1.22 × 10^9 | 1.37 × 10^8 | 3.26 × 10^8 | 1.00 × 10^2 | 1.01 × 10^2 | 1.00 × 10^2 | 2.95 × 10^−1 |
| f3 | 3.05 × 10^2 | 3.93 × 10^2 | 3.34 × 10^2 | 2.29 × 10^1 | 3.09 × 10^2 | 3.53 × 10^2 | 3.31 × 10^2 | 1.51 × 10^1 |
| f4 | 4.00 × 10^2 | 6.38 × 10^2 | 4.89 × 10^2 | 5.06 × 10^1 | 4.04 × 10^2 | 4.71 × 10^2 | 4.66 × 10^2 | 1.48 × 10^1 |
| f5 | 5.73 × 10^2 | 6.71 × 10^2 | 6.05 × 10^2 | 2.42 × 10^1 | 5.64 × 10^2 | 5.95 × 10^2 | 5.80 × 10^2 | 9.27 × 10^0 |
| f6 | 6.00 × 10^2 | 6.23 × 10^2 | 6.08 × 10^2 | 5.93 × 10^0 | 6.00 × 10^2 | 6.08 × 10^2 | 6.04 × 10^2 | 2.85 × 10^0 |
| f7 | 7.68 × 10^2 | 8.46 × 10^2 | 8.10 × 10^2 | 2.12 × 10^1 | 7.77 × 10^2 | 8.22 × 10^2 | 7.99 × 10^2 | 1.40 × 10^1 |
| f8 | 8.67 × 10^2 | 9.89 × 10^2 | 9.18 × 10^2 | 2.93 × 10^1 | 8.75 × 10^2 | 9.39 × 10^2 | 9.09 × 10^2 | 1.90 × 10^1 |
| f9 | 9.08 × 10^2 | 4.85 × 10^3 | 2.61 × 10^3 | 1.02 × 10^3 | 9.31 × 10^2 | 2.88 × 10^3 | 1.87 × 10^3 | 6.15 × 10^2 |
| f10 | 2.80 × 10^3 | 5.20 × 10^3 | 4.07 × 10^3 | 6.36 × 10^2 | 2.96 × 10^3 | 4.15 × 10^3 | 3.63 × 10^3 | 3.18 × 10^2 |
| f11 | 1.18 × 10^3 | 1.43 × 10^3 | 1.25 × 10^3 | 5.91 × 10^1 | 1.17 × 10^3 | 1.27 × 10^3 | 1.23 × 10^3 | 3.01 × 10^1 |
| f12 | 2.64 × 10^3 | 3.35 × 10^8 | 1.12 × 10^7 | 6.12 × 10^7 | 3.63 × 10^3 | 1.50 × 10^4 | 7.33 × 10^3 | 3.51 × 10^2 |
| f13 | 1.35 × 10^3 | 1.02 × 10^4 | 2.05 × 10^3 | 1.80 × 10^3 | 1.41 × 10^3 | 2.71 × 10^3 | 1.85 × 10^3 | 4.06 × 10^2 |
| f14 | 1.48 × 10^3 | 1.97 × 10^3 | 1.68 × 10^3 | 1.12 × 10^2 | 1.53 × 10^3 | 1.71 × 10^3 | 1.64 × 10^3 | 5.51 × 10^1 |
| f15 | 1.53 × 10^3 | 1.92 × 10^3 | 1.59 × 10^3 | 7.02 × 10^1 | 1.52 × 10^3 | 1.61 × 10^3 | 1.57 × 10^3 | 2.55 × 10^1 |
| f16 | 1.86 × 10^3 | 2.91 × 10^3 | 2.35 × 10^3 | 2.68 × 10^2 | 1.96 × 10^3 | 2.44 × 10^3 | 2.23 × 10^3 | 1.67 × 10^2 |
| f17 | 1.80 × 10^3 | 2.52 × 10^3 | 2.09 × 10^3 | 1.83 × 10^2 | 1.79 × 10^3 | 2.02 × 10^3 | 1.89 × 10^3 | 7.44 × 10^1 |
| f18 | 5.96 × 10^3 | 1.26 × 10^5 | 3.88 × 10^4 | 2.60 × 10^4 | 9.55 × 10^3 | 4.17 × 10^4 | 2.33 × 10^4 | 1.11 × 10^4 |
| f19 | 1.98 × 10^3 | 2.93 × 10^4 | 6.50 × 10^3 | 6.01 × 10^3 | 1.95 × 10^3 | 4.41 × 10^3 | 2.77 × 10^3 | 8.62 × 10^2 |
| f20 | 2.20 × 10^3 | 2.71 × 10^3 | 2.42 × 10^3 | 1.08 × 10^2 | 2.13 × 10^3 | 2.44 × 10^3 | 2.31 × 10^3 | 1.05 × 10^2 |
| f21 | 2.20 × 10^3 | 2.20 × 10^3 | 2.20 × 10^3 | 2.25 × 10^−1 | 2.25 × 10^3 | 2.25 × 10^3 | 2.25 × 10^3 | 4.67 × 10^−13 |
| f22 | 2.30 × 10^3 | 2.30 × 10^3 | 2.30 × 10^3 | 2.24 × 10^−1 | 2.35 × 10^3 | 2.35 × 10^3 | 2.35 × 10^3 | 4.55 × 10^−13 |
| f23 | 3.04 × 10^3 | 4.20 × 10^3 | 3.54 × 10^3 | 3.24 × 10^2 | 2.83 × 10^3 | 2.88 × 10^3 | 2.87 × 10^3 | 1.30 × 10^1 |
| f24 | 2.60 × 10^3 | 2.61 × 10^3 | 2.60 × 10^3 | 1.55 × 10^0 | 2.60 × 10^3 | 2.60 × 10^3 | 2.60 × 10^3 | 4.43 × 10^−13 |
| f25 | 2.90 × 10^3 | 3.05 × 10^3 | 2.94 × 10^3 | 4.21 × 10^1 | 2.90 × 10^3 | 2.97 × 10^3 | 2.94 × 10^3 | 2.79 × 10^1 |
| f26 | 2.80 × 10^3 | 2.90 × 10^3 | 2.80 × 10^3 | 1.83 × 10^1 | 2.80 × 10^3 | 2.80 × 10^3 | 2.80 × 10^3 | 5.00 × 10^−13 |
| f27 | 3.78 × 10^3 | 5.06 × 10^3 | 4.39 × 10^3 | 3.41 × 10^2 | 3.38 × 10^3 | 3.59 × 10^3 | 3.51 × 10^3 | 5.25 × 10^1 |
| f28 | 3.17 × 10^3 | 3.95 × 10^3 | 3.31 × 10^3 | 1.39 × 10^2 | 3.17 × 10^3 | 3.28 × 10^3 | 3.24 × 10^3 | 3.29 × 10^1 |
| f29 | 3.35 × 10^3 | 4.11 × 10^3 | 3.59 × 10^3 | 2.12 × 10^2 | 3.29 × 10^3 | 3.65 × 10^3 | 3.49 × 10^3 | 1.10 × 10^2 |
| f30 | 4.19 × 10^3 | 1.88 × 10^5 | 1.60 × 10^4 | 3.39 × 10^4 | 4.44 × 10^3 | 1.60 × 10^4 | 9.79 × 10^3 | 3.31 × 10^3 |
| count | 2 | | | | 24 | | | |
Table 9. BBPSO 10-dimensional improvement comparison.

| fun | BBPSO min | BBPSO max | BBPSO mean | BBPSO std | BBPSOG min | BBPSOG max | BBPSOG mean | BBPSOG std |
|---|---|---|---|---|---|---|---|---|
| f1 | 1.28 × 10^2 | 2.54 × 10^3 | 1.28 × 10^3 | 6.85 × 10^2 | 1.50 × 10^2 | 2.12 × 10^3 | 1.21 × 10^3 | 5.45 × 10^2 |
| f3 | 3.00 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 0.00 × 10^0 | 3.00 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 0.00 × 10^0 |
| f4 | 4.00 × 10^2 | 5.21 × 10^2 | 4.30 × 10^2 | 2.23 × 10^1 | 4.00 × 10^2 | 4.35 × 10^2 | 4.17 × 10^2 | 1.65 × 10^1 |
| f5 | 5.04 × 10^2 | 5.27 × 10^2 | 5.13 × 10^2 | 5.83 × 10^0 | 5.05 × 10^2 | 5.12 × 10^2 | 5.09 × 10^2 | 2.08 × 10^0 |
| f6 | 6.00 × 10^2 | 6.01 × 10^2 | 6.00 × 10^2 | 1.76 × 10^−1 | 6.00 × 10^2 | 6.00 × 10^2 | 6.00 × 10^2 | 3.69 × 10^−14 |
| f7 | 7.08 × 10^2 | 7.26 × 10^2 | 7.18 × 10^2 | 4.38 × 10^0 | 7.13 × 10^2 | 7.22 × 10^2 | 7.18 × 10^2 | 2.75 × 10^0 |
| f8 | 8.05 × 10^2 | 8.22 × 10^2 | 8.12 × 10^2 | 4.40 × 10^0 | 8.05 × 10^2 | 8.13 × 10^2 | 8.09 × 10^2 | 2.63 × 10^0 |
| f9 | 9.00 × 10^2 | 9.02 × 10^2 | 9.00 × 10^2 | 4.54 × 10^−1 | 9.00 × 10^2 | 9.00 × 10^2 | 9.00 × 10^2 | 0.00 × 10^0 |
| f10 | 1.03 × 10^3 | 1.77 × 10^3 | 1.34 × 10^3 | 2.04 × 10^2 | 1.04 × 10^3 | 1.35 × 10^3 | 1.17 × 10^3 | 1.10 × 10^2 |
| f11 | 1.10 × 10^3 | 1.12 × 10^3 | 1.11 × 10^3 | 5.17 × 10^0 | 1.10 × 10^3 | 1.11 × 10^3 | 1.10 × 10^3 | 1.91 × 10^0 |
| f12 | 2.40 × 10^3 | 4.36 × 10^5 | 3.59 × 10^4 | 7.77 × 10^4 | 3.97 × 10^3 | 2.58 × 10^4 | 1.19 × 10^4 | 5.72 × 10^3 |
| f13 | 1.31 × 10^3 | 9.37 × 10^3 | 4.41 × 10^3 | 2.86 × 10^3 | 1.32 × 10^3 | 4.04 × 10^3 | 2.27 × 10^3 | 9.62 × 10^2 |
| f14 | 1.43 × 10^3 | 1.54 × 10^3 | 1.46 × 10^3 | 2.87 × 10^1 | 1.43 × 10^3 | 1.44 × 10^3 | 1.43 × 10^3 | 5.87 × 10^0 |
| f15 | 1.51 × 10^3 | 1.69 × 10^3 | 1.59 × 10^3 | 5.23 × 10^1 | 1.51 × 10^3 | 1.59 × 10^3 | 1.54 × 10^3 | 2.34 × 10^1 |
| f16 | 1.60 × 10^3 | 1.81 × 10^3 | 1.68 × 10^3 | 7.19 × 10^1 | 1.60 × 10^3 | 1.64 × 10^3 | 1.61 × 10^3 | 1.26 × 10^1 |
| f17 | 1.71 × 10^3 | 1.85 × 10^3 | 1.75 × 10^3 | 3.17 × 10^1 | 1.72 × 10^3 | 1.74 × 10^3 | 1.73 × 10^3 | 5.67 × 10^0 |
| f18 | 1.86 × 10^3 | 2.34 × 10^4 | 6.48 × 10^3 | 5.58 × 10^3 | 2.09 × 10^3 | 5.37 × 10^3 | 3.03 × 10^3 | 8.68 × 10^2 |
| f19 | 1.90 × 10^3 | 2.12 × 10^3 | 1.94 × 10^3 | 5.16 × 10^1 | 1.90 × 10^3 | 1.92 × 10^3 | 1.91 × 10^3 | 4.37 × 10^0 |
| f20 | 2.00 × 10^3 | 2.08 × 10^3 | 2.03 × 10^3 | 1.79 × 10^1 | 2.00 × 10^3 | 2.04 × 10^3 | 2.03 × 10^3 | 1.05 × 10^1 |
| f21 | 2.20 × 10^3 | 2.20 × 10^3 | 2.20 × 10^3 | 2.53 × 10^−13 | 2.25 × 10^3 | 2.27 × 10^3 | 2.26 × 10^3 | 7.21 × 10^0 |
| f22 | 2.21 × 10^3 | 2.30 × 10^3 | 2.30 × 10^3 | 1.70 × 10^1 | 2.24 × 10^3 | 2.39 × 10^3 | 2.35 × 10^3 | 4.83 × 10^1 |
| f23 | 2.65 × 10^3 | 2.71 × 10^3 | 2.68 × 10^3 | 1.31 × 10^1 | 2.65 × 10^3 | 2.67 × 10^3 | 2.67 × 10^3 | 5.07 × 10^0 |
| f24 | 2.50 × 10^3 | 2.82 × 10^3 | 2.74 × 10^3 | 1.21 × 10^2 | 2.50 × 10^3 | 2.81 × 10^3 | 2.72 × 10^3 | 1.39 × 10^2 |
| f25 | 2.89 × 10^3 | 2.97 × 10^3 | 2.93 × 10^3 | 2.54 × 10^1 | 2.89 × 10^3 | 2.94 × 10^3 | 2.91 × 10^3 | 1.86 × 10^1 |
| f26 | 2.90 × 10^3 | 3.62 × 10^3 | 3.21 × 10^3 | 2.42 × 10^2 | 2.60 × 10^3 | 3.37 × 10^3 | 3.00 × 10^3 | 1.92 × 10^2 |
| f27 | 3.14 × 10^3 | 3.31 × 10^3 | 3.17 × 10^3 | 4.30 × 10^1 | 3.12 × 10^3 | 3.15 × 10^3 | 3.14 × 10^3 | 5.87 × 10^0 |
| f28 | 3.10 × 10^3 | 3.37 × 10^3 | 3.18 × 10^3 | 6.51 × 10^1 | 3.10 × 10^3 | 3.15 × 10^3 | 3.13 × 10^3 | 2.50 × 10^1 |
| f29 | 3.14 × 10^3 | 3.29 × 10^3 | 3.18 × 10^3 | 3.12 × 10^1 | 3.14 × 10^3 | 3.17 × 10^3 | 3.16 × 10^3 | 7.45 × 10^0 |
| f30 | 3.97 × 10^3 | 2.32 × 10^5 | 1.58 × 10^4 | 4.10 × 10^4 | 4.66 × 10^3 | 1.03 × 10^4 | 7.68 × 10^3 | 1.52 × 10^3 |
| count | 2 | | | | 22 | | | |
Table 10. BBPSO 30-dimensional improvement comparison.

| fun | BBPSO min | BBPSO max | BBPSO mean | BBPSO std | BBPSOG min | BBPSOG max | BBPSOG mean | BBPSOG std |
|---|---|---|---|---|---|---|---|---|
| f1 | 1.00 × 10^2 | 5.10 × 10^9 | 1.58 × 10^9 | 1.52 × 10^9 | 1.00 × 10^2 | 2.91 × 10^3 | 1.22 × 10^3 | 1.41 × 10^3 |
| f3 | 9.72 × 10^3 | 3.63 × 10^4 | 2.08 × 10^4 | 6.19 × 10^3 | 6.85 × 10^3 | 2.74 × 10^4 | 2.05 × 10^4 | 5.65 × 10^3 |
| f4 | 4.06 × 10^2 | 9.89 × 10^2 | 6.01 × 10^2 | 1.48 × 10^2 | 4.63 × 10^2 | 4.76 × 10^2 | 4.69 × 10^2 | 4.71 × 10^0 |
| f5 | 5.52 × 10^2 | 7.29 × 10^2 | 6.31 × 10^2 | 3.50 × 10^1 | 5.51 × 10^2 | 5.91 × 10^2 | 5.76 × 10^2 | 1.04 × 10^1 |
| f6 | 6.00 × 10^2 | 6.26 × 10^2 | 6.06 × 10^2 | 5.35 × 10^0 | 6.00 × 10^2 | 6.01 × 10^2 | 6.00 × 10^2 | 2.04 × 10^−1 |
| f7 | 7.72 × 10^2 | 9.96 × 10^2 | 8.53 × 10^2 | 5.20 × 10^1 | 7.81 × 10^2 | 8.34 × 10^2 | 8.16 × 10^2 | 1.54 × 10^1 |
| f8 | 8.69 × 10^2 | 1.02 × 10^3 | 9.28 × 10^2 | 3.52 × 10^1 | 8.53 × 10^2 | 8.85 × 10^2 | 8.71 × 10^2 | 1.00 × 10^1 |
| f9 | 1.39 × 10^3 | 1.06 × 10^4 | 2.93 × 10^3 | 1.90 × 10^3 | 9.16 × 10^2 | 1.22 × 10^3 | 1.03 × 10^3 | 9.76 × 10^1 |
| f10 | 2.63 × 10^3 | 5.80 × 10^3 | 4.24 × 10^3 | 7.55 × 10^2 | 2.76 × 10^3 | 3.82 × 10^3 | 3.40 × 10^3 | 2.87 × 10^2 |
| f11 | 1.21 × 10^3 | 1.97 × 10^3 | 1.43 × 10^3 | 1.49 × 10^2 | 1.18 × 10^3 | 1.30 × 10^3 | 1.25 × 10^3 | 3.68 × 10^1 |
| f12 | 2.25 × 10^4 | 5.70 × 10^8 | 5.68 × 10^7 | 1.31 × 10^8 | 6.28 × 10^3 | 9.46 × 10^4 | 3.70 × 10^4 | 2.61 × 10^4 |
| f13 | 1.92 × 10^3 | 9.56 × 10^5 | 4.38 × 10^4 | 1.73 × 10^5 | 3.58 × 10^3 | 1.03 × 10^4 | 6.82 × 10^3 | 1.86 × 10^3 |
| f14 | 1.46 × 10^3 | 1.38 × 10^6 | 4.90 × 10^4 | 2.51 × 10^5 | 1.50 × 10^3 | 1.64 × 10^3 | 1.55 × 10^3 | 4.16 × 10^1 |
| f15 | 1.88 × 10^3 | 3.36 × 10^4 | 9.47 × 10^3 | 8.20 × 10^3 | 1.83 × 10^3 | 3.81 × 10^3 | 2.37 × 10^3 | 4.74 × 10^2 |
| f16 | 1.83 × 10^3 | 3.09 × 10^3 | 2.53 × 10^3 | 3.53 × 10^2 | 1.97 × 10^3 | 2.53 × 10^3 | 2.28 × 10^3 | 1.73 × 10^2 |
| f17 | 1.89 × 10^3 | 2.51 × 10^3 | 2.07 × 10^3 | 1.41 × 10^2 | 1.87 × 10^3 | 2.06 × 10^3 | 1.95 × 10^3 | 5.21 × 10^1 |
| f18 | 2.58 × 10^4 | 7.94 × 10^5 | 1.65 × 10^5 | 1.55 × 10^5 | 2.09 × 10^4 | 1.25 × 10^5 | 7.43 × 10^4 | 2.93 × 10^4 |
| f19 | 2.02 × 10^3 | 4.68 × 10^4 | 1.18 × 10^4 | 1.19 × 10^4 | 2.00 × 10^3 | 1.23 × 10^4 | 5.57 × 10^3 | 3.89 × 10^3 |
| f20 | 2.09 × 10^3 | 2.57 × 10^3 | 2.30 × 10^3 | 1.22 × 10^2 | 2.09 × 10^3 | 2.29 × 10^3 | 2.21 × 10^3 | 6.24 × 10^1 |
| f21 | 2.10 × 10^3 | 2.82 × 10^3 | 2.25 × 10^3 | 1.33 × 10^2 | 2.25 × 10^3 | 2.25 × 10^3 | 2.25 × 10^3 | 4.30 × 10^−13 |
| f22 | 2.26 × 10^3 | 2.40 × 10^3 | 2.31 × 10^3 | 3.30 × 10^1 | 2.35 × 10^3 | 2.35 × 10^3 | 2.35 × 10^3 | 4.55 × 10^−13 |
| f23 | 2.88 × 10^3 | 3.06 × 10^3 | 2.93 × 10^3 | 3.57 × 10^1 | 2.85 × 10^3 | 2.88 × 10^3 | 2.86 × 10^3 | 8.17 × 10^0 |
| f24 | 3.43 × 10^3 | 3.55 × 10^3 | 3.47 × 10^3 | 3.19 × 10^1 | 3.38 × 10^3 | 3.41 × 10^3 | 3.40 × 10^3 | 9.35 × 10^0 |
| f25 | 2.90 × 10^3 | 3.25 × 10^3 | 3.00 × 10^3 | 8.65 × 10^1 | 2.91 × 10^3 | 2.98 × 10^3 | 2.93 × 10^3 | 2.68 × 10^1 |
| f26 | 5.26 × 10^3 | 6.77 × 10^3 | 5.93 × 10^3 | 3.56 × 10^2 | 3.54 × 10^3 | 5.50 × 10^3 | 5.13 × 10^3 | 5.39 × 10^2 |
| f27 | 3.44 × 10^3 | 3.79 × 10^3 | 3.58 × 10^3 | 8.82 × 10^1 | 3.41 × 10^3 | 3.50 × 10^3 | 3.46 × 10^3 | 2.89 × 10^1 |
| f28 | 3.28 × 10^3 | 5.56 × 10^3 | 4.34 × 10^3 | 9.07 × 10^2 | 3.18 × 10^3 | 5.16 × 10^3 | 3.80 × 10^3 | 9.12 × 10^2 |
| f29 | 3.41 × 10^3 | 4.13 × 10^3 | 3.70 × 10^3 | 1.82 × 10^2 | 3.31 × 10^3 | 3.57 × 10^3 | 3.46 × 10^3 | 7.66 × 10^1 |
| f30 | 7.84 × 10^3 | 9.17 × 10^5 | 1.52 × 10^5 | 2.42 × 10^5 | 5.25 × 10^3 | 4.65 × 10^4 | 2.07 × 10^4 | 1.24 × 10^4 |
| count | 1 | | | | 27 | | | |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
