Article

An Adaptive Covariance Scaling Estimation of Distribution Algorithm

1 School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China
3 Department of Electrical and Electronic Engineering, Hanyang University, Ansan 15588, Korea
4 Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung 413310, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(24), 3207; https://doi.org/10.3390/math9243207
Submission received: 8 November 2021 / Revised: 30 November 2021 / Accepted: 7 December 2021 / Published: 11 December 2021
(This article belongs to the Topic Soft Computing)

Abstract

Optimization problems are ubiquitous in every field, and they are becoming more and more complex, which greatly challenges the effectiveness of existing optimization methods. To solve increasingly complicated optimization problems effectively, this paper proposes an adaptive covariance scaling estimation of distribution algorithm (ACSEDA) based on the Gaussian distribution model. Unlike traditional EDAs, which estimate the covariance and the mean vector based on the same selected promising individuals, ACSEDA calculates the covariance according to an enlarged number of promising individuals (compared with those for the mean vector). To alleviate the sensitivity of the parameters in promising individual selection, this paper further devises an adaptive promising individual selection strategy for the estimation of the mean vector and an adaptive covariance scaling strategy for the covariance estimation. These two adaptive strategies dynamically adjust the associated numbers of promising individuals as the evolution continues. In addition, we devise a cross-generation individual selection strategy that forms the parent population used to estimate the probability distribution by combining the offspring sampled in the last generation with those sampled in the current generation. With the above mechanisms, ACSEDA is expected to balance intensification and diversification of the search process to explore and exploit the solution space and thus achieve promising performance. To verify the effectiveness of ACSEDA, extensive experiments are conducted on 30 widely used benchmark optimization problems with different dimension sizes. Experimental results demonstrate that the proposed ACSEDA is significantly superior to several state-of-the-art EDA variants, and it preserves good scalability in solving optimization problems.

1. Introduction

Optimization problems are ubiquitous in daily life and industrial engineering [1,2], such as protein structure prediction [3], community detection [4], control of pollutant spreading [5] and multi-compartment electric vehicle routing [6]. These optimization problems often exhibit characteristics such as non-convexity, discontinuity, and non-differentiability [7,8,9,10], which greatly challenge the effectiveness of traditional gradient-based optimization algorithms or even make them infeasible [11]. In particular, in the era of big data and the Internet of Things, optimization problems are becoming more and more complex due to the increase in dimensionality [12,13,14]. For instance, some unimodal problems become multimodal with many local optima [15], while some multimodal problems become more complicated with an increasing number of wide and flat local areas [16,17,18]. Such complicated optimization problems are becoming more and more common nowadays, and thus, it is urgent to develop effective optimization algorithms to solve them, so as to promote the development of related fields.
As a kind of gradient-free meta-heuristic algorithm, the estimation of distribution algorithm (EDA) maintains a population of individuals to iteratively search the solution space, with each individual representing a feasible solution [19]. During each generation, it selects a number of promising individuals to estimate the probability distribution of the population, and it then randomly samples a new population of solutions based on the estimated probability distribution [20,21]. Due to the randomness in sampling the offspring, EDA preserves high diversity and strong global search ability [22]. Therefore, many researchers have paid extensive attention to developing effective EDAs. Consequently, not only have EDAs been applied to various optimization problems, such as multimodal optimization problems [23] and multi-objective optimization problems [24], but they have also been employed to solve many real-world problems, such as multi-policy insurance investment planning [25] and multi-source heterogeneous user-generated content-driven interactive optimization [22].
In the literature, most EDAs utilize the Gaussian distribution model to estimate the probability distribution of the population, which is then adopted to sample new solutions [26,27]. Based on whether the correlation between variables is considered during the estimation, the current Gaussian estimation of distribution algorithms (GEDAs) are mainly divided into two categories [20,27,28], namely univariate GEDAs (UGEDAs) [29,30,31] and multivariate GEDAs (MGEDAs) [32,33,34,35,36].
UGEDAs [30] assume that all variables are independent of each other. Therefore, the probability distribution of each variable is estimated individually. The most advantageous property of UGEDAs is that the computational cost of distribution estimation and offspring sampling is low [31]. However, their effectiveness deteriorates drastically when confronted with optimization problems with interacting variables [29].
Different from UGEDAs, MGEDAs take the correlation among variables into consideration [34], which is realized by estimating the covariance among all variables. With the covariance matrix, MGEDAs can capture the structure of the optimization problem and thus implicitly offer useful information to direct the search of the population [35]. Due to this advantage, MGEDAs achieve much better performance than UGEDAs, especially on problems with many interacting variables [20]. As a result, MGEDAs have been extensively researched in the literature [37,38,39]. However, this superiority of MGEDAs comes at the sacrifice of efficiency, as calculating the covariance among all variables is very time-consuming [32].
In MGEDAs, the parameters (i.e., the mean vector and the covariance matrix) of the probability distribution are usually estimated based on a certain number of promising individuals [28]. Specifically, the mean vector controls the center of the offspring to be sampled, while the covariance takes charge of the range of the offspring around the center. In other words, the mean vector has a crucial influence on the convergence of the population to the optimal areas, while the covariance affects the population diversity [40]. Therefore, to maintain high search diversity for EDAs, many researchers have designed variance or covariance scaling methods [37,41,42,43] to enlarge the sampling range of the estimated probability distribution. However, two shortcomings remain. On the one hand, in most existing GEDAs, the basic covariance (variance) is still estimated on the same promising individuals selected for the estimation of the mean vector; since most scaling methods simply multiply the estimated covariance (variance) by a scaling factor, the learned structure of the optimization problem remains unchanged after the scaling. On the other hand, most existing covariance (variance) scaling methods enlarge the estimated covariance (variance) to the same degree in all directions.
To remedy the above shortcomings, this paper devises an adaptive covariance scaling method for MGEDAs, leading to an adaptive covariance scaling estimation of distribution algorithm (ACSEDA). Specifically, the mean vector of the probability distribution is estimated in the same way as in existing MGEDAs, namely based on a certain number of promising individuals. For the estimation of the covariance, however, ACSEDA first adaptively enlarges the number of promising individuals and then calculates the covariance on the basis of this enlarged set. In this way, the sampling range of the estimated probability distribution is enlarged, which is helpful for sampling diversified offspring. As a result, the search diversity of EDA is amplified, and the chance of falling into local areas is reduced.
As a whole, the main contributions of this paper are summarized as follows:
(1)
An adaptive covariance scaling method is proposed to adaptively enlarge the sampling range of the estimated probability distribution. Different from most existing covariance scaling methods, we scale the covariance by calculating it based on an amplified number of promising individuals. As a result, not only is the learned structure of the optimization problem improved, but the covariance is also scaled differently in different directions. In this way, the sampled offspring are expected to be not only of high quality but also diversified across different areas.
(2)
An adaptive selection of promising individuals for the estimation of the mean vector is further designed by adaptively decreasing the selection ratio, namely the proportion of selected promising individuals in the whole population. In this way, the estimated mean vector, namely the center of the offspring to be sampled, gradually approaches the promising areas covered by the current population. Therefore, the search process is gradually biased toward exploiting the solution space to refine the solution accuracy. It should be mentioned, however, that this bias is not greedy and does not come at a serious sacrifice of population diversity, thanks to the aforementioned covariance scaling technique.
(3)
A cross-generation individual selection scheme is devised to form the parent population used to estimate the probability distribution. Instead of directly utilizing the sampled offspring as the parent population for the next generation, as most existing MGEDAs do, the proposed ACSEDA combines the offspring sampled in the last generation with those sampled in the current generation and selects the top half of the best individuals to form the parent population for the next generation. In this way, the formed parent population is neither too crowded nor too scattered, and thus, the estimated probability distribution is of high quality and samples slightly diversified offspring that approach the optimal areas.
(4)
With the above mechanisms, the proposed ACSEDA is expected to compromise intensification and diversification of the search process well to explore and exploit the solution space and could thus achieve promising performance.
To verify the effectiveness of the proposed ACSEDA, this paper conducts extensive experiments on the widely used CEC 2014 [44] benchmark optimization problems with different dimension sizes, comparing ACSEDA to seven state-of-the-art GEDAs. In addition, deep investigations into the components of ACSEDA are conducted to observe what contributes to its promising performance in solving optimization problems.
The remainder of this study is organized as follows: Section 2 reviews related works on GEDAs; then, the proposed ACSEDA is elucidated in detail in Section 3; in Section 4, extensive experiments are conducted to verify the effectiveness of the developed ACSEDA; last, in Section 5, conclusions are presented.

2. Related Work

2.1. Basic GEDA

The overall framework of a general GEDA is outlined in Algorithm 1. As a whole, the basic principle of GEDA is to iteratively build a Gaussian probability distribution model, based on a certain number of promising individuals selected from the current population, and then sample new individuals based on the built probability model for the next generation [21].
Algorithm 1: The Procedure of GEDA
Input: population size PS, selection ratio sr;
1: Set g = 0, and randomly initialize the population Pg;
2: Obtain the global best solution Gbest;
3: Repeat
4:   Select s = sr × PS promising solutions Sg from Pg;
5:   Build a Gaussian probability distribution model Gg based on Sg;
6:   Randomly generate a new population Pg+1 by sampling from Gg;
7:   Update the global best solution Gbest;
8:   g = g + 1;
9: Until the stopping criterion is met.
Output: the global best solution Gbest;
Specifically, as shown in Algorithm 1, given that the population size is PS and the selection ratio is sr (the proportion of selected promising individuals in the whole population), the number of selected promising individuals in each generation is s = sr × PS. After PS individuals are initialized randomly and evaluated accordingly, as shown in Line 1, the global best solution found so far is obtained (as shown in Line 2). Subsequently, it comes to the main iteration of the algorithm. First, a set S of s promising individuals is selected from the current population (Line 4). Then, a Gaussian probability distribution model is estimated based on the selected individuals (Line 5). After that, PS new individuals are randomly sampled based on the estimated Gaussian distribution to form a new population (Line 6). Subsequently, the newly generated individuals are evaluated, and the global best solution is updated. The above process proceeds repeatedly until the termination condition is met. At last, the found global best solution is output.
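This loop can be summarized in a short Python sketch (a minimal illustration rather than the authors' implementation; the fitness function evaluate, the search bounds, and minimization are assumptions):

```python
import numpy as np

def geda(evaluate, D, PS=100, sr=0.35, max_fes=10000, lb=-100.0, ub=100.0):
    """Minimal sketch of the generic GEDA loop in Algorithm 1 (minimization)."""
    pop = np.random.uniform(lb, ub, (PS, D))            # Line 1: random initialization
    fit = np.array([evaluate(x) for x in pop])
    fes = PS
    best = np.argmin(fit)
    gbest, gbest_fit = pop[best].copy(), fit[best]      # Line 2: global best solution
    while fes < max_fes:                                # Lines 3-9: main loop
        s = int(sr * PS)
        S = pop[np.argsort(fit)[:s]]                    # Line 4: select s promising solutions
        mu = S.mean(axis=0)                             # Line 5: build the Gaussian model
        C = np.cov(S, rowvar=False)
        pop = np.random.multivariate_normal(mu, C, PS)  # Line 6: sample a new population
        fit = np.array([evaluate(x) for x in pop])
        fes += PS
        best = np.argmin(fit)
        if fit[best] < gbest_fit:                       # Line 7: update the global best
            gbest, gbest_fit = pop[best].copy(), fit[best]
    return gbest, gbest_fit
```

For example, geda(lambda x: np.sum(x**2), D=10) would minimize the sphere function under these assumptions.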
In GEDAs, the key component is the way the probability distribution is estimated, and different manners of estimation result in different kinds of GEDAs. In the literature, based on whether the linkage between variables is considered, existing GEDAs are mainly classified into two categories [20,27,28], namely univariate GEDAs (UGEDAs) [29,30,31,45,46] and multivariate GEDAs (MGEDAs) [27,32,33,34,35,36].
(1)
UGEDAs: In UGEDAs [29,45,46], each variable is considered to be separable and independent of the others. As a result, the probability distribution of each variable can be estimated separately, and the joint probability distribution of the D variables is computed as follows:
$$P(x_1, x_2, \ldots, x_D) = \prod_{i=1}^{D} P(x_i) \tag{1}$$
where P(xi) is the probability distribution of the ith variable, which is estimated as:
$$P(x_i) = \frac{1}{\sigma_i \sqrt{2\pi}} \exp\!\left(-\frac{(x_i - \mu_i)^2}{2\sigma_i^2}\right) \tag{2}$$
where μi and σi are the mean value and the standard deviation of the ith variable, respectively, which are calculated as follows:
$$\mu_i = \frac{1}{s} \sum_{j=1}^{s} S_j^i \tag{3}$$
$$\sigma_i = \sqrt{\frac{1}{s-1} \sum_{j=1}^{s} \left(S_j^i - \mu_i\right)^2} \tag{4}$$
where S is the set of selected promising individuals, $S_j^i$ is the ith dimension of the jth promising individual in S, and D denotes the dimension size of the optimization problem. Based on the estimated probability distribution of each variable, a new solution can be constructed by randomly sampling a new value for each variable separately from the associated probability distribution.
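As a sketch of how Equations (1)-(4) translate into code, assuming S is an s×D NumPy array holding the selected promising individuals (names are ours):

```python
import numpy as np

def ugeda_sample(S, n_offspring):
    """Estimate each variable independently (Eqs. (3) and (4)) and sample new solutions."""
    mu = S.mean(axis=0)             # Eq. (3): per-dimension mean
    sigma = S.std(axis=0, ddof=1)   # Eq. (4): per-dimension standard deviation
    # Per Eq. (1), the joint distribution factorizes, so each variable is
    # sampled separately from its own N(mu_i, sigma_i^2).
    return np.random.normal(mu, sigma, size=(n_offspring, S.shape[1]))
```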
(2)
MGEDAs: In MGEDAs [33,34,35,36], the correlations between variables are taken into consideration to estimate the probability distribution. Consequently, different from UGEDAs, the probability distribution of D variables in MGEDAs is estimated together, and the joint probability distribution of D variables is computed as follows:
$$P(x_1, x_2, \ldots, x_D) = \frac{1}{\sqrt{(2\pi)^D \lvert C \rvert}} \exp\!\left(-\frac{1}{2}(X - \mu)^T C^{-1} (X - \mu)\right) \tag{5}$$
where μ is the mean vector of the multivariate Gaussian distribution, which is calculated by Equation (3), and C is the covariance matrix, which is calculated as follows:
$$C = \frac{1}{s-1} (S - \mu)(S - \mu)^T \tag{6}$$
Based on the estimated joint probability distribution, a new solution is constructed by jointly sampling values for all variables at random from the multivariate Gaussian distribution model. In general, to simplify the sampling of new solutions, the modified version presented below is usually utilized to generate the offspring in most MGEDAs [32,35]:
$$X = \mu + A \Lambda Z, \quad Z \sim N(0, 1) \tag{7}$$
$$C = A \Lambda^2 A^T \tag{8}$$
where A is the eigenvector matrix of C, and Λ is the diagonal matrix whose entries are the square roots of the eigenvalues of C. Z is a real-valued vector, each component of which is sampled independently from the standard normal distribution.
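A minimal sketch of this sampling route (Equations (7) and (8)), assuming mu and C have already been estimated:

```python
import numpy as np

def mgeda_sample(mu, C, n_offspring):
    """Sample offspring via the eigendecomposition form of Eqs. (7) and (8)."""
    eigvals, A = np.linalg.eigh(C)           # Eq. (8): C = A * Lambda^2 * A^T
    lam = np.sqrt(np.maximum(eigvals, 0.0))  # Lambda: square roots of the eigenvalues
    Z = np.random.standard_normal((n_offspring, mu.size))
    return mu + (Z * lam) @ A.T              # Eq. (7): X = mu + A * Lambda * Z, row-wise
```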
With respect to computational cost, UGEDAs are less time-consuming, while MGEDAs incur a higher cost due to the calculation of the covariance matrix [33,34,35]. In terms of optimization performance, however, MGEDAs perform much better, especially on problems with many interacting variables, while UGEDAs only present promising performance on separable optimization problems [29,30,47]. This is because MGEDAs can capture the interaction between variables and thus evolve the population more effectively than UGEDAs [27,37,48].

2.2. Recent Advance of GEDAs

During the optimization, one crucial challenge that most existing GEDAs encounter is the rapid shrinkage of the variance (or the covariance) [20,42,43], which quickly narrows the sampling range of the probability distribution. This may cause a quick loss of search diversity and thus result in premature convergence and falling into local areas. To remedy this shortcoming, researchers have devoted plenty of attention to designing novel mechanisms to improve the quality of the probability distribution in GEDAs [38,49,50,51].
In [51], the authors demonstrated empirically that maintaining high diversity is crucial for EDAs to achieve satisfactory performance. Based on these findings, they further developed a novel three-step method that combines clustering methods with EDAs to search for the optimal areas with high diversity. To prevent premature convergence in EDAs, Pošík [52] directly multiplied the estimated variance of the Gaussian distribution by a constant factor in each generation to enlarge the sampling range. In [43], Grahl et al. proposed a correlation-triggered adaptive variance scaling strategy to reduce the risk of premature convergence and then embedded it into the iterated density-estimation evolutionary algorithm (IDEA). Specifically, similar to [52], the proposed method multiplies the estimated variance by a factor. The difference lies in that this factor is not constant but dynamically adjusted during the evolution, based on whether the global best solution is improved or not. Moreover, the adjustment is triggered based on the correlation between the ranks of the normal density and the fitness of the selected solutions. To trigger the dynamic adjustment of the scaling factor more properly, Bosman et al. [42] proposed a novel indicator, named the Standard-Deviation Ratio (SDR), to trigger the adjustment adaptively. Specifically, based on this indicator, the variance scaling is triggered only when the improvements are found to be far away from the mean vector. In [53], a cross-entropy-based adaptive variance scaling method was proposed. In this method, the difference between the sampled population and the prediction of the probabilistic model is first measured, and the scaling factor on the variance is then computed by minimizing the cross-entropy between the two distributions.
Different from the above variance scaling methods, which directly multiply the estimated variance by a scaling factor, the authors of [49] proposed a novel probability density estimator based on the new mean vector obtained by an anticipated mean shift strategy. Once the new mean vector becomes better, the variance estimator adaptively enlarges the variance without using an explicit factor, but rather by using the new, better mean vector to calculate the variance. Furthermore, they developed a reflecting sampling strategy to further improve the search efficiency of GEDA. Together with these two schemes, a new GEDA variant named EDAVERS [49] was developed. Subsequently, a novel anisotropic adaptive variance scaling (AAVS) method was proposed in [41], and a new GEDA named AAVS-EDA was designed. Specifically, in this algorithm, a topology-based detection method was devised to detect the landscape characteristics of the optimization problems, and then, based on the captured characteristics, the variances along different eigen-directions are anisotropically scaled. In this way, the variances and the main search direction of GEDA can be adjusted simultaneously. Recently, Liang et al. proposed a new GEDA variant, named EDA2 [37], to improve the optimization performance of EDA. Instead of only utilizing promising individuals in the current generation to estimate the Gaussian model, this algorithm stores high-quality individuals generated in previous generations in an archive and adopts these individuals to collaboratively estimate the covariance of the Gaussian model. In this manner, valuable historical evolution information can be integrated into the estimated model.
The above-mentioned MGEDAs usually adopt the full-rank covariance matrix to estimate the covariance. Since the calculation of the full-rank covariance matrix is very time-consuming, the computational complexity of most MGEDAs is usually high. To alleviate this shortcoming, researchers have turned to seeking efficient covariance matrix adaptation (CMA) techniques for EDA [54]. As for the covariance matrix, a direct and simple way to speed up its computation is to reduce its degrees of freedom. To this end, Ros and Hansen [55] proposed to only update the elements on the diagonal of the covariance matrix, leading to a de-randomized evolution strategy named sep-CMA-ES. This method reduces the time and space complexity of updating the covariance matrix from quadratic to linear. In [56], the authors devised an adaptive diagonal decoding scheme to accelerate covariance matrix adaptation. Further, ref. [57] developed a matrix-free CMA strategy by employing combinations of difference vectors between archived individuals and random vectors generated by the univariate Gaussian distribution along the directions of past shifts of the mean vector. In [58], Beyer and Sendhoff proposed a matrix adaptation evolution strategy (MA-ES) by removing one evolution path in the calculation of the covariance matrix, so that the covariance update is no longer needed. In [59], Li and Zhang first designed a rank-one evolution strategy using a single principal search direction, which is of linear complexity; they then developed a rank-m evolution strategy employing multiple search directions. In particular, these two evolution strategies mainly adopt principal search directions to seek the optimal low-rank approximation to the covariance matrix. In [60], He et al. put forward a search direction adaptation evolution strategy (SDA-ES) with linear time and space complexity. Specifically, this algorithm first models the covariance matrix with an identity matrix along with multiple search directions. Then, it uses a heuristic similar to principal component analysis to update the search directions.
Besides the advances of GEDAs in covariance (or variance) scaling and adaptation, some researchers have also attempted to design new ways to shift the mean vector of the Gaussian distribution model. For instance, in [61], Bosman et al. proposed an anticipated mean shift to update the mean vector and then used the updated mean vector to calculate the variance. In addition, some researchers have attempted to adopt other distribution models, instead of the Gaussian distribution model, to estimate the probability distribution. For example, in [62], a probabilistic graphical model was designed to consider the dependencies between multivariate variables. Specifically, in this algorithm, a number of subgraphs, each with a smaller number of variables, are estimated separately in parallel to capture the dependencies among the variables within each subgraph. Then, each estimated graph model associated with a subgraph samples new values for the associated variables separately. In [39], the authors utilized the Boltzmann distribution to build the probability distribution model in EDA, leading to BUMDA. In particular, the distribution parameters are derived from the analytical minimization of the Kullback-Leibler divergence. In [50], the authors devised a novel multiple sub-model maintenance technique for EDA, leading to a new EDA variant named maintaining and processing sub-models (MAPS). Specifically, this algorithm maintains multiple sub-models to detect promising areas.
Since EDAs utilize the estimated probability distribution model to sample new solutions, they generally lack the subtle refinement needed to improve solution accuracy [63]. To fill this gap, local search methods are commonly combined with EDAs to refine the found promising solutions [38,49,50]. For instance, in [64], a simulated annealing (SA)-based local search operator was incorporated into EDA to balance exploration and exploitation in searching the solution space. Specifically, the SA-based local search is probabilistically executed on some good solutions to improve their accuracy. To improve solution accuracy, Zhou et al. [38] developed cheap and expensive local search methods for EDA, leading to a new EDA variant named EDA/LS. In particular, this EDA variant adopts a modified univariate histogram probabilistic model to sample a part of the individuals and then utilizes a cheap local search method to sample the rest. Besides, it also employs an expensive local search method to refine the found promising solutions. Along this direction, an extension of EDA/LS, named EDA/LS-MS, was developed in [65] by introducing a mean shift strategy to replace the cheap local search method in EDA/LS to refine some good parent solutions.
Though a lot of remarkable GEDA variants have emerged and shown promising performance in solving optimization problems, they still encounter limitations, such as falling into local areas and premature convergence. In particular, most existing GEDAs estimate the variance (or covariance) based on the same selected promising individuals used for the estimation of the mean vector. Although various variance (or covariance) scaling methods [37,43,49,53,66] and covariance matrix adaptation methods [56,57,58,59,60] have been proposed to improve the sampling range of the estimated probability distribution model, on the one hand, the structure of the optimization problem captured by most existing GEDA variants remains unchanged after the scaling; on the other hand, most existing variance (covariance) scaling methods scale the estimated variance (covariance) equally in all directions. This is not beneficial for effectively sampling new promising individuals.
To alleviate the above concern, this paper devises an adaptive covariance scaling EDA that adaptively enlarges the number of promising individuals (compared with those for the mean vector estimation) used to estimate the covariance. In this way, not only is the structure of the optimization problem captured by the algorithm improved, but the covariance is also scaled differently in different directions.

3. Proposed ACSEDA

To improve the effectiveness of EDA in solving optimization problems, this paper proposes an adaptive covariance scaling EDA (ACSEDA) by introducing more promising individuals to calculate the covariance. Furthermore, to alleviate the sensitivity of the proposed ACSEDA to parameters, this paper further devises two adaptive strategies for the two key parameters in ACSEDA. The components of ACSEDA are elucidated as follows.

3.1. Adaptive Covariance Scaling

In traditional GEDAs [27], both the mean vector and the covariance of the multivariate Gaussian distribution model are estimated based on the selected promising individuals. Then, on the basis of the estimated probability distribution model, the offspring are sampled randomly. In particular, the mean vector has a great influence on the convergence speed of GEDAs toward promising areas, while the covariance mainly takes charge of the sampling range of the distribution model, which plays a significant role in maintaining high diversity.
During the evolution, the population gradually approaches the promising areas, and the selected promising individuals used for probability distribution estimation aggregate as well. In this situation, the estimated covariance becomes smaller and smaller. Once the estimated mean vector falls into a local area, the sampled offspring can hardly escape from it. As a consequence, the population falls into local areas, and premature convergence occurs. Such a predicament is encountered by many existing GEDAs [20].
To alleviate this issue, this paper proposes a covariance scaling strategy that enlarges the covariance by introducing more promising individuals beyond those selected for the estimation of the mean vector. Specifically, given that the population size is PS, s = sr × PS promising individuals are first selected from the population to estimate the mean vector of the probability distribution, where sr is the selection ratio, defined as the proportion of selected individuals in the population. Then, different from most existing GEDAs [67], which estimate the covariance based on the same s individuals, this paper selects sc = cs × PS promising individuals to estimate the covariance, where cs is the covariance scaling parameter, namely the proportion of the population used for the covariance estimation, which is usually larger than sr. In this way, more promising individuals participate in the estimation of the covariance, and thus, the covariance is enlarged.
As shown in Figure 1, after the population is sorted from best to worst with respect to fitness, the s best individuals are selected to form the promising individual set S, and the mean vector μ is then estimated based on S according to Equation (3). Subsequently, different from most existing GEDAs, the proposed covariance scaling method selects the sc best individuals to form the promising individual set SC for the covariance estimation. It should be mentioned that cs is usually larger than sr, which indicates that SC is larger than S; in fact, S is a subset of SC. Then, instead of using S to calculate the covariance according to Equation (6), the proposed method utilizes SC, namely an enlarged individual set, to estimate the covariance as follows:
$$C = \frac{1}{sc - 1} (SC - \mu)(SC - \mu)^T \tag{9}$$
As shown in Figure 1, since more promising individuals participate in the estimation of the covariance, the sampling range of the estimated probability distribution model is enlarged. On the one hand, the offspring sampled from this model are more diversified, which is very beneficial for the population to avoid falling into local areas. On the other hand, the model is also likely to generate more promising individuals close to the promising areas, and thus, convergence can be strengthened to some extent.
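The following sketch contrasts the proposed estimation with the traditional one: the mean vector comes from the s best individuals, while the covariance of Equation (9) comes from the enlarged set SC (a minimal illustration; the function and variable names are ours):

```python
import numpy as np

def scaled_gaussian_model(pop, fit, sr, cs, PS):
    """Estimate mu on the s best individuals and C on the enlarged sc best set (Eq. (9))."""
    order = np.argsort(fit)             # sort from best to worst (minimization)
    s, sc = int(sr * PS), int(cs * PS)  # cs > sr, so S is a subset of SC
    mu = pop[order[:s]].mean(axis=0)    # Eq. (3): mean vector from S
    SC = pop[order[:sc]]
    diff = SC - mu                      # Eq. (9): covariance from SC, centered at mu
    C = diff.T @ diff / (sc - 1)
    return mu, C
```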
Remark 1.
It deserves attention that, different from existing covariance scaling methods, which scale the covariance directly with a fixed scalar, the proposed covariance scaling method estimates the covariance based on an enlarged number of promising individuals compared with those for the estimation of the mean vector. This brings the following two benefits for the estimated probability distribution model:
(1) 
By introducing more promising individuals, the proposed scaling method enlarges the covariance differently in different directions between variables, and thus it implicitly takes the differences between variables into consideration. In contrast, existing scaling methods [37] enlarge the covariance with the same scalar and hence do not consider the differences between variables.
(2) 
The proposed scaling method is likely to better capture the structure of the optimization problem with respect to the correlations between variables by introducing more promising individuals. Nevertheless, the structure captured by existing scaling methods remains unchanged after the scaling.
Taking a closer look at the parameter cs in the proposed covariance scaling method, we find that neither a too-large cs nor a too-small cs is suitable for helping EDA achieve promising performance. On the one hand, a too-large cs may lead to a too-large sampling range of the probability distribution, which may result in overly diversified offspring being sampled from the distribution model. In particular, in the early stage of the evolution, a large cs may be beneficial for maintaining a large sampling range and thus sampling diversified offspring, which helps EDA explore the solution space in very different directions. In the late stage, however, such a setting of cs is not appropriate because it prevents the population from extensively exploiting the found promising areas to refine the solution accuracy. On the other hand, a too-small cs may bring a too-small sampling range of the distribution model, which may sample concentrated offspring. Though this is desirable in the late stage of the evolution, it is not suitable throughout the whole evolution, because it may increase the risk of EDA falling into local areas. Consequently, based on the above analysis, cs should not be fixed but dynamically adjusted during the evolution process.
To this end, this paper further designs an adaptive strategy for cs as follows:
$$cs = 1 - (1 - sr_{\min}) \left(\frac{FEs}{FEs_{\max}}\right)^2 \tag{10}$$
where srmin denotes the lower bound of the selection ratio sr used for the estimation of the mean vector of the multivariate Gaussian distribution model, FEsmax is the maximum number of fitness evaluations, and FEs denotes the number of fitness evaluations consumed up to the current generation.
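In code, the schedule of Equation (10) is a one-liner (a sketch; sr_min = 0.05 follows the setting given in Section 3.2):

```python
def covariance_scaling(fes, max_fes, sr_min=0.05):
    """Eq. (10): cs decays from 1.0 to sr_min, slowly at first and quickly near the end."""
    return 1.0 - (1.0 - sr_min) * (fes / max_fes) ** 2
```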
From Equation (10), we can see that cs decreases from 1.0 to srmin as the evolution continues. Specifically, in the early stage, most individuals in the population are used to estimate the covariance. This brings two benefits for EDA: (1) the sampling range of the probability distribution model is large, and thus the sampled offspring are diversified and scattered dispersedly to explore the solution space; this is not only beneficial for the population to find more promising areas but also very profitable for avoiding local areas. (2) The captured structure of the optimization problem tends to be global and accurate with a large number of promising individuals. In the early stage, the individuals are usually scattered diversely in the solution space, so the captured structure of the optimization problem is usually global. To accurately capture the correlations between variables globally, a large number of promising individuals is usually needed. Consequently, in the early stage, a large cs is helpful for capturing an accurate structure of the optimization problem.
Conversely, in the late stage, Equation (10) shows that cs becomes smaller and smaller. This leads to a narrow sampling range of the probability distribution, and therefore, the sampled offspring are concentrated around the mean vector. In this situation, the population exploits the found promising areas, and thus, the accuracy of the solution can be improved.
To summarize, with the above adaptive covariance scaling scheme, the proposed EDA variant is expected to obtain a promising balance between diversification and intensification of the population. Therefore, the algorithm could explore and exploit the complicated solution space properly to obtain promising performance in solving complicated optimization problems.

3.2. Adaptive Promising Individuals Selection

In GEDAs, the number (s = sr × PS) of promising individuals selected for the estimation of the mean vector has a significant influence on the convergence speed of EDAs. As shown in Figure 1, the mean vector mainly controls the center of the sampled offspring. A too-large sr may lead to a large number of promising individuals being used to estimate the mean vector; as a result, the estimated mean vector may be too far away from the promising areas. In the early stage of the evolution, this is beneficial for EDAs to maintain high search diversity. Nevertheless, in the late stage, a large sr may slow down the convergence of the population toward high-quality solutions. On the contrary, a too-small sr may result in the estimated mean vector being too close to the promising areas. This is suitable in the late stage of the evolution for exploiting the found promising areas. However, keeping sr small during the whole evolution may lead to premature convergence, especially when the selected promising individuals all fall into local areas.
Based on the above analysis, sr should be dynamically adjusted during the evolution. To this end, this paper devises a simple adaptive strategy for sr as follows:
$$sr = sr_{\max} - (sr_{\max} - sr_{\min}) \left(\frac{FEs}{FEs_{\max}}\right)^{0.1} \tag{11}$$
where srmax and srmin represent the maximum and minimum selection ratios, which accordingly determine the maximum number (smax = srmax × PS) and the minimum number (smin = srmin × PS) of selected promising individuals. In this paper, we set them to 0.35 and 0.05, respectively.
From Equation (11), we can see that sr is large in the early stage and then decreases gradually as the evolution goes on. This indicates that during the evolution, the mean vector of the estimated probability distribution moves closer and closer to the promising areas. In this way, the population gradually tends to exploit the found promising areas.
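Equation (11) translates just as directly (a sketch; the bounds are the settings given above):

```python
def selection_ratio(fes, max_fes, sr_max=0.35, sr_min=0.05):
    """Eq. (11): sr drops quickly in the early stage and levels off toward sr_min."""
    return sr_max - (sr_max - sr_min) * (fes / max_fes) ** 0.1
```

Note the exponents: with 0.1 in Equation (11), most of the decrease of sr happens early, whereas with 2 in Equation (10), most of the decrease of cs happens late. This asymmetry is exactly what Remark 2 below builds on.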
Remark 2.
In particular, comparing Equation (11) with Equation (10), as shown in Figure 2, the following findings can be obtained:
(1) 
sr decreases dramatically in the early stage and mildly in the late stage, while cs decreases mildly in the early stage and dramatically in the late stage. This matches the expectation that the proposed ACSEDA should explore the solution space in the early stage without a serious loss of convergence, while it should exploit the search space in the late stage without a serious sacrifice of search diversity. For one thing, in the early stage, sr decreases rapidly, and thus the estimated mean vector is close to the promising areas where the current population lies. It should be mentioned that, in this situation, the sampling diversity of the estimated probability distribution does not decline, because the estimated covariance is large due to the large cs. On the contrary, the sampling quality of the estimated probability distribution can be improved due to the high-quality mean vector, and thus the population can effectively explore the search space to find promising areas in the early stage. In the late stage, sr decreases mildly, while cs descends quickly. In this situation, the quality of the mean vector is gradually promoted as it approaches the promising areas more and more closely. At the same time, the sampling range of the estimated distribution gradually shrinks because the covariance is estimated on a reduced number of promising individuals. Therefore, in the late stage, ACSEDA gradually biases toward exploiting the found promising areas to improve the solution quality. However, such a bias does not come at a serious sacrifice of search diversity, because of the proposed covariance scaling technique.
(2) 
cs is always larger than sr during the evolution, and the gap between them gradually shrinks as the evolution goes on. This indicates that, compared with traditional GEDAs, the covariance is always amplified during the evolution, so that the estimated probability distribution can sample diversified, high-quality offspring around the estimated mean vector. In addition, the gradually narrowed difference between sr and cs indicates that the scaling of the covariance gradually declines. This implies that the proposed ACSEDA gradually concentrates on exploiting the solution space to refine the solution accuracy.

3.3. Cross-Generation Individual Selection for Parent Population

In traditional EDAs [28], the offspring are directly utilized as the parent population for the next generation to estimate the probability distribution model. Since the quality of the sampled offspring is uncertain, the quality of the estimated probability distribution model may not improve, or may even degrade, compared with that of the last generation. This may slow down the convergence of the population to promising areas. To remedy this shortcoming, some EDA variants [68] combine the offspring and the parent population and then select the best PS individuals as the parent population for the next generation. However, such selection is too greedy and thus may lead to premature convergence and falling into local areas.
To alleviate the above predicament, and to further strike a promising compromise between convergence and diversity, this paper devises a cross-generation individual selection strategy for the parent population.
As Figure 3 shows, this paper combines the offspring sampled in the last generation with those sampled in the current generation and then selects the best PS individuals as the parent population for the next generation to estimate the probability distribution model. In this way, the historical information from the last generation can be utilized to build the probability distribution model.
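A minimal sketch of this selection step (assumed names; minimization):

```python
import numpy as np

def cross_generation_select(prev_off, prev_fit, curr_off, curr_fit, PS):
    """Merge two consecutive offspring populations and keep the best PS individuals."""
    pool = np.vstack([prev_off, curr_off])
    pool_fit = np.concatenate([prev_fit, curr_fit])
    keep = np.argsort(pool_fit)[:PS]    # the best PS out of the 2*PS candidates
    return pool[keep], pool_fit[keep]
```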
Remark 3.
Different from existing individual selection methods [26,48,69], the proposed cross-generation individual selection strategy takes advantage of the sampled offspring in the two consecutive generations to select individuals for the parent population in the next generation. Such a selection strategy brings the following benefits to ACSEDA:
(1) 
Individuals in the parent population are diversified. The offspring sampled in the last generation usually differ considerably from those sampled in the current generation. Combining them to select individuals is less likely to generate crowded individuals for the parent population. As a result, the estimated probability distribution model is less likely to fall into local areas and, at the same time, has a wide sampling range to generate diversified offspring. In this way, ACSEDA can preserve high search diversity during the evolution.
(2) 
With this strategy, the latest promising individuals from the last generation are integrated with those in the current offspring. As a consequence, individuals in the parent population are not only diversified but also of high quality. Hence, the estimated probability distribution is of high quality and generates more promising offspring. By this means, the convergence of ACSEDA can be guaranteed.
(3) 
With this selection strategy, ACSEDA is further expected to preserve a good compromise between exploration and exploitation to search the solution space effectively. Experiments conducted in Section 4 will demonstrate the effectiveness of the proposed cross-generation individual selection strategy.

3.4. Overall Procedure of ACSEDA

Combining the above three schemes, the proposed ACSEDA is outlined in Algorithm 2. Specifically, after the initialization of the population (Line 2), the algorithm enters the main evolution loop (Lines 5–16). In the main loop, it first executes the proposed adaptive promising individual selection strategy for the estimation of the mean vector (Lines 6 and 7). Then, the proposed adaptive covariance scaling strategy is applied to estimate the covariance of the probability distribution model (Lines 8 and 9). Subsequently, the offspring are randomly sampled based on the estimated probability distribution model (Line 10). After that, the proposed cross-generation individual selection strategy selects individuals for the parent population of the next generation (Line 12). At last, a local search method is conducted on the global best solution to refine its accuracy (Line 14).
Algorithm 2: The Procedure of ACSEDA
Input: population size PS;
1: Set FEs = 0;
2: Initialize PS individuals randomly and evaluate their fitness;
3: FEs = FEs + PS;
4: Obtain the global best solution Gbest and store the current population;
5: While (FEs < FEsmax)
6:     Calculate the selection ratio sr according to Equation (11);
7:     Select sr × PS promising solutions from the population and calculate the mean vector μ using Equation (3);
8:     Calculate the covariance scaling parameter cs according to Equation (10);
9:     Estimate the covariance matrix C according to Equation (9);
10:   Randomly sample PS new individuals based on the estimated multivariate Gaussian model, evaluate their fitness and store them;
11:   FEs = FEs + PS;
12:   Combine the offspring in the last generation and the offspring in the current generation to select PS better individuals to form the parent population for the next generation;
13:   Update the global best solution Gbest;
14:   Execute local search 2 times on Gbest;
15:   FEs = FEs + 2;
16:   End While
Output: the global best solution Gbest;
In Algorithm 2, it should be noticed that a local search strategy is additionally adopted to improve the accuracy of the global best solution. This is because EDAs are probability-distribution-model-based optimization algorithms and consequently usually lack strong local exploitation [26,38,70]. Therefore, in the literature [23,38,70], local search methods generally accompany EDAs to improve the solution quality. Hence, like most existing EDA variants [23,26,38,70,71], ACSEDA also adopts a local search method to refine the global best solution, as shown in Line 14.
For simplicity, and to keep consistent with the probability distribution model in ACSEDA, this paper applies a univariate Gaussian distribution with a small variance, set to 1.0 × 10−4, to execute the local search on the global best solution. In addition, to save computational resources, the local search is executed on the global best solution only two times per generation, as shown in Line 14.
As a whole, with the proposed three main techniques and the local search method, ACSEDA is expected to explore and exploit the solution space properly to locate the optima of optimization problems.
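Putting the pieces together, the following compact Python sketch mirrors Algorithm 2 (a simplified illustration under our assumptions: minimization, box bounds, and a greedy acceptance rule in the local search; it is not the authors' implementation):

```python
import numpy as np

def acseda(evaluate, D, PS, max_fes, sr_max=0.35, sr_min=0.05, lb=-100.0, ub=100.0):
    """Compact sketch of Algorithm 2, combining Eqs. (3), (9), (10), and (11)."""
    pop = np.random.uniform(lb, ub, (PS, D))            # Line 2: initialization
    fit = np.array([evaluate(x) for x in pop])
    fes = PS                                            # Line 3
    g = np.argmin(fit)
    gbest, gbest_fit = pop[g].copy(), fit[g]            # Line 4: global best
    prev_off, prev_fit = pop, fit                       # Line 4: stored population
    while fes < max_fes:                                # Lines 5-16
        t = fes / max_fes
        sr = sr_max - (sr_max - sr_min) * t ** 0.1      # Line 6: Eq. (11)
        cs = 1.0 - (1.0 - sr_min) * t ** 2              # Line 8: Eq. (10)
        order = np.argsort(fit)
        mu = pop[order[:max(2, int(sr * PS))]].mean(axis=0)  # Line 7: Eq. (3)
        SC = pop[order[:max(2, int(cs * PS))]]
        C = (SC - mu).T @ (SC - mu) / (len(SC) - 1)     # Line 9: Eq. (9)
        off = np.random.multivariate_normal(mu, C, PS)  # Line 10: sample offspring
        off_fit = np.array([evaluate(x) for x in off])
        fes += PS                                       # Line 11
        pool = np.vstack([prev_off, off])               # Line 12: cross-generation selection
        pool_fit = np.concatenate([prev_fit, off_fit])
        keep = np.argsort(pool_fit)[:PS]
        pop, fit = pool[keep], pool_fit[keep]
        prev_off, prev_fit = off, off_fit
        g = np.argmin(off_fit)                          # Line 13: update Gbest
        if off_fit[g] < gbest_fit:
            gbest, gbest_fit = off[g].copy(), off_fit[g]
        for _ in range(2):                              # Lines 14-15: local search, 2 trials
            cand = gbest + np.random.normal(0.0, 1e-2, D)  # std = sqrt(1e-4)
            cand_fit = evaluate(cand)
            fes += 1
            if cand_fit < gbest_fit:                    # greedy acceptance (our assumption)
                gbest, gbest_fit = cand, cand_fit
    return gbest, gbest_fit
```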

4. Experimental Studies

This section mainly conducts extensive experiments to verify the effectiveness of the proposed ACSEDA. Specifically, the commonly used CEC 2014 benchmark problem set [44] is adopted in this paper. This benchmark set contains 30 various complicated optimization problems, such as unimodal problems, multimodal problems, hybrid problems, and composition problems. For detailed information on this benchmark set, please refer to [44].

4.1. Experimental Settings

First, to comprehensively demonstrate the effectiveness of the proposed ACSEDA, we select several state-of-the-art EDA variants to make comparisons in solving the complicated CEC 2014 benchmark problems. Specifically, the selected state-of-the-art EDA variants are EDA2 [37], EDAVERS [49], EDA/LS [38], EDA/LS-MS [65], MA-ES [58], and BUMDA [39]. In addition, as a baseline method, the traditional multivariate Gaussian model based EDA [28] is also utilized as a compared method. To tell it apart from the others, we denote it as TRA-EDA in the experiments.
Second, to make comprehensive comparisons between the proposed ACSEDA and the above EDA variants, we compare their optimization performance in solving the CEC 2014 problems with three different dimension sizes, namely 30-D, 50-D, and 100-D. For fairness, the maximum number of fitness evaluations (FEsmax) is set to 10,000 × D for all algorithms.
Third, for fair comparisons, the key parameters of the compared algorithms are set as recommended in the associated papers. For the population size, we tune the settings of all algorithms on the CEC 2014 benchmark set with different dimension sizes. After preliminary experiments, the parameter settings of all algorithms are as shown in Table 1.
Fourth, to comprehensively evaluate the optimization performance of each algorithm, we execute each algorithm independently for 30 runs and utilize the median, mean, and standard deviation values over the 30 independent runs to evaluate its optimization performance. Furthermore, to assess statistical significance, we conduct the Wilcoxon rank-sum test at the significance level of α = 0.05 to compare the proposed ACSEDA with each associated EDA variant. In addition, to compare the overall optimization performance of all algorithms on the whole CEC 2014 benchmark set, we further conduct the Friedman test at the significance level of α = 0.05, using the mean value of each algorithm on each function in the benchmark set.
At last, it deserves attention that all algorithms are programmed under MATLAB R2018a, and they are run on the same computer with Intel(R) Core(TM) i7-10700T CPU @ 2.90 GHz 2.90 GHz and 8 G RAM.

4.2. Comparison with State-of-the-Art EDAs

Table 2, Table 3, and Table 4 display the comparison results between ACSEDA and the compared EDA variants on the 30-D, 50-D, and 100-D CEC 2014 benchmark problems, respectively. In these tables, the symbols “+”, “−”, and “=” indicate that ACSEDA is significantly better than, significantly worse than, and equivalent to the associated compared algorithm on the associated problem, respectively. Besides, “w/t/l” denotes the numbers of problems where ACSEDA achieves significantly better, equivalent, and significantly worse performance than the compared algorithms, respectively; that is, “w/t/l” equals the counts of “+”, “=”, and “−”. Additionally, the last row of each table presents the averaged rank of each algorithm obtained from the Friedman test.
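For reference, the per-problem “+”/“−”/“=” decision can be reproduced with SciPy's rank-sum test; the sketch below uses placeholder data standing in for the 30 final error values of two algorithms, and the median-based direction check is our assumption:

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder arrays for the 30 final errors of ACSEDA and one competitor
rng = np.random.default_rng(0)
acseda_runs = rng.normal(1.0, 0.1, 30)
rival_runs = rng.normal(1.2, 0.1, 30)

stat, p = ranksums(acseda_runs, rival_runs)  # Wilcoxon rank-sum test
if p < 0.05:                                 # significance level alpha = 0.05
    mark = "+" if np.median(acseda_runs) < np.median(rival_runs) else "-"
else:
    mark = "="
print(mark)
```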
From Table 2, the comparison results between ACSEDA and the compared state-of-the-art EDAs on the 30-D CEC 2014 benchmark problems can be summarized as follows:
(1)
As shown in the last row of Table 2, in view of the Friedman test, the proposed ACSEDA obtains the smallest rank, and this rank value is much smaller than those of the other algorithms. This indicates that ACSEDA achieves the best overall performance on the 30-D CEC 2014 benchmark set and shows significant superiority over the compared algorithms.
(2)
As shown in the second-to-last row of Table 2, from the perspective of the Wilcoxon rank-sum test, we can see that ACSEDA achieves significantly better performance than the compared algorithms on at least 23 problems, except for EDA2 and MA-ES. Compared with EDA2, ACSEDA shows significant superiority on 13 problems, and only presents inferiority on 6 problems. Competing with MA-ES, ACSEDA presents significant dominance on 19 problems, and it loses the competition on 10 problems.
(3)
With respect to the optimization performance on different kinds of problems, on the three unimodal problems, both ACSEDA and EDA2 reach the true global optima of all three problems and thus show significantly better performance than the other six EDA variants. On the 13 simple multimodal problems, ACSEDA is significantly superior to EDAVERS, EDA/LS, EDA/LS-MS, and TRA-EDA on 10 problems, and it also outperforms EDA2, BUMDA, and MA-ES on 8, 9, and 9 problems, respectively. In terms of the six hybrid problems, the optimization performance of ACSEDA is significantly better than that of the compared EDA variants on all these functions, except for EDA2; in comparison with EDA2, ACSEDA shows great superiority on three problems and achieves equivalent performance on the other three. In particular, on these six hybrid problems, ACSEDA shows no inferiority to any of the compared EDA variants. As for the eight composition problems, ACSEDA is significantly better than EDA/LS and EDA/LS-MS on all of them. In comparison with TRA-EDA and BUMDA, it outperforms them on six and five problems, respectively. On this kind of problem, however, ACSEDA is a little inferior to EDA2 and MA-ES.
(4)
To sum up, ACSEDA shows very competitive, or even significantly better, performance in solving the 30-D CEC 2014 benchmark problems compared with the selected state-of-the-art EDA variants. In particular, when confronted with complicated optimization problems, such as multimodal problems, hybrid problems, and composition problems, the proposed ACSEDA shows great superiority over the compared algorithms, which indicates that it is very promising for complicated problems.
Subsequently, from Table 3, we can get the following findings, with respect to the comparison results between ACSEDA and the compared state-of-the-art EDA variants, on the 50-D CEC 2014 benchmark problems:
(1)
In terms of the Friedman test, as shown in the last row of Table 3, it is found that ACSEDA still achieves the lowest rank among all algorithms. This verifies that ACSEDA still obtains the best overall performance on the 50-D CEC 2014 problems.
(2)
With respect to the Wilcoxon rank-sum test, as shown in the second-to-last row of Table 3, ACSEDA significantly outperforms EDAVERS, EDA/LS, EDA/LS-MS, TRA-EDA, and BUMDA on 24, 28, 28, 28, and 21 problems, respectively. Compared with EDA2, ACSEDA attains much better performance on 13 problems and equivalent performance on 9 problems. Competing with MA-ES, ACSEDA significantly outperforms it on 19 problems and only shows inferiority on 11 problems.
(3)
Regarding the performance on different kinds of optimization problems, on the three unimodal problems, ACSEDA still presents great superiority over the compared EDA variants on all three problems, except for EDA2 and MA-ES. In particular, both ACSEDA and EDA2 locate the true global optimum of F3, while ACSEDA displays great dominance over EDA2 on the other two problems. Compared with MA-ES, ACSEDA is much better on two problems and obtains worse performance on only one. On the 13 simple multimodal functions, ACSEDA significantly outperforms EDAVERS, EDA/LS, EDA/LS-MS, and TRA-EDA on 11 problems, performs significantly better than MA-ES on 10 problems, and wins the competition with both EDA2 and BUMDA on 7 problems. On the six hybrid problems, ACSEDA is significantly superior to the compared EDA variants on all six problems, except for EDA2; in competition with EDA2, ACSEDA loses on five problems. On the eight composition problems, ACSEDA is better than EDA/LS, EDA/LS-MS, and TRA-EDA on all eight problems. At the same time, it achieves equivalent or even much better performance than EDA2, EDAVERS, and BUMDA on at least six problems. However, ACSEDA shows inferior performance to MA-ES on seven problems.
(4)
In summary, on the 50-D CEC 2014 problems, ACSEDA still exhibits significantly better performance than the compared EDA variants. This further demonstrates that ACSEDA is promising for both simple optimization problems, such as unimodal problems, and complicated optimization problems, such as hybrid problems and composition problems.
At last, from Table 4, the following observations can be achieved from the comparison results between ACSEDA and the compared state-of-the-art EDA variants on the 100-D CEC 2014 benchmark problems:
(1)
From the averaged rank obtained from the Friedman test, it is observed that ACSEDA still obtains the smallest rank value among all algorithms. This means that ACSEDA consistently achieves the best overall optimization performance on the 100-D CEC 2014 benchmark set.
(2)
According to the results of the Wilcoxon rank-sum test, ACSEDA presents great dominance over EDAVERS, EDA/LS, EDA/LS-MS, and TRA-EDA on 23, 26, 24, and 28 problems, respectively. In comparison with EDA2 and BUMDA, ACSEDA obtains competitive or even better performance on 20 and 22 problems, respectively. Compared with MA-ES, ACSEDA achieves much better performance on 15 problems and presents inferiority on 15 problems as well. This indicates that ACSEDA is very competitive with MA-ES on the 100-D CEC 2014 benchmark problems.
(3)
Concerning the performance on different kinds of optimization problems: on the three unimodal problems, ACSEDA outperforms EDAVERS, EDA/LS-MS, TRA-EDA, and BUMDA on all three problems and performs much better than EDA/LS on two, but it loses on all three problems to both EDA2 and MA-ES. On the 13 simple multimodal functions, ACSEDA shows significantly better performance than EDAVERS, EDA/LS, EDA/LS-MS, TRA-EDA, and MA-ES on at least 10 problems and presents great superiority to EDA2 on 8 problems; on these 13 problems, ACSEDA and BUMDA achieve very similar performance. On the six hybrid problems, ACSEDA exhibits much better performance than all the compared EDA variants except EDA2 on at least five problems. On the eight composition problems, ACSEDA is better than EDA/LS, EDA/LS-MS, TRA-EDA, and BUMDA on at least five problems, is very competitive with EDA2 and EDAVERS, and is inferior only to MA-ES.
(4)
To conclude, on the 100-D CEC 2014 problems, ACSEDA still shows great superiority to the compared state-of-the-art EDA variants in solving such high-dimensional problems. In particular, on complicated problems with such high dimensionality, such as the hybrid and composition problems, ACSEDA still presents significant dominance over most of the compared algorithms. This further demonstrates that the proposed ACSEDA is promising for optimization problems.
Overall, the above comparisons show that ACSEDA consistently exhibits great superiority to the compared state-of-the-art EDA variants on the CEC 2014 benchmark set with different dimension sizes. This demonstrates that ACSEDA is promising for both simple unimodal problems and complicated multimodal problems, and that it also preserves good scalability in solving optimization problems. The demonstrated superiority of the proposed ACSEDA mainly benefits from the three proposed techniques; with their cohesive cooperation, ACSEDA strikes a promising balance between exploration and exploitation to search the complicated solution space properly.
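To make the protocol behind these comparisons concrete, the following sketch (our illustrative reconstruction in Python, not the authors' code; the data layout and function name are assumptions) shows how per-problem Wilcoxon rank-sum tests against ACSEDA and Friedman-style average ranks, of the kind reported in Tables 2-4, can be computed from per-run solution errors. The simple double-argsort ranking shown here does not average tied ranks.

import numpy as np
from scipy.stats import ranksums

def compare(errors, base=0, alpha=0.05):
    """errors: array of shape (n_problems, n_algorithms, n_runs) holding
    the final solution error of every run; column `base` is the reference
    algorithm (here, ACSEDA)."""
    n_prob, n_alg, _ = errors.shape
    wtl = np.zeros((n_alg, 3), dtype=int)   # win/tie/loss of `base` vs. each rival
    for f in range(n_prob):
        for a in range(n_alg):
            if a == base:
                continue
            _, p = ranksums(errors[f, base], errors[f, a])
            if p >= alpha:
                wtl[a, 1] += 1              # no significant difference
            elif errors[f, base].mean() < errors[f, a].mean():
                wtl[a, 0] += 1              # base significantly better
            else:
                wtl[a, 2] += 1              # base significantly worse
    # Friedman-style average rank: rank algorithms by mean error on each problem.
    mean_err = errors.mean(axis=2)          # (n_problems, n_algorithms)
    ranks = np.argsort(np.argsort(mean_err, axis=1), axis=1) + 1
    return wtl, ranks.mean(axis=0)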

4.3. Deep Investigation on ACSEDA

From the above comparison experiments, we can see that ACSEDA shows great dominance over the compared state-of-the-art EDA variants. In this section, we take a closer look at ACSEDA and investigate the influence of each component, so that it becomes clear what contributes to its promising performance.

4.3.1. Effectiveness of the Covariance Scaling Strategy

First, we conduct experiments to verify the effectiveness of the proposed covariance scaling strategy, which is realized by setting the covariance scaling parameter cs (used to estimate the covariance) larger than the selection ratio sr (used to estimate the mean vector). To this end, we first fix sr at several values; then, for each fixed sr, we set different values of cs, each larger than the associated sr. Based on these settings, we conduct experiments on the 50-D CEC 2014 benchmark problems. Table 5 shows the comparison among ACSEDA variants with different settings of the selection ratio (sr) and the covariance scaling parameter (cs) on these problems. In this table, the best results are highlighted in bold in each part associated with each fixed sr. In addition, the averaged rank of each cs within each part, obtained from the Friedman test, is listed in the last row of the table.
From Table 5, we can get the following findings:
(1)
With respect to the comparison results in each part, it is found that setting cs larger than sr helps ACSEDA achieve much better performance than setting cs = sr. In particular, it is interesting to find that the superiority of ACSEDA with cs > sr over the variant without covariance scaling (cs = sr) is particularly significant on complicated problems such as F20-F30. This demonstrates that the covariance scaling technique helps ACSEDA obtain promising performance, especially on complicated problems.
(2)
It is also interesting to find that neither a too-small nor a too-large cs (relative to sr) is suitable for ACSEDA. For instance, when sr = 0.1, although ACSEDA with a small cs (cs ≤ 0.4) achieves much better performance than the variant with cs = sr (namely, without covariance scaling), its performance is much worse than that with a moderate cs (0.4 < cs < 0.8). Conversely, when cs is too large (cs ≥ 0.8), the performance degrades dramatically compared with a properly set cs. A similar situation occurs for the other settings of sr when cs is either too large or too small. These experimental results verify the analysis presented in Section 3.1.
To sum up, the proposed covariance scaling strategy effectively helps the EDA achieve promising performance in solving optimization problems, especially complicated ones.
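To make the mechanism explicit, the following minimal sketch (an assumption-laden illustration of ours, not the authors' implementation) performs one Gaussian model update with covariance scaling: the mean vector comes from the top sr·PS individuals, while the covariance comes from the enlarged top cs·PS set, which widens the sampling range. Centering the enlarged set on the sr-based mean is our own choice; rng is a NumPy Generator, e.g. np.random.default_rng().

import numpy as np

def estimate_and_sample(pop, fitness, sr, cs, n_offspring, rng):
    """pop: (PS, D) parent population; fitness: (PS,); minimization assumed."""
    order = np.argsort(fitness)                        # best individuals first
    mean_set = pop[order[:max(2, int(sr * len(pop)))]]
    cov_set = pop[order[:max(2, int(cs * len(pop)))]]  # enlarged set, cs > sr
    mu = mean_set.mean(axis=0)
    diff = cov_set - mu                                # center enlarged set on mu
    cov = diff.T @ diff / len(cov_set)
    return rng.multivariate_normal(mu, cov, size=n_offspring)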

4.3.2. Effectiveness of the Proposed Adaptive Covariance Scaling Strategy

Next, we conduct experiments to verify the effectiveness of the proposed adaptive covariance scaling method, which dynamically adjusts cs according to Equation (10). Since the proposed adaptive method is related to sr, as can be seen from Equation (10), we fix sr at 0.1 and 0.2 to investigate its effectiveness. These two settings are used because, as shown in Table 5 in the last subsection, ACSEDA performs much worse when sr is larger than 0.3, and thus it is meaningless to investigate the adaptive cs in that regime. Subsequently, for each setting of sr, we configure ACSEDA with different fixed values of cs and compare these variants with the ACSEDA using the adaptive cs strategy.
Table 6 presents the comparison between ACSEDA with the adaptive covariance scaling method and the variants with different fixed settings of cs on the 50-D CEC 2014 benchmark problems. From this table, we make the following observations:
(1)
As a whole, whether judged by the averaged rank obtained from the Friedman test or by the number of problems where the algorithm achieves the best results, the ACSEDA with the proposed adaptive cs obtains much better performance than those with fixed cs. This verifies that the proposed adaptive cs strategy is effective in helping ACSEDA achieve promising performance.
(2)
Under the same setting of sr, the optimal fixed cs differs from problem to problem, indicating that no single setting of cs is best for all problems. With the proposed adaptive strategy, not only is the sensitivity of ACSEDA to cs alleviated, but its optimization performance is also largely promoted.
In summary, the above experiments show that the proposed adaptive covariance scaling strategy is very beneficial for ACSEDA. This is mainly because it biases ACSEDA toward exploring the solution space in the early stage and then gradually toward exploiting the found promising areas, without serious loss of diversity, as the evolution iterates. As a result, with this adaptive strategy, ACSEDA can explore and exploit the solution space properly to find the optima of optimization problems.
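Equation (10) is not reproduced in this excerpt. Purely as a hypothetical illustration of the explore-early, exploit-late behaviour just described (an assumed shape of ours, not the paper's actual formula), cs could be annealed from a large initial value toward sr as fitness evaluations are consumed:

def adaptive_cs(sr, fes, max_fes, cs_init=0.9):
    # Hypothetical stand-in for Equation (10): cs starts well above sr
    # (wide sampling, exploration) and decays linearly toward sr
    # (tight sampling, exploitation) as the evaluation budget is used up.
    progress = fes / max_fes
    return sr + (cs_init - sr) * (1.0 - progress)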

4.3.3. Effectiveness of the Proposed Adaptive Promising Individual Selection Strategy

Subsequently, we conduct experiments to verify the effectiveness of the proposed promising individual selection strategy, which dynamically adjusts the parameter sr based on Equation (11). Since the setting of sr influences the covariance scaling parameter cs, we first fix cs at 0.6 and then set different fixed values of sr. Note that cs = 0.6 is adopted here because, as shown in Table 6 in the last subsection, ACSEDA obtains the best overall performance with cs = 0.6 under both settings of sr. We then compare the ACSEDA with the proposed adaptive sr against the variants with different fixed sr under the same cs (namely, cs = 0.6).
Table 7 shows the comparison results between the ACSEDA with the proposed adaptive sr and the variants with different fixed settings of sr on the 50-D CEC 2014 benchmark problems. From this table, the following findings can be obtained:
(1)
In view of the averaged rank obtained from the Friedman test, the ACSEDA with the adaptive sr achieves better overall performance than all the variants with fixed sr. This verifies the effectiveness of the proposed adaptive promising individual selection strategy for the estimation of the mean vector.
(2)
The optimal fixed sr differs from problem to problem; in particular, a small sr tends to yield better performance than a large one. The proposed adaptive strategy based on Equation (11) matches this observation: sr is decreased dramatically to a small value in the early stage and then declines mildly as the evolution proceeds, as stated in Section 3.2.
The above experiments demonstrate that the proposed adaptive promising individual selection strategy for the estimation of the mean vector helps ACSEDA not only achieve promising performance but also alleviate its sensitivity to the parameter sr.
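Equation (11) is likewise not shown in this excerpt. As a hypothetical illustration only (again our assumption, not the paper's actual formula), a schedule with the reported shape, a sharp early drop followed by a mild decline, could be written as:

import math

def adaptive_sr(fes, max_fes, sr_init=0.5, sr_min=0.05, k=5.0):
    # Hypothetical stand-in for Equation (11): the exponential makes sr
    # fall quickly to a small value early on and then decline only mildly
    # for the rest of the evolution.
    progress = fes / max_fes
    return sr_min + (sr_init - sr_min) * math.exp(-k * progress)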

4.3.4. Effectiveness of the Proposed Cross-Generation Individual Selection Strategy

At last, we conduct experiments to verify the usefulness of the proposed cross-generation individual selection strategy for the parent population. To this end, we first develop three other ACSEDA variants using existing typical selection strategies for the parent population. The first directly uses the generated offspring as the parent population, as in some traditional EDAs [20,21,28,30]; this variant is denoted as “ACSEDA-O”. The second combines the parent population of the last generation with the generated offspring and then selects the best half of the combined population as the parent population for the next generation, as in some EDA variants [51,68]; this variant is denoted as “ACSEDA-OP”. The last maintains an archive, as in some EDA variants [37], to store useful historical individuals, which are combined with the generated offspring so that the best half of the combined population is selected as the parent population; this variant is denoted as “ACSEDA-OA”.
After preparing the compared methods, we conduct experiments on the 50-D CEC 2014 benchmark problems to compare the ACSEDA with the proposed cross-generation individual selection strategy against the variants with the three above-mentioned strategies. Table 8 presents the comparison results among these different variants of ACSEDA.
From Table 8, we can see that, both in terms of the averaged rank obtained from the Friedman test and the number of problems where the algorithm achieves the best results, the ACSEDA with the proposed cross-generation individual selection strategy obtains the best overall performance. In particular, not only is its averaged rank much smaller than those of the compared methods, but the number of problems where it achieves the best results is also much larger.
The above observations demonstrate that the proposed cross-generation individual selection strategy for the parent population is very helpful for ACSEDA. By combining the offspring generated in the last generation with those generated in the current generation, this strategy is less likely to produce crowded individuals for the next parent population and can thus help ACSEDA preserve high search diversity during the evolution, as analyzed in Section 3.3.
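The selection rule itself is easy to state; a minimal sketch of the strategy described above (assuming minimization and NumPy arrays; the function name is ours) merges the two offspring batches and keeps the best half as the next parent population:

import numpy as np

def cross_generation_select(prev_off, prev_fit, cur_off, cur_fit):
    """Merge the offspring sampled in the last generation with those of the
    current generation and keep the best half as the parent population used
    to estimate the next probability distribution model."""
    pool = np.vstack([prev_off, cur_off])
    fit = np.concatenate([prev_fit, cur_fit])
    order = np.argsort(fit)               # minimization: smaller is better
    half = len(pool) // 2
    return pool[order[:half]], fit[order[:half]]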

5. Conclusions

This paper has proposed an adaptive covariance scaling estimation of distribution algorithm (ACSEDA) to solve optimization problems. First, instead of estimating the mean vector and the covariance based on the same selected promising individuals, as traditional EDAs do, the proposed ACSEDA estimates the covariance based on an enlarged number of promising individuals. In this way, the sampling range of the estimated probability distribution model is enlarged, so the estimated model can generate more diversified offspring, which helps avoid falling into local areas. To alleviate the sensitivity of the associated parameter, we further devise an adaptive covariance scaling method that dynamically adjusts the covariance scaling parameter during the evolution. Second, to further help ACSEDA explore and exploit the solution space properly, this paper devises an adaptive promising individual selection strategy for the estimation of the mean vector. By dynamically adjusting the selection ratio related to the estimation of the mean vector, the proposed ACSEDA gradually biases toward exploiting the found promising areas without serious loss of diversity as the evolution proceeds. Finally, to further promote the diversity of the proposed ACSEDA, we develop a cross-generation individual selection strategy for the parent population. Different from existing selection methods, the proposed method combines the offspring sampled in the last generation with those sampled in the current generation, and then selects the best half of the combined population as the parent population used to estimate the probability distribution model. With the cohesive collaboration among the three devised techniques, the proposed ACSEDA is expected to explore and exploit the solution space appropriately and thus achieve promising performance.
Extensive comparison experiments have been conducted on the widely used CEC 2014 benchmark problem set with different dimension sizes (30-D, 50-D, and 100-D). The experimental results have demonstrated that the proposed ACSEDA achieves very competitive, or even much better, performance than several state-of-the-art EDA variants, and that it preserves good scalability in solving higher-dimensional optimization problems. In addition, in-depth investigations into the three proposed techniques have been performed, demonstrating that all three mechanisms contribute greatly to the promising performance of ACSEDA.

Author Contributions

Q.Y.: Conceptualization, supervision, methodology, formal analysis, and writing—original draft preparation. Y.L.: Implementation, formal analysis, and writing—original draft preparation. X.-D.G.: Methodology, and writing—review and editing. Y.-Y.M.: Writing—review and editing. Z.-Y.L.: Writing—review and editing, and funding acquisition. S.-W.J.: Writing—review and editing. J.Z.: Conceptualization and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 62006124, U20B2061, 62002103, and 61873097; in part by the Natural Science Foundation of Jiangsu Province under Project BK20200811; in part by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant 20KJB520006; in part by the National Research Foundation of Korea (NRF-2021H1D3A2A01082705); and in part by the Startup Foundation for Introducing Talent of NUIST.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hasan, M.Z.; Al-Rizzo, H. Optimization of Sensor Deployment for Industrial Internet of Things Using a Multiswarm Algorithm. IEEE Internet Things J. 2019, 6, 10344–10362.
2. Li, H.; Yu, J.; Yang, M.; Kong, F. Secure Outsourcing of Large-scale Convex Optimization Problem in Internet of Things. IEEE Internet Things J. 2021, 1.
3. Zhou, X.G.; Peng, C.X.; Liu, J.; Zhang, Y.; Zhang, G.J. Underestimation-Assisted Global-Local Cooperative Differential Evolution and the Application to Protein Structure Prediction. IEEE Trans. Evol. Comput. 2020, 24, 536–550.
4. Zeng, X.; Wang, W.; Chen, C.; Yen, G.G. A Consensus Community-Based Particle Swarm Optimization for Dynamic Community Detection. IEEE Trans. Cybern. 2020, 50, 2502–2513.
5. Chen, W.N.; Tan, D.Z.; Yang, Q.; Gu, T.; Zhang, J. Ant Colony Optimization for the Control of Pollutant Spreading on Social Networks. IEEE Trans. Cybern. 2020, 50, 4053–4065.
6. Shen, Y.; Li, W.; Li, J. An Improved Estimation of Distribution Algorithm for Multi-compartment Electric Vehicle Routing Problem. J. Syst. Eng. Electron. 2021, 32, 365–379.
7. Li, X.; Tang, K.; Omidvar, M.N.; Yang, Z.; Qin, K.; China, H. Benchmark Functions for the CEC 2013 Special Session and Competition on Large-scale Global Optimization. 2013. Available online: https://www.tflsgo.org/assets/cec2018/cec2013-lsgo-benchmark-tech-report.pdf (accessed on 6 December 2021).
8. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-parameter Optimization. 2017. Available online: https://moam.info/problem-definitions-and-evaluation-criteria-for-the-_5bad2530097c479e798b46a8.html (accessed on 6 December 2021).
9. Wei, F.F.; Chen, W.N.; Yang, Q.; Deng, J.; Luo, X.N.; Jin, H.; Zhang, J. A Classifier-Assisted Level-Based Learning Swarm Optimizer for Expensive Optimization. IEEE Trans. Evol. Comput. 2021, 25, 219–233.
10. Yang, Q.; Chen, W.N.; Gu, T.; Jin, H.; Mao, W.; Zhang, J. An Adaptive Stochastic Dominant Learning Swarm Optimizer for High-Dimensional Optimization. IEEE Trans. Cybern. 2020, 1–17.
11. Tanweer, M.R.; Suresh, S.; Sundararajan, N. Dynamic Mentoring and Self-regulation Based Particle Swarm Optimization Algorithm for Solving Complex Real-world Optimization Problems. Inf. Sci. 2016, 326, 1–24.
12. Wu, X.; Zhao, J.; Tong, Y. Big Data Analysis and Scheduling Optimization System Oriented Assembly Process for Complex Equipment. IEEE Access 2018, 6, 36479–36486.
13. Yang, Q.; Chen, W.; Deng, J.D.; Li, Y.; Gu, T.; Zhang, J. A Level-Based Learning Swarm Optimizer for Large-Scale Optimization. IEEE Trans. Evol. Comput. 2018, 22, 578–594.
14. Yang, Q.; Chen, W.; Gu, T.; Zhang, H.; Deng, J.D.; Li, Y.; Zhang, J. Segment-Based Predominant Learning Swarm Optimizer for Large-Scale Optimization. IEEE Trans. Cybern. 2017, 47, 2896–2910.
15. Yang, Q.; Chen, W.; Yu, Z.; Gu, T.; Li, Y.; Zhang, H.; Zhang, J. Adaptive Multimodal Continuous Ant Colony Optimization. IEEE Trans. Evol. Comput. 2017, 21, 191–205.
16. Tanabe, R.; Ishibuchi, H. A Review of Evolutionary Multimodal Multiobjective Optimization. IEEE Trans. Evol. Comput. 2020, 24, 193–200.
17. Yang, Q.; Chen, W.; Zhang, J. Evolution Consistency Based Decomposition for Cooperative Coevolution. IEEE Access 2018, 6, 51084–51097.
18. Yang, Q.; Chen, W.N.; Gu, T.; Zhang, H.; Yuan, H.; Kwong, S.; Zhang, J. A Distributed Swarm Optimizer with Adaptive Communication for Large-Scale Optimization. IEEE Trans. Cybern. 2020, 50, 3393–3408.
19. Doerr, B.; Krejca, M.S. Significance-Based Estimation-of-Distribution Algorithms. IEEE Trans. Evol. Comput. 2020, 24, 1025–1034.
20. Hauschild, M.; Pelikan, M. An Introduction and Survey of Estimation of Distribution Algorithms. Swarm Evol. Comput. 2011, 1, 111–128.
21. Larrañaga, P.; Lozano, J.A. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation; Springer: Berlin/Heidelberg, Germany, 2001.
22. Bao, L.; Sun, X.; Gong, D.; Zhang, Y. Multi-source Heterogeneous User Generated Contents-driven Interactive Estimation of Distribution Algorithms for Personalized Search. IEEE Trans. Evol. Comput. 2021, 1.
23. Yang, Q.; Chen, W.; Li, Y.; Chen, C.L.P.; Xu, X.; Zhang, J. Multimodal Estimation of Distribution Algorithms. IEEE Trans. Cybern. 2017, 47, 636–650.
24. Shao, W.; Pi, D.; Shao, Z. A Pareto-Based Estimation of Distribution Algorithm for Solving Multiobjective Distributed No-Wait Flow-Shop Scheduling Problem with Sequence-Dependent Setup Time. IEEE Trans. Autom. Sci. Eng. 2019, 16, 1344–1360.
25. Shi, W.; Chen, W.; Lin, Y.; Gu, T.; Kwong, S.; Zhang, J. An Adaptive Estimation of Distribution Algorithm for Multipolicy Insurance Investment Planning. IEEE Trans. Evol. Comput. 2019, 23, 1–14.
26. Wang, X.; Han, T.; Zhao, H. An Estimation of Distribution Algorithm with Multi-Leader Search. IEEE Access 2020, 8, 37383–37405.
27. Krejca, M.S.; Witt, C. Theory of Estimation-of-distribution Algorithms. In Theory of Evolutionary Computation; Springer: Berlin/Heidelberg, Germany, 2020; pp. 405–442.
28. Pelikan, M.; Hauschild, M.W.; Lobo, F.G. Estimation of Distribution Algorithms. In Springer Handbook of Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2015; pp. 899–928.
29. Montaño, O.D.L.; Gómez-Castro, F.I.; Gutierrez-Antonio, C. Design and Optimization of a Shell-and-tube Heat Exchanger Using the Univariate Marginal Distribution Algorithm. In Computer Aided Chemical Engineering; Elsevier: Amsterdam, The Netherlands, 2021; Volume 50, pp. 43–49.
30. Muelas, S.; Mendiburu, A.; LaTorre, A.; Peña, J.-M. Distributed Estimation of Distribution Algorithms for Continuous Optimization: How Does the Exchanged Information Influence Their Behavior? Inf. Sci. 2014, 268, 231–254.
31. Zhang, Q. On Stability of Fixed Points of Limit Models of Univariate Marginal Distribution Algorithm and Factorized Distribution Algorithm. IEEE Trans. Evol. Comput. 2004, 8, 80–93.
32. Dong, W.; Yao, X. Unified Eigen Analysis on Multivariate Gaussian Based Estimation of Distribution Algorithms. Inf. Sci. 2008, 178, 3000–3023.
33. Gao, B.; Wood, I. TAM-EDA: Multivariate T Distribution, Archive and Mutation Based Estimation of Distribution Algorithm. Anziam J. 2012, 54, C720–C746.
34. Gao, K.; Harrison, J.P. Multivariate Distribution Model for Stress Variability Characterisation. Int. J. Rock Mech. Min. Sci. 2018, 102, 144–154.
35. Gao, Y.; Hu, X.; Liu, H. Estimation of Distribution Algorithm Based on Multivariate Gaussian Copulas. In Proceedings of the IEEE International Conference on Progress in Informatics and Computing, Shanghai, China, 10–12 December 2010; pp. 254–257.
36. Yang, G.; Li, H.; Yang, W.; Fu, K.; Sun, Y.; Emery, W.J. Unsupervised Change Detection of SAR Images Based on Variational Multivariate Gaussian Mixture Model and Shannon Entropy. IEEE Geosci. Remote Sens. Lett. 2019, 16, 826–830.
37. Liang, Y.; Ren, Z.; Yao, X.; Feng, Z.; Chen, A.; Guo, W. Enhancing Gaussian Estimation of Distribution Algorithm by Exploiting Evolution Direction with Archive. IEEE Trans. Cybern. 2020, 50, 140–152.
38. Zhou, A.; Sun, J.; Zhang, Q. An Estimation of Distribution Algorithm with Cheap and Expensive Local Search Methods. IEEE Trans. Evol. Comput. 2015, 19, 807–822.
39. Valdez, S.I.; Hernández, A.; Botello, S. A Boltzmann Based Estimation of Distribution Algorithm. Inf. Sci. 2013, 236, 126–137.
40. Ceberio, J.; Irurozki, E.; Mendiburu, A.; Lozano, J.A. A Review on Estimation of Distribution Algorithms in Permutation-based Combinatorial Optimization Problems. Prog. Artif. Intell. 2012, 1, 103–117.
41. Ren, Z.; Liang, Y.; Wang, L.; Zhang, A.; Pang, B.; Li, B. Anisotropic Adaptive Variance Scaling for Gaussian Estimation of Distribution Algorithm. Knowl.-Based Syst. 2018, 146, 142–151.
42. Bosman, P.A.; Grahl, J.; Rothlauf, F. SDR: A Better Trigger for Adaptive Variance Scaling in Normal EDAs. In Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, New York, NY, USA, 7–11 July 2007; pp. 492–499.
43. Grahl, J.; Bosman, P.A.; Rothlauf, F. The Correlation-Triggered Adaptive Variance Scaling IDEA. In Proceedings of the Annual Conference on Genetic and Evolutionary Computation, Seattle, WA, USA, 8–12 July 2006; pp. 397–404.
44. Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2014 Special Session and Competition on Single Objective Real-parameter Numerical Optimization; Technical Report; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013.
45. Bronevich, A.G.; De Oliveira, J.V. On the Model Updating Operators in Univariate Estimation of Distribution Algorithms. Nat. Comput. 2016, 15, 335–354.
46. Rastegar, R. On the Optimal Convergence Probability of Univariate Estimation of Distribution Algorithms. Evol. Comput. 2011, 19, 225–248.
47. Krejca, M.S. Theoretical Analyses of Univariate Estimation-of-Distribution Algorithms; Universität Potsdam: Potsdam, Germany, 2019.
48. Wang, X.; Zhao, H.; Han, T.; Wei, Z.; Liang, Y.; Li, Y. A Gaussian Estimation of Distribution Algorithm with Random Walk Strategies and Its Application in Optimal Missile Guidance Handover for Multi-UCAV in Over-the-Horizon Air Combat. IEEE Access 2019, 7, 43298–43317.
49. Ren, Z.; He, C.; Zhong, D.; Huang, S.; Liang, Y. Enhance Continuous Estimation of Distribution Algorithm by Variance Enlargement and Reflecting Sampling. In Proceedings of the IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, 24–29 July 2016; pp. 3441–3447.
50. Yang, P.; Tang, K.; Lu, X. Improving Estimation of Distribution Algorithm on Multimodal Problems by Detecting Promising Areas. IEEE Trans. Cybern. 2015, 45, 1438–1449.
51. Yuan, B.; Gallagher, M. On the Importance of Diversity Maintenance in Estimation of Distribution Algorithms. In Proceedings of the Annual Conference on Genetic and Evolutionary Computation, Washington, DC, USA, 25–29 June 2005; pp. 719–726.
52. Pošík, P. Preventing Premature Convergence in a Simple EDA via Global Step Size Setting. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Dortmund, Germany, 13–17 September 2008; pp. 549–558.
53. Cai, Y.; Sun, X.; Xu, H.; Jia, P. Cross Entropy and Adaptive Variance Scaling in Continuous EDA. In Proceedings of the Annual Conference on Genetic and Evolutionary Computation, New York, NY, USA, 7–11 July 2007; pp. 609–616.
54. Emmerich, M.; Shir, O.M.; Wang, H. Evolution Strategies. In Handbook of Heuristics; Martí, R., Panos, P., Resende, M.G.C., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 1–31.
55. Ros, R.; Hansen, N. A Simple Modification in CMA-ES Achieving Linear Time and Space Complexity. In Proceedings of the Parallel Problem Solving from Nature–PPSN X, Dortmund, Germany, 13–17 September 2008; pp. 296–305.
56. Akimoto, Y.; Hansen, N. Diagonal Acceleration for Covariance Matrix Adaptation Evolution Strategies. Evol. Comput. 2020, 28, 405–435.
57. Arabas, J.; Jagodziński, D. Toward a Matrix-Free Covariance Matrix Adaptation Evolution Strategy. IEEE Trans. Evol. Comput. 2020, 24, 84–98.
58. Beyer, H.; Sendhoff, B. Simplify Your Covariance Matrix Adaptation Evolution Strategy. IEEE Trans. Evol. Comput. 2017, 21, 746–759.
59. Li, Z.; Zhang, Q. A Simple Yet Efficient Evolution Strategy for Large-Scale Black-Box Optimization. IEEE Trans. Evol. Comput. 2018, 22, 637–646.
60. He, X.; Zhou, Y.; Chen, Z.; Zhang, J.; Chen, W.N. Large-Scale Evolution Strategy Based on Search Direction Adaptation. IEEE Trans. Cybern. 2021, 51, 1651–1665.
61. Bosman, P.A.; Grahl, J.; Thierens, D. Enhancing the Performance of Maximum-likelihood Gaussian EDAs Using Anticipated Mean Shift. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Dortmund, Germany, 13–17 September 2008; pp. 133–143.
62. PourMohammadBagher, L.; Ebadzadeh, M.M.; Safabakhsh, R. Graphical Model Based Continuous Estimation of Distribution Algorithm. Appl. Soft Comput. 2017, 58, 388–400.
63. Yang, Q.; Chen, W.-N.; Zhang, J. Probabilistic Multimodal Optimization. In Metaheuristics for Finding Multiple Solutions; Preuss, M., Epitropakis, M.G., Li, X., Fieldsend, J.E., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 191–228.
64. Huang, X.; Jia, P.; Liu, B. Controlling Chaos by an Improved Estimation of Distribution Algorithm. Math. Comput. Appl. 2010, 15, 866–871.
65. Fang, H.; Zhou, A.; Zhang, G. An Estimation of Distribution Algorithm Guided by Mean Shift. In Proceedings of the IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, 24–29 July 2016; pp. 3268–3275.
66. Liu, J.; Wang, Y.; Teng, H. Variance Analysis and Adaptive Control in Intelligent System Based on Gaussian Model. Int. J. Model. Identif. Control 2013, 18, 26–33.
67. Santana, R.; Larranaga, P.; Lozano, J.A. Adaptive Estimation of Distribution Algorithms. In Adaptive and Multilevel Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2008; Volume 136, pp. 177–197.
68. Dong, W.; Chen, T.; Tiňo, P.; Yao, X. Scaling Up Estimation of Distribution Algorithms for Continuous Optimization. IEEE Trans. Evol. Comput. 2013, 17, 797–822.
69. Hansen, N. Towards a New Evolutionary Computation. Stud. Fuzziness Soft Comput. 2006, 192, 75–102.
70. Hedar, A.-R.; Allam, A.A.; Fahim, A. Estimation of Distribution Algorithms with Fuzzy Sampling for Stochastic Programming Problems. Appl. Sci. 2020, 10, 6937.
71. Maza, S.; Touahria, M. Feature Selection for Intrusion Detection Using New Multi-objective Estimation of Distribution Algorithms. Appl. Intell. 2019, 49, 4237–4257.
Figure 1. The visual structure of the covariance scaling strategy.
Figure 2. The change curves of cs and sr with the proposed two adaptive strategies.
Figure 3. The process of the cross-generation individual selection strategy.
Table 1. Parameter settings of ACSEDA and the compared algorithms.
ACSEDA: PS = 1300 (30-D), 1800 (50-D), 3200 (100-D).
EDA/LS: PS = 150; M = 15; Pb = 0.2; Pc = 0.2; θ = 0.1 (all dimensions).
EDA2: PS = 100 (30-D), 200 (50-D), 300 (100-D); l = 20.
EDA/LS-MS: PS = 150 (30-D), 1000 (50-D), 2000 (100-D); M = 15; Pa = 0.2; Pb = 0.2; θ = 0.1.
EDAVERS: PS = 500 (30-D), 600 (50-D), 700 (100-D); sr = 0.35.
BUMDA: PS = 900 (30-D), 1100 (50-D), 1500 (100-D).
TRA-EDA: PS = 2500 (30-D), 4700 (50-D), 6000 (100-D); sr = 0.2.
MA-ES: PS = 4 + ⌊3 ln D⌋; mu = ⌊PS/2⌋.
Table 2. Comparison between ACSEDA and the compared state-of-the-art EDA variants on the 30-D CEC2014 benchmark problems. The bold results indicate that ACSEDA is significantly better than the compared methods.
F | Category | Quality | ACSEDA | EDA2 | EDAVERS | EDA/LS | EDA/LS-MS | TRA-EDA | BUMDA | MA-ES
F1Unimodal ProblemsMedian0.00 × 1000.00 × 1006.48 × 1041.90 × 10−91.83 × 10−96.94 × 1071.15 × 1081.42 × 10−14
Mean0.00 × 1000.00 × 1007.76 × 1049.09 × 1011.98 × 1016.80 × 1071.16 × 1081.37 × 10−14
Std0.00 × 1000.00 × 1004.85 × 1044.89 × 1021.07 × 1021.17 × 1072.12 × 1072.55 × 10−15
p-value-NaN=1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.17 × 10−13+
F2Median0.00 × 1000.00 × 1002.54 × 1031.48 × 10−114.59 × 10−116.61 × 1091.80 × 1042.84 × 10−14
Mean0.00 × 1000.00 × 1002.36 × 1032.52 × 10−101.45 × 10−106.58 × 1095.15 × 1052.65 × 10−14
Std0.00 × 1000.00 × 1008.94 × 1025.04 × 10−102.43 × 10−101.05 × 1092.45 × 1067.09 × 10−15
p-value-NaN=1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+7.15 × 10−13+
F3Median0.00 × 1000.00 × 1001.41 × 1038.42 × 10−98.85 × 10−109.83 × 1036.49 × 1035.68 × 10−14
Mean0.00 × 1000.00 × 1001.44 × 1033.10 × 10−92.34 × 10−91.00 × 1046.52 × 1035.68 × 10−14
Std0.00 × 1000.00 × 1005.54 × 1024.33 × 10−94.21 × 10−91.97 × 1031.85 × 1035.05 × 10−29
p-value-NaN=1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.69 × 10−14+
F1–3 | w/t/l | - | 0/3/0 | 3/0/0 | 3/0/0 | 3/0/0 | 3/0/0 | 3/0/0 | 3/0/0
F4Simple Multimodal ProblemsMedian3.25 × 1000.00 × 1006.39 × 10−11.02 × 10−94.75 × 10−97.93 × 1029.74 × 1015.68 × 10−14
Mean3.15 × 1000.00 × 1006.14 × 10−11.33 × 10−12.66 × 10−17.72 × 1029.88 × 1012.66 × 10−1
Std1.02 × 1000.00 × 1002.76 × 10−17.16 × 10−19.94 × 10−11.27 × 1022.69 × 1019.94 × 10−1
p-value-1.21 × 10−12−3.82 × 10−10−2.15 × 10−10−1.41 × 10−9−3.02 × 10−11+3.02 × 10−11+ 1.69 × 10−10−
F5Median2.09 × 1012.10 × 1012.10 × 1012.00 × 1012.00 × 1012.09 × 1012.10 × 1012.00 × 101
Mean2.09 × 1012.09 × 1012.09 × 1012.00 × 1012.00 × 1012.09 × 1012.10 × 1012.04 × 101
Std4.96 × 10−24.86 × 10−27.13 × 10−25.47 × 10−63.43 × 10−25.47 × 10−25.00 × 10−26.93 × 10−1
p-value-7.01 × 10−2=7.48 × 10−2=1.17 × 10−11−9.37 × 10−12−8.77 × 10−1=2.32 × 10−2+3.96 × 10−4−
F6Median1.99 × 10−60.00 × 1001.77 × 10−54.09 × 1013.81 × 1014.58 × 10−11.50 × 1002.25 × 101
Mean2.07 × 10−62.09 × 10−14.76 × 10−14.05 × 1013.84 × 1015.16 × 10−11.61 × 1002.24 × 101
Std8.29 × 10−75.63 × 10−18.19 × 10−12.14 × 1002.75 × 1004.43 × 10−11.12 × 1004.08 × 100
p-value-3.82 × 10−5+6.61 × 10−1=3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+1.07 × 10−7+3.02 × 10−11+
F7Median0.00 × 1000.00 × 1001.14 × 10−132.21 × 10−23.69 × 10−29.12 × 1013.24 × 10−21.14 × 10−13
Mean0.00 × 1000.00 × 1001.06 × 10−133.52 × 10−24.84 × 10−29.01 × 1019.35 × 10−22.79 × 10−3
Std0.00 × 1000.00 × 1002.84 × 10−144.56 × 10−26.23 × 10−21.25 × 1011.41 × 10−14.40 × 10−3
p-value-NaN=7.15 × 10−13+1.20 × 10−12+1.19 × 10−12+1.21 × 10−12+1.21 × 10−12+3.85 × 10−13+
F8Median9.95 × 10−16.47 × 1001.52 × 1022.27 × 1022.06 × 1021.49 × 1010.00 × 1001.68 × 102
Mean1.16 × 1006.60 × 1001.32 × 1022.38 × 1022.13 × 1021.54 × 1010.00 × 1001.68 × 102
Std8.93 × 10−11.81 × 1005.09 × 1016.69 × 1016.08 × 1013.39 × 1000.00 × 1003.98 × 100
p-value-1.92 × 10−11+3.86 × 10−11+2.13 × 10−11+2.13 × 10−11+2.13 × 10−11+4.26 × 10−9−2.02 × 10−11+
F9Median9.95 × 10−15.47 × 1001.48 × 1023.33 × 1023.20 × 1021.10 × 1019.95 × 10−11.87 × 102
Mean1.16 × 1005.80 × 1001.31 × 1023.28 × 1023.18 × 1021.13 × 1011.41 × 1001.88 × 102
Std9.29 × 10−11.69 × 1004.96 × 1016.66 × 1014.01 × 1012.68 × 1001.05 × 1005.42 × 100
p-value-2.85 × 10−11+1.13 × 10−11+2.40 × 10−11+2.40 × 10−11+2.40 × 10−11+2.03 × 10−2+2.10 × 10−11+
F10Median3.59 × 1006.33 × 1005.69 × 1034.76 × 1034.04 × 1032.39 × 1022.84 × 1013.88 × 103
Mean1.13 × 1015.28 × 1015.62 × 1034.76 × 1034.11 × 1032.86 × 1023.81 × 1013.90 × 103
Std2.89 × 1018.32 × 1013.90 × 1021.24 × 1038.86 × 1021.98 × 1025.51 × 1013.24 × 102
p-value-2.10 × 10−3+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+1.09 × 10−10+8.77 × 10−2=3.01 × 10−11+
F11Median8.36 × 1001.52 × 1016.55 × 1034.64 × 1034.69 × 1032.63 × 1022.26 × 1024.31 × 103
Mean2.80 × 1017.94 × 1016.53 × 1035.04 × 1034.99 × 1032.58 × 1022.43 × 1024.31 × 103
Std4.53 × 1011.19 × 1022.54 × 1021.36 × 1031.07 × 1031.53 × 1022.10 × 1022.21 × 102
p-value-2.87 × 10−2+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+2.44 × 10−9+8.84 × 10−7+2.98 × 10−11+
F12Median2.35 × 1002.46 × 1002.56 × 1006.89 × 10−16.67 × 10−12.47 × 1002.51 × 1003.34 × 10−2
Mean2.32 × 1002.49 × 1002.56 × 1009.21 × 10−17.84 × 10−12.39 × 1002.47 × 1003.68 × 10−2
Std2.41 × 10−12.71 × 10−11.98 × 10−18.73 × 10−14.11 × 10−13.49 × 10−12.56 × 10−12.29 × 10−2
p-value-1.76 × 10−2+1.89 × 10−4+5.57 × 10−10−4.50 × 10−11−1.45 × 10−1=1.70 × 10−2+3.02 × 10−11−
F13Median9.34 × 10−24.40 × 10−21.85 × 10−15.86 × 10−17.04 × 10−13.24 × 1004.27 × 10−23.16 × 10−1
Mean9.35 × 10−24.39 × 10−21.84 × 10−11.85 × 1002.57 × 1003.20 × 1004.25 × 10−23.40 × 10−1
Std1.50 × 10−21.08 × 10−22.68 × 10−22.53 × 1002.87 × 1002.03 × 10−18.97 × 10−39.18 × 10−3
p-value-6.07 × 10−11−3.34 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+ 3.69 × 10−11-3.02 × 10−11+
F14Median2.59 × 10−14.10 × 10−13.10 × 10−13.18 × 10−13.27 × 10−14.63 × 1013.89 × 10−13.39 × 10−1
Mean2.62 × 10−14.06 × 10−13.11 × 10−13.36 × 10−13.42 × 10−14.68 × 1013.89 × 10−13.81 × 10−1
Std5.24 × 10−23.87 × 10−23.61 × 10−29.64 × 10−29.18 × 10−25.15 × 1003.91 × 10−21.37 × 10−1
p-value-1.78 × 10−10+3.01 × 10−4+7.66 × 10−5+9.51 × 10−6+3.02 × 10−11+5.07 × 10−11+1.61 × 10−6+
F15Median4.16 × 1004.24 × 1001.30 × 1019.26 × 1018.31 × 1013.60 × 1003.46 × 1003.24 × 100
Mean4.21 × 1004.26 × 1001.30 × 1019.75 × 1018.43 × 1014.69 × 1004.68 × 1003.40 × 100
Std1.43 × 1001.08 × 1001.04 × 1003.23 × 1012.67 × 1013.53 × 1005.52 × 1009.00 × 10−1
p-value-7.39 × 10−1=3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+ 9.82 × 10−1= 3.11 × 10−1= 3.51 × 10−2−
F16Median8.81 × 1001.24 × 1011.11 × 1011.36 × 1011.34 × 1011.06 × 1011.09 × 1011.37 × 101
Mean8.74 × 1001.23 × 1011.11 × 1011.35 × 1011.34 × 1011.05 × 1011.09 × 1011.37 × 101
Std5.75 × 10−11.92 × 10−13.95 × 10−13.73 × 10−12.65 × 10−13.39 × 10−14.89 × 10−19.40 × 10−2
p-value-3.02 × 10−11+3.34 × 10−11+3.02 × 10−11+3.02 × 10−11+4.08 × 10−11+3.34 × 10−11+3.02 × 10−11+
F4–16 | w/t/l | - | 8/3/2 | 10/2/1 | 10/0/3 | 10/0/3 | 10/3/0 | 9/2/2 | 9/0/4
F17Hybrid Problems Median1.45 × 1011.16 × 1013.19 × 1042.13 × 1036.76 × 1049.97 × 1043.05 × 1071.76 × 103
Mean2.17 × 1012.00 × 1013.58 × 1044.10 × 1072.38 × 1071.36 × 1052.98 × 1071.77 × 103
Std2.98 × 1012.80 × 1011.97 × 1047.15 × 1075.06 × 1071.05 × 1057.53 × 1064.44 × 102
p-value-4.16 × 10−1=3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+
F18Median4.71 × 10−11.33 × 1006.85 × 1011.61 × 1021.56 × 1022.28 × 1011.38 × 1028.24 × 101
Mean8.77 × 10−11.14 × 1001.13 × 1029.57 × 1085.05 × 1082.57 × 1011.79 × 1028.36 × 101
Std6.39 × 10−17.04 × 10−19.31 × 1011.82 × 1091.32 × 1099.48 × 1001.36 × 1023.01 × 101
p-value-6.79 × 10−2=3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+
F19Median2.78 × 1003.66 × 1004.86 × 1001.56 × 1011.44 × 1014.28 × 1012.10 × 1019.97 × 100
Mean2.78 × 1003.72 × 1004.89 × 1006.83 × 1015.28 × 1014.36 × 1012.42 × 1019.86 × 100
Std5.02 × 10−14.02 × 10−14.92 × 10−11.85 × 1021.33 × 1027.25 × 10−11.00 × 1011.52 × 100
p-value-7.12 × 10−9+3.34 × 10−11+3.02 × 10−11+3.02 × 10−11+2.87 × 10−11+3.02 × 10−11+3.02 × 10−11+
F20Median1.25 × 1001.57 × 1003.95 × 1001.52 × 1041.62 × 1047.85 × 1023.29 × 1042.31 × 102
Mean1.33 × 1001.76 × 1004.14 × 1002.10 × 1042.61 × 1041.03 × 1033.18 × 1042.72 × 102
Std1.98 × 10−15.96 × 10−11.26 × 1001.89 × 1042.55 × 1046.34 × 1025.47 × 1031.43 × 102
p-value-7.70 × 10−4+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+
F21Median1.92 × 1001.85 × 1002.65 × 1042.23 × 1043.49 × 1042.29 × 1023.40 × 1068.32 × 102
Mean1.45 × 1013.35 × 1012.93 × 1045.96 × 1061.05 × 1072.86 × 1023.38 × 1068.98 × 102
Std3.40 × 1015.26 × 1019.29 × 1032.53 × 1072.38 × 1071.94 × 1021.58 × 1063.58 × 102
p-value-8.88 × 10−1=3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+
F22Median2.67 × 1012.81 × 1012.46 × 1021.15 × 1031.24 × 1033.67 × 1011.53 × 1024.15 × 102
Mean3.46 × 1013.63 × 1012.54 × 1021.28 × 1031.41 × 1038.88 × 1011.57 × 1024.95 × 102
Std2.97 × 1013.17 × 1014.84 × 1016.04 × 1027.89 × 1026.13 × 1011.16 × 1021.63 × 102
p-value-2.77 × 10−5+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+6.72 × 10−10+9.76 × 10−10+3.02 × 10−11+
F17–22 | w/t/l | - | 3/3/0 | 6/0/0 | 6/0/0 | 6/0/0 | 6/0/0 | 6/0/0 | 6/0/0
F23Composition ProblemsMedian3.15 × 1023.15 × 1023.15 × 1023.20 × 1023.20 × 1023.37 × 1023.19 × 1022.00 × 102
Mean3.15 × 1023.15 × 1023.15 × 1024.20 × 1024.87 × 1023.38 × 1023.20 × 1022.00 × 102
Std5.68 × 10−145.68 × 10−145.68 × 10−142.59 × 1022.96 × 1023.78 × 1001.92 × 1000.00 × 100
p-value-NaN=NaN=1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.69 × 10−14−
F24Median2.00 × 1022.00 × 1022.23 × 1022.63 × 1022.56 × 1022.27 × 1022.30 × 1022.00 × 102
Mean2.04 × 1022.00 × 1022.23 × 1022.80 × 1022.65 × 1022.27 × 1022.30 × 1022.00 × 102
Std8.66 × 1000.00 × 1007.69 × 10−15.55 × 1013.19 × 1011.50 × 1002.92 × 1001.31 × 10−4
p-value-1.10 × 10−2−9.93 × 10−12+6.48 × 10−12+6.48 × 10−12+6.48 × 10−12+6.48 × 10−12+2.03 × 10−1=
F25Median2.03 × 1022.03 × 1022.00 × 1023.06 × 1022.82 × 1022.07 × 1022.06 × 1022.00 × 102
Mean2.03 × 1022.03 × 1022.00 × 1022.97 × 1022.78 × 1022.07 × 1022.06 × 1022.00 × 102
Std1.33 × 1022.63 × 10−20.00 × 1004.49 × 1022.76 × 1013.90 × 10−11.24 × 1000.00 × 100
p-value-8.27 × 10−1=5.91 × 10−13−1.74 × 10−11+1.74 × 10−11+1.74 × 10−11+1.74 × 10−11+5.91 × 10−13−
F26Median1.00 × 1021.00 × 1021.00 × 1021.01 × 1021.00 × 1021.00 × 1021.00 × 1022.00 × 102
Mean1.00 × 1021.00 × 1021.00 × 1021.05 × 1021.04 × 1021.01 × 1021.01 × 1022.00 × 102
Std1.84 × 10−21.25 × 10−21.06 × 1022.38 × 1021.82 × 1021.05 × 1009.10 × 10−10.00 × 100
p-value-2.37 × 10−1−3.34 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+
F27Median3.00 × 1023.41 × 1023.00 × 1021.58 × 1031.46 × 1034.24 × 1023.00 × 1022.00 × 102
Mean3.00 × 1023.38 × 1023.00 × 1021.49 × 1031.21 × 1034.14 × 1023.17 × 1022.00 × 102
Std0.00 × 1004.02 × 1010.00 × 1002.92 × 1024.53 × 1023.09 × 1022.56 × 1020.00 × 100
p-value-2.20 × 10−6+NaN=1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.69 × 10−14−
F28Median8.41 × 1028.19 × 1028.16 × 1025.49 × 1034.98 × 1037.10 × 1024.78 × 1022.00 × 102
Mean8.40 × 1028.21 × 1028.22 × 1025.56 × 1035.11 × 1037.03 × 1024.76 × 1022.00 × 102
Std2.05 × 1014.03 × 1012.02 × 1011.59 × 1031.06 × 1036.79 × 1015.60 × 1000.00 × 100
p-value-2.15 × 10−2−3.34 × 10−3−3.01 × 10−11+3.01 × 10−11+7.09 × 10−9−3.01 × 10−11−1.20 × 10−12−
F29Median7.21 × 1027.15 × 1021.24 × 1034.17 × 1084.37 × 1081.00 × 1035.88 × 1022.00 × 102
Mean7.28 × 1021.40 × 1061.28 × 1034.14 × 1084.40 × 1081.04 × 1038.95 × 1022.00 × 102
Std2.71 × 1013.59 × 1061.39 × 1021.18 × 1081.53 × 1081.60 × 1028.34 × 1020.00 × 100
p-value-1.56 × 10−2+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.34 × 10−11+6.57 × 10−2=1.21 × 10−12−
F30Median1.35 × 1038.73 × 1022.13 × 1033.40 × 1063.33 × 1069.55 × 1024.69 × 1022.00 × 102
Mean1.57 × 1031.00 × 1032.03 × 1033.60 × 1063.41 × 1061.02 × 1034.82 × 1022.00 × 102
Std5.78 × 1024.70 × 1025.37 × 1021.45 × 1061.62 × 1062.69 × 1022.16 × 1020.00 × 100
p-value-1.43 × 10−5−5.56 × 10−4+3.02 × 10−11+3.02 × 10−11+5.46 × 10−6−3.02 × 10−11−1.21 × 10−12−
F23–30 | w/t/l | - | 2/2/4 | 4/2/2 | 8/0/0 | 8/0/0 | 6/0/2 | 5/1/2 | 1/1/6
w/t/l | - | 13/11/6 | 23/4/3 | 27/0/3 | 27/0/3 | 25/3/2 | 23/3/4 | 19/1/10
Rank | 2.25 | 2.97 | 4.77 | 6.43 | 6.10 | 5.03 | 4.83 | 3.62
Table 3. Comparison between ACSEDA and the compared state-of-the-art EDA variants on the 50-D CEC2014 benchmark problems. The bold results indicate that ACSEDA is significantly better than the compared methods.
F | Category | Quality | ACSEDA | EDA2 | EDAVERS | EDA/LS | EDA/LS-MS | TRA-EDA | BUMDA | MA-ES
F1Unimodal ProblemsMedian1.42 × 10−147.11 × 10−144.09 × 1056.84 × 10−91.28 × 10−83.42 × 1081.61 × 1071.42 × 10−14
Mean1.14 × 10−147.06 × 10−144.13 × 1056.41 × 1029.33 × 1073.50 × 1081.63 × 1071.80 × 10−14
Std6.77 × 10−151.00 × 10−147.25 × 1042.32 × 1035.02 × 1085.08 × 1073.06 × 1066.28 × 10−15
p-value-5.50 × 10−12+9.04 × 10−12+9.04 × 10−12+9.04 × 10−12+9.04 × 10−12+9.04 × 10−12+5.37 × 10−4+
F2Median5.97 × 10−131.23 × 10−113.30 × 1031.77 × 10−113.67 × 10−113.27 × 10103.27 × 1032.84 × 10−14
Mean7.40 × 10−131.23 × 10−113.49 × 1037.56 × 10−101.59 × 10−93.32 × 10101.56 × 1043.60 × 10−14
Std3.83 × 10−131.92 × 10−121.76 × 1032.98 × 10−93.81 × 10−91.79 × 1092.60 × 1041.26 × 10−14
p-value-2.98 × 10−12+2.98 × 10−12+8.43 × 10−7+1.48 × 10−6+2.98 × 10−11+2.98 × 10−11+ 8.74 × 10−12−
F3Median0.00 × 1000.00 × 1004.85 × 1031.10 × 10−81.11 × 10−82.93 × 1041.25 × 1041.14 × 10−13
Mean0.00 × 1000.00 × 1005.10 × 1031.74 × 1014.35 × 10−52.87 × 1041.35 × 1041.00 × 10−13
Std0.00 × 1000.00 × 1001.03 × 1039.36 × 1011.72 × 10−42.52 × 1035.05 × 1032.40 × 10−14
p-value-NaN=1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.97 × 10−13+
F1–3 | w/t/l | - | 2/1/0 | 3/0/0 | 3/0/0 | 3/0/0 | 3/0/0 | 3/0/0 | 2/0/1
F4Simple Multimodal ProblemsMedian9.81 × 1015.35 × 1008.16 × 1015.71 × 10−91.29 × 10−84.47 × 1031.25 × 1021.14 × 10−13
Mean9.25 × 1013.93 × 1017.80 × 1011.04 × 1033.86 × 1034.46 × 1031.28 × 1023.99 × 10−1
Std7.64 × 1004.51 × 1012.13 × 1015.11 × 1031.27 × 1044.24 × 1021.57 × 1011.20 × 100
p-value-1.93 × 10−3−4.80 × 10−4−5.29 × 10−9+7.12 × 10−8+1.62 × 10−11+1.10 × 10−10+ 8.76 × 10−12−
F5Median2.11 × 1012.11 × 1012.11 × 1012.00 × 1012.00 × 1012.11 × 1012.11 × 1012.00 × 101
Mean2.11 × 1012.11 × 1012.11 × 1012.00 × 1012.00 × 1012.11 × 1012.11 × 1012.05 × 101
Std4.14 × 10−22.54 × 10−23.32 × 10−20.00 × 1002.21 × 10−24.83 × 10−23.07 × 10−27.59 × 10−1
p-value-8.30 × 10−1=2.51 × 10−2−1.21 × 10−12−4.10 × 10−12−1.45 × 10−1=1.19 × 10−1=1.95 × 10−3−
F6Median5.22 × 10−51.39 × 10−45.51 × 10−17.19 × 1016.91 × 1016.96 × 1003.64 × 10−14.21 × 101
Mean1.74 × 10−22.52 × 10−18.59 × 10−17.17 × 1016.93 × 1017.18 × 1009.50 × 10−14.15 × 101
Std9.31 × 10−25.81 × 10−19.72 × 10−13.34 × 1003.41 × 1002.00 × 1001.23 × 1006.17 × 100
p-value-3.16 × 10−10+1.09 × 10−10+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+1.41 × 10−9+3.02 × 10−11+
F7Median0.00 × 1000.00 × 1003.41 × 10−131.72 × 10−21.97 × 10−22.98 × 1025.73 × 10−32.27 × 10−13
Mean0.00 × 1000.00 × 1009.86 × 10−42.39 × 10−22.18 × 10−22.99 × 1022.20 × 10−22.47 × 10−4
Std0.00 × 1000.00 × 1002.96 × 10−32.69 × 10−22.33 × 10−21.73 × 1014.80 × 10−21.33 × 10−3
p-value-NaN=6.50 × 10−13+1.20 × 10−12+1.20 × 10−12+1.21 × 10−12+1.21 × 10−12+5.02 × 10−13+
F8Median2.98 × 1007.96 × 1003.09 × 1024.69 × 1024.35 × 1024.93 × 1010.00 × 1003.10 × 102
Mean2.98 × 1007.89 × 1002.96 × 1024.83 × 1024.57 × 1024.82 × 1013.32 × 10−23.10 × 102
Std1.23 × 1002.56 × 1005.42 × 1011.07 × 1029.68 × 1016.28 × 1001.79 × 10−15.00 × 100
p-value-7.55 × 10−10+2.51 × 10−11+2.51 × 10−11+2.51 × 10−11+2.51 × 10−11+ 1.69 × 10−12−2.47 × 10−11+
F9Median2.49 × 1006.96 × 1003.11 × 1027.00 × 1026.23 × 1023.80 × 1011.99 × 1004.37 × 102
Mean2.49 × 1006.96 × 1002.82 × 1026.95 × 1026.54 × 1023.75 × 1012.14 × 1004.35 × 102
Std1.35 × 1001.87 × 1008.86 × 1011.20 × 1029.46 × 1014.75 × 1002.74 × 1009.41 × 100
p-value-1.59 × 10−10+2.61 × 10−11+2.61 × 10−11+2.61 × 10−11+2.61 × 10−11+ 2.95 × 10−1=2.59 × 10−11+
F10Median8.44 × 1001.30 × 1021.18 × 1047.57 × 1037.42 × 1031.06 × 1035.79 × 1017.87 × 103
Mean1.07 × 1021.99 × 1021.18 × 1048.30 × 1038.24 × 1031.04 × 1036.25 × 1017.84 × 103
Std1.65 × 1021.80 × 1024.28 × 1022.40 × 1032.07 × 1034.09 × 1024.11 × 1013.26 × 102
p-value-5.44 × 10−3+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+9.91 × 10−11+ 6.95 × 10−1=3.02 × 10−11+
F11Median1.36 × 1021.42 × 1021.25 × 1048.39 × 1038.52 × 1032.57 × 1034.52 × 1028.44 × 103
Mean1.18 × 1021.63 × 1021.25 × 1049.46 × 1039.43 × 1032.52 × 1034.58 × 1028.47 × 103
Std9.29 × 1001.65 × 1023.67 × 1022.78 × 1033.02 × 1039.47 × 1022.82 × 1022.77 × 102
p-value-2.84 × 10−1=3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.65 × 10−11+3.02 × 10−11+
F12Median3.35 × 1003.30 × 1003.20 × 1007.43 × 10−16.52 × 10−13.27 × 1003.38 × 1002.85 × 10−2
Mean3.35 × 1003.27 × 1003.25 × 1008.15 × 10−16.83 × 10−13.26 × 1003.35 × 1003.26 × 10−2
Std2.52 × 10−12.71 × 10−13.05 × 10−13.04 × 10−12.57 × 10−12.56 × 10−11.99 × 10−11.70 × 10−2
p-value-2.84 × 10−1=1.96 × 10−1=3.02 × 10−1−3.02 × 10−11−2.17 × 10−1=8.77 × 10−1=3.02 × 10−11−
F13Median1.40 × 10−17.28 × 10−22.46 × 10−14.98 × 10−14.50 × 10−13.71 × 1006.67 × 10−23.63 × 10−1
Mean1.40 × 10−17.48 × 10−22.44 × 10−11.24 × 1004.97 × 10−13.72 × 1006.38 × 10−23.63 × 10−1
Std1.89 × 10−21.03 × 10−22.22 × 10−22.15 × 1001.24 × 10−11.31 × 10−11.03 × 10−26.78 × 10−2
p-value-3.02 × 10−11−4.08 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+ 3.02 × 10−11−3.02 × 10−11+
F14Median2.57 × 10−14.01 × 10−13.73 × 10−13.82 × 10−13.97 × 10−16.42 × 1014.36 × 10−13.37 × 10−1
Mean2.40 × 10−13.93 × 10−13.76 × 10−14.58 × 10−14.95 × 10−16.40 × 1014.27 × 10−14.23 × 10−1
Std6.75 × 10−24.80 × 10−22.87 × 10−21.86 × 10−11.86 × 10−13.95 × 1003.49 × 10−22.25 × 10−1
p-value-5.07 × 10−10+1.61 × 10−10+2.87 × 10−10+3.16 × 10−10+3.02 × 10−11+4.08 × 10−11+1.87 × 10−7+
F15Median4.87 × 1001.23 × 1012.74 × 1011.84 × 1021.73 × 1022.39 × 1035.30 × 1006.61 × 100
Mean4.81 × 1001.19 × 1012.64 × 1011.84 × 1021.82 × 1022.45 × 1025.27 × 1006.61 × 100
Std4.93 × 10−11.79 × 1004.02 × 1004.72 × 1015.73 × 1019.48 × 1026.27 × 10−11.37 × 100
p-value-3.02 × 10−11+3.34 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+2.38 × 10−3+7.69 × 10−8+
F16Median1.80 × 1012.18 × 1012.07 × 1012.28 × 1012.25 × 1012.01 × 1011.98 × 1012.24 × 101
Mean1.80 × 1012.18 × 1012.07 × 1012.28 × 1012.26 × 1012.00 × 1011.97 × 1012.24 × 101
Std6.67 × 10−12.58 × 10−13.75 × 10−15.14 × 10−14.27 × 10−12.92 × 10−14.87 × 10−11.40 × 10−1
p-value-3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+1.09 × 10−10+3.02 × 10−11+
F4–16 | w/t/l | - | 7/4/2 | 11/1/1 | 11/0/2 | 11/0/2 | 11/2/0 | 7/4/2 | 10/0/3
F17Hybrid ProblemsMedian7.79 × 1013.16 × 1013.45 × 1045.74 × 1071.49 × 1041.96 × 1071.69 × 1072.45 × 103
Mean1.16 × 1023.77 × 1013.62 × 1043.42 × 1081.04 × 1082.02 × 1071.76 × 1072.49 × 103
Std7.34 × 1012.47 × 1011.57 × 1044.09 × 1081.77 × 1084.76 × 1063.45 × 1065.06 × 102
p-value-1.10 × 10−11−3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+
F18Median7.84 × 1001.40 × 1001.58 × 1022.88 × 1022.98 × 1025.85 × 1081.02 × 1031.66 × 102
Mean8.47 × 1001.35 × 1002.13 × 1023.49 × 1091.34 × 1096.42 × 1081.08 × 1031.65 × 102
Std3.23 × 1006.37 × 10−11.59 × 1027.03 × 1093.62 × 1091.93 × 1085.06 × 1024.18 × 101
p-value-3.02 × 10−11−4.98 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+
F19Median1.09 × 1017.36 × 1003.50 × 1012.53 × 1012.55 × 1015.99 × 1016.30 × 1011.85 × 101
Mean1.08 × 1017.46 × 1003.48 × 1012.59 × 1019.34 × 1015.81 × 1016.07 × 1011.84 × 101
Std7.71 × 10−17.65 × 10−14.99 × 1002.77 × 1003.64 × 1029.80 × 1009.75 × 1002.52 × 100
p-value-4.50 × 10−11−4.50 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+4.08 × 10−11+
F20Median1.96 × 1002.52 × 1002.33 × 1036.41 × 1048.45 × 1045.01 × 1023.41 × 1043.01 × 102
Mean1.95 × 1002.50 × 1002.37 × 1039.01 × 1047.81 × 1045.08 × 1023.35 × 1043.27 × 102
Std2.27 × 10−13.48 × 10−17.57 × 1021.47 × 1054.91 × 1042.60 × 1026.10 × 1031.35 × 102
p-value-1.16 × 10−7+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+
F21Median2.47 × 1021.23 × 1027.13 × 1042.35 × 1071.23 × 1061.64 × 1038.45 × 1061.75 × 103
Mean1.99 × 1029.74 × 1017.27 × 1048.60 × 1071.83 × 1071.98 × 1038.53 × 1061.76 × 103
Std8.11 × 1017.43 × 1011.94 × 1041.08 × 1083.03 × 1071.49 × 1032.01 × 1064.26 × 102
p-value-8.15 × 10−5−3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.34 × 10−11+3.02 × 10−11+3.02 × 10−11+
F22Median2.72 × 1013.02 × 1019.51 × 1021.89 × 1032.11 × 1039.87 × 1014.69 × 1017.94 × 102
Mean3.54 × 1013.02 × 1019.06 × 1029.23 × 1033.33 × 1031.18 × 1021.37 × 1027.58 × 102
Std3.02 × 1016.70 × 10−11.90 × 1022.74 × 1045.36 × 1036.15 × 1011.46 × 1022.84 × 102
p-value-1.36 × 10−7−3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+2.23 × 10−9+1.17 × 10−9+3.02 × 10−11+
F17–22 | w/t/l | - | 1/0/5 | 6/0/0 | 6/0/0 | 6/0/0 | 6/0/0 | 6/0/0 | 6/0/0
F23Composition ProblemsMedian3.44 × 1023.44 × 1023.44 × 1022.48 × 1031.70 × 1034.17 × 1023.66 × 1022.00 × 102
Mean3.44 × 1023.44 × 1023.44 × 1022.20 × 1031.52 × 1034.21 × 1023.66 × 1022.00 × 102
Std1.71 × 10−131.71 × 10−131.71 × 10−139.29 × 1026.07 × 1021.44 × 1012.74 × 1000.00 × 100
p-value-NaN=NaN=1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.21 × 10−12+1.69 × 10−14−
F24Median2.68 × 1022.68 × 1022.67 × 1024.09 × 1023.60 × 1022.72 × 1022.78 × 1022.00 × 102
Mean2.68 × 1022.68 × 1022.64 × 1024.78 × 1024.44 × 1022.72 × 1022.78 × 1022.00 × 102
Std1.33 × 1001.10 × 1005.66 × 1001.69 × 1021.40 × 1028.52 × 10−11.11 × 1000.00 × 100
p-value-9.17 × 10−1=3.00 × 10−1=2.88 × 10−11+2.88 × 10−11+5.25 × 10−11+2.88 × 10−11+1.14 × 10−12−
F25Median2.05 × 1022.05 × 1022.00 × 1025.77 × 1024.32 × 1022.17 × 1022.14 × 1022.00 × 102
Mean2.05 × 1022.05 × 1022.00 × 1025.57 × 1024.29 × 1022.17 × 1022.15 × 1022.00 × 102
Std1.49 × 10−11.58 × 10−10.00 × 1008.87 × 1015.87 × 1011.56 × 1005.84 × 1000.00 × 100
p-value-4.12 × 10−1=1.19 × 10−12−2.98 × 10−11+2.98 × 10−11+2.98 × 10−11+2.98 × 10−11+1.19 × 10−12−
F26Median1.00 × 1021.00 × 1022.00 × 1021.01 × 1021.01 × 1021.06 × 1021.74 × 1022.00 × 102
Mean1.00 × 1021.00 × 1021.96 × 1022.90 × 1021.72 × 1021.06 × 1021.58 × 1022.00 × 102
Std4.78 × 10−21.35 × 10−21.64 × 1012.27 × 1021.18 × 1021.35 × 1004.00 × 1010.00 × 100
p-value-8.98 × 10−11−2.80 × 10−11+6.06 × 10−11+1.09 × 10−10+2.98 × 10−11+2.98 × 10−11+1.21 × 10−11+
F27Median3.00 × 1024.19 × 1023.52 × 1022.68 × 1032.51 × 1037.00 × 1023.68 × 1022.00 × 102
Mean3.17 × 1024.23 × 1023.48 × 1022.67 × 1032.49 × 1037.00 × 1023.74 × 1022.00 × 102
Std2.20 × 1014.98 × 1014.35 × 1011.79 × 1021.16 × 1025.13 × 1012.72 × 1010.00 × 100
p-value-5.37 × 10−11+5.79 × 10−6+2.95 × 10−11+2.95 × 10−11+2.95 × 10−11+4.11 × 10−9+1.18 × 10−12−
F28Median1.16 × 1031.16 × 1031.15 × 1031.15 × 1041.02 × 1041.92 × 1034.29 × 1022.00 × 102
Mean1.16 × 1031.19 × 1031.15 × 1031.15 × 1049.91 × 1032.01 × 1034.31 × 1022.00 × 102
Std4.37 × 1011.68 × 1023.88 × 1012.14 × 1032.17 × 1034.59 × 1026.89 × 1000.00 × 100
p-value-9.35 × 10−1=1.81 × 10−1=3.02 × 10−11+3.02 × 10−11+1.33 × 10−11+3.02 × 10−11+1.21 × 10−12−
F29Median8.14 × 1027.39 × 1021.66 × 1031.57 × 1091.05 × 1091.23 × 1037.29 × 1022.00 × 102
Mean8.34 × 1024.24 × 1061.67 × 1031.57 × 1091.06 × 1092.65 × 1039.82 × 1022.00 × 102
Std5.15 × 1011.63 × 1071.50 × 1023.30 × 1082.06 × 1082.81 × 1037.50 × 1020.00 × 100
p-value-6.77 × 10−5+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+9.92 × 10−11+1.33 × 10−1=1.21 × 10−12−
F30Median8.56 × 1039.37 × 1039.02 × 1032.49 × 1071.83 × 1071.74 × 1058.19 × 1022.00 × 102
Mean8.57 × 1039.50 × 1039.20 × 1032.66 × 1071.92 × 1071.67 × 1058.17 × 1022.00 × 102
Std3.37 × 1027.30 × 1023.67 × 1021.14 × 1076.42 × 1062.95 × 1041.82 × 1020.00 × 100
p-value-1.07 × 10−7+7.77 × 10−9+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+1.21 × 10−12−
F23–30 | w/t/l | - | 3/4/1 | 4/3/1 | 8/0/0 | 8/0/0 | 8/0/0 | 5/1/2 | 1/0/7
w/t/l | - | 13/9/8 | 24/4/2 | 28/0/2 | 28/0/2 | 28/2/0 | 21/5/4 | 19/0/11
Rank | 2.33 | 2.93 | 4.52 | 6.80 | 6.27 | 5.70 | 4.30 | 3.15
Table 4. Comparison between ACSEDA and the compared state-of-the-art EDA variants on the 100-D CEC2014 benchmark problems. The bold results indicate that ACSEDA is significantly better than the compared methods.
F | Category | Quality | ACSEDA | EDA2 | EDAVERS | EDA/LS | EDA/LS-MS | TRA-EDA | BUMDA | MA-ES
F1Unimodal Problems Median1.80 × 10−96.82 × 10−131.25 × 1061.87 × 1061.39 × 1034.74 × 1088.33 × 1074.26 × 10−14
Mean1.90 × 10−97.24 × 10−131.22 × 1068.24 × 1094.41 × 1084.62 × 1088.48 × 1071.46 × 10−13
Std7.43 × 10−91.70 × 10−131.68 × 1058.95 × 1091.03 × 1096.07 × 1078.05 × 1065.46 × 10−13
p-value-2.99 × 10−11−3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+1.20 × 10−11−
F2Median3.84 × 10−79.36 × 10−119.84 × 1031.88 × 10−101.14 × 10−109.69 × 10104.27 × 1048.53 × 10−14
Mean3.94 × 10−79.67 × 10−111.12 × 1041.54 × 10−93.87 × 1019.74 × 10107.15 × 1047.48 × 10−14
Std2.02 × 10−72.78 × 10−115.42 × 1033.04 × 10−92.08 × 1023.46 × 1099.73 × 1041.55 × 10−14
p-value-3.02 × 10−11−3.02 × 10−11+3.02 × 10−11+5.57 × 10−10+3.02 × 10−11+3.02 × 10−11+1.48 × 10−11−
F3Median5.40 × 10−130.00 × 1006.11 × 1032.13 × 10−82.57 × 10−81.11 × 1052.39 × 1041.71 × 10−13
Mean7.09 × 10−130.00 × 1006.25 × 1039.02 × 10−14.18 × 1021.10 × 1052.34 × 1042.01 × 10−13
Std4.71 × 10−130.00 × 1001.48 × 1034.31 × 1001.13 × 1035.11 × 1033.80 × 1034.08 × 10−14
p-value-1.18 × 10−12−2.96 × 10−11+2.96 × 10−11+2.96 × 10−11+2.96 × 10−11+2.96 × 10−11+4.68 × 10−9−
F1–3w/t/l-0/0/33/0/02/0/13/0/03/0/03/0/00/0/3
F4Simple Multimodal ProblemsMedian1.85 × 1022.01 × 1021.43 × 1023.99 × 1001.37 × 10−71.27 × 1041.91 × 1021.71 × 10−13
Mean1.81 × 1021.92 × 1021.40 × 1021.04 × 1044.09 × 1031.27 × 1041.82 × 1021.20 × 100
Std2.93 × 1012.69 × 1017.29 × 1003.91 × 1041.31 × 1047.87 × 1022.44 × 1011.83 × 100
p-value-3.76 × 10−1=1.83 × 10−9−3.51 × 10−9+1.45 × 10−7+2.61 × 10−11+9.70 × 10−1=1.14 × 10−11−
F5Median2.13 × 1012.13 × 1012.13 × 1012.00 × 1012.00 × 1012.13 × 1012.13 × 1012.00 × 101
Mean2.13 × 1012.13 × 1012.13 × 1012.00 × 1012.00 × 1012.13 × 1012.13 × 1012.03 × 101
Std2.74 × 10−22.56 × 10−22.46 × 10−21.47 × 10−20.00 × 1002.28 × 10−23.62 × 10−26.25 × 10−1
p-value-1.70 × 10−2−1.86 × 10−1=3.16 × 10−12−1.21 × 10−12−1.69 × 10−1=5.44 × 10−1=9.19 × 10−6−
F6Median4.10 × 10−31.94 × 1001.01 × 1011.55 × 1026.88 × 1013.37 × 1014.93 × 1001.19 × 102
Mean3.83 × 10−12.43 × 1001.06 × 1011.56 × 1026.96 × 1013.39 × 1015.05 × 1001.20 × 102
Std5.70 × 10−11.85 × 1003.51 × 1005.33 × 1002.75 × 1003.54 × 1001.58 × 1007.18 × 100
p-value-9.51 × 10−6+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.34 × 10−11+3.02 × 10−11+
F7Median2.27 × 10−130.00 × 1001.02 × 10−121.90 × 10−81.72 × 10−21.10 × 1035.50 × 10−23.41 × 10−13
Mean2.31 × 10−130.00 × 1001.00 × 10−129.27 × 10−32.65 × 10−21.10 × 1037.16 × 10−24.93 × 10−4
Std1.19 × 10−130.00 × 1001.84 × 10−132.21 × 10−22.82 × 10−23.23 × 1015.06 × 10−21.84 × 10−3
p-value-8.27 × 10−13−1.98 × 10−11+2.22 × 10−11+2.25 × 10−11+2.25 × 10−11+2.25 × 10−11+4.97 × 10−6+
F8Median9.45 × 1002.39 × 1017.33 × 1021.56 × 1034.64 × 1022.71 × 1020.00 × 1006.15 × 102
Mean9.22 × 1002.37 × 1015.79 × 1021.48 × 1034.93 × 1022.75 × 1021.33 × 10−16.13 × 102
Std2.52 × 1004.46 × 1002.94 × 1023.00 × 1021.31 × 1021.62 × 1013.38 × 10−11.08 × 101
p-value-3.11 × 10−11+2.88 × 10−11+2.88 × 10−11+2.88 × 10−11+2.88 × 10−11+3.86 × 10−12−2.86 × 10−11+
F9Median7.96 × 1002.14 × 1017.43 × 1021.68 × 1035.92 × 1022.77 × 1022.98 × 1008.09 × 102
Mean8.06 × 1002.08 × 1016.50 × 1021.68 × 1036.27 × 1022.78 × 1023.26 × 1008.06 × 102
Std2.29 × 1004.72 × 1002.40 × 1022.62 × 1021.31 × 1021.83 × 1011.46 × 1001.52 × 101
p-value-4.34 × 10−11+2.78 × 10−11+2.78 × 10−11+2.78 × 10−11+2.78 × 10−11+2.22 × 10−9−2.77 × 10−11+
F10Median6.11 × 1021.63 × 1032.79 × 1041.78 × 1047.72 × 1036.51 × 1031.01 × 1021.64 × 104
Mean6.60 × 1021.79 × 1032.78 × 1042.20 × 1048.58 × 1036.55 × 1031.17 × 1021.65 × 104
Std3.85 × 1027.29 × 1027.77 × 1027.13 × 1032.27 × 1037.46 × 1021.16 × 1024.17 × 102
p-value-1.55 × 10−9+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+1.20 × 10−8−3.02 × 10−11+
F11Median1.43 × 1031.81 × 1032.89 × 1042.17 × 1047.99 × 1035.73 × 1031.25 × 1031.31 × 104
Mean1.44 × 1031.85 × 1032.87 × 1042.29 × 1048.34 × 1035.66 × 1031.23 × 1031.31 × 104
Std3.41 × 1024.59 × 1029.63 × 1026.21 × 1031.97 × 1034.87 × 1025.13 × 1024.55 × 102
p-value-1.24 × 10−3+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+6.57 × 10−2=3.02 × 10−11+
F12Median3.80 × 1003.97 × 1003.99 × 1001.04 × 1006.78 × 10−14.00 × 1003.93 × 1002.03 × 10−2
Mean3.85 × 1003.99 × 1003.97 × 1001.19 × 1007.28 × 10−13.93 × 1003.96 × 1002.03 × 10−2
Std2.16 × 10−12.21 × 10−12.00 × 10−15.49 × 10−12.81 × 10−13.04 × 10−12.10 × 10−16.72 × 10−3
p-value-3.15 × 10−2+3.92 × 10−2+3.02 × 10−11−3.02 × 10−11−5.75 × 10−2=7.98 × 10−2=3.02 × 10−11−
F13Median2.38 × 10−11.33 × 10−13.15 × 10−14.35 × 10−14.99 × 10−15.54 × 1005.77 × 10−25.49 × 10−1
Mean2.36 × 10−11.35 × 10−13.23 × 10−12.39 × 1008.72 × 10−15.54 × 1005.85 × 10−25.57 × 10−1
Std2.01 × 10−21.85 × 10−22.55 × 10−23.94 × 1001.17 × 1007.78 × 10−29.94 × 10−37.13 × 10−2
p-value-3.02 × 10−11−3.69 × 10−11+3.02 × 10−11+2.15 × 10−10+3.02 × 10−11+3.02 × 10−11−3.02 × 10−11+
F14Median2.74 × 10−14.00 × 10−13.59 × 10−11.44 × 1023.77 × 10−13.29 × 1024.56 × 10−12.92 × 10−1
Mean2.70 × 10−13.86 × 10−13.63 × 10−12.40 × 1024.26 × 10−13.27 × 1024.50 × 10−13.66 × 10−1
Std4.12 × 10−24.86 × 10−21.83 × 10−22.63 × 1021.50 × 10−17.41 × 1002.05 × 10−22.36 × 10−1
p-value-3.82 × 10−9+4.98 × 10−11+3.02 × 10−11+1.29 × 10−9+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+
F15Median1.04 × 1011.04 × 1016.55 × 1015.57 × 1021.93 × 1021.90 × 1051.20 × 1011.38 × 101
Mean1.06 × 1011.03 × 1016.01 × 1012.48 × 1072.48 × 1051.90 × 1051.22 × 1011.45 × 101
Std8.45 × 10−18.26 × 10−11.66 × 1017.66 × 1071.33 × 1063.06 × 1041.07 × 1002.62 × 100
p-value-3.33 × 10−1=1.29 × 10−9+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+1.29 × 10−6+1.69 × 10−9+
F16Median4.12 × 1014.60 × 1014.53 × 1014.65 × 1012.26 × 1014.33 × 1014.26 × 1014.59 × 101
Mean4.12 × 1014.59 × 1014.53 × 1014.64 × 1012.26 × 1014.33 × 1014.26 × 1014.58 × 101
Std5.74 × 10−12.25 × 10−13.31 × 10−19.07 × 10−14.77 × 10−14.56 × 10−15.66 × 10−11.42 × 10−1
p-value-3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11−3.69 × 10−11+1.07 × 10−9+3.02 × 10−11+
F4–16w/t/l-8/2/311/1/111/0/210/0/311/2/05/4/410/0/3
F17Hybrid Problems Median7.10 × 1028.16 × 1022.78 × 1051.29 × 1091.33 × 1079.57 × 1073.87 × 1075.48 × 103
Mean7.27 × 1027.72 × 1022.74 × 1051.32 × 1091.19 × 1089.54 × 1073.85 × 1075.59 × 103
Std1.93 × 1022.45 × 1025.52 × 1049.08 × 1081.52 × 1081.15 × 1073.48 × 1066.64 × 102
p-value-3.26 × 10−1=3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+
F18Median5.80 × 1012.62 × 1012.02 × 1025.64 × 1082.47 × 1023.04 × 1091.36 × 1023.63 × 102
Mean5.89 × 1012.60 × 1012.54 × 1021.97 × 10101.41 × 1093.01 × 1091.73 × 1023.63 × 102
Std1.37 × 1017.82 × 1001.87 × 1022.58 × 10103.71 × 1094.29 × 1081.43 × 1026.79 × 101
p-value-7.39 × 10−11−7.12 × 10−9+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+2.50 × 10−3+3.02 × 10−11+
F19Median9.09 × 1019.40 × 1019.44 × 1015.32 × 1012.61 × 1015.03 × 1026.97 × 1016.03 × 101
Mean9.10 × 1019.40 × 1019.44 × 1012.43 × 1032.57 × 1015.14 × 1027.21 × 1016.28 × 101
Std1.21 × 1001.38 × 1001.41 × 1004.59 × 1032.12 × 1006.67 × 1011.71 × 1011.41 × 101
p-value-2.67 × 10−9+3.16 × 10−10+1.86 × 10−1=3.02 × 10−11−3.02 × 10−11+9.51 × 10−6−8.48 × 10−9−
F20Median6.20 × 1009.31 × 1008.31 × 1032.43 × 1056.28 × 1042.68 × 1041.08 × 1056.01 × 102
Mean6.68 × 1009.56 × 1008.43 × 1032.33 × 1066.75 × 1042.60 × 1041.08 × 1056.36 × 102
Std2.34 × 1002.40 × 1002.00 × 1035.76 × 1063.67 × 1042.96 × 1031.89 × 1041.92 × 102
p-value-8.88 × 10−6+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+
F21Median2.23 × 1021.83 × 1021.59 × 1059.21 × 1085.22 × 1063.03 × 1062.42 × 1073.58 × 103
Mean2.51 × 1021.98 × 1021.51 × 1058.09 × 1082.02 × 1073.18 × 1062.40 × 1073.51 × 103
Std8.04 × 1016.34 × 1013.04 × 1045.30 × 1083.12 × 1071.09 × 1062.95 × 1065.75 × 102
p-value-1.08 × 10−2−3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+
F22Median3.89 × 1013.75 × 1013.54 × 1034.09 × 1031.86 × 1037.75 × 1012.58 × 1021.20 × 103
Mean4.62 × 1016.56 × 1013.49 × 1031.07 × 1052.43 × 1039.21 × 1012.90 × 1021.26 × 103
Std3.11 × 1015.21 × 1012.48 × 1022.59 × 1051.73 × 1034.21 × 1012.09 × 1023.12 × 102
p-value-6.00 × 10−1=3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+5.00 × 10−9+1.43 × 10−8+3.02 × 10−11+
F17–22w/t/l-2/2/26/0/05/1/05/0/16/0/05/0/15/0/1
F23Composition ProblemsMedian3.48 × 1023.48 × 1023.48 × 1026.57 × 1031.64 × 1034.43 × 1023.59 × 1022.00 × 102
Mean3.48 × 1023.48 × 1023.48 × 1026.72 × 1031.44 × 1034.41 × 1023.59 × 1022.00 × 102
Std0.00 × 1000.00 × 1000.00 × 1001.60 × 1036.17 × 1021.26 × 1012.91 × 1000.00 × 100
p-value-NaN=NaN=1.21 × 10−12+3.36 × 10−11+1.21 × 10−12+1.21 × 10−12+1.69 × 10−14−
F24Median3.72 × 1023.72 × 1023.62 × 1025.05 × 1023.39 × 1023.82 × 1023.82 × 1022.00 × 102
Mean3.73 × 1023.73 × 1023.61 × 1025.07 × 1024.29 × 1023.82 × 1023.82 × 1022.00 × 102
Std2.22 × 1001.90 × 1004.63 × 1002.74 × 1011.32 × 1021.41 × 1001.59 × 1000.00 × 100
p-value-7.73 × 10−1=4.08 × 10−11−3.02 × 10−11+1.91 × 10−1=3.02 × 10−11+3.02 × 10−11+1.21 × 10−12−
F25Median2.15 × 1022.16 × 1022.00 × 1021.35 × 1034.27 × 1022.42 × 1022.22 × 1022.00 × 102
Mean2.15 × 1022.16 × 1022.00 × 1021.37 × 1034.20 × 1022.42 × 1022.23 × 1022.00 × 102
Std3.90 × 10−18.62 × 10−10.00 × 1002.04 × 1025.42 × 1013.06 × 1009.36 × 1000.00 × 100
p-value-1.11 × 10−4+1.21 × 10−12−3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+2.39 × 10−4+1.21 × 10−12−
F26Median2.00 × 1022.00 × 1022.00 × 1021.07 × 1031.01 × 1022.00 × 1022.00 × 1022.00 × 102
Mean2.00 × 1022.00 × 1022.00 × 1028.64 × 1021.25 × 1022.00 × 1022.00 × 1022.00 × 102
Std2.50 × 10−43.97 × 10−47.67 × 10−34.77 × 1026.80 × 1012.31 × 10−34.88 × 10−20.00 × 100
p-value-3.16 × 10−1=1.96 × 10−12+9.65 × 10−8+1.02 × 10−6−9.17 × 10−6+2.59 × 10−11+9.91 × 10−13−
F27Median3.00 × 1024.65 × 1023.98 × 1025.45 × 1032.42 × 1031.78 × 1033.13 × 1022.00 × 102
Mean3.04 × 1024.96 × 1024.15 × 1025.50 × 1032.43 × 1031.78 × 1033.23 × 1022.00 × 102
Std7.91 × 1001.13 × 1025.12 × 1013.00 × 1021.17 × 1021.05 × 1022.48 × 1010.00 × 100
p-value-3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+7.60 × 10−7+1.21 × 10−12−
F28Median2.31 × 1032.79 × 1032.17 × 1032.99 × 1041.07 × 1043.99 × 1038.14 × 1022.00 × 102
Mean2.31 × 1032.74 × 1032.17 × 1032.98 × 1041.05 × 1044.45 × 1038.14 × 1022.00 × 102
Std1.10 × 1025.59 × 1025.72 × 1015.35 × 1031.89 × 1031.10 × 1031.85 × 1010.00 × 100
p-value-1.17 × 10−2+2.38 × 10−7−3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11−1.21 × 10−12−
F29Median1.13 × 1037.44 × 1021.79 × 1035.47 × 1099.15 × 1081.49 × 1053.32 × 1022.00 × 102
Mean1.26 × 1037.85 × 1021.80 × 1035.60 × 1099.62 × 1081.69 × 1054.54 × 1022.00 × 102
Std2.79 × 1021.55 × 1024.30 × 1011.23 × 1092.57 × 1081.26 × 1053.73 × 1020.00 × 100
p-value-3.16 × 10−10−3.82 × 10−10+3.02 × 10−11+3.02 × 10−11+3.02 × 10−11+5.00 × 10−9−1.21 × 10−12−
F30Median9.17 × 1034.37 × 1038.66 × 1033.02 × 1081.31 × 1077.14 × 1041.92 × 1032.00 × 102
Mean8.91 × 1034.71 × 1038.82 × 1033.11 × 1081.38 × 1078.01 × 1041.88 × 1032.00 × 102
Std1.03 × 1031.21 × 1036.54 × 1021.03 × 1085.63 × 1062.63 × 1044.86 × 1020.00 × 100
p-value-9.92 × 10−11−3.87 × 10−1=3.02 × 10−11+3.02 × 10−11+3.90 × 10−10+3.02 × 10−11−1.21 × 10−12−
F23–30w/t/l-3/3/23/2/38/0/06/1/18/0/05/0/30/0/8
w/t/l-13/7/1023/3/426/1/324/1/528/2/018/4/815/0/15
Rank2.703.374.587.075.275.834.033.15
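The "Rank" rows in these tables appear to be average ranks over the 30 problems: each algorithm is ranked per problem (1 = best) and the ranks are then averaged, in the style of a Friedman test. A minimal sketch of that computation on hypothetical data follows; the (problems × algorithms) error matrix is a placeholder, not the reported results.

```python
# Minimal sketch, assuming the Rank row averages per-problem ranks of
# the mean errors (1 = best); `mean_errors` is placeholder data with
# shape (30 problems, 8 algorithms).
import numpy as np
from scipy.stats import rankdata

mean_errors = np.random.default_rng(1).random((30, 8))             # placeholder
per_problem_ranks = np.apply_along_axis(rankdata, 1, mean_errors)  # ties averaged
print(np.round(per_problem_ranks.mean(axis=0), 2))                 # one avg rank per algorithm
```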
Table 5. Comparison among ACSEDA configurations with different settings of the selection ratio (sr) and of the covariance scaling parameter (cs) on the 50-D CEC 2014 benchmark problems. The best results in each part are highlighted in bold in this table.
Each configuration column is labeled sr/cs; the cs/sr row restates the resulting ratio between the two selection sizes.

| F | 0.1/0.1 | 0.1/0.2 | 0.1/0.3 | 0.1/0.4 | 0.1/0.5 | 0.1/0.6 | 0.1/0.7 | 0.1/0.8 | 0.1/0.9 | 0.1/1.0 | 0.2/0.2 | 0.2/0.4 | 0.2/0.6 | 0.2/0.8 | 0.2/1.0 | 0.3/0.3 | 0.3/0.6 | 0.3/0.9 | 0.3/1.0 | 0.4/0.4 | 0.4/0.8 | 0.4/1.0 | 0.5/0.5 | 0.5/1.0 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| cs/sr | 1.0 | 2.0 | 3.0 | 4.0 | 5.0 | 6.0 | 7.0 | 8.0 | 9.0 | 10.0 | 1.0 | 2.0 | 3.0 | 4.0 | 5.0 | 1.0 | 2.0 | 3.0 | 3.3 | 1.0 | 2.0 | 2.5 | 1.0 | 2.0 |
| F1 | 2.32 × 10^8 | 2.33 × 10^7 | 1.44 × 10^5 | 3.43 × 10^3 | 5.87 × 10^−14 | 5.74 × 10^−10 | 4.28 × 10^−6 | 2.57 × 10^−2 | 5.70 × 10^2 | 1.03 × 10^8 | 4.00 × 10^8 | 3.98 × 10^6 | 4.67 × 10^1 | 1.87 × 10^−2 | 9.68 × 10^7 | 6.93 × 10^8 | 9.64 × 10^4 | 2.52 × 10^3 | 1.02 × 10^8 | 1.05 × 10^9 | 3.46 × 10^3 | 1.26 × 10^8 | 1.44 × 10^9 | 1.47 × 10^8 |
| F2 | 3.39 × 10^10 | 1.41 × 10^10 | 5.17 × 10^8 | 9.41 × 10^1 | 9.35 × 10^−12 | 1.08 × 10^−7 | 7.97 × 10^−4 | 5.14 × 10^0 | 4.17 × 10^4 | 2.97 × 10^9 | 3.94 × 10^10 | 7.22 × 10^8 | 1.87 × 10^−2 | 4.16 × 10^0 | 2.34 × 10^9 | 4.71 × 10^10 | 3.71 × 10^8 | 2.33 × 10^4 | 2.15 × 10^9 | 5.63 × 10^10 | 9.39 × 10^2 | 2.09 × 10^9 | 6.48 × 10^10 | 2.01 × 10^9 |
| F3 | 3.47 × 10^4 | 1.30 × 10^4 | 4.03 × 10^2 | 0.00 × 10^0 | 0.00 × 10^0 | 2.80 × 10^−13 | 2.45 × 10^−9 | 1.77 × 10^−5 | 1.75 × 10^−1 | 1.13 × 10^5 | 3.72 × 10^4 | 5.53 × 10^3 | 1.71 × 10^−13 | 1.02 × 10^−5 | 1.13 × 10^5 | 4.33 × 10^4 | 2.81 × 10^2 | 8.26 × 10^−2 | 1.17 × 10^5 | 4.81 × 10^4 | 9.73 × 10^−6 | 1.17 × 10^5 | 5.47 × 10^4 | 1.17 × 10^5 |
| F4 | 4.69 × 10^3 | 1.46 × 10^3 | 1.36 × 10^2 | 9.02 × 10^1 | 8.96 × 10^1 | 9.26 × 10^1 | 9.42 × 10^1 | 9.65 × 10^1 | 9.73 × 10^1 | 3.38 × 10^2 | 5.58 × 10^3 | 6.39 × 10^2 | 9.31 × 10^1 | 9.53 × 10^1 | 3.31 × 10^2 | 6.81 × 10^3 | 1.33 × 10^2 | 9.55 × 10^1 | 3.52 × 10^2 | 8.82 × 10^3 | 9.32 × 10^0 | 3.76 × 10^2 | 1.06 × 10^4 | 4.20 × 10^2 |
| F5 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 |
| F6 | 1.91 × 10^1 | 8.04 × 10^0 | 1.36 × 10^0 | 1.65 × 10^−1 | 6.99 × 10^−2 | 1.89 × 10^−2 | 3.50 × 10^−2 | 3.66 × 10^−1 | 1.80 × 10^0 | 3.38 × 10^1 | 1.39 × 10^1 | 2.32 × 10^0 | 1.21 × 10^−1 | 1.46 × 10^−1 | 3.12 × 10^1 | 1.44 × 10^1 | 1.06 × 10^−1 | 1.51 × 10^0 | 3.06 × 10^1 | 1.58 × 10^1 | 1.50 × 10^−1 | 3.06 × 10^1 | 1.76 × 10^1 | 3.13 × 10^1 |
| F7 | 3.19 × 10^2 | 1.29 × 10^2 | 6.81 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 1.17 × 10^−13 | 1.03 × 10^−9 | 6.92 × 10^−6 | 6.71 × 10^−2 | 3.39 × 10^1 | 3.68 × 10^2 | 6.40 × 10^1 | 1.14 × 10^−13 | 4.14 × 10^−6 | 2.64 × 10^1 | 4.33 × 10^2 | 7.17 × 10^0 | 3.47 × 10^−2 | 2.46 × 10^1 | 5.12 × 10^2 | 4.55 × 10^−6 | 2.45 × 10^1 | 5.92 × 10^2 | 2.44 × 10^1 |
| F8 | 1.61 × 10^2 | 8.34 × 10^1 | 3.44 × 10^1 | 1.63 × 10^1 | 8.86 × 10^0 | 6.00 × 10^0 | 4.88 × 10^0 | 3.39 × 10^0 | 1.71 × 10^1 | 3.02 × 10^2 | 1.26 × 10^2 | 4.67 × 10^1 | 1.19 × 10^1 | 4.02 × 10^0 | 2.97 × 10^2 | 1.14 × 10^2 | 2.81 × 10^1 | 1.09 × 10^1 | 2.96 × 10^2 | 1.15 × 10^2 | 1.56 × 10^1 | 2.96 × 10^2 | 1.29 × 10^2 | 2.96 × 10^2 |
| F9 | 1.60 × 10^2 | 7.09 × 10^1 | 2.67 × 10^1 | 9.24 × 10^0 | 6.53 × 10^0 | 4.11 × 10^0 | 3.68 × 10^0 | 3.79 × 10^0 | 1.15 × 10^2 | 3.32 × 10^2 | 1.09 × 10^2 | 3.34 × 10^1 | 6.37 × 10^0 | 2.70 × 10^0 | 3.27 × 10^2 | 1.02 × 10^2 | 1.93 × 10^1 | 3.32 × 10^1 | 3.26 × 10^2 | 1.07 × 10^2 | 9.42 × 10^0 | 3.21 × 10^2 | 1.23 × 10^2 | 3.26 × 10^2 |
| F10 | 4.63 × 10^3 | 2.46 × 10^3 | 1.25 × 10^3 | 8.25 × 10^2 | 5.36 × 10^2 | 3.14 × 10^2 | 2.76 × 10^2 | 1.30 × 10^2 | 3.55 × 10^2 | 8.99 × 10^3 | 3.26 × 10^3 | 1.37 × 10^3 | 7.34 × 10^2 | 2.96 × 10^2 | 8.83 × 10^3 | 2.79 × 10^3 | 1.05 × 10^3 | 9.53 × 10^2 | 9.21 × 10^3 | 2.70 × 10^3 | 1.23 × 10^3 | 9.28 × 10^3 | 2.86 × 10^3 | 1.01 × 10^4 |
| F11 | 4.85 × 10^3 | 2.46 × 10^3 | 1.31 × 10^3 | 8.47 × 10^2 | 6.18 × 10^2 | 4.83 × 10^2 | 3.77 × 10^2 | 3.00 × 10^2 | 3.22 × 10^3 | 1.05 × 10^4 | 3.24 × 10^3 | 1.33 × 10^3 | 7.81 × 10^2 | 5.04 × 10^2 | 1.05 × 10^4 | 2.87 × 10^3 | 1.26 × 10^3 | 3.65 × 10^3 | 1.05 × 10^4 | 2.73 × 10^3 | 1.23 × 10^3 | 1.08 × 10^4 | 2.66 × 10^3 | 1.10 × 10^4 |
| F12 | 1.14 × 10^0 | 3.17 × 10^0 | 3.21 × 10^0 | 3.20 × 10^0 | 3.15 × 10^0 | 3.28 × 10^0 | 3.23 × 10^0 | 3.18 × 10^0 | 3.23 × 10^0 | 3.42 × 10^0 | 3.21 × 10^0 | 3.28 × 10^0 | 3.20 × 10^0 | 3.30 × 10^0 | 3.20 × 10^0 | 3.27 × 10^0 | 3.28 × 10^0 | 3.22 × 10^0 | 3.35 × 10^0 | 3.26 × 10^0 | 3.28 × 10^0 | 3.25 × 10^0 | 3.25 × 10^0 | 3.33 × 10^0 |
| F13 | 3.81 × 10^0 | 1.53 × 10^0 | 1.78 × 10^−1 | 1.43 × 10^−1 | 1.40 × 10^−1 | 1.30 × 10^−1 | 1.08 × 10^−1 | 8.13 × 10^−2 | 6.16 × 10^−2 | 4.44 × 10^−1 | 4.01 × 10^0 | 3.50 × 10^−1 | 1.33 × 10^−1 | 9.14 × 10^−2 | 3.78 × 10^−1 | 4.38 × 10^0 | 1.63 × 10^−1 | 6.22 × 10^−2 | 4.16 × 10^−1 | 4.74 × 10^0 | 1.22 × 10^−1 | 4.24 × 10^−1 | 5.13 × 10^0 | 4.27 × 10^−1 |
| F14 | 6.72 × 10^1 | 2.05 × 10^1 | 2.85 × 10^−1 | 3.37 × 10^−1 | 3.30 × 10^−1 | 3.15 × 10^−1 | 2.93 × 10^−1 | 2.61 × 10^−1 | 2.69 × 10^−1 | 8.46 × 10^0 | 8.23 × 10^1 | 8.36 × 10^−1 | 3.20 × 10^−1 | 2.98 × 10^−1 | 5.38 × 10^0 | 9.72 × 10^1 | 3.02 × 10^−1 | 3.06 × 10^−1 | 3.74 × 10^0 | 1.14 × 10^2 | 3.26 × 10^−1 | 4.16 × 10^0 | 1.37 × 10^2 | 3.64 × 10^0 |
| F15 | 5.07 × 10^3 | 5.31 × 10^1 | 5.65 × 10^0 | 4.75 × 10^0 | 4.94 × 10^0 | 4.77 × 10^0 | 7.47 × 10^0 | 1.38 × 10^1 | 2.54 × 10^1 | 8.19 × 10^1 | 7.77 × 10^3 | 6.69 × 10^0 | 5.02 × 10^0 | 1.30 × 10^1 | 6.93 × 10^1 | 1.20 × 10^4 | 5.41 × 10^0 | 2.34 × 10^1 | 6.93 × 10^1 | 2.26 × 10^4 | 1.27 × 10^1 | 7.29 × 10^1 | 4.24 × 10^4 | 8.86 × 10^1 |
| F16 | 1.72 × 10^1 | 1.58 × 10^1 | 1.84 × 10^1 | 1.91 × 10^1 | 2.03 × 10^1 | 2.02 × 10^1 | 2.02 × 10^1 | 2.01 × 10^1 | 2.02 × 10^1 | 2.03 × 10^1 | 1.74 × 10^1 | 1.97 × 10^1 | 2.07 × 10^1 | 2.08 × 10^1 | 2.05 × 10^1 | 2.00 × 10^1 | 2.13 × 10^1 | 2.12 × 10^1 | 2.10 × 10^1 | 2.12 × 10^1 | 2.20 × 10^1 | 2.16 × 10^1 | 2.24 × 10^1 | 2.24 × 10^1 |
| F17 | 1.18 × 10^7 | 6.54 × 10^2 | 3.78 × 10^2 | 2.87 × 10^2 | 2.51 × 10^2 | 2.42 × 10^2 | 2.30 × 10^2 | 3.50 × 10^2 | 7.20 × 10^2 | 5.30 × 10^6 | 2.58 × 10^7 | 3.90 × 10^2 | 3.20 × 10^2 | 4.51 × 10^2 | 6.01 × 10^6 | 4.60 × 10^7 | 4.82 × 10^2 | 9.18 × 10^2 | 6.59 × 10^6 | 8.43 × 10^7 | 6.54 × 10^2 | 7.59 × 10^6 | 1.23 × 10^8 | 9.22 × 10^6 |
| F18 | 2.44 × 10^8 | 1.30 × 10^2 | 9.18 × 10^1 | 4.83 × 10^1 | 2.76 × 10^1 | 2.44 × 10^1 | 2.42 × 10^1 | 2.63 × 10^1 | 1.63 × 10^2 | 1.72 × 10^8 | 8.31 × 10^8 | 8.80 × 10^1 | 3.95 × 10^1 | 4.11 × 10^1 | 1.40 × 10^8 | 1.77 × 10^9 | 7.36 × 10^1 | 1.66 × 10^2 | 1.32 × 10^8 | 3.13 × 10^9 | 7.62 × 10^1 | 1.27 × 10^8 | 4.56 × 10^9 | 1.22 × 10^8 |
| F19 | 8.07 × 10^1 | 3.42 × 10^1 | 2.69 × 10^1 | 1.45 × 10^1 | 1.23 × 10^1 | 1.19 × 10^1 | 1.22 × 10^1 | 1.23 × 10^1 | 1.31 × 10^1 | 8.38 × 10^1 | 9.25 × 10^1 | 3.61 × 10^1 | 1.44 × 10^1 | 1.23 × 10^1 | 8.69 × 10^1 | 1.34 × 10^2 | 3.42 × 10^1 | 1.30 × 10^1 | 8.90 × 10^1 | 2.11 × 10^2 | 1.62 × 10^1 | 9.02 × 10^1 | 3.22 × 10^2 | 9.37 × 10^1 |
| F20 | 7.88 × 10^2 | 9.08 × 10^1 | 3.64 × 10^1 | 9.82 × 10^0 | 5.40 × 10^0 | 3.19 × 10^0 | 2.71 × 10^0 | 4.25 × 10^0 | 3.90 × 10^1 | 2.17 × 10^4 | 1.21 × 10^3 | 4.07 × 10^1 | 7.49 × 10^0 | 5.36 × 10^0 | 2.97 × 10^4 | 1.69 × 10^3 | 2.34 × 10^1 | 3.38 × 10^1 | 3.55 × 10^4 | 2.51 × 10^3 | 2.10 × 10^1 | 3.88 × 10^4 | 3.84 × 10^3 | 3.82 × 10^4 |
| F21 | 1.22 × 10^4 | 9.05 × 10^2 | 4.28 × 10^2 | 3.15 × 10^2 | 2.61 × 10^2 | 2.62 × 10^2 | 2.84 × 10^2 | 3.24 × 10^2 | 5.40 × 10^2 | 3.39 × 10^6 | 1.45 × 10^4 | 4.69 × 10^2 | 3.00 × 10^2 | 3.99 × 10^2 | 4.06 × 10^6 | 2.85 × 10^4 | 4.04 × 10^2 | 7.46 × 10^2 | 4.15 × 10^6 | 5.22 × 10^4 | 5.38 × 10^2 | 4.96 × 10^6 | 1.19 × 10^5 | 5.55 × 10^6 |
| F22 | 1.48 × 10^2 | 6.22 × 10^1 | 5.03 × 10^1 | 3.77 × 10^1 | 3.87 × 10^1 | 5.43 × 10^1 | 5.87 × 10^1 | 6.81 × 10^1 | 8.39 × 10^1 | 8.07 × 10^2 | 1.43 × 10^2 | 7.32 × 10^1 | 6.23 × 10^1 | 7.91 × 10^1 | 8.02 × 10^2 | 3.79 × 10^2 | 4.38 × 10^1 | 1.20 × 10^2 | 8.64 × 10^2 | 1.12 × 10^3 | 8.50 × 10^1 | 8.52 × 10^2 | 3.10 × 10^3 | 8.66 × 10^2 |
| F23 | 4.62 × 10^2 | 3.93 × 10^2 | 3.52 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 | 3.94 × 10^2 | 4.53 × 10^2 | 3.75 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 | 4.02 × 10^2 | 4.62 × 10^2 | 3.52 × 10^2 | 3.44 × 10^2 | 4.14 × 10^2 | 4.94 × 10^2 | 3.44 × 10^2 | 4.24 × 10^2 | 5.24 × 10^2 | 4.44 × 10^2 |
| F24 | 2.80 × 10^2 | 2.74 × 10^2 | 2.73 × 10^2 | 2.72 × 10^2 | 2.72 × 10^2 | 2.70 × 10^2 | 2.70 × 10^2 | 2.69 × 10^2 | 2.70 × 10^2 | 3.27 × 10^2 | 2.76 × 10^2 | 2.73 × 10^2 | 2.73 × 10^2 | 2.71 × 10^2 | 3.22 × 10^2 | 2.75 × 10^2 | 2.73 × 10^2 | 2.73 × 10^2 | 3.21 × 10^2 | 2.74 × 10^2 | 2.74 × 10^2 | 3.21 × 10^2 | 2.72 × 10^2 | 3.22 × 10^2 |
| F25 | 2.22 × 10^2 | 2.15 × 10^2 | 2.09 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 | 2.36 × 10^2 | 2.20 × 10^2 | 2.13 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 | 2.47 × 10^2 | 2.21 × 10^2 | 2.09 × 10^2 | 2.05 × 10^2 | 2.51 × 10^2 | 2.20 × 10^2 | 2.05 × 10^2 | 2.51 × 10^2 | 2.19 × 10^2 | 2.51 × 10^2 |
| F26 | 1.27 × 10^2 | 1.11 × 10^2 | 1.04 × 10^2 | 1.01 × 10^2 | 1.03 × 10^2 | 1.00 × 10^2 | 1.00 × 10^2 | 1.00 × 10^2 | 1.00 × 10^2 | 1.62 × 10^2 | 1.33 × 10^2 | 1.17 × 10^2 | 1.05 × 10^2 | 1.05 × 10^2 | 1.85 × 10^2 | 1.38 × 10^2 | 1.19 × 10^2 | 1.06 × 10^2 | 1.91 × 10^2 | 1.64 × 10^2 | 1.08 × 10^2 | 1.99 × 10^2 | 1.71 × 10^2 | 1.80 × 10^2 |
| F27 | 1.05 × 10^3 | 6.94 × 10^2 | 4.08 × 10^2 | 3.37 × 10^2 | 3.29 × 10^2 | 3.36 × 10^2 | 3.30 × 10^2 | 3.34 × 10^2 | 3.81 × 10^2 | 1.08 × 10^3 | 9.37 × 10^2 | 5.39 × 10^2 | 3.22 × 10^2 | 3.29 × 10^2 | 1.03 × 10^3 | 9.48 × 10^2 | 3.93 × 10^2 | 3.62 × 10^2 | 1.02 × 10^3 | 1.02 × 10^3 | 3.24 × 10^2 | 1.02 × 10^3 | 1.10 × 10^3 | 1.04 × 10^3 |
| F28 | 2.40 × 10^3 | 1.73 × 10^3 | 1.51 × 10^3 | 1.16 × 10^3 | 1.17 × 10^3 | 1.18 × 10^3 | 1.21 × 10^3 | 1.23 × 10^3 | 1.28 × 10^3 | 1.47 × 10^3 | 2.34 × 10^3 | 1.47 × 10^3 | 1.16 × 10^3 | 1.25 × 10^3 | 1.51 × 10^3 | 2.43 × 10^3 | 1.48 × 10^3 | 1.30 × 10^3 | 1.56 × 10^3 | 2.15 × 10^3 | 1.23 × 10^3 | 1.56 × 10^3 | 2.33 × 10^3 | 1.56 × 10^3 |
| F29 | 1.24 × 10^4 | 2.66 × 10^3 | 1.97 × 10^3 | 1.20 × 10^3 | 8.88 × 10^2 | 9.91 × 10^2 | 1.29 × 10^3 | 1.71 × 10^3 | 4.11 × 10^3 | 1.07 × 10^6 | 1.16 × 10^4 | 1.64 × 10^3 | 9.69 × 10^2 | 1.62 × 10^3 | 1.70 × 10^6 | 3.79 × 10^4 | 1.59 × 10^3 | 4.49 × 10^3 | 5.00 × 10^6 | 4.45 × 10^4 | 1.24 × 10^3 | 4.60 × 10^6 | 1.15 × 10^5 | 7.17 × 10^6 |
| F30 | 1.68 × 10^5 | 3.69 × 10^4 | 1.66 × 10^4 | 1.06 × 10^4 | 9.03 × 10^3 | 8.77 × 10^3 | 9.08 × 10^3 | 9.47 × 10^3 | 9.29 × 10^3 | 3.70 × 10^4 | 2.36 × 10^5 | 2.54 × 10^4 | 9.83 × 10^3 | 8.95 × 10^3 | 5.48 × 10^4 | 3.11 × 10^5 | 1.73 × 10^4 | 9.12 × 10^3 | 5.35 × 10^4 | 5.07 × 10^5 | 9.99 × 10^3 | 6.20 × 10^4 | 6.99 × 10^5 | 7.81 × 10^4 |
| Rank | 8.83 | 7.43 | 6.03 | 4.53 | 3.25 | 3.25 | 3.22 | 3.65 | 5.73 | 9.07 | 4.20 | 3.00 | 1.62 | 1.98 | 4.20 | 3.30 | 1.70 | 1.60 | 3.40 | 2.43 | 1.20 | 2.37 | 1.57 | 1.43 |
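To make the two tuning parameters concrete: in each generation ACSEDA estimates the Gaussian mean from the best sr·NP individuals, but estimates the covariance from the larger set of the best cs·NP individuals, so the cs/sr row above states how many times wider the covariance-estimation set is than the mean-estimation set. The sketch below illustrates one such sampling step under stated assumptions; the function name, the population-size handling, and the small regularization term are ours, not the paper's implementation.

```python
# Illustrative sketch of one Gaussian-EDA generation with separate
# selection sizes: mean from the top sr*NP individuals, covariance
# from the enlarged top cs*NP set. Regularization and RNG handling
# are assumptions, not ACSEDA's exact implementation.
import numpy as np

def sample_next_population(pop, fitness, sr=0.35, cs=0.7, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    NP, D = pop.shape
    order = np.argsort(fitness)                   # minimization: best first
    mean = pop[order[: max(1, int(sr * NP))]].mean(axis=0)
    wide = pop[order[: max(2, int(cs * NP))]]     # enlarged covariance set
    cov = np.cov(wide, rowvar=False) + 1e-12 * np.eye(D)
    return rng.multivariate_normal(mean, cov, size=NP)
```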
Table 6. Comparison between ACSEDA with and without the adaptive covariance scaling method on the 50-D CEC 2014 benchmark problems. The best results in each part are highlighted in bold in this table.
Each configuration column is labeled sr/cs; "adaptive" denotes the adaptive covariance scaling applied with that selection ratio.

| F | 0.1/0.1 | 0.1/0.2 | 0.1/0.3 | 0.1/0.4 | 0.1/0.5 | 0.1/0.6 | 0.1/0.7 | 0.1/0.8 | 0.1/0.9 | 0.1/1.0 | 0.1/adaptive | 0.2/0.2 | 0.2/0.4 | 0.2/0.6 | 0.2/0.8 | 0.2/1.0 | 0.2/adaptive |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | 2.32 × 10^8 | 2.33 × 10^7 | 1.44 × 10^5 | 3.43 × 10^3 | 5.87 × 10^−14 | 5.74 × 10^−10 | 4.28 × 10^−6 | 2.57 × 10^−2 | 5.70 × 10^2 | 1.03 × 10^8 | 1.47 × 10^−14 | 4.00 × 10^8 | 3.98 × 10^6 | 4.67 × 10^1 | 1.87 × 10^−2 | 9.68 × 10^7 | 2.53 × 10^−13 |
| F2 | 3.39 × 10^8 | 1.41 × 10^10 | 5.17 × 10^8 | 9.41 × 10^1 | 9.35 × 10^−12 | 1.08 × 10^−7 | 7.97 × 10^−4 | 5.14 × 10^0 | 4.17 × 10^4 | 2.97 × 10^9 | 2.04 × 10^−12 | 3.94 × 10^10 | 7.22 × 10^9 | 1.87 × 10^−2 | 4.16 × 10^0 | 2.34 × 10^9 | 3.83 × 10^−11 |
| F3 | 3.47 × 10^4 | 1.30 × 10^4 | 4.03 × 10^2 | 0.00 × 10^0 | 0.00 × 10^0 | 2.80 × 10^−13 | 2.45 × 10^−9 | 1.77 × 10^−5 | 1.75 × 10^−1 | 1.13 × 10^5 | 0.00 × 10^0 | 3.72 × 10^4 | 5.53 × 10^3 | 1.71 × 10^−14 | 1.02 × 10^−5 | 1.13 × 10^5 | 0.00 × 10^0 |
| F4 | 4.69 × 10^3 | 1.46 × 10^3 | 1.36 × 10^2 | 9.02 × 10^1 | 8.96 × 10^1 | 9.26 × 10^1 | 9.42 × 10^1 | 9.65 × 10^1 | 9.73 × 10^1 | 3.38 × 10^2 | 9.31 × 10^1 | 5.58 × 10^3 | 6.39 × 10^2 | 9.31 × 10^1 | 9.53 × 10^1 | 3.31 × 10^2 | 9.35 × 10^1 |
| F5 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 |
| F6 | 1.91 × 10^1 | 8.04 × 10^0 | 1.36 × 10^0 | 1.65 × 10^−1 | 6.99 × 10^−2 | 1.89 × 10^−2 | 3.50 × 10^−2 | 3.66 × 10^−1 | 1.80 × 10^0 | 3.38 × 10^1 | 1.74 × 10^−2 | 1.39 × 10^1 | 2.32 × 10^0 | 1.21 × 10^−1 | 1.46 × 10^−1 | 3.12 × 10^1 | 2.20 × 10^−4 |
| F7 | 3.19 × 10^2 | 1.29 × 10^2 | 6.81 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 1.17 × 10^−13 | 1.03 × 10^−9 | 6.92 × 10^−6 | 6.71 × 10^−2 | 3.39 × 10^1 | 0.00 × 10^0 | 3.68 × 10^2 | 6.40 × 10^1 | 1.14 × 10^−13 | 4.14 × 10^−6 | 2.64 × 10^1 | 0.00 × 10^0 |
| F8 | 1.61 × 10^2 | 8.34 × 10^1 | 3.44 × 10^1 | 1.63 × 10^1 | 8.86 × 10^0 | 6.00 × 10^0 | 4.88 × 10^0 | 3.39 × 10^0 | 1.71 × 10^1 | 3.02 × 10^2 | 3.05 × 10^0 | 1.26 × 10^2 | 4.67 × 10^1 | 1.19 × 10^1 | 4.02 × 10^0 | 2.97 × 10^2 | 2.35 × 10^0 |
| F9 | 1.60 × 10^2 | 7.09 × 10^1 | 2.67 × 10^1 | 9.24 × 10^0 | 6.53 × 10^0 | 4.11 × 10^0 | 3.68 × 10^0 | 3.79 × 10^0 | 1.15 × 10^2 | 3.32 × 10^2 | 2.06 × 10^0 | 1.09 × 10^2 | 3.34 × 10^1 | 6.37 × 10^0 | 2.70 × 10^0 | 3.27 × 10^2 | 1.86 × 10^0 |
| F10 | 4.63 × 10^3 | 2.46 × 10^3 | 1.25 × 10^3 | 8.25 × 10^2 | 5.36 × 10^2 | 3.14 × 10^2 | 2.76 × 10^2 | 1.30 × 10^2 | 3.55 × 10^2 | 8.99 × 10^3 | 7.00 × 10^1 | 3.26 × 10^3 | 1.37 × 10^3 | 7.34 × 10^2 | 2.96 × 10^2 | 8.83 × 10^3 | 1.12 × 10^2 |
| F11 | 4.85 × 10^3 | 2.46 × 10^3 | 1.31 × 10^3 | 8.47 × 10^2 | 6.18 × 10^2 | 4.83 × 10^2 | 3.77 × 10^2 | 3.00 × 10^2 | 3.22 × 10^3 | 1.05 × 10^4 | 1.22 × 10^2 | 3.24 × 10^3 | 1.33 × 10^3 | 7.81 × 10^2 | 5.04 × 10^2 | 1.05 × 10^4 | 1.72 × 10^2 |
| F12 | 1.14 × 10^0 | 3.17 × 10^0 | 3.21 × 10^0 | 3.20 × 10^0 | 3.15 × 10^0 | 3.28 × 10^0 | 3.23 × 10^0 | 3.18 × 10^0 | 3.23 × 10^0 | 3.42 × 10^0 | 3.27 × 10^0 | 3.21 × 10^0 | 3.28 × 10^0 | 3.20 × 10^0 | 3.30 × 10^0 | 3.20 × 10^0 | 3.16 × 10^0 |
| F13 | 3.81 × 10^0 | 1.53 × 10^0 | 1.78 × 10^−1 | 1.43 × 10^−1 | 1.40 × 10^−1 | 1.30 × 10^−1 | 1.08 × 10^−1 | 8.13 × 10^−2 | 6.16 × 10^−2 | 4.44 × 10^−1 | 1.47 × 10^−1 | 4.01 × 10^0 | 3.50 × 10^−1 | 1.33 × 10^−1 | 9.14 × 10^−2 | 3.78 × 10^−1 | 1.67 × 10^−1 |
| F14 | 6.72 × 10^1 | 2.05 × 10^1 | 2.85 × 10^−1 | 3.37 × 10^−1 | 3.30 × 10^−1 | 3.15 × 10^−1 | 2.93 × 10^−1 | 2.61 × 10^−1 | 2.69 × 10^−1 | 8.46 × 10^0 | 2.81 × 10^−1 | 8.23 × 10^1 | 8.36 × 10^−1 | 3.20 × 10^−1 | 2.98 × 10^−1 | 5.38 × 10^0 | 2.96 × 10^−1 |
| F15 | 5.07 × 10^3 | 5.31 × 10^1 | 5.65 × 10^0 | 4.75 × 10^0 | 4.94 × 10^0 | 4.77 × 10^0 | 7.47 × 10^0 | 1.38 × 10^1 | 2.54 × 10^1 | 8.19 × 10^1 | 4.78 × 10^0 | 7.77 × 10^3 | 6.69 × 10^0 | 5.02 × 10^0 | 1.30 × 10^1 | 6.93 × 10^1 | 5.04 × 10^0 |
| F16 | 1.72 × 10^1 | 1.58 × 10^1 | 1.84 × 10^1 | 1.91 × 10^1 | 2.03 × 10^1 | 2.02 × 10^1 | 2.02 × 10^1 | 2.01 × 10^1 | 2.02 × 10^1 | 2.03 × 10^1 | 1.80 × 10^1 | 1.74 × 10^1 | 1.97 × 10^1 | 2.07 × 10^1 | 2.08 × 10^1 | 2.05 × 10^1 | 1.86 × 10^1 |
| F17 | 1.18 × 10^7 | 6.54 × 10^2 | 3.78 × 10^2 | 2.87 × 10^2 | 2.51 × 10^2 | 2.42 × 10^2 | 2.30 × 10^2 | 3.50 × 10^2 | 7.20 × 10^2 | 5.30 × 10^6 | 1.28 × 10^2 | 2.58 × 10^7 | 3.90 × 10^2 | 3.20 × 10^2 | 4.51 × 10^2 | 6.01 × 10^6 | 1.40 × 10^2 |
| F18 | 2.44 × 10^8 | 1.30 × 10^2 | 9.18 × 10^1 | 4.83 × 10^1 | 2.76 × 10^1 | 2.44 × 10^1 | 2.42 × 10^1 | 2.63 × 10^1 | 1.63 × 10^2 | 1.72 × 10^8 | 7.69 × 10^0 | 8.31 × 10^8 | 8.80 × 10^1 | 3.95 × 10^1 | 4.11 × 10^1 | 1.40 × 10^8 | 1.40 × 10^1 |
| F19 | 8.07 × 10^1 | 3.42 × 10^1 | 2.69 × 10^1 | 1.45 × 10^1 | 1.23 × 10^1 | 1.19 × 10^1 | 1.22 × 10^1 | 1.23 × 10^1 | 1.31 × 10^1 | 8.38 × 10^1 | 1.11 × 10^1 | 9.25 × 10^1 | 3.61 × 10^1 | 1.44 × 10^1 | 1.23 × 10^1 | 8.69 × 10^1 | 1.12 × 10^1 |
| F20 | 7.88 × 10^2 | 9.08 × 10^1 | 3.64 × 10^1 | 9.82 × 10^0 | 5.40 × 10^0 | 3.19 × 10^0 | 2.71 × 10^0 | 4.25 × 10^0 | 3.90 × 10^1 | 2.17 × 10^4 | 2.04 × 10^0 | 1.21 × 10^3 | 4.07 × 10^1 | 7.49 × 10^0 | 5.36 × 10^0 | 2.97 × 10^4 | 2.17 × 10^0 |
| F21 | 1.22 × 10^4 | 9.05 × 10^2 | 4.28 × 10^2 | 3.15 × 10^2 | 2.61 × 10^2 | 2.62 × 10^2 | 2.84 × 10^2 | 3.24 × 10^2 | 5.40 × 10^2 | 3.39 × 10^6 | 2.12 × 10^2 | 1.45 × 10^4 | 4.69 × 10^2 | 3.00 × 10^2 | 3.99 × 10^2 | 4.06 × 10^6 | 2.13 × 10^2 |
| F22 | 1.48 × 10^2 | 6.22 × 10^1 | 5.03 × 10^1 | 3.77 × 10^1 | 3.87 × 10^1 | 5.43 × 10^1 | 5.87 × 10^1 | 6.81 × 10^1 | 8.39 × 10^1 | 8.07 × 10^2 | 3.70 × 10^1 | 1.43 × 10^2 | 7.32 × 10^1 | 6.23 × 10^1 | 7.91 × 10^1 | 8.02 × 10^2 | 3.70 × 10^1 |
| F23 | 4.62 × 10^2 | 3.93 × 10^2 | 3.52 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 | 3.94 × 10^2 | 3.44 × 10^2 | 4.53 × 10^2 | 3.75 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 | 4.02 × 10^2 | 3.44 × 10^2 |
| F24 | 2.80 × 10^2 | 2.74 × 10^2 | 2.73 × 10^2 | 2.72 × 10^2 | 2.72 × 10^2 | 2.70 × 10^2 | 2.70 × 10^2 | 2.69 × 10^2 | 2.70 × 10^2 | 3.27 × 10^2 | 2.68 × 10^2 | 2.76 × 10^2 | 2.73 × 10^2 | 2.73 × 10^2 | 2.71 × 10^2 | 3.22 × 10^2 | 2.69 × 10^2 |
| F25 | 2.22 × 10^2 | 2.15 × 10^2 | 2.09 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 | 2.36 × 10^2 | 2.05 × 10^2 | 2.20 × 10^2 | 2.13 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 | 2.47 × 10^2 | 2.05 × 10^2 |
| F26 | 1.27 × 10^2 | 1.11 × 10^2 | 1.04 × 10^2 | 1.01 × 10^2 | 1.03 × 10^2 | 1.00 × 10^2 | 1.00 × 10^2 | 1.00 × 10^2 | 1.00 × 10^2 | 1.62 × 10^2 | 1.00 × 10^2 | 1.33 × 10^2 | 1.17 × 10^2 | 1.05 × 10^2 | 1.05 × 10^2 | 1.85 × 10^2 | 1.00 × 10^2 |
| F27 | 1.05 × 10^3 | 6.94 × 10^2 | 4.08 × 10^2 | 3.37 × 10^2 | 3.29 × 10^2 | 3.36 × 10^2 | 3.30 × 10^2 | 3.34 × 10^2 | 3.81 × 10^2 | 1.08 × 10^3 | 3.31 × 10^2 | 9.37 × 10^2 | 5.39 × 10^2 | 3.22 × 10^2 | 3.29 × 10^2 | 1.03 × 10^3 | 3.21 × 10^2 |
| F28 | 2.40 × 10^3 | 1.73 × 10^3 | 1.51 × 10^3 | 1.16 × 10^3 | 1.17 × 10^3 | 1.18 × 10^3 | 1.21 × 10^3 | 1.23 × 10^3 | 1.28 × 10^3 | 1.47 × 10^3 | 1.18 × 10^3 | 2.34 × 10^3 | 1.47 × 10^3 | 1.16 × 10^3 | 1.25 × 10^3 | 1.51 × 10^3 | 1.14 × 10^3 |
| F29 | 1.24 × 10^4 | 2.66 × 10^3 | 1.97 × 10^3 | 1.20 × 10^3 | 8.88 × 10^2 | 9.91 × 10^2 | 1.29 × 10^3 | 1.71 × 10^3 | 4.11 × 10^3 | 1.07 × 10^6 | 8.77 × 10^2 | 1.16 × 10^4 | 1.64 × 10^3 | 9.69 × 10^2 | 1.62 × 10^3 | 1.70 × 10^6 | 9.02 × 10^2 |
| F30 | 1.68 × 10^5 | 3.69 × 10^4 | 1.66 × 10^4 | 1.06 × 10^4 | 9.03 × 10^3 | 8.77 × 10^3 | 9.08 × 10^3 | 9.47 × 10^3 | 9.29 × 10^3 | 3.70 × 10^4 | 8.60 × 10^3 | 2.36 × 10^5 | 2.54 × 10^4 | 9.83 × 10^3 | 8.95 × 10^3 | 5.48 × 10^4 | 8.51 × 10^3 |
| Rank | 9.70 | 8.40 | 7.13 | 4.98 | 4.05 | 3.95 | 4.08 | 4.55 | 6.55 | 10.18 | 2.42 | 5.22 | 3.97 | 2.47 | 2.83 | 5.20 | 1.32 |
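The adaptation rule behind the "Adaptive cs" columns is specified in the algorithm description earlier in the paper and is not repeated in the table. Purely to illustrate the general shape of such a mechanism, the hypothetical sketch below shrinks cs from a wide early value toward a narrow late one as the evaluation budget is consumed; the linear form and the bounds cs_max and cs_min are assumptions, not ACSEDA's actual formula.

```python
# Hypothetical illustration only: a linear decay of the covariance
# scaling parameter over the evaluation budget. The linear form and
# the bounds are assumptions, not ACSEDA's adaptive rule.
def adaptive_cs(used_evals: int, max_evals: int,
                cs_max: float = 1.0, cs_min: float = 0.3) -> float:
    progress = min(used_evals / max_evals, 1.0)  # 0 at start, 1 at end
    return cs_max - (cs_max - cs_min) * progress
```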
Table 7. Comparison between ACSEDA with and without the adaptive promising individual selection method on the 50-D CEC 2014 benchmark problems. The best results are highlighted in bold in this table.
All configurations use cs = 0.6.

| F | sr = 0.05 | sr = 0.10 | sr = 0.15 | sr = 0.20 | sr = 0.25 | sr = 0.30 | sr = 0.35 | Adaptive sr |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | 1.61 × 10^−14 | 1.42 × 10^−14 | 1.28 × 10^−14 | 7.55 × 10^1 | 2.52 × 10^4 | 4.67 × 10^5 | 1.75 × 10^7 | 1.42 × 10^−14 |
| F2 | 2.91 × 10^−12 | 1.08 × 10^−12 | 9.56 × 10^−13 | 2.02 × 10^0 | 1.65 × 10^7 | 2.91 × 10^9 | 1.20 × 10^10 | 2.22 × 10^−12 |
| F3 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 2.28 × 10^0 | 1.93 × 10^3 | 8.44 × 10^3 | 0.00 × 10^0 |
| F4 | 9.07 × 10^1 | 8.88 × 10^1 | 9.14 × 10^1 | 9.26 × 10^1 | 1.04 × 10^2 | 3.45 × 10^2 | 1.14 × 10^3 | 8.67 × 10^1 |
| F5 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 |
| F6 | 1.23 × 10^−4 | 7.35 × 10^−5 | 5.20 × 10^−2 | 6.25 × 10^−5 | 1.74 × 10^−2 | 1.64 × 10^−2 | 2.24 × 10^−1 | 9.75 × 10^−5 |
| F7 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 6.22 × 10^−1 | 3.07 × 10^1 | 1.13 × 10^2 | 0.00 × 10^0 |
| F8 | 7.36 × 10^0 | 6.80 × 10^0 | 8.66 × 10^0 | 1.43 × 10^1 | 1.95 × 10^1 | 2.63 × 10^1 | 3.48 × 10^1 | 9.78 × 10^0 |
| F9 | 6.20 × 10^0 | 4.91 × 10^0 | 6.10 × 10^0 | 7.16 × 10^0 | 1.20 × 10^1 | 1.71 × 10^1 | 2.40 × 10^1 | 6.70 × 10^0 |
| F10 | 2.52 × 10^2 | 3.05 × 10^2 | 4.49 × 10^2 | 5.11 × 10^2 | 7.11 × 10^2 | 8.08 × 10^2 | 7.93 × 10^2 | 3.22 × 10^2 |
| F11 | 2.30 × 10^2 | 3.36 × 10^2 | 4.32 × 10^2 | 4.96 × 10^2 | 5.80 × 10^2 | 7.30 × 10^2 | 6.84 × 10^2 | 3.57 × 10^2 |
| F12 | 3.27 × 10^0 | 3.27 × 10^0 | 3.21 × 10^0 | 3.20 × 10^0 | 3.20 × 10^0 | 3.30 × 10^0 | 3.31 × 10^0 | 3.25 × 10^0 |
| F13 | 1.39 × 10^−1 | 1.55 × 10^−1 | 1.60 × 10^−1 | 1.57 × 10^−1 | 1.53 × 10^−1 | 2.45 × 10^−1 | 8.50 × 10^−1 | 1.51 × 10^−1 |
| F14 | 2.85 × 10^−1 | 2.95 × 10^−1 | 3.17 × 10^−1 | 3.11 × 10^−1 | 3.16 × 10^−1 | 2.77 × 10^−1 | 1.25 × 10^1 | 3.09 × 10^−1 |
| F15 | 4.82 × 10^0 | 4.78 × 10^0 | 4.77 × 10^0 | 5.04 × 10^0 | 5.07 × 10^0 | 5.10 × 10^0 | 8.61 × 10^0 | 4.77 × 10^0 |
| F16 | 1.86 × 10^1 | 1.85 × 10^1 | 1.84 × 10^1 | 1.87 × 10^1 | 1.90 × 10^1 | 1.91 × 10^1 | 1.93 × 10^1 | 1.87 × 10^1 |
| F17 | 1.80 × 10^2 | 1.75 × 10^2 | 1.62 × 10^2 | 1.97 × 10^2 | 1.70 × 10^2 | 2.37 × 10^2 | 2.28 × 10^2 | 1.44 × 10^2 |
| F18 | 1.35 × 10^1 | 1.85 × 10^1 | 2.08 × 10^1 | 2.98 × 10^1 | 3.97 × 10^1 | 5.17 × 10^1 | 5.58 × 10^1 | 1.61 × 10^1 |
| F19 | 1.17 × 10^1 | 1.15 × 10^1 | 1.22 × 10^1 | 1.33 × 10^1 | 2.17 × 10^1 | 2.81 × 10^1 | 2.23 × 10^1 | 1.17 × 10^1 |
| F20 | 2.58 × 10^0 | 2.71 × 10^0 | 3.76 × 10^0 | 6.29 × 10^0 | 9.09 × 10^0 | 1.50 × 10^1 | 2.34 × 10^1 | 3.14 × 10^0 |
| F21 | 2.29 × 10^2 | 2.11 × 10^2 | 2.28 × 10^2 | 2.47 × 10^2 | 2.72 × 10^2 | 3.10 × 10^2 | 3.15 × 10^2 | 1.98 × 10^2 |
| F22 | 4.16 × 10^1 | 3.13 × 10^1 | 4.45 × 10^1 | 3.27 × 10^1 | 4.57 × 10^1 | 5.78 × 10^1 | 5.63 × 10^1 | 3.18 × 10^1 |
| F23 | 3.44 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 | 3.50 × 10^2 | 3.64 × 10^2 | 3.80 × 10^2 | 3.44 × 10^2 |
| F24 | 2.69 × 10^2 | 2.71 × 10^2 | 2.72 × 10^2 | 2.72 × 10^2 | 2.72 × 10^2 | 2.72 × 10^2 | 2.71 × 10^2 | 2.70 × 10^2 |
| F25 | 2.05 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 | 2.08 × 10^2 | 2.10 × 10^2 | 2.12 × 10^2 | 2.05 × 10^2 |
| F26 | 1.00 × 10^2 | 1.00 × 10^2 | 1.00 × 10^2 | 1.02 × 10^2 | 1.04 × 10^2 | 1.06 × 10^2 | 1.09 × 10^2 | 1.00 × 10^2 |
| F27 | 3.33 × 10^2 | 3.23 × 10^2 | 3.11 × 10^2 | 3.12 × 10^2 | 3.17 × 10^2 | 4.09 × 10^2 | 5.12 × 10^2 | 3.13 × 10^2 |
| F28 | 1.15 × 10^3 | 1.13 × 10^3 | 1.15 × 10^3 | 1.20 × 10^3 | 1.46 × 10^3 | 1.40 × 10^3 | 1.40 × 10^3 | 1.11 × 10^3 |
| F29 | 8.43 × 10^2 | 8.32 × 10^2 | 8.30 × 10^2 | 1.07 × 10^3 | 1.50 × 10^3 | 1.40 × 10^3 | 1.34 × 10^3 | 8.24 × 10^2 |
| F30 | 8.92 × 10^3 | 9.05 × 10^3 | 9.18 × 10^3 | 1.14 × 10^4 | 1.54 × 10^4 | 2.05 × 10^4 | 3.29 × 10^4 | 9.20 × 10^3 |
| Rank | 2.98 | 2.70 | 3.25 | 4.53 | 5.80 | 6.67 | 7.43 | 2.63 |
Table 8. Comparison among ACSEDA with different selection strategies for the parent population on the 50-D CEC 2014 benchmark problems. The best results are highlighted in bold in this table.
| F | ACSEDA | ACSEDA-O | ACSEDA-OP | ACSEDA-OA |
| --- | --- | --- | --- | --- |
| F1 | 1.14 × 10^−14 | 9.80 × 10^1 | 5.68 × 10^−15 | 5.21 × 10^−15 |
| F2 | 7.40 × 10^−2 | 6.04 × 10^2 | 1.04 × 10^−13 | 8.62 × 10^−13 |
| F3 | 0.00 × 10^0 | 4.29 × 10^−11 | 0.00 × 10^0 | 0.00 × 10^0 |
| F4 | 9.25 × 10^1 | 9.59 × 10^1 | 9.12 × 10^1 | 9.68 × 10^1 |
| F5 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 | 2.11 × 10^1 |
| F6 | 1.74 × 10^−2 | 4.15 × 10^−3 | 1.74 × 10^−2 | 3.47 × 10^−2 |
| F7 | 0.00 × 10^0 | 8.03 × 10^−12 | 1.14 × 10^−14 | 0.00 × 10^0 |
| F8 | 2.98 × 10^0 | 4.84 × 10^0 | 3.13 × 10^2 | 5.14 × 10^1 |
| F9 | 2.49 × 10^0 | 4.88 × 10^0 | 3.16 × 10^2 | 1.23 × 10^2 |
| F10 | 1.07 × 10^2 | 4.99 × 10^1 | 1.22 × 10^4 | 6.17 × 10^3 |
| F11 | 1.18 × 10^2 | 2.84 × 10^2 | 1.27 × 10^4 | 9.74 × 10^3 |
| F12 | 3.35 × 10^0 | 3.19 × 10^0 | 3.23 × 10^0 | 3.20 × 10^0 |
| F13 | 1.40 × 10^−1 | 8.34 × 10^−2 | 2.66 × 10^−1 | 2.21 × 10^−1 |
| F14 | 2.40 × 10^−1 | 2.91 × 10^−1 | 2.28 × 10^−1 | 2.35 × 10^−1 |
| F15 | 4.81 × 10^0 | 5.42 × 10^0 | 2.81 × 10^1 | 2.43 × 10^1 |
| F16 | 1.80 × 10^1 | 1.88 × 10^1 | 2.15 × 10^1 | 1.96 × 10^1 |
| F17 | 1.16 × 10^2 | 7.10 × 10^2 | 1.54 × 10^3 | 1.42 × 10^2 |
| F18 | 8.47 × 10^0 | 7.40 × 10^1 | 9.10 × 10^1 | 1.15 × 10^1 |
| F19 | 1.08 × 10^1 | 1.22 × 10^1 | 1.15 × 10^1 | 1.14 × 10^1 |
| F20 | 1.95 × 10^0 | 1.54 × 10^1 | 6.69 × 10^1 | 4.34 × 10^0 |
| F21 | 1.99 × 10^2 | 4.93 × 10^2 | 9.70 × 10^2 | 2.59 × 10^2 |
| F22 | 3.54 × 10^1 | 8.56 × 10^1 | 9.26 × 10^2 | 5.28 × 10^1 |
| F23 | 3.44 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 | 3.44 × 10^2 |
| F24 | 2.68 × 10^2 | 2.72 × 10^2 | 2.68 × 10^2 | 2.67 × 10^2 |
| F25 | 2.05 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 | 2.05 × 10^2 |
| F26 | 1.00 × 10^2 | 1.01 × 10^2 | 1.00 × 10^2 | 1.00 × 10^2 |
| F27 | 3.17 × 10^2 | 3.44 × 10^2 | 3.25 × 10^2 | 3.30 × 10^2 |
| F28 | 1.16 × 10^3 | 1.36 × 10^3 | 1.14 × 10^3 | 1.19 × 10^3 |
| F29 | 8.34 × 10^2 | 1.81 × 10^4 | 8.17 × 10^2 | 9.04 × 10^2 |
| F30 | 8.57 × 10^3 | 1.07 × 10^4 | 8.60 × 10^3 | 8.74 × 10^3 |
| Rank | 1.77 | 2.92 | 2.82 | 2.50 |
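Table 8 isolates the cross-generation selection of the parent population, in which the probability distribution is estimated from a parent set that combines the offspring sampled in the last generation with those of the current generation; ACSEDA-O, ACSEDA-OP, and ACSEDA-OA denote the compared alternative selection strategies. A minimal sketch of one plausible reading of that step is given below; the pool-then-truncate form is an assumption used for illustration, not the paper's exact procedure.

```python
# Minimal sketch (assumed semantics): pool the previous generation's
# offspring with the current ones and keep the best NP individuals as
# the parent population for the next distribution estimation.
import numpy as np

def cross_generation_parents(prev_off, prev_fit, cur_off, cur_fit):
    pool = np.vstack([prev_off, cur_off])         # 2*NP candidate parents
    fit = np.concatenate([prev_fit, cur_fit])
    keep = np.argsort(fit)[: cur_off.shape[0]]    # minimization: best first
    return pool[keep], fit[keep]
```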
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
