Article

A Novel Self-Adaptive Cooperative Coevolution Algorithm for Solving Continuous Large-Scale Global Optimization Problems †

1 Department of System Analysis and Operations Research, Reshetnev Siberian State University of Science and Technology, 660037 Krasnoyarsk, Russia
2 Department of Information Systems, Siberian Federal University, 660041 Krasnoyarsk, Russia
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in AIP Conference Proceedings, III International Scientific Conference on Modernization, Innovations, Progress: Advanced Technologies in Material Science, Mechanical and Automation Engineering in Material Science, Mechanical and Automation Engineering, Krasnoyarsk, Russia, 16–18 November 2021.
Algorithms 2022, 15(12), 451; https://doi.org/10.3390/a15120451
Submission received: 30 August 2022 / Revised: 24 November 2022 / Accepted: 26 November 2022 / Published: 29 November 2022
(This article belongs to the Special Issue Mathematical Models and Their Applications III)

Abstract

Unconstrained continuous large-scale global optimization (LSGO) is still a challenging task for a wide range of modern metaheuristic approaches. A cooperative coevolution approach is a good tool for increasing the performance of an evolutionary algorithm in solving high-dimensional optimization problems. However, the performance of cooperative coevolution approaches for LSGO depends significantly on the problem decomposition, namely, on the number of subcomponents and on how variables are grouped in these subcomponents. The choice of the population size is also still an open question for population-based algorithms. This paper discusses a method for selecting the number of subcomponents and the population size during the optimization process ("on the fly") from a predefined pool of parameters. The selection of the parameters is based on their performance in the previous optimization steps. The main goal of the study is the improvement of coevolutionary decomposition-based algorithms for solving LSGO problems. In this paper, we propose a novel self-adaptive evolutionary algorithm for solving continuous LSGO problems. We have tested this algorithm on 15 optimization problems from the IEEE LSGO CEC'2013 benchmark suite. The proposed approach, on average, outperforms cooperative coevolution algorithms with a static number of subcomponents and a static number of individuals.

1. Introduction

Traditional evolutionary algorithms are successfully used for solving black-box optimization problems [1]. Cooperative coevolution (CC) was proposed by Potter and De Jong, in [2], to increase the performance of the standard genetic algorithm (GA) [1] when solving continuous optimization problems. The authors proposed two versions of CC-based algorithms, CCGA-1 and CCGA-2. The main idea behind CC is to decompose a problem into parts (subcomponents) and optimize them independently. The authors noted that any evolutionary algorithm (EA) can be used to evolve subcomponents. The first algorithm merges the current solution with subcomponents from the best solutions. The second algorithm merges the current solution with randomly selected individuals from other subcomponents. As has been shown in numerical experiments, CCGA-1 and CCGA-2 outperform the standard GA. This study marked the beginning of a new branch of methods for solving large-scale global optimization problems [3]. The pseudo-code of a CC-based EA is presented in Algorithm 1. The termination criterion is the predefined number of function evaluations (FEs). It should be clarified that CC has two main control parameters: the population size and the number of subcomponents.
Algorithm 1 The classic CC-based EA
Set the number of individuals (pop_size) and the number of subcomponents (m)
1: Generate an initial population randomly;
2: Decompose the optimization vector into m independent subcomponents;
3: while (FEs > 0) do
4:   for i = 1 to m
5:     Evaluate the i-th subcomponent using pop_size individuals;
6:     Construct a solution by merging the best-found solutions from all subcomponents;
7:   end for
8: end while
9: return the best-found solution;
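To make the loop concrete, the following is a minimal Python sketch of Algorithm 1 under simplifying assumptions: random coordinate resampling stands in for the per-subcomponent EA, and a static round-robin decomposition is used for illustration.

```python
import random

def cc_optimize(f, dim, m, pop_size, fes_budget, low=-5.0, high=5.0):
    # Classic CC (Algorithm 1): split the dim variables into m groups and
    # optimize each group in turn against the best-found context vector.
    groups = [list(range(g, dim, m)) for g in range(m)]  # static decomposition
    best = [random.uniform(low, high) for _ in range(dim)]
    best_fit = f(best)
    fes = fes_budget - 1
    while fes > 0:
        for idx in groups:
            for _ in range(pop_size):  # stands in for evolving the subcomponent
                trial = best[:]
                for i in idx:  # resample only this group's coordinates
                    trial[i] = random.uniform(low, high)
                fit = f(trial)
                fes -= 1
                if fit < best_fit:  # merge the improvement into the context vector
                    best, best_fit = trial, fit
                if fes <= 0:
                    return best, best_fit
    return best, best_fit
```

On a fully separable function such as the sphere, this loop improves quickly because each group can be optimized independently of the others.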
In general, an LSGO problem can be stated as a continuous optimization problem (1):
$f(\bar{x}) \rightarrow \min_{\bar{x} \in D \subset \mathbb{R}^{n}}, \quad f^{*} = f(x^{*}) \le f(\bar{x}), \ \forall \bar{x} \in D \subset \mathbb{R}^{n}$, (1)
where $f(\bar{x})$ is a fitness function to be minimized, $f: \mathbb{R}^{n} \to \mathbb{R}^{1}$, $\bar{x}$ is an n-dimensional vector of continuous variables, $D$ is the search space defined by the box constraints $x_{i}^{l} \le x_{i} \le x_{i}^{u}$, $i = \overline{1, n}$, where $x_{i}^{l}$ and $x_{i}^{u}$ are the lower and upper borders of the i-th variable, respectively, and $x^{*}$ is a global optimum. It is assumed that the fitness function is continuous. The satisfaction of the Lipschitz condition is not assumed; therefore, no operations are performed to estimate the Lipschitz constant. In this case, the convergence of an algorithm to the global optimum cannot be guaranteed. In the case of a huge number of decision variables, it is not possible to adequately explore the high-dimensional search space using a limited fitness budget. Additionally, we can clarify the goal of the stated problem as proposed in [4]: "the goal of global optimization methods is often to obtain a better estimate of $f^{*}$ and $x^{*}$ given a fixed limited budget of evaluations of $f(\bar{x})$".
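For illustration, a fully separable instance of problem (1) and its box constraints can be set up as follows; the sphere function here is only an example fitness, not one of the benchmark problems.

```python
def sphere(x):
    # fully separable fitness: f(x) = sum(x_i^2), global optimum f* = 0 at x* = 0
    return sum(xi * xi for xi in x)

def clip_to_box(x, low, high):
    # enforce the box constraints x_i^l <= x_i <= x_i^u of problem (1)
    return [min(max(xi, l), u) for xi, l, u in zip(x, low, high)]
```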
In the last two decades, many researchers and applied specialists have successfully applied CC-based approaches to increase the performance of metaheuristics when solving real-world LSGO problems [5,6,7,8,9,10]. According to the generally accepted classification [11,12], CC-based approaches can be divided into three groups: static, random, and learning-based variable groupings.
In the case of static grouping (decomposition), one needs to fix the number of subcomponents and how the variables are assigned to these subcomponents during the optimization process. Static decomposition is appropriate when the relationships between the optimized variables are known. However, many hard LSGO problems are represented by a black-box or gray-box model: the relationships between the optimized variables are unknown, and it is risky to choose the number of subcomponents arbitrarily. For example, two interacting variables can be placed in different groups, and as a result the performance of the approach will be worse, on average, than if the variables were placed in the same group. Nevertheless, static decomposition performs well on fully separable optimization problems.
Random grouping is a kind of static grouping in which variables can be placed in different subcomponents at different steps of the optimization process. The size of the subcomponents can be fixed or changed dynamically. The main purpose of random grouping is to increase the probability of placing interacting variables in the same subcomponent. Furthermore, after a random mixing of variables and the creation of new groups, an optimizer solves a slightly or totally different problem in terms of task features; as a result, the regrouping of variables acts as a restart mechanism for the optimizer.
Learning-based grouping relies on experiments that aim to discover the actual interactions between decision variables. In most cases, these approaches are based on permutations, statistical models, or distribution models. Permutation techniques perturb variables and measure the change in the fitness function; based on the changes, variables are grouped into subcomponents. In general, permutation-based techniques require a huge number of fitness evaluations (FEs). For example, the DGCC [13] algorithm requires $2[\frac{n^{2}}{2m}(n + m - 1) + nm]$ FEs to detect variable interactions, while a modification of DGCC, titled DG2 [14], requires $n(n+1)/2$ FEs. Statistical-model approaches work in two stages: first, a statistical analysis is performed; second, the decision variables are grouped using a statistical metric, for example, a correlation between variables based on fitness function values or on a distribution of variable values. In distribution models, the first stage estimates the variable distributions and the interactions between variables in the set of the best solutions; after that, new candidate solutions are generated on the basis of the learned variable distributions and variable interactions.
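The perturbation idea behind these permutation-based techniques can be sketched as follows. This captures the core test of differential grouping in spirit only, not the exact DG/DG2 procedure; the step size and threshold are illustrative.

```python
def interact(f, x, i, j, step=1.0, eps=1e-6):
    # Measure the change in f caused by perturbing x_i, before and after
    # also perturbing x_j; if the two changes differ, x_i and x_j interact.
    base = f(x)
    xi = x.copy(); xi[i] += step
    d1 = f(xi) - base                 # effect of x_i at the base point
    xj = x.copy(); xj[j] += step
    xij = xj.copy(); xij[i] += step
    d2 = f(xij) - f(xj)               # effect of x_i after moving x_j
    return abs(d1 - d2) > eps         # nonlinear coupling => interaction
```

Each call costs four fitness evaluations, which is why checking all variable pairs of a 1000-dimensional problem quickly consumes a large share of the FE budget.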
In practice, determining an appropriate group size is a hard task because the properties of the optimization problem are unknown. In static and random grouping, setting an arbitrary group size can lead to low performance. On the other hand, learning-based grouping needs a large number of FEs to determine the true connections between variables, and there is no guarantee that an EA will perform better using the discovered connections. Usually, a small group size performs better at the beginning of the optimization process, and a large group size performs better in the last stages [15]. Therefore, there is a need to develop a self-adaptive mechanism for the automatic selection of the number of subcomponents and the population size.
The paper is organized as follows. Section 2 outlines the proposed approach. In Section 3, we discuss our experimental results. We considered how the population size and the number of subcomponents affect the performance of static CC-based approaches. We evaluated the performance of the proposed self-adaptive CC-based algorithm and compared it with the performance of CC-based approaches with a static number of subcomponents and a static population size. Additionally, we investigated the selective-pressure parameter used when choosing the number of subcomponents and the population size. All conclusions from the numerical experiments were confirmed by the Wilcoxon rank-sum test. Section 4 concludes the paper and outlines possible future work.

2. Proposed Approach

In this section, the proposed approach is described in detail. The approach combines cooperative coevolution, a multilevel self-adaptive mechanism for selecting the number of subcomponents and the number of individuals, and SHADE; it is titled CC-SHADE-ML. This study was inspired by the MLCC algorithm [16] proposed by Z. Yang, K. Tang, and X. Yao. MLCC is based on the multilevel cooperative coevolution framework for solving continuous LSGO problems. Before the optimization process, one needs to define a set of integer values $CC\_set = (CC_{1}, CC_{2}, \ldots, CC_{t})$ corresponding to the candidate numbers of subcomponents. The optimization process is divided into a predefined number of cycles. In each cycle, the number of subcomponents is selected according to the performance of the decompositions in the previous cycles, and variables are divided into subcomponents randomly. Equation (2) is used to evaluate the performance of the selected number of subcomponents after each cycle, where $f_{before}$ is the best-found fitness value before the optimization cycle, and $f_{after}$ is the best-found fitness value after the optimization cycle. If the calculated value is less than 1E-4, it is set to 1E-4. Without this floor, the selection probability of a decomposition size would drop to 0 after a cycle without improvement, and the algorithm would never select this parameter again. Before starting the optimization process, all values of the $performance_{i}$ vector are set to 1.0.
$performance_{i} = (f_{before} - f_{after}) / f_{before}$ (2)
When the performance of the selected parameter has been calculated, the selection probabilities of all parameters are recalculated. In MLCC, the authors propose using Equation (3), where k is a control parameter set to 7. In the original study, the authors note that 7 is an empirical value. In Section 3, we investigate the influence of this parameter on the algorithm's performance.
$p_{i} = \frac{e^{k \cdot performance_{i}}}{\sum_{j=1}^{t} e^{k \cdot performance_{j}}}, \quad i = 1, 2, \ldots, t$ (3)
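The two update rules, Equations (2) and (3), including the 1E-4 floor described above, can be sketched as follows (the function names are ours):

```python
import math

def cycle_performance(f_before, f_after):
    # Eq. (2): relative improvement over the cycle, floored at 1e-4 so a
    # stagnating parameter keeps a nonzero selection probability
    return max((f_before - f_after) / f_before, 1e-4)

def selection_probabilities(perf, k=7.0):
    # Eq. (3): softmax over the stored performances; larger k means
    # stronger selective pressure toward the best-performing parameter
    weights = [math.exp(k * p) for p in perf]
    total = sum(weights)
    return [w / total for w in weights]
```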
In the next optimization cycle, the decomposition size is selected based on the new probability distribution. MLCC uses SaNSDE as the subcomponent optimizer. In the original article, the population size is fixed at 50. At the same time, the population size is one of the most important parameters in population- and swarm-based algorithms, and choosing a good number of individuals can significantly increase the performance of an algorithm. In the proposed approach, we apply the same idea to selecting the population size as to selecting the number of subcomponents.
In this study, we use a recent variant of differential evolution (DE) [17] as the subcomponent optimizer. DE is a kind of EA that solves optimization problems in a continuous search space without requiring gradients of the optimized problem. DE applies an iterative procedure of crossing individuals to generate new, better solutions. The main parameters of DE are F and CR, a scale factor and a crossover rate, respectively. Many researchers have tried to find good values for these parameters [18]; however, such values are good only for specific functions. Numerous varieties of the classic DE with self-tuning parameters have been proposed, for example, self-adaptive DE (SaDE) [19], the ensemble of parameters and mutation strategies (EPSDE) [20], adaptive DE with optional external archive (JADE) [21], and success-history based parameter adaptation for DE (SHADE) [22]. We use SHADE as the optimizer of subcomponents in the proposed CC-based metaheuristic because it is a self-adaptive and high-performing modification of the classic DE algorithm. SHADE uses a historical memory that stores well-performing values of F and CR. New values of CR and F are generated randomly but close to the stored pairs of values. An external archive stores previously replaced individuals and is used for maintaining the population diversity; usually, the external archive is 2-3 times larger than the population. The proposed CC-SHADE-ML algorithm differs from MLCC in the following ways: we use SHADE instead of SaNSDE, and we extend MLCC by applying the self-adaptive multilevel (ML) mechanism to the population size as well. The proposed algorithm is described by the pseudocode in Algorithm 2.
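The historical-memory sampling of SHADE mentioned above can be sketched as follows: CR is drawn from a normal distribution and F from a Cauchy distribution, each centered on a randomly chosen memory cell. This is a simplified sketch; the full algorithm also updates the memory from successful (F, CR) pairs.

```python
import math
import random

def sample_f_cr(memory_f, memory_cr):
    # pick one historical memory cell at random
    r = random.randrange(len(memory_f))
    # CR ~ N(M_CR[r], 0.1), truncated to [0, 1]
    cr = min(max(random.gauss(memory_cr[r], 0.1), 0.0), 1.0)
    # F ~ Cauchy(M_F[r], 0.1), resampled while non-positive, truncated to 1
    f = 0.0
    while f <= 0.0:
        f = memory_f[r] + 0.1 * math.tan(math.pi * (random.random() - 0.5))
    return min(f, 1.0), cr
```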
Algorithm 2 CC-SHADE-ML
Set the pool of population sizes, the pool of subcomponent numbers, the optimizer, and cycles_number
1: Generate an initial population randomly;
2: Initialize performance vectors CC_performance and pop_performance;
3: FEs_cycle_init = FEs_total/cycles_number;
4: while (FEs_total > 0) do
5:   FEs_cycle = FEs_cycle_init;
6:   Randomly shuffle indices;
7:   Randomly select CC_size and pop_size using CC_performance and pop_performance;
8:   while (FEs_cycle > 0) do
9:     Find the best fitness value before the optimization cycle, f_best_before;
10:    for i = 1 to CC_size
11:      Evaluate the i-th subcomponent using the SHADE algorithm;
12:    end for
13:    Find the best fitness value after the optimization cycle, f_best_after;
14:    Evaluate the performance of CC_size and pop_size using Equation (2);
15:    Update CC_performance and pop_performance;
16:  end while
17: end while
18: return the best-found solution
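The selection-and-update skeleton of Algorithm 2 can be sketched as follows. Here `run_cycle` stands in for lines 8-16 and must return the best fitness before and after the cycle; the pool contents follow the sets used later in Section 3, and the function names are ours.

```python
import math
import random

def cc_shade_ml(pools, run_cycle, n_cycles, k=7.0):
    # pools: e.g. {"cc": [1, 2, 5, ...], "pop": [25, 50, 100, 150, 200]}
    perf = {name: [1.0] * len(vals) for name, vals in pools.items()}
    for _ in range(n_cycles):
        # draw one value per pool with softmax probabilities (Eq. (3))
        chosen = {}
        for name, vals in pools.items():
            weights = [math.exp(k * p) for p in perf[name]]
            chosen[name] = random.choices(range(len(vals)), weights=weights)[0]
        f_before, f_after = run_cycle(pools["cc"][chosen["cc"]],
                                      pools["pop"][chosen["pop"]])
        # Eq. (2) with the 1e-4 floor, credited to every chosen parameter
        gain = max((f_before - f_after) / f_before, 1e-4)
        for name in pools:
            perf[name][chosen[name]] = gain
    return perf
```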

3. Numerical Experiments and Analysis

Several LSGO benchmark sets exist. The first version was proposed in the CEC'08 special session on LSGO [23]. This benchmark set has seven high-dimensional optimization problems, $D = \{100, 500, 1000\}$, divided into two classes: unimodal and multimodal. Later, the LSGO CEC'10 benchmark set was proposed [24] as an improved version of LSGO CEC'08; the number of problems was increased to 20 by adding partially separable functions to increase the complexity of the benchmark. In this study, we use a recent version of the benchmark, namely the LSGO CEC'13 benchmark set [25]. This set consists of 15 high-dimensional continuous optimization problems, which are divided into five classes: fully separable functions (C1); partially additively separable functions with a separable subcomponent (C2); partially additively separable functions with no separable subcomponents (C3); overlapping functions (C4); and non-separable functions (C5). The number of variables of each problem is equal to 1000. The maximum number of fitness evaluations is 3.0 × 10^6 in each independent run. The comparison of algorithms is based on the mean of the best-found values obtained in 25 independent runs.
The software implementation of CC-SHADE and CC-SHADE-ML was written in the C++ programming language. We used the MPICH2 framework to parallelize the numerical experiments because the problems are computationally complex. We built a computational cluster of 8 PCs based on AMD Ryzen CPUs; each CPU has eight cores and sixteen threads (8C/16T), so the total number of computational threads is 128. The operating system was Ubuntu 20.04.3 LTS.

3.1. CC-Based EA

In this subsection, we investigate the effect of CC-SHADE's main parameters on its performance. The population size was set to 25, 50, 100, 150, and 200. The number of subcomponents was set to 1, 2, 5, 10, 20, 50, 100, 200, 500, and 1000. Figure 1 presents heatmaps for each benchmark problem and each combination of the parameters. The x-axis denotes the number of subcomponents, and the y-axis denotes the population size. The total number of combinations is 50 for each benchmark problem. The performance of each parameter combination is presented as a rank; the highest rank denotes the best average fitness value obtained in 25 independent runs. If two or more combinations of parameters have the same average fitness value, their ranks are averaged. The ranks are colored in the heatmaps from white (light) for the worst combination to dark blue (dark) for the best combination. The rank distributions differ across the heatmaps for different optimization problems. Figure 2 shows the sum of ranks for the algorithm's parameters over all benchmark problems, using the results from Figure 1; the x-axis denotes the number of subcomponents, and the y-axis denotes the number of individuals. The highest sum of ranks is the best achieved result, and the dark color denotes the best average combination of parameters. The best average combination of parameters for CC-SHADE is 50 subcomponents and 25 individuals.
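The rank-with-ties convention used in the heatmaps can be sketched as follows (a helper of ours; lower mean fitness is better under minimization and receives the higher rank, and ties share the average of the ranks they span):

```python
def competition_ranks(mean_fitness):
    n = len(mean_fitness)
    # order configurations from worst (highest mean) to best (lowest mean)
    order = sorted(range(n), key=lambda i: mean_fitness[i], reverse=True)
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        # extend over a run of tied values
        while j + 1 < n and mean_fitness[order[j + 1]] == mean_fitness[order[i]]:
            j += 1
        avg = (i + 1 + j + 1) / 2.0   # average of the 1-based positions
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    return ranks
```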
Table 1 shows the best combination(s) of parameters for each benchmark problem. The first column denotes the problem number; the second column denotes the best combination using the notation "CC × pop_size", where CC is the best number of subcomponents and pop_size is the best population size; the last column denotes the class of the benchmark problem. As the results in Table 1 show, we cannot identify a single best combination of parameters for all problems. Additionally, we cannot find any pattern relating the best parameters to the class of LSGO problem.

3.2. CC-SHADE-ML

We evaluated the performance of CC-SHADE-ML and compared it with CC-SHADE with a fixed number of subcomponents and a fixed number of individuals. The proposed CC-SHADE-ML algorithm has the following parameters. The set of subcomponent numbers is {1, 2, 5, 10, 20, 50, 100, 200, 500, 1000}. The set of population sizes is {25, 50, 100, 150, 200}. The number of cycles is set to 50; according to our numerical experiments, this value performs better than the other tested values. Thus, in each cycle, CC-SHADE-ML evaluates 6.0 × 10^4 FEs.
We use the “CC” and “CC-k(v)” notations to save space in Table 2, where CC stands for all variants of CC-SHADE parameters, and CC-k(v) is the proposed approach with the parameter k in Equation (3) set to v. We investigated v equal to {1, 2, 3, 5, 7, 10}. Table 2 reports the statistical differences in the results of the rank comparison using the Wilcoxon rank-sum test with a p-value of 0.05. The first column denotes better (+), worse (-), and equal (≈) performance; the other columns contain the settings of the proposed algorithm. The cells contain the total number of benchmark problems where CC demonstrates better, worse, or equal performance in comparison with CC-k(v). As we can see, every version of the proposed algorithm scores higher than CC: for all tested values of k in Equation (3), the proposed algorithm demonstrates better performance than CC with a fixed number of subcomponents and individuals. Based on the results of the numerical experiments and the statistical tests, it is preferable, on average, to choose the proposed approach over the CC algorithm with an arbitrary set of parameters.
Figure 3 shows the ranking of CC-SHADE-ML algorithms with different values of the power k in Equation (3). The ranking is based on the mean values obtained in 25 independent runs and is averaged over the 15 benchmark problems. The highest rank corresponds to the best algorithm.
We compared the performance estimations of all CC-k algorithms. The statistical results are presented in Table 3. The first column denotes the index of the algorithm, the second column denotes the title of the algorithm, and the next columns denote the compared algorithms by their index values. The values in each cell use the following notation: for each pair of algorithms from a row and a column, if the algorithm from the row demonstrates statistically better, worse, or equal performance, we add a point to the corresponding criterion. Table 3 contains the sum of (better/worse/equal) points for all algorithms.
The results in Table 4 are based on the results from Table 3. Algorithms are sorted according to their statistical performance and their averaged rank. As we can see, CC-k(7) takes first place: it outperforms the other algorithms 10 times, loses only once, and demonstrates the same performance 64 times. It can be noted that the second-to-last column contains large values. This means that the majority of the considered algorithms demonstrate equal performance on the benchmark problems, which can be explained by the introduced self-adaptiveness.
Based on the ranking and the statistical tests, we can conclude that CC-k(7) outperforms the other variants of CC-k(v). In the original paper [16], the authors also found that MLCC demonstrates better results with the power value k in Equation (3) equal to 7.
In Figure 4, we show example curves that demonstrate the dynamic adaptation of the number of subcomponents and the population size in one independent run of CC-k(7). The x-axis denotes the FEs, and the y-axis denotes the selected level of the parameters. The plots are shown for the F1, F2, F12, and F15 benchmark problems.
Figure 5 shows convergence graphs for the CC-k(v) algorithms. The x-axis denotes the number of FEs, and the y-axis denotes the average fitness value obtained in 25 independent runs. As we can see, the convergence plots are almost identical for all CC-k algorithms. In most cases, the value of the power k in Equation (3) does not critically affect the behavior of the CC-SHADE-ML algorithm. As noted above, according to the results from Table 4, the algorithms' performance is the same on the majority of problems.

3.3. The Tuned CC-SHADE-ML

In this subsection, we evaluate the performance of the tuned CC-SHADE-ML. As we can see in Figure 2, the region with the best-ranked solutions covers the set of subcomponent numbers {5, 10, 20, 50} and the set of population sizes {25, 50, 100}. The tuned CC-SHADE-ML uses these parameters. Figure 6 has the same structure as Figure 3. Based on the ranking, CC_tuned-k(1) demonstrates the best performance.
Table 5 has the same structure as Table 2 and compares the performance of the CC and CC_tuned-k algorithms. According to the results of the Wilcoxon test, we can conclude that the tuned CC-k with any power value demonstrates better performance than CC with a fixed number of subcomponents and individuals.
Table 6 and Table 7 have the same structure as Table 3 and Table 4, respectively. We placed CC_tuned-k(7) in first place because it has no loss points, although if we take into account only the averaged rank, CC_tuned-k(7) would be placed last. Additionally, CC_tuned-k(1) has the highest rank; however, it does not significantly outperform any of the compared algorithms.
We compared the performance of CC-k(7) and CC_tuned-k using the Wilcoxon test. The statistical difference analysis is presented in Table 8; the columns denote the numbers of the benchmark problems, and the (+/-/≈) symbols denote better, worse, and equal performance of CC-k in comparison with CC_tuned-k. The algorithms demonstrate the same performance on six problems; CC-k outperforms CC_tuned-k on four problems and loses on five problems.
Figure 7 has the same structure as Figure 4. As we can see, on the F1 benchmark problem, the algorithm chose a good combination of parameters and did not change it for a number of cycles because this combination demonstrated high performance. On the other benchmark problems, we can see more rapid switching between parameter values. Figure 8 has the same structure as Figure 5 and shows convergence graphs for the CC_tuned algorithms.
We have compared the performance of CC_tuned-k(7) with other state-of-the-art LSGO metaheuristics. These metaheuristics were specially created and tuned to solve the LSGO CEC'2013 benchmark set. We selected high-performing metaheuristics from the TACOlab database [26]: SHADEILS [27], MOS [28], MLSHADE-SPA [29], CC-RDG3 [30], BICCA [31], IHDELS [32], SGCC [33], SACC [34], CC-CMA-ES [35], VMODE [36], DGSC [37], MPS [38], DECC-G [39], and DEEPSO [40]. Figure 9 shows the ranking of the compared metaheuristics. Table 9 contains the ranking values of the state-of-the-art algorithms depending on the class of benchmark problems; ranks are averaged within each class. The proposed algorithm takes ninth place out of 15. We should note that the majority of the metaheuristics in the comparison use special local search techniques adapted for the CEC'13 LSGO benchmark, and their control parameters are also fine-tuned to the given problem set. Thus, there is no guarantee that these algorithms will demonstrate the same high performance on other LSGO problems. At the same time, the proposed approach automatically adapts to the given problem, so we conclude that it can also perform well when solving new LSGO problems. In Section 3.4, we propose a hybrid algorithm that combines CC-SHADE-ML and MTS-LS1 [41].
Table 10 contains the detailed results for the fine-tuned CC-SHADE-ML algorithm. The first column contains three checkpoints, 1.2 × 10^5, 6.0 × 10^5, and 3.0 × 10^6 FEs. The remaining columns correspond to the benchmark problems. Each cell contains five numbers: the best-found value, the median value, the worst value, the mean value, and the standard deviation. The authors of the LSGO CEC'2013 benchmark set recommend including this information for convenient further comparison of the proposed algorithm with others. Usually, the comparison is based on the values after 3.0 × 10^6 FEs.

3.4. Hybrid Algorithm Based on CC-SHADE-ML and MTS-LS1

In the CC-SHADE-ML-LS1 algorithm, MTS-LS1 [41] runs after each optimization cycle and uses 25,000 FEs (this value was determined by numerical experiments). MTS-LS1 tries to improve each i-th coordinate using the search range SR[i]. In this study, each SR[i] is initialized to $(b_{i} - a_{i}) \cdot 0.4$, where $a_{i}$ and $b_{i}$ are the lower and upper bounds of the i-th variable. If MTS-LS1 does not improve the solution using the current value of SR[i], the range is halved (SR[i] = SR[i]/2). If SR[i] falls below 10^-18 (in [41], the original threshold is 10^-15), the value is reinitialized. As the numerical experiments show, MTS-LS1 usually finds a new best solution that is far from the other individuals in the population. Thus, CC-SHADE-ML is not able to improve the best-found solution after applying MTS-LS1, but it does improve the median fitness value of the population. In this case, Formula (2) is inappropriate for evaluating the performance of the selected parameters. To overcome this difficulty, we use Formula (4), which is based on the median fitness value before and after the CC-SHADE-ML cycle.
$performance_{i} = (medianFitness_{before} - medianFitness_{after}) / medianFitness_{before}$ (4)
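The search-range bookkeeping of MTS-LS1 described above can be sketched as follows (the function name is ours):

```python
def update_sr(sr, improved, low, high, threshold=1e-18):
    # halve SR[i] after an unsuccessful move along coordinate i; once it
    # shrinks below the threshold, reinitialize it to 40% of the range
    if improved:
        return sr
    sr /= 2.0
    if sr < threshold:
        sr = (high - low) * 0.4
    return sr
```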
We evaluated different mutation schemes and determined that the best performance of CC-SHADE-ML-LS1 is reached using Formula (5).
$u_{i} = x_{i} + F_{i} \cdot (x_{pbest} - x_{i}) + F_{i} \cdot (x_{t} - x_{r}), \quad i = 1, \ldots, pop\_size$ (5)
Here, $u_{i}$ is a mutant vector, $x_{i}$ is a solution from the population, $F_{i}$ is a scale factor, $x_{pbest}$ is a solution chosen from the p best solutions in the population, $x_{t}$ is a solution chosen from the population using tournament selection (in this study, the tournament size is equal to 2), and $x_{r}$ is a solution chosen randomly from the population or from the archive. To apply Formula (5), the following condition must be met: $i \ne pbest \ne t \ne r$.
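The mutation in Formula (5) can be sketched as follows; this is an illustrative sketch in which the distinctness checks (i ≠ pbest ≠ t ≠ r) are omitted for brevity, and p = 0.1 is an assumed value.

```python
import random

def mutate(pop, fitness, archive, i, F, p=0.1):
    # pop: list of coordinate lists; fitness: parallel list (minimization)
    n = len(pop)
    # x_pbest: random individual among the best p*100% of the population
    order = sorted(range(n), key=lambda j: fitness[j])
    pbest = random.choice(order[:max(1, int(p * n))])
    # x_t: winner of a size-2 tournament on fitness
    a, b = random.sample(range(n), 2)
    t = a if fitness[a] < fitness[b] else b
    # x_r: drawn from the population joined with the external archive
    union = pop + archive
    r = random.randrange(len(union))
    return [pop[i][d] + F * (pop[pbest][d] - pop[i][d])
                      + F * (pop[t][d] - union[r][d])
            for d in range(len(pop[i]))]
```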
The control parameters of CC-SHADE-ML-LS1 are the following: the set of subcomponent numbers is {5, 10, 20, 50}; the set of population sizes is {25, 50, 100}; FEs_LS1 is equal to 25,000; the mutation scheme in SHADE is (5); the tournament size is 2; and FEs_cycle_init is equal to 1.5 × 10^5. The complete pseudocode of the hybrid is presented in Algorithm 3. Additionally, the performance of CC-SHADE-ML-LS1 has been evaluated and compared with state-of-the-art metaheuristics; the comparison rules and the compared algorithms are the same as in Section 3.3. Table 11 and Table 12 have the same structure as Table 9 and Table 10, respectively. Table 11 shows the ranking of CC-SHADE-ML-LS1, and Table 12 contains its detailed results. Figure 10 has the same structure as Figure 9.
Algorithm 3 CC-SHADE-ML-LS1
Set the pool of population sizes, the pool of subcomponent numbers, the optimizer, and cycles_number
1: Generate an initial population randomly;
2: Initialize performance vectors CC_performance and pop_performance;
3: FEs_cycle_init = FEs_total/cycles_number;
4: while (FEs_total > 0) do
5:   FEs_cycle = FEs_cycle_init;
6:   Randomly shuffle indices;
7:   Randomly select CC_size and pop_size using CC_performance and pop_performance;
8:   while (FEs_cycle > 0) do
9:     Find the median fitness value before the optimization cycle, medianFitness_before;
10:    for i = 1 to CC_size
11:      Evaluate the i-th subcomponent using the SHADE algorithm;
12:    end for
13:    Find the median fitness value after the optimization cycle, medianFitness_after;
14:    Evaluate the performance of CC_size and pop_size using Equation (4);
15:    Update CC_performance and pop_performance;
16:  end while
17:  while (FEs_LS1 > 0) do
18:    Apply MTS-LS1(best_found_solution);
19:  end while
20: end while
21: return the best-found solution

4. Conclusions

In this paper, the CC-SHADE-ML algorithm has been proposed for solving large-scale global optimization problems. CC-SHADE-ML changes the number of subcomponents and the number of individuals according to their performance in the previous cycles. The experimental results show that the proposed strategy can improve the performance of CC-based algorithms. We have investigated the proposed metaheuristic using the LSGO CEC'13 benchmark set with fifteen benchmark problems. The CC-SHADE-ML algorithm outperforms, on average, CC-SHADE algorithms with a fixed number of subcomponents and a fixed population size. All conclusions are confirmed by the Wilcoxon test. CC-SHADE-ML has been compared with fourteen state-of-the-art metaheuristics. The numerical results indicate that the proposed CC-SHADE-ML algorithm is an efficient and competitive algorithm for tackling LSGO problems. We have also proposed a hybridization of the CC-SHADE-ML and MTS-LS1 algorithms. CC-SHADE-ML-LS1 has high potential for enhancement. In future work, we will improve the performance of the CC-SHADE-ML-LS1 algorithm by modifying the optimizer. Additionally, we will perform comparisons on real-world LSGO problems, for which the compared metaheuristics do not have fine-tuned control parameters, in order to demonstrate the effect of self-configuration.

Author Contributions

Conceptualization, A.V. and E.S.; methodology, A.V. and E.S.; software, A.V. and E.S.; validation, A.V. and E.S.; formal analysis, A.V. and E.S.; investigation, A.V. and E.S.; resources, A.V. and E.S.; data curation, A.V. and E.S.; writing—original draft preparation, A.V. and E.S.; writing—review and editing, A.V. and E.S.; visualization, A.V. and E.S.; supervision, E.S.; project administration, A.V. and E.S.; funding acquisition, A.V. and E.S. All authors have read and agreed to the published version of the manuscript.

Funding

The reported study was funded by RFBR and FWF according to the research project No. 21-51-14003.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Programming Code

The complete code of the CC-SHADE-ML algorithm is available at GitHub (https://github.com/VakhninAleksei/CC-SHADE-ML, accessed on 18 August 2022).

References

  1. Katoch, S.; Chauhan, S.S.; Kumar, V. A review on genetic algorithm: Past, present, and future. Multimed. Tools Appl. 2021, 80, 8091–8126.
  2. Potter, M.A.; de Jong, K.A. A cooperative coevolutionary approach to function optimization. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Jerusalem, Israel, 9–14 October 1994; pp. 249–257.
  3. Mahdavi, S.; Shiri, M.E.; Rahnamayan, S. Metaheuristics in large-scale global continues optimization: A survey. Inf. Sci. 2015, 295, 407–428.
  4. Sergeyev, Y.D.; Kvasov, D.E.; Mukhametzhanov, M.S. On the efficiency of nature-inspired metaheuristics in expensive global optimization with limited budget. Sci. Rep. 2018, 8, 453.
  5. Ma, L.; Hu, K.; Zhu, Y.; Chen, H. Cooperative artificial bee colony algorithm for multi-objective RFID network planning. J. Netw. Comput. Appl. 2014, 42, 143–162.
  6. Sabar, N.; Abawajy, N.; Yearwood, J. Heterogeneous cooperative co-evolution memetic differential evolution algorithms for big data optimisation problems. IEEE Trans. Evol. Comput. 2017, 21, 315–327.
  7. Smith, R.J.; Heywood, M.I. Coevolving deep hierarchies of programs to solve complex tasks. In Proceedings of the Genetic and Evolutionary Computation Conference, Berlin, Germany, 15–19 July 2017; pp. 1009–1016.
  8. Dong, X.; Yu, H.; Ouyang, D.; Cai, D.; Ye, Y.; Zhang, Y. Cooperative coevolutionary genetic algorithms to find optimal elimination orderings for Bayesian networks. In Proceedings of the IEEE Conference on Bio-Inspired Computing: Theories and Applications, Changsha, China, 23–26 September 2010; pp. 1388–1394.
  9. Maniadakis, M.; Trahanias, P. A hierarchical coevolutionary method to support brain-lesion modelling. In Proceedings of the International Joint Conference on Neural Networks, Montreal, QC, Canada, 31 July–4 August 2005; pp. 434–439.
  10. Wang, Q.; Fu, Z.; Wang, X.; Hou, Y.; Li, N.; Liu, Q. A study of co-evolutionary genetic algorithm in relay protection system. In Proceedings of the International Conference on Intelligent Computation Technology and Automation, Changsha, China, 20–22 October 2008; pp. 8–11.
  11. Omidvar, M.N.; Li, X.; Yao, X. A review of population-based metaheuristics for large-scale black-box global optimization: Part A. IEEE Trans. Evol. Comput. 2022, 26, 802–822.
  12. Omidvar, M.N.; Li, X.; Yao, X. A review of population-based metaheuristics for large-scale black-box global optimization: Part B. IEEE Trans. Evol. Comput. 2022, 26, 823–843.
  13. Omidvar, M.N.; Li, X.; Mei, Y.; Yao, X. Cooperative co-evolution with differential grouping for large scale optimization. IEEE Trans. Evol. Comput. 2014, 18, 378–393.
  14. Omidvar, M.N.; Yang, M.; Mei, Y.; Li, X.; Yao, X. DG2: A faster and more accurate differential grouping for large-scale black-box optimization. IEEE Trans. Evol. Comput. 2017, 21, 929–942.
  15. Vakhnin, A.; Sopov, E. Investigation of Improved Cooperative Coevolution for Large-Scale Global Optimization Problems. Algorithms 2021, 14, 146.
  16. Yang, Z.; Tang, K.; Yao, X. Multilevel cooperative coevolution for large scale optimization. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 1663–1670.
  17. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  18. Ao, Y.; Chi, H. Experimental Study on Differential Evolution Strategies. In Proceedings of the 2009 WRI Global Congress on Intelligent Systems, Xiamen, China, 19–21 May 2009; pp. 19–24.
  19. Qin, A.K.; Suganthan, P.N. Self-adaptive differential evolution algorithm for numerical optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; pp. 1785–1791.
  20. Mallipeddi, R.; Suganthan, P.N.; Pan, Q.K.; Tasgetiren, M.F. Differential evolution algorithm with ensemble of parameters and mutation strategies. Appl. Soft Comput. 2011, 11, 1670–1696.
  21. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958.
  22. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for Differential Evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 71–78.
  23. Tang, K.; Yao, X.; Suganthan, P.N.; MacNish, C.; Chen, Y.-P.; Chen, C.-M.; Yang, Z. Benchmark Functions for the CEC'2008 Special Session and Competition on Large Scale Global Optimization; University of Science and Technology of China (USTC): Hefei, China, 2007; pp. 1–23.
  24. Tang, K.; Li, X.; Suganthan, P.N.; Yang, Z.; Weise, T. Benchmark Functions for the CEC'2010 Special Session and Competition on Large-Scale Global Optimization; University of Science and Technology of China (USTC): Hefei, China, 2010; pp. 1–23.
  25. Li, X.; Tang, K.; Omidvar, M.N.; Yang, Z.; Qin, K. Benchmark Functions for the CEC'2013 Special Session and Competition on Large Scale Global Optimization; Technical Report, Evolutionary Computation and Machine Learning Group; RMIT University: Melbourne, Australia, 2013; pp. 1–23.
  26. Molina, D.; LaTorre, A. Toolkit for the Automatic Comparison of Optimizers: Comparing Large-Scale Global Optimizers Made Easy. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8.
  27. Molina, D.; LaTorre, A.; Herrera, F. SHADE with Iterative Local Search for Large-Scale Global Optimization. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8.
  28. LaTorre, A.; Muelas, S.; Pena, J.M. Large Scale Global Optimization: Experimental Results with MOS-based Hybrid Algorithms. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation (CEC 2013), Cancun, Mexico, 20–23 June 2013; pp. 2742–2749.
  29. Hadi, A.A.; Mohamed, A.W.; Jambi, K. LSHADE-SPA memetic framework for solving large-scale optimization problems. Complex Intell. Syst. 2019, 5, 25–40.
  30. Sun, Y.; Li, X.; Ernst, A.; Omidvar, M.N. Decomposition for Large-scale Optimization Problems with Overlapping Components. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 326–333.
  31. Ge, H.; Zhao, M.; Hou, Y.; Kai, Z.; Sun, L.; Tan, G.; Qiang, Q.Z. Bi-space Interactive Cooperative Coevolutionary algorithm for large scale black-box optimization. Appl. Soft Comput. 2020, 97, 10678.
  32. Molina, D.; Herrera, F. Iterative hybridization of DE with local search for the CEC'2015 special session on large scale global optimization. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 1974–1978.
  33. Liu, W.; Zhou, Y.; Li, B.; Tang, K. Cooperative Co-evolution with Soft Grouping for Large Scale Global Optimization. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 318–325.
  34. Mahdavi, S.; Rahnamayan, S.; Shiri, M. Cooperative co-evolution with sensitivity analysis-based budget assignment strategy for large-scale global optimization. Appl. Intell. 2017, 47, 888–913.
  35. Liu, J.; Tang, K. Scaling Up Covariance Matrix Adaptation Evolution Strategy Using Cooperative Coevolution. In Intelligent Data Engineering and Automated Learning—IDEAL 2013; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013; pp. 350–357.
  36. Lopez, E.D.; Puris, A.; Bello, R.R. Vmode: A hybrid metaheuristic for the solution of large scale optimization problems. Investig. Oper. 2015, 36, 232–239.
  37. Li, L.; Fang, W.; Wang, Q.; Sun, J. Differential Grouping with Spectral Clustering for Large Scale Global Optimization. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 334–341.
  38. Bolufé-Röhler, A.; Fiol-González, S.; Chen, S. A minimum population search hybrid for large scale global optimization. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 1958–1965.
  39. Yang, Z.; Tang, K.; Yao, X. Large scale evolutionary optimization using cooperative coevolution. Inf. Sci. 2008, 178, 2985–2999.
  40. Miranda, V.; Alves, R. Differential Evolutionary Particle Swarm Optimization (DEEPSO): A successful hybrid. In Proceedings of the 2013 BRICS Congress on Computational Intelligence & 11th Brazilian Congress on Computational Intelligence (BRICS-CCI & CBIC 2013), Ipojuca, Brazil, 8–11 September 2013; pp. 368–374.
  41. Tseng, L.Y.; Chen, C. Multiple trajectory search for Large Scale Global Optimization. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 3052–3059.
Figure 1. Ranking of CC-SHADE algorithms with a static population size and number of subcomponents. Benchmark problems are: (a) F1; (b) F2; (c) F3; (d) F4; (e) F5; (f) F6; (g) F7; (h) F8; (i) F9; (j) F10; (k) F11; (l) F12; (m) F13; (n) F14; (o) F15.
Figure 2. The sum of ranks of the CC-SHADE algorithms' parameters over all benchmark problems.
Figure 3. The ranking of CC-k algorithms.
Figure 4. The self-adaptation curves for CC_size and pop_size of CC-k(7) in an independent run: (a,c,e,g) CC_size for F1, F2, F12, F15; (b,d,f,h) pop_size for F1, F2, F12, F15.
Figure 5. Convergence plots of CC-k algorithms. Benchmark problems are: (a) F1; (b) F2; (c) F3; (d) F4; (e) F5; (f) F6; (g) F7; (h) F8; (i) F9; (j) F10; (k) F11; (l) F12; (m) F13; (n) F14; (o) F15.
Figure 6. The ranking of CC-tuned-k algorithms.
Figure 7. The self-adaptation curves for CC_size and pop_size of CC-tuned-k(7) in an independent run: (a,c,e,g) CC_size for F1, F4, F7, F15; (b,d,f,h) pop_size for F1, F4, F7, F15.
Figure 8. Convergence plots of CC-tuned-k algorithms. Benchmark problems are: (a) F1; (b) F2; (c) F3; (d) F4; (e) F5; (f) F6; (g) F7; (h) F8; (i) F9; (j) F10; (k) F11; (l) F12; (m) F13; (n) F14; (o) F15.
Figure 9. The ranking of CC-SHADE-ML and other state-of-the-art algorithms.
Figure 10. The ranking of CC-SHADE-ML-LS1 and other state-of-the-art algorithms.
Table 1. The best combination(s) of parameters for CC-SHADE on LSGO CEC’2013.
Benchmark Problem | The Best Combination(s) | Class
F1 | 20 × 50 | C1
F2 | 1000 × 200 | C1
F3 | 1000 × 50, 1000 × 100, 1000 × 150, 1000 × 200 | C1
F4 | 1 × 200 | C2
F5 | 1 × 200 | C2
F6 | 1000 × 200 | C2
F7 | 5 × 200 | C2
F8 | 1 × 200 | C3
F9 | 5 × 150 | C3
F10 | 1000 × 200 | C3
F11 | 5 × 200 | C3
F12 | 50 × 25 | C4
F13 | 5 × 150 | C4
F14 | 2 × 200 | C4
F15 | 200 × 25 | C5
Table 2. The Wilcoxon rank-sum test: CC vs. CC-k(v).
CC vs. | CC-k(1) | CC-k(2) | CC-k(3) | CC-k(5) | CC-k(7) | CC-k(10)
+ | 215 | 215 | 193 | 218 | 185 | 190
- | 417 | 421 | 439 | 423 | 448 | 452
= | 118 | 114 | 118 | 109 | 117 | 108
Table 3. The results of the Wilcoxon test for CC-k with different parameter values.
Index | Algorithm | (2) | (3) | (4) | (5) | (6)
(1) | CC-k(1) | 0/2/13 | 0/4/11 | 0/4/11 | 0/4/11 | 1/4/10
(2) | CC-k(2) | - | 0/1/14 | 0/1/14 | 1/4/10 | 1/3/11
(3) | CC-k(3) | - | - | 0/1/14 | 0/1/14 | 1/2/12
(4) | CC-k(5) | - | - | - | 0/0/15 | 1/1/13
(5) | CC-k(7) | - | - | - | - | 1/0/14
(6) | CC-k(10) | - | - | - | - | -
Table 4. The sum of the Wilcoxon test results.
Number | Algorithm | Total Win | Total Loss | Total Equal | Averaged Rank
(5) | CC-k(7) | 10 | 1 | 64 | 4.47
(6) | CC-k(10) | 10 | 5 | 60 | 3.87
(3) | CC-k(3) | 6 | 4 | 65 | 3.87
(4) | CC-k(5) | 7 | 1 | 67 | 3.13
(2) | CC-k(2) | 4 | 9 | 62 | 2.67
(1) | CC-k(1) | 1 | 18 | 56 | 3.00
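The totals in Table 4 are obtained by summing, for each algorithm, its pairwise win/loss/equal triples from Table 3: row entries are read directly, while column entries have wins and losses swapped. A small illustrative script (the triples are transcribed from Table 3; `totals` is a hypothetical helper, not code from the paper):

```python
# Upper-triangular win/loss/equal triples from Table 3; entry (i, j)
# compares algorithm i against algorithm j (0-based indices, i < j)
table3 = {
    (0, 1): (0, 2, 13), (0, 2): (0, 4, 11), (0, 3): (0, 4, 11),
    (0, 4): (0, 4, 11), (0, 5): (1, 4, 10),
    (1, 2): (0, 1, 14), (1, 3): (0, 1, 14), (1, 4): (1, 4, 10), (1, 5): (1, 3, 11),
    (2, 3): (0, 1, 14), (2, 4): (0, 1, 14), (2, 5): (1, 2, 12),
    (3, 4): (0, 0, 15), (3, 5): (1, 1, 13),
    (4, 5): (1, 0, 14),
}


def totals(alg, pairs):
    """Aggregate one algorithm's pairwise triples into overall totals."""
    win = loss = equal = 0
    for (i, j), (w, l, e) in pairs.items():
        if i == alg:            # row entry: triple is from alg's perspective
            win, loss, equal = win + w, loss + l, equal + e
        elif j == alg:          # column entry: swap wins and losses
            win, loss, equal = win + l, loss + w, equal + e
    return win, loss, equal


# CC-k(7) is index 4; Table 4 reports 10 wins, 1 loss, 64 equal for it
print(totals(4, table3))  # -> (10, 1, 64)
```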
Table 5. The Wilcoxon rank-sum test: CC vs. CC_tuned-k.
CC vs. | CC_tuned-k(1) | CC_tuned-k(2) | CC_tuned-k(3) | CC_tuned-k(5) | CC_tuned-k(7) | CC_tuned-k(10)
+ | 59 | 57 | 52 | 52 | 57 | 51
- | 84 | 86 | 91 | 90 | 86 | 90
= | 37 | 37 | 37 | 38 | 37 | 39
Table 6. The results of the Wilcoxon test for CC_tuned-k with different parameter values.
Number | Algorithm | (2) | (3) | (4) | (5) | (6)
(1) | CC_tuned-k(1) | 0/0/15 | 0/1/14 | 0/1/14 | 0/1/14 | 0/1/14
(2) | CC_tuned-k(2) | - | 0/2/13 | 0/0/15 | 0/2/13 | 0/1/14
(3) | CC_tuned-k(3) | - | - | 2/1/12 | 0/1/14 | 1/0/14
(4) | CC_tuned-k(5) | - | - | - | 0/0/15 | 0/0/15
(5) | CC_tuned-k(7) | - | - | - | - | 1/0/14
(6) | CC_tuned-k(10) | - | - | - | - | -
Table 7. The sum of the Wilcoxon test results.
Number | Algorithm | Total Win | Total Loss | Total Equal | Averaged Rank
(5) | CC_tuned-k(7) | 5 | 0 | 70 | 2.50
(3) | CC_tuned-k(3) | 6 | 2 | 67 | 3.20
(6) | CC_tuned-k(10) | 2 | 2 | 71 | 3.53
(4) | CC_tuned-k(5) | 2 | 2 | 71 | 3.77
(1) | CC_tuned-k(1) | 0 | 4 | 71 | 4.47
(2) | CC_tuned-k(2) | 0 | 5 | 70 | 3.53
Table 8. The Wilcoxon rank-sum test: CC-k(7) vs. CC_tuned-k(7).
CC-k(7) vs. CC_tuned-k(7) | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 | F11 | F12 | F13 | F14 | F15
-+++-+---
Table 9. The ranking of state-of-the-art metaheuristics.
Metaheuristic | C1 | C2 | C3 | C4 | C5 | Total
SHADEILS | 9.67 | 11.75 | 11.00 | 14.67 | 15.00 | 62.08
MOS | 12.83 | 10.75 | 9.50 | 12.33 | 10.00 | 55.42
MLSHADE-SPA | 13.33 | 12.00 | 11.75 | 12.00 | 3.00 | 52.08
CC-RDG3 | 6.50 | 12.25 | 12.63 | 9.67 | 11.00 | 52.04
BICCA | 11.00 | 10.50 | 11.00 | 9.00 | 8.00 | 49.50
IHDELS | 9.00 | 8.00 | 8.38 | 12.33 | 9.00 | 46.71
SGCC | 2.00 | 6.63 | 8.38 | 8.33 | 14.00 | 39.33
SACC | 12.17 | 4.75 | 4.50 | 4.67 | 13.00 | 39.08
CC-SHADE-ML | 4.67 | 6.25 | 5.75 | 7.00 | 12.00 | 35.67
CC-CMA-ES | 9.33 | 5.38 | 8.00 | 7.67 | 2.00 | 32.38
VMODE | 6.00 | 5.38 | 6.38 | 7.33 | 6.00 | 31.08
DGSC | 6.67 | 7.38 | 5.38 | 5.00 | 4.00 | 28.42
MPS | 4.33 | 8.25 | 7.50 | 3.00 | 5.00 | 28.08
DECC-G | 10.00 | 6.00 | 4.38 | 3.67 | 1.00 | 25.04
DEEPSO | 2.50 | 4.75 | 5.50 | 3.33 | 7.00 | 23.08
Table 10. Detailed results for the fine-tuned CC-SHADE-ML algorithm.
FEs | Statistic | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8
1.2 × 10^5 | BEST | 8.00 × 10^7 | 5.42 × 10^3 | 2.08 × 10^1 | 2.98 × 10^10 | 2.83 × 10^6 | 1.05 × 10^6 | 4.36 × 10^8 | 4.37 × 10^14
 | MEDIAN | 8.98 × 10^8 | 1.03 × 10^4 | 2.12 × 10^1 | 8.49 × 10^10 | 5.50 × 10^6 | 1.06 × 10^6 | 1.84 × 10^9 | 3.58 × 10^15
 | WORST | 4.51 × 10^9 | 1.29 × 10^4 | 2.13 × 10^1 | 4.80 × 10^11 | 8.97 × 10^6 | 1.06 × 10^6 | 8.26 × 10^9 | 4.48 × 10^16
 | MEAN | 1.58 × 10^9 | 9.94 × 10^3 | 2.11 × 10^1 | 1.13 × 10^11 | 5.66 × 10^6 | 1.06 × 10^6 | 2.20 × 10^9 | 8.33 × 10^15
 | STD | 1.49 × 10^9 | 1.93 × 10^3 | 1.35 × 10^-1 | 9.03 × 10^10 | 1.93 × 10^6 | 2.37 × 10^3 | 1.70 × 10^9 | 1.15 × 10^16
6.0 × 10^5 | BEST | 1.79 × 10^3 | 1.28 × 10^3 | 2.08 × 10^1 | 9.30 × 10^9 | 1.47 × 10^6 | 1.05 × 10^6 | 9.95 × 10^6 | 9.78 × 10^13
 | MEDIAN | 2.43 × 10^5 | 3.71 × 10^3 | 2.09 × 10^1 | 2.06 × 10^10 | 2.53 × 10^6 | 1.06 × 10^6 | 4.74 × 10^7 | 3.02 × 10^14
 | WORST | 1.28 × 10^7 | 5.97 × 10^3 | 2.11 × 10^1 | 4.91 × 10^10 | 4.35 × 10^6 | 1.06 × 10^6 | 1.49 × 10^8 | 9.79 × 10^14
 | MEAN | 1.80 × 10^6 | 3.64 × 10^3 | 2.09 × 10^1 | 2.25 × 10^10 | 2.72 × 10^6 | 1.06 × 10^6 | 5.91 × 10^7 | 4.13 × 10^14
 | STD | 3.64 × 10^6 | 1.45 × 10^3 | 9.31 × 10^-2 | 1.01 × 10^10 | 7.59 × 10^5 | 2.42 × 10^3 | 4.14 × 10^7 | 2.55 × 10^14
3.0 × 10^6 | BEST | 2.32 × 10^-23 | 8.16 × 10^2 | 2.07 × 10^1 | 1.06 × 10^9 | 1.47 × 10^6 | 1.05 × 10^6 | 7.09 × 10^4 | 4.10 × 10^13
 | MEDIAN | 5.00 × 10^-16 | 1.18 × 10^3 | 2.08 × 10^1 | 5.63 × 10^9 | 2.53 × 10^6 | 1.05 × 10^6 | 6.29 × 10^5 | 1.11 × 10^14
 | WORST | 6.12 × 10^-4 | 3.64 × 10^3 | 2.09 × 10^1 | 2.45 × 10^10 | 4.18 × 10^6 | 1.06 × 10^6 | 2.99 × 10^6 | 2.63 × 10^14
 | MEAN | 2.58 × 10^-5 | 1.35 × 10^3 | 2.08 × 10^1 | 6.13 × 10^9 | 2.70 × 10^6 | 1.05 × 10^6 | 9.23 × 10^5 | 1.21 × 10^14
 | STD | 1.22 × 10^-4 | 5.73 × 10^2 | 3.91 × 10^-2 | 4.68 × 10^9 | 7.29 × 10^5 | 2.25 × 10^3 | 8.83 × 10^5 | 5.46 × 10^13

FEs | Statistic | F9 | F10 | F11 | F12 | F13 | F14 | F15
1.2 × 10^5 | BEST | 1.57 × 10^8 | 9.32 × 10^7 | 1.00 × 10^10 | 1.96 × 10^8 | 1.04 × 10^10 | 8.40 × 10^10 | 1.46 × 10^7
 | MEDIAN | 4.11 × 10^8 | 9.42 × 10^7 | 1.05 × 10^11 | 3.48 × 10^9 | 2.96 × 10^10 | 5.19 × 10^11 | 4.39 × 10^7
 | WORST | 1.15 × 10^9 | 9.48 × 10^7 | 3.98 × 10^11 | 4.44 × 10^10 | 9.13 × 10^10 | 1.12 × 10^12 | 2.46 × 10^8
 | MEAN | 4.28 × 10^8 | 9.42 × 10^7 | 1.44 × 10^11 | 1.09 × 10^10 | 4.13 × 10^10 | 5.06 × 10^11 | 5.90 × 10^7
 | STD | 2.09 × 10^8 | 3.67 × 10^5 | 1.25 × 10^11 | 1.36 × 10^10 | 2.59 × 10^10 | 2.80 × 10^11 | 5.16 × 10^7
6.0 × 10^5 | BEST | 9.45 × 10^7 | 9.21 × 10^7 | 6.49 × 10^8 | 1.46 × 10^3 | 1.15 × 10^9 | 4.10 × 10^8 | 4.76 × 10^6
 | MEDIAN | 2.07 × 10^8 | 9.33 × 10^7 | 2.15 × 10^9 | 5.92 × 10^3 | 2.81 × 10^9 | 2.44 × 10^9 | 1.01 × 10^7
 | WORST | 5.73 × 10^8 | 9.40 × 10^7 | 2.59 × 10^10 | 2.42 × 10^6 | 6.60 × 10^9 | 1.28 × 10^11 | 2.61 × 10^7
 | MEAN | 2.34 × 10^8 | 9.32 × 10^7 | 4.30 × 10^9 | 2.10 × 10^5 | 3.15 × 10^9 | 1.44 × 10^10 | 1.22 × 10^7
 | STD | 1.13 × 10^8 | 3.85 × 10^5 | 6.43 × 10^9 | 6.60 × 10^5 | 1.43 × 10^9 | 2.74 × 10^10 | 5.76 × 10^6
3.0 × 10^6 | BEST | 9.36 × 10^7 | 9.16 × 10^7 | 4.71 × 10^7 | 1.02 × 10^3 | 1.20 × 10^7 | 1.63 × 10^7 | 1.28 × 10^6
 | MEDIAN | 2.00 × 10^8 | 9.27 × 10^7 | 1.56 × 10^8 | 1.22 × 10^3 | 5.46 × 10^7 | 6.40 × 10^7 | 1.86 × 10^6
 | WORST | 5.74 × 10^8 | 9.31 × 10^7 | 3.53 × 10^8 | 1.99 × 10^3 | 4.66 × 10^8 | 5.42 × 10^8 | 3.14 × 10^6
 | MEAN | 2.27 × 10^8 | 9.26 × 10^7 | 1.71 × 10^8 | 1.32 × 10^3 | 9.75 × 10^7 | 9.41 × 10^7 | 2.00 × 10^6
 | STD | 1.12 × 10^8 | 3.44 × 10^5 | 9.20 × 10^7 | 2.89 × 10^2 | 1.19 × 10^8 | 1.11 × 10^8 | 5.73 × 10^5
Table 11. The ranking of state-of-the-art metaheuristics.
Metaheuristic | C1 | C2 | C3 | C4 | C5 | Total
SHADEILS | 8.67 | 11.75 | 11.00 | 14.67 | 15.00 | 61.08
MOS | 12.67 | 10.75 | 9.50 | 12.33 | 10.00 | 55.25
MLSHADE-SPA | 13.00 | 12.00 | 11.75 | 11.67 | 3.00 | 51.42
CC-RDG3 | 5.83 | 12.25 | 12.63 | 9.33 | 11.00 | 51.04
BICCA | 10.67 | 10.50 | 11.00 | 9.00 | 8.00 | 49.17
IHDELS | 10.33 | 6.50 | 6.25 | 9.67 | 13.00 | 45.75
SGCC | 8.00 | 8.00 | 8.38 | 12.00 | 9.00 | 45.38
SACC | 2.00 | 6.63 | 8.38 | 8.33 | 14.00 | 39.33
CC-SHADE-ML-LS1 | 12.00 | 4.75 | 4.50 | 4.33 | 12.00 | 37.58
CC-CMA-ES | 8.67 | 5.38 | 7.75 | 7.33 | 2.00 | 31.13
VMODE | 6.00 | 5.38 | 6.13 | 6.67 | 6.00 | 30.17
DGSC | 6.33 | 7.38 | 5.38 | 5.00 | 4.00 | 28.08
MPS | 4.33 | 8.25 | 7.50 | 3.00 | 5.00 | 28.08
DECC-G | 9.33 | 6.00 | 4.38 | 3.33 | 1.00 | 24.04
DEEPSO | 2.17 | 4.50 | 5.50 | 3.33 | 7.00 | 22.50
Table 12. Detailed results for the fine-tuned CC-SHADE-ML-LS1 algorithm.
FEs | Statistic | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8
1.2 × 10^5 | BEST | 3.91 × 10^3 | 5.94 × 10^3 | 2.11 × 10^1 | 6.35 × 10^10 | 4.58 × 10^6 | 1.05 × 10^6 | 7.05 × 10^8 | 9.88 × 10^14
 | MEDIAN | 5.49 × 10^3 | 8.47 × 10^3 | 2.12 × 10^1 | 1.92 × 10^11 | 6.12 × 10^6 | 1.06 × 10^6 | 2.74 × 10^9 | 2.49 × 10^15
 | WORST | 7.60 × 10^3 | 9.44 × 10^3 | 2.13 × 10^1 | 4.74 × 10^11 | 7.28 × 10^6 | 1.06 × 10^6 | 4.78 × 10^9 | 3.05 × 10^16
 | MEAN | 5.63 × 10^3 | 7.99 × 10^3 | 2.12 × 10^1 | 2.00 × 10^11 | 6.12 × 10^6 | 1.06 × 10^6 | 2.52 × 10^9 | 6.75 × 10^15
 | STD | 1.34 × 10^3 | 1.41 × 10^3 | 7.55 × 10^-2 | 1.32 × 10^11 | 1.04 × 10^6 | 3.06 × 10^3 | 1.31 × 10^9 | 1.06 × 10^16
6.0 × 10^5 | BEST | 0.00 × 10^0 | 1.04 × 10^3 | 2.00 × 10^1 | 8.86 × 10^9 | 1.66 × 10^6 | 1.04 × 10^6 | 1.81 × 10^7 | 7.66 × 10^13
 | MEDIAN | 0.00 × 10^0 | 1.18 × 10^3 | 2.00 × 10^1 | 3.92 × 10^10 | 3.87 × 10^6 | 1.05 × 10^6 | 1.97 × 10^8 | 3.56 × 10^14
 | WORST | 1.54 × 10^-32 | 1.49 × 10^3 | 2.00 × 10^1 | 9.74 × 10^10 | 5.59 × 10^6 | 1.05 × 10^6 | 6.84 × 10^8 | 2.14 × 10^15
 | MEAN | 3.07 × 10^-33 | 1.18 × 10^3 | 2.00 × 10^1 | 4.85 × 10^10 | 3.61 × 10^6 | 1.05 × 10^6 | 2.33 × 10^8 | 5.32 × 10^14
 | STD | 6.87 × 10^-33 | 1.61 × 10^2 | 1.41 × 10^-5 | 3.21 × 10^10 | 1.58 × 10^6 | 3.16 × 10^3 | 2.54 × 10^8 | 7.21 × 10^14
3.0 × 10^6 | BEST | 0.00 × 10^0 | 7.42 × 10^2 | 2.00 × 10^1 | 2.80 × 10^9 | 1.61 × 10^6 | 1.03 × 10^6 | 1.81 × 10^5 | 3.77 × 10^13
 | MEDIAN | 0.00 × 10^0 | 7.77 × 10^2 | 2.00 × 10^1 | 3.11 × 10^9 | 3.87 × 10^6 | 1.04 × 10^6 | 5.17 × 10^5 | 6.87 × 10^13
 | WORST | 0.00 × 10^0 | 1.33 × 10^3 | 2.00 × 10^1 | 9.54 × 10^9 | 5.05 × 10^6 | 1.05 × 10^6 | 7.00 × 10^5 | 1.11 × 10^14
 | MEAN | 0.00 × 10^0 | 8.66 × 10^2 | 2.00 × 10^1 | 4.26 × 10^9 | 3.28 × 10^6 | 1.04 × 10^6 | 4.50 × 10^5 | 6.90 × 10^13
 | STD | 0.00 × 10^0 | 2.07 × 10^2 | 1.41 × 10^-5 | 2.47 × 10^9 | 1.28 × 10^6 | 7.18 × 10^3 | 2.23 × 10^5 | 2.87 × 10^13

FEs | Statistic | F9 | F10 | F11 | F12 | F13 | F14 | F15
1.2 × 10^5 | BEST | 3.78 × 10^8 | 9.29 × 10^7 | 3.65 × 10^10 | 3.63 × 10^8 | 1.47 × 10^10 | 1.76 × 10^11 | 2.96 × 10^7
 | MEDIAN | 4.56 × 10^8 | 9.39 × 10^7 | 1.86 × 10^11 | 2.84 × 10^9 | 2.43 × 10^10 | 2.51 × 10^11 | 4.94 × 10^7
 | WORST | 8.71 × 10^8 | 9.46 × 10^7 | 4.39 × 10^11 | 1.20 × 10^10 | 7.16 × 10^10 | 5.56 × 10^11 | 7.50 × 10^7
 | MEAN | 5.07 × 10^8 | 9.38 × 10^7 | 2.27 × 10^11 | 4.82 × 10^9 | 3.08 × 10^10 | 2.77 × 10^11 | 5.23 × 10^7
 | STD | 1.71 × 10^8 | 5.94 × 10^5 | 1.29 × 10^11 | 5.26 × 10^9 | 1.89 × 10^10 | 1.37 × 10^11 | 1.59 × 10^7
6.0 × 10^5 | BEST | 1.58 × 10^8 | 9.26 × 10^7 | 1.39 × 10^9 | 5.84 × 10^2 | 1.21 × 10^9 | 3.21 × 10^9 | 5.66 × 10^6
 | MEDIAN | 3.78 × 10^8 | 9.30 × 10^7 | 4.54 × 10^9 | 8.54 × 10^2 | 3.49 × 10^9 | 3.18 × 10^10 | 7.32 × 10^6
 | WORST | 4.22 × 10^8 | 9.40 × 10^7 | 5.82 × 10^10 | 1.18 × 10^3 | 1.27 × 10^10 | 7.89 × 10^10 | 1.37 × 10^7
 | MEAN | 3.50 × 10^8 | 9.31 × 10^7 | 1.27 × 10^10 | 9.08 × 10^2 | 4.57 × 10^9 | 3.50 × 10^10 | 8.98 × 10^6
 | STD | 9.32 × 10^7 | 5.19 × 10^5 | 2.04 × 10^10 | 2.19 × 10^2 | 3.98 × 10^9 | 2.99 × 10^10 | 3.34 × 10^6
3.0 × 10^6 | BEST | 1.06 × 10^8 | 9.17 × 10^7 | 3.50 × 10^7 | 4.27 × 10^0 | 7.98 × 10^6 | 2.53 × 10^7 | 8.33 × 10^5
 | MEDIAN | 3.08 × 10^8 | 9.24 × 10^7 | 1.02 × 10^8 | 1.06 × 10^1 | 1.11 × 10^7 | 6.31 × 10^7 | 1.42 × 10^6
 | WORST | 3.58 × 10^8 | 9.26 × 10^7 | 3.95 × 10^8 | 1.06 × 10^3 | 6.14 × 10^7 | 1.56 × 10^8 | 2.56 × 10^6
 | MEAN | 2.60 × 10^8 | 9.23 × 10^7 | 1.50 × 10^8 | 2.84 × 10^2 | 2.35 × 10^7 | 7.11 × 10^7 | 1.55 × 10^6
 | STD | 1.04 × 10^8 | 3.23 × 10^5 | 1.29 × 10^8 | 4.22 × 10^2 | 2.19 × 10^7 | 4.58 × 10^7 | 6.65 × 10^5
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Vakhnin, A.; Sopov, E. A Novel Self-Adaptive Cooperative Coevolution Algorithm for Solving Continuous Large-Scale Global Optimization Problems. Algorithms 2022, 15, 451. https://doi.org/10.3390/a15120451

