Mathematics
  • Article
  • Open Access

27 April 2020

Niching Multimodal Landscapes Faster Yet Effectively: VMO and HillVallEA Benefit Together

Faculty of Science and Engineering, Iwate University, Ueda 4-3-5, Morioka, Iwate 020-0066, Japan
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Intelligent Optimization in Big Data, Machine Learning and Artificial Intelligence

Abstract

Variable Mesh Optimization with Niching (VMO-N) is a framework for multimodal problems (those with multiple optima located in several subspaces of the search space). Its only two instances, however, perform restrictively. The Hill-Valley Evolutionary Algorithm (HillVallEA), a potent multimodal optimizer, uses large populations that prolong its execution. This study strives to revise VMO-N, contrast it with related approaches, instantiate it effectively, make HillVallEA faster, and indicate which methods (previous or new) suit practical use. We hypothesize that extra pre-niching search in HillVallEA can reduce the overall population and that, if such a reduction is substantial, the algorithm runs more rapidly while remaining effective. After refining VMO-N, we derive a new instance of it, dubbed Hill-Valley-Clustering-based VMO (HVcMO), which also extends HillVallEA. Results show it to be the first competitive variant of VMO-N, placing it on top of the VMO-based niching strategies. Regarding the number of optima found, HVcMO performs statistically similarly to the latest HillVallEA version. However, it brings a pivotal benefit for HillVallEA: a severe reduction of the population, which leads to an estimated drastic speed-up when the volume of the search space lies in a certain range.

1. Introduction

In the arena of optimization, a heuristic algorithm seeks solutions that are good enough (not necessarily optimal) within a fair computation time [1]. Thus, it is crucial to balance the quality of the approximated solutions against the time used to reach them. Beyond simple heuristics for specific problems, metaheuristics are intelligent mechanisms that guide other heuristics through the search process [2]. Evolutionary algorithms (EAs) are a category of metaheuristics based on biological evolution. As global optimization methods, typical single-objective EAs seek a unique global optimum, ignoring the possible existence of other optima. That is a serious limitation in industrial scenarios described by multimodal optimization problems, such as those reported in [3,4]. In this context, the term multimodality denotes the presence of optima in various regions of the search space. Many decisions in engineering, e.g., selecting a final design, depend on the earlier optimization of relevant aspects, e.g., cost and simplicity [5]. Hence, experts look for several optima instead of only the best solution, so they can choose the most suitable one by considering further practical aims.
Over the course of the last four decades, researchers have steadily coped with that dilemma, giving rise to the field of evolutionary multimodal optimization [6]. The simultaneous detection of multiple optima in the search space is a big challenge. Such difficulty, together with the vast range of application domains involved, makes it a very active research field centered on the design of niching algorithms [6,7], which are the key methods here, and on their coupling with metaheuristics. Inspired by the dynamics of ecosystems in nature, the niching paradigm is the common computational choice for multimodal optimization. Niching techniques have complemented several metaheuristic approaches, e.g., Genetic Algorithms (GAs) [5,6,7,8,9,10], Particle Swarm Optimization (PSO) [11,12,13,14] and Differential Evolution (DE) [15,16,17,18]. Because of this strong synergy, niching methods have also been described as the extension of EAs to multimodal scenarios [19].
The Variable Mesh Optimization (VMO) [20] metaheuristic has proved to perform competitively in continuous landscapes. In its canonical form, it can locate different optimal solutions but cannot maintain them over time [21]. A few works [21,22,23] have augmented VMO to make it cope with multimodality. They include VMO-N [22], a generic VMO framework for multimodal problems, whose only two instances are Niche-Clearing-based VMO (NC-VMO) [21] and Niche-based VMO via Adaptive Species Discovery (VMO-ASD) [22]. The experimental analysis demonstrated that VMO-N is a suitable approach for multimodal optimization. However, a final request in [22] remains open: the search for a strongly competitive variant of VMO-N that uses a more robust niching procedure than those in NC-VMO and VMO-ASD.
To overcome that limitation, we unveil a new instance of VMO-N that exploits the Hill-Valley Clustering (HVC) niching technique [24] and the Adapted Maximum-Likelihood Gaussian Model Iterated Density-Estimation Evolutionary Algorithm with univariate Gaussian distribution (AMaLGaM-Univariate) [25,26]. Recently, Maree et al. [24,27] successfully used those methods together in the HillVallEA scheme. Motivated by their relevant outcomes, we instantiate the VMO-N framework by incorporating such a joint strategy. The resulting HVcMO method is dubbed HVcMO20a when referring to its primary setup. This is the first competitive version of VMO-N. To support this claim, we compared it to remarkable metaheuristic strategies for multimodal optimization. Beyond the instances of VMO-N, HVcMO20a also surpasses the Niching VMO (NVMO) [23] method, as far as we know the only other VMO-based approach for multimodal optimization reported in the literature.
In addition, HVcMO is not only an instance of VMO-N, but also an extension of HillVallEA19, the latest version of HillVallEA, a quite sophisticated metaheuristic for multimodal optimization. HillVallEA19 [27] outperforms the algorithms presented in the last two editions (2018 and 2019) of the Competition on Niching Methods for Multimodal Optimization, held within the Genetic and Evolutionary Computation Conference (GECCO) [28]. In spite of that, a major weakness of HillVallEA19 is that it tends to need very large populations. This drawback is not unusual for an effective multimodal optimizer, considering that many functions have a large number of optima to be found, and that many of them involve numerous variables as well.
As expected, the observed results show that the combined use of HVC and AMaLGaM-Univariate within HVcMO makes that VMO-oriented optimizer capable of approximating multimodal problems effectively. On the other hand, applying the search operators of VMO allows such an extended HillVallEA mechanism to perform faster over a large subset of the problems used in this study, which derives from an overall reduction of the population size on the problems in the test suite. This fact evidences the mutual benefit of using both approaches together. Given new multimodal optimization problems, apart from those seen in this study, it is then possible to recommend either HVcMO20a or HillVallEA19 to solve them, according to their shared characteristics with the benchmark functions approximated by these algorithms.
Multimodal optimization is formalized in Section 2, together with some niching-related concepts. Relevant works, including VMO-N and HillVallEA, are reviewed in Section 3, where some ideas to deal with the outlined shortcomings are presented as the objectives of this research. The VMO-N framework is improved in Section 4 and then instantiated as the HVcMO algorithm. Section 5 describes the setup of the experiments to validate and analyze the new proposal, whose results are discussed in Section 6. Finally, Section 7 presents the conclusions and some directions for future work.

2. Formal Notion of Multimodal Optimization and Niching Approach

For the sake of simplicity, multimodal optimization is usually defined informally, mostly by means of descriptive cases of multimodal scenarios, like those in Figure 1. Such conceptualizations are definitely valid and ensure a straightforward understanding of what multimodal optimization is.
Figure 1. Examples of multimodal maximization functions, including two usual benchmarks: (a) Equal Maxima; (b) Uneven Decreasing Maxima. The last graph (c) illustrates a case with plateaus.
However, even when supported by examples of functions, a more formal viewpoint is necessary. Let us formulate a continuous optimization problem $P$ as a model driven by the elements below:
$$P \triangleq (S, \Omega, f)$$
where $S$ stands for a search space, an abstract construction of all the possible solutions over a finite set of decision variables $X_j$, with $1 \le j \le D$, where $D$ is the dimension of $P$. For each $X_j$, the pair of lower ($a_j$) and upper ($b_j$) bounds (limits) of its domain is given in $B \triangleq \{[a_j, b_j] \mid 1 \le j \le D\}$. With $\Omega$ being the set of constraints between variables, $P$ is an unconstrained problem if $\Omega = \{\,\}$. Finally, $f: B \to \mathbb{R}$ indicates the objective function to optimize (either minimize or maximize).
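To ground this formulation, the sketch below represents a box-bounded, unconstrained problem in code. The class and field names are ours (not from the paper); the example instantiates the Equal Maxima benchmark of Figure 1a, whose standard definition is $f(x) = \sin^6(5\pi x)$ on $[0, 1]$.

```python
import math
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class BoxBoundedProblem:
    """A continuous problem P = (S, Omega, f) with box bounds B and Omega = {}."""
    bounds: List[Tuple[float, float]]          # B = {[a_j, b_j] | 1 <= j <= D}
    objective: Callable[[List[float]], float]  # f: B -> R
    maximize: bool = True

    @property
    def dimension(self) -> int:
        return len(self.bounds)  # D

    def contains(self, s: List[float]) -> bool:
        """Check that a candidate solution s lies inside the search space S."""
        return all(a <= v <= b for v, (a, b) in zip(s, self.bounds))

# Example: the 1-D Equal Maxima benchmark (five equal peaks in [0, 1]).
equal_maxima = BoxBoundedProblem(
    bounds=[(0.0, 1.0)],
    objective=lambda s: math.sin(5.0 * math.pi * s[0]) ** 6,
)
```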
In this scenario, instantiating a variable $X_j$ means assigning a real value $v_j \in [a_j, b_j]$ to it, that is:
$$X_j \leftarrow v_j$$
and therefore, a solution $s \in S$ is a complete assignment where the values given to the decision variables satisfy the constraints in $\Omega$. A solution $s$ is a global optimum for a minimization problem $P$ if and only if $f$ reaches its lowest value at $s$, among all the solutions in $S$. Conversely, when $P$ is a maximization problem, $s$ is said to be a global optimum iff $f(s)$ is the largest value of the objective function in $S$. Those two criteria for global optimality can be formally defined as:
$$(\forall s)\,[\,GMin(s, P) \Leftrightarrow s \in S \wedge (\forall s')\,[\,s' \in S \Rightarrow f(s) \le f(s')\,]\,]$$
$$(\forall s)\,[\,GMax(s, P) \Leftrightarrow s \in S \wedge (\forall s')\,[\,s' \in S \Rightarrow f(s) \ge f(s')\,]\,]$$
Likewise, a solution $s$ may be a local optimum for $P$, i.e., within a certain region $S' \subseteq S$. Since the decision variables are real-valued, the number of subsets of $S$ is infinite, and so, seemingly, is the number of local optima; however, only the subspaces that represent peaks count. The set of peaks ($Pk$) denotes the partition of $S$ whose elements are regions where $f$ is quasi-convex, for a minimization task:
$$(\forall Pk)\,[\,Peaks(Pk, P) \Leftrightarrow Partition(Pk, S) \wedge (\forall S')\,[\,S' \in Pk \Rightarrow QuasiconvexInRegion(f, S')\,]\,]$$
or quasi-concave, in the event of a maximization problem:
$$(\forall Pk)\,[\,Peaks(Pk, P) \Leftrightarrow Partition(Pk, S) \wedge (\forall S')\,[\,S' \in Pk \Rightarrow QuasiconcaveInRegion(f, S')\,]\,]$$
In Figure 1, each example of maximization consists of five (quasi-concave) peaks. In the rightmost graph, the peaks indeed end in 'plateaus', i.e., flat regions of $S$ where the infinitely many solutions they enclose have exactly the same fitness value. Formally, local optimality can be defined as:
$$(\forall s)\,[\,LMin(s, P) \Leftrightarrow (\exists S')\,[\,S' \in Pk \wedge s \in S' \wedge (\forall s')\,[\,s' \in S' \Rightarrow f(s) \le f(s')\,]\,]\,]$$
$$(\forall s)\,[\,LMax(s, P) \Leftrightarrow (\exists S')\,[\,S' \in Pk \wedge s \in S' \wedge (\forall s')\,[\,s' \in S' \Rightarrow f(s) \ge f(s')\,]\,]\,]$$
While a global optimum is also a local one within the peak it belongs to, it is common to separate those concepts for practical reasons. Let $S^{\circ}$ denote the (complete) set of local optima (i.e., one optimum per peak; infinitely many in the case of a plateau), and let $S^{*}$ be the set of global optima, so that $S^{*} \subseteq S^{\circ} \subseteq S$. Hereinafter, by 'local optima' we refer only to the partial set of local optima ($S^{\circ-}$), which excludes the global ones:

$$S^{\circ-} \triangleq S^{\circ} \setminus S^{*}$$
The global optimization approach assumes only that $|S^{*}| = 1$ and $|Pk| = 1$. Diametrically, the multimodal optimization paradigm concerns the existence of multiple peaks ($|Pk| \ge 2$), not merely several optima. For instance, a single-peak function ending in a flat region (infinitely many optima) is weakly unimodal [29]. Solving a multimodal optimization problem means locating all peaks. The standard population-based optimizers are then modified to create stable subpopulations (known as niches) at all the peaks to discover; niching algorithms exist precisely for that purpose. The best solution in a niche is referred to as its master; every niche corresponds to a single peak of the multimodal problem, and most functions feature peaks of distinct radii, usually unknown. Several of those problems have been standardized as test cases for new multimodal optimizers. For some of them, the number of local optima is known precisely; in others, it is simply huge. Thus, only the global optima located are used for assessing the performance of new algorithms, which may then avoid spending effort on finding local optima. This is not a real shortage, as most of those algorithms can be easily adapted to pursue global and local optima together, when required in real-world scenarios.

4. The Proposals

In response to the second research objective, this section presents the VMO-N framework with several improvements that guarantee high flexibility. This new scheme constitutes the base for creating future VMO multimodal optimizers, such as the HVcMO algorithm, also introduced below.

4.1. Variable Mesh Optimization with Niching: A Revised Framework

This fresh proposal preserves the essential contributions of VMO-N: (1) advising when it is best to apply the niching step within VMO, and (2) applying the adaptive clearing operator over each niche. However, it gains strong adaptability through multiple optional steps, as Algorithm 4 shows. One of them is the use of an elitist archive $E$, a memory-based strategy imported from the literature. It has proved helpful in many multimodal optimizers [18,23,24,30,37], some of which were discussed earlier in this paper. Apart from the masters of the peaks, it might be helpful to store other nodes for additional purposes. Hence, a second external list, denoted $R$ in the framework, is optionally considered.
The previous works on VMO-N overcame the initial shortcoming of VMO, namely its inability to maintain the fittest found nodes in the population over time. Given that, memorizing such nodes is unnecessary, unless certain circumstances apply, e.g., if they are required to implement a mechanism, like the 'tabu' list in [37], that explicitly avoids re-visiting regions of the search space. However, this revamped VMO-N considers the possibility that the best individuals discovered are deliberately excluded from the mesh for the next iteration, either temporarily or permanently. In that situation, $E$ becomes essential.
Other optional instructions are carefully placed in the building blocks of VMO-N. Besides providing adaptability to this multimodal optimization scheme, they make it more generalizable, meaning that its instances can deal with a vast range of multimodal problems. A plethora of methods can be created by instantiating this framework in the future. Actually, some of those non-compulsory commands indirectly suggest further strategies to apply. In addition, the number of parameters used and of variables declared is as large as needed, for example, by the niching algorithm incorporated or by any elective step conducted.
Based on the outcomes in [23,24], an important addition is the possibility of a local optimization step (Algorithm 4, line 31). In order to enhance the search, any freely chosen optimizer is run separately over each niche found, as sketched below. It is called a local optimizer because it initializes from the individuals in a given niche, using either some or all of them. However, it might reach solutions in other areas of the search space, beyond the limits of the niche at hand. Following the main course of VMO-N, once a niche has been improved, the adaptive clearing affects it (in line 33) and then the fittest node in the niche (indicated as $N_{hj,1}$) is considered to update the list of elites only if $N_{hj,1}$ was just discovered. Moreover, in case of using the second external list ($R$), it is updated at that moment as well.
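For concreteness, here is a minimal sketch of that step. The framework leaves the choice of optimizer free; scipy's L-BFGS-B is used below only as a stand-in, and the function and variable names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def local_optimization_step(niches, f, bounds):
    """Run a freely chosen local optimizer over each niche (Algorithm 4, line 31).

    niches: list of 2-D arrays, each row a node of one niche.
    f:      objective function to minimize.
    bounds: list of (a_j, b_j) pairs delimiting the search space.
    """
    improved = []
    for niche in niches:
        # Seed the optimizer with the fittest node of the niche ...
        best = min(niche, key=f)
        # ... although the run may leave the niche's own region of S.
        res = minimize(f, x0=np.asarray(best), bounds=bounds, method="L-BFGS-B")
        improved.append(res.x)
    return improved
```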
There are a couple of alternative courses involving the update of the memories (line 34), taking into account that it can be permuted either with line 33 or with line 35. If the order of the instructions in lines 33 and 34 is exchanged, the adaptive clearing occurs after updating the lists. In fact, that possibility was announced in Section 3.4.3, while explaining that even in that case the algorithmic sequence of any instance of VMO-N differs from that of the NVMO algorithm. Such a modification makes sense if, for example, several nodes have to be saved in the second extra memory before they are removed by the adaptive clearing operator.
On the other hand, permuting lines 34 and 35 means that the memory update happens at once, after processing all niches, instead of after processing each single niche. In that case, the set $\{N_{hj,1}, \dots\}$ contains the master of each j-th niche to update $E$, together with other relevant nodes to update $R$. What is more, it is possible to update one of the lists ($E$ or $R$) inside the for-loop, every time a niche is processed, and the other list after that, only once. That alternative derives from the separability of both lists, and from the permutability of line 34. Nonetheless, applying the adaptive clearing (line 33) is no longer obligatory, because other mechanisms may fulfill its main function: to foster diversity in the mesh.
Algorithm 4. The revised VMO-N framework.
The VMO-N framework also integrates the notion of what we coin global key nodes. That implies not only the single fittest solution but any other with global importance, for example, every node having fitness similar to the best solution in the population. Whenever that option is taken, the global expansion of the mesh takes into account the set $G_k$ of such key solutions. Likewise, the local expansion is slightly adapted. Given any node $s_i$, its $k$ neighbors are not necessarily selected from the whole mesh. Instead, they are found among the nodes belonging to a certain universe that is specifically defined for the i-th node as a subset of the sampled mesh, that is, $U_i \subseteq M$.
The annotated restriction $3.0I \le T \le 3.5I$ derives from the details of the expansion. Setting $T \ge 3I$ assures the creation of at least $2I$ nodes, which is enough to expand locally, globally, and, to some extent, from the frontiers. Moreover, $3.0I < T \le 3.5I$ further benefits the creation of solutions from the frontiers. Actually, $T = 3.5I$ is more than sufficient to treat both frontiers fully. Instead of setting $T$ as a parameter, VMO-N can now also compute it as needed, for example, to obtain the exact $|Ls^H|$ wanted. If $|Ls^H|$ is very small, $T < 3.0I$ can be obtained. In addition, even if $T$ is set as a parameter together with $I$, they may vary along the search (either in line 36 or in line 42).
Other revisions are the operation with several termination criteria for the evolutionary process, and the use of a while-loop rather than a for-loop to describe it. Although the latter is coherent with our practice of VMO, it has no impact other than gaining descriptive capability for the future. The altered framework remains valid to pursue both global and local optima, while it also allows limiting the search to only one of those types (e.g., by means of lines 34 and 37).

4.2. Hill-Valley-Clustering-Based Variable Mesh Optimization

Algorithm 5 reveals the elementary units of HVcMO, the novel instance of VMO-N, whose competitiveness is later confirmed in Section 6, thereby accomplishing the third research objective of this study. It is also an evident extension of HillVallEA that integrates some important additions. Thus, HVcMO can be seen from the perspectives of its two parents. In this new case of the VMO-N framework:
  • two external lists are used to keep the elites and those nodes for rejection, respectively,
  • HVC is employed as the niching algorithm,
  • the adaptive clearing is not applied,
  • a local optimizer is run over each niche, and
  • the mesh in the next generation is fully replaced by a new one.
Since the mesh is entirely reset (line 13), an elitist archive is required and the adaptive clearing is needless. The pursuit of diversity recurs, now via the rejection scheme applied throughout the restart of the population, which follows the instructions for HillVallEA, as does the update of the archive. Besides, before niching the expanding mesh, it is enlarged with all the already identified elites (line 39) to use them as attractors while executing the niching method. That is equivalent to evolving a population that consists of two consecutive segments, one for certain masters of the found niches and one for random nodes, so that the sorting, the truncation and the expansion affect only the latter segment.
Moreover, the search operators of VMO were implemented as in [20,21,22]. The corresponding formulations given in [20] largely apply, except for some changes that involve mainly the local expansion. Among them, the distance threshold is based on [21,22], as:
$$\varepsilon \triangleq \frac{1}{D} \sum_{j=1}^{D} \varepsilon_j$$
where $D$ is the problem dimension and each component $\varepsilon_j$ denotes a portion of the amplitude of the domain of the j-th variable, whose upper and lower bounds are $b_j$ and $a_j$, respectively. That fraction depends on the current count of function evaluations ($FE_{used}$), out of a fixed maximum number ($MaxFEs$):
$$\varepsilon_j \triangleq \begin{cases}
(b_j - a_j)/2, & FE_{used} < 0.15\,MaxFEs \\
(b_j - a_j)/4, & 0.15\,MaxFEs \le FE_{used} < 0.30\,MaxFEs \\
(b_j - a_j)/8, & 0.30\,MaxFEs \le FE_{used} < 0.60\,MaxFEs \\
(b_j - a_j)/16, & 0.60\,MaxFEs \le FE_{used} < 0.80\,MaxFEs \\
(b_j - a_j)/100, & 0.80\,MaxFEs \le FE_{used}
\end{cases}$$
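A direct implementation of this schedule could read as follows (a sketch; the function and variable names are ours):

```python
def distance_threshold(bounds, fe_used, max_fes):
    """Mean distance threshold for the local expansion of VMO.

    bounds:  list of (a_j, b_j) pairs.
    fe_used: function evaluations consumed so far.
    max_fes: total evaluation budget (MaxFEs).
    """
    progress = fe_used / max_fes
    if progress < 0.15:
        divisor = 2
    elif progress < 0.30:
        divisor = 4
    elif progress < 0.60:
        divisor = 8
    elif progress < 0.80:
        divisor = 16
    else:
        divisor = 100
    eps = [(b - a) / divisor for a, b in bounds]  # per-variable thresholds
    return sum(eps) / len(bounds)                 # averaged over the D variables
```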
Algorithm 5. The HVcMO algorithm.
Besides, for every node $s_i$ to be expanded locally by the F-operator, the list of neighbors $L_k^i$ is not determined from the entire mesh but from a certain universe $U_i$. In HVcMO, regardless of the node at hand, such a universe is the fittest $\tau$-fraction of the mesh. For that, we reuse the selection fraction defined for the truncation of the initial mesh, though the two may also be set unequally. Not all the solutions in $U_i$ have global importance, even for relatively small values of $\tau$. For that reason, that universe should not be confused with a set of key nodes. The larger $\tau$ is, the more nodes with no global relevance are included in $U_i$. The choice of that quasi-local F-operator is mostly motivated by our concern about the running time. Future works should evaluate the effects of other ways to define $U_i$.
The formulations for the G-operator and the H-operator remain unaltered, but the size of the frontiers affected by the latter is jointly delimited by their fixed amplitude ratio $H^{amp}$. In the previous studies about VMO, the value of $T$ is fixed as a parameter, informing the largest possible length of the mesh after the expansion. Thus, the extent of the creation of nodes from the frontiers, i.e., $|Ls^H|$, is basically figured as the difference between $T$ and the size of the enlarging mesh (Algorithm 1, line 19). In that case, the proportion of the mesh taken as frontiers is influenced by $T$. Conversely, in HVcMO, $|Ls^H|$ is computed as the lowest value between the percentage of the initial mesh indicated by $H^{amp}$ and a firm upper limit ($H^{max}$) for the size of the frontiers (Algorithm 5, line 18), as illustrated below. Next, lines 19~38, which are equivalent to lines 8~27 in Algorithm 4, mark how to calculate $T$. Note that this is just a formalism to trace $T$ from Algorithm 5 to Algorithm 4, and then from Algorithm 4 to Algorithm 1, as to figure $|Ls^H|$. In practice, after calculating $|Ls^H|$, HVcMO does not utilize $T$ at all.
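A minimal sketch of that computation follows. It assumes the fraction is rounded up and taken of the truncated mesh, which reproduces the ranges later reported in Section 5.1 ($23 \le |M| \le 1000$ mapping to $2 \le |Ls^H| \le 50$); the helper name is ours.

```python
import math

def frontier_size(mesh_size, h_amp=0.05, h_max=50):
    """|Ls_H|: nodes created from the frontiers by the H-operator in HVcMO.

    The smaller of a fixed fraction (h_amp) of the mesh and a firm cap (h_max).
    """
    return min(math.ceil(h_amp * mesh_size), h_max)

# With the Section 5.1 setup, truncated meshes of 23..1000 nodes
# yield 2 <= |Ls_H| <= 50, and larger meshes stay at 50.
assert frontier_size(23) == 2 and frontier_size(1000) == 50
```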
From the viewpoint of HillVallEA, the biggest augmentation in HVcMO is the application of the search operators of VMO (lines 18~39) over the truncated population. Since the population is restarted randomly, HillVallEA uses a couple of important strategies: the rejection mechanism, to spread the solutions through unexplored areas of the search space, and the subsequent truncation, to process only the best part of the population. Adding the expansion operators of VMO as another preparation step before niching is intended to improve the quality of the population as well. However, it causes the number of nodes to increase, and with it the consumption of the budget and the execution time. That consequence applies to any specific sample size at a certain moment, but is not necessarily valid for the entire evolutionary optimization. Our conjectures about this are delineated as:
  • Hypothesis 1: Using additional search operators to enhance the population before niching may reduce the total number of solutions throughout the optimization process of HillVallEA.
  • Hypothesis 2: If the reduction of the overall number of solutions is big enough, the total execution time of HillVallEA should also decrease, while keeping a quite similar multimodal capability.
To evaluate them (in Section 6), we borrowed search procedures from VMO. Others may be used instead, e.g., crossover mechanisms designed for GAs; thus, a research avenue for HillVallEA begins. Another variation introduced by HVcMO concerns the diminution of the mesh size when the remaining budget is insufficient to deal with a new population of the current length. This choice was indeed contemplated by Maree et al. in [48], but discarded, as they considered it fruitless to sample such small populations at the end of the optimization process. In HVcMO, the size of the mesh decreases accordingly (line 8) when the available budget is insufficient not only to sample a population of the current size, but also to expand at least part of the new nodes via VMO. Even without any deep analysis of it, we implemented that modification based on some empirically observed benefits. Besides, it slightly increments the overall number of solutions and thus the running time. Therefore, it offers a practical opportunity to prove that even with a forced longer execution time (beyond that provoked by VMO itself), HVcMO can run faster than HillVallEA. However, future works should verify the actual advantages of keeping such a late shrinkage of the population in HVcMO.
As an aftermath, the condition to increase the size parameters (line 50) is altered. In HillVallEA, that occurs if no new peak is detected. In HVcMO, it happens only if, in addition, the size of the population was never reduced, since it is senseless to push a larger mesh again after it was previously shortened due to a lack of budget. On the other hand, one more extension aims to reinforce the potential of the rejection while sampling the population. Not only the solutions sampled in the preceding iteration are considered, but also each master representing a newly discovered peak (in the current iteration), together with the best elite ever found (line 49). A closing remark clarifies HVcMO20a, which is nothing but the HVcMO algorithm where AMaLGaM-Univariate performs as the local optimizer and the parametric specifications for HillVallEA19 are widely adopted, as detailed below.

5. Experimental Setup

The general elements of the experimental analysis are explained in this section. They include the setting of parameters for HVcMO20a, the suite of benchmark problems, the baseline methods, the performance criteria, and the statistical tests for comparisons. This work follows the procedures of the Competition on Niching Methods for Multimodal Optimization within GECCO [28], which extensively embraces the instructions in [40]. For any single run, the tried algorithm stops when a given budget is exhausted. Stated as a certain maximum number of function evaluations ($MaxFE$), the budget for every test problem is specified in Table 2. Every algorithm is run 50 times over each test problem. The outputs are assessed at five levels of accuracy: $1.0 \times 10^{-1}$, $1.0 \times 10^{-2}$, $1.0 \times 10^{-3}$, $1.0 \times 10^{-4}$ and $1.0 \times 10^{-5}$, and the results are averaged over the number of runs ($NR = 50$) at every level of accuracy. Such a concept of accuracy is meant only to evaluate the outputs of any multimodal optimizer. However, methods that use certain thresholds, e.g., the tolerance in HillVallEA, may set them by taking those accuracy values as a reference.
Table 2. Characteristics of the test problems and budget allowance. Each value in red represents a search space volume that is either small or large; a blue value denotes a medium-size volume.

5.1. Configuration of Parameters of HVcMO20a

Equivalent to $N$ in HillVallEA, the size of the initial mesh is set as $I = 64$, with a selection fraction of $\tau = 0.35$. The number of neighbors of each node when affected by the local expansion is $k = 3$, the same as in [20,21,22,23]. HVcMO20a takes AMaLGaM-Univariate as the local optimizer and then, the size of the population to enhance every niche (by means of HVC) is $N^{cA} = 10\sqrt{D}$ (the values of $D$ are shown in Table 2), and its initial fraction is $N^{c}_{ini} = 0.8$. The increment factors for the length of the overall population and the cluster size are fixed as $N^{inc} = 2.0$ and $N^{c}_{inc} = 1.1$, respectively. The level of tolerance is set to $1.0 \times 10^{-5}$, except for the later estimate of the time ratio (see Section 6.3). The domain bounds ($B$) for the variables of each problem are also given in Table 2.
Furthermore, the frontiers are jointly delimited by an amplitude rate of $H^{amp} = 0.05$, and at most $H^{max} = 50$ nodes are generated from them. Hence, while in the evolutionary optimization the length of the initial mesh grows in the range $64 \le I \le 2857$, it is truncated to $23 \le |M| \le 1000$ nodes and therefore, the number of nodes to expand from the frontiers is $2 \le |Ls^H| \le 50$. For larger initial meshes, $|Ls^H|$ keeps a constant value of 50. This choice of applying the H-operator over only a few solutions responds to the outcomes of previous works and some analytical observations. Although the frontier operator was found useful for global optimization with VMO [32,51], there is no evidence of its impact on the VMO-based multimodal optimizers. Indeed, that represents another pending research issue, in particular since we noticed a detriment to the performance of HVcMO in some cases when the H-operator was applied on a large scale. We keep the usage of the frontier operator at a low rate, based on such still incomplete findings, and also on the outcomes unveiled in [51], where it proved the least influential among the search operators of VMO for global optimization.
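For reference, the setup just described can be gathered in one mapping; the key names below are ours, not those of the HillVallEA source code:

```python
# Parametric setup of HVcMO20a (Section 5.1); tolerance varies only in Section 6.3.
HVCMO20A_PARAMS = {
    "initial_mesh_size": 64,      # I, equivalent to N in HillVallEA
    "selection_fraction": 0.35,   # tau, truncation of the mesh
    "neighbors": 3,               # k, neighbors in the local expansion
    "cluster_init_fraction": 0.8,
    "pop_increment": 2.0,         # population doubles after a failed iteration
    "cluster_increment": 1.1,
    "tolerance": 1e-5,
    "frontier_amplitude": 0.05,   # H_amp
    "frontier_max": 50,           # H_max
}
# The per-niche core population N_cA depends on each problem's dimension D (Table 2).
```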

5.2. Benchmark Multimodal Problems

A standardized test suite is used, with $NF = 20$ problems. Table 2 summarizes their main features, detailed in [40], plus the $MaxFE$ specified as the budget afforded. The objective functions are also formulated in [40]. We add information about the volume of the box-bounded search space, expressed as the product of the amplitudes of the domains of all variables:
$$Vol \triangleq \prod_{j=1}^{D} (b_j - a_j)$$
where $D$ is the dimension, and $b_j$ and $a_j$ are the respective upper and lower bounds of the j-th variable. Values are highlighted (in red or blue) according to the volume, empirically seen as small ($Vol \le 90$), medium ($90 < Vol \le 10^3$) or large ($10^3 < Vol \le 10^{20}$). This issue gains importance in Section 6.
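Both the volume and its empirical class are trivial to compute; a short sketch (the helper names are ours):

```python
def search_space_volume(bounds):
    """Volume of the box-bounded search space: the product of domain amplitudes."""
    vol = 1.0
    for a, b in bounds:
        vol *= (b - a)
    return vol

def volume_class(vol):
    """Empirical size classes used in Table 2 (red: small/large, blue: medium)."""
    if vol <= 90:
        return "small"
    elif vol <= 1e3:
        return "medium"
    elif vol <= 1e20:
        return "large"
    return "out of the studied range"

# Example: a 5-D problem bounded by [0, 1]^5 has Vol = 1 (small).
print(volume_class(search_space_volume([(0.0, 1.0)] * 5)))
```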

5.3. Baseline Methods

As part of the experimental analysis, HVcMO20a is compared to the other VMO-based multimodal optimizers, namely NC-VMO and VMO-ASD (the previous instances of the VMO-N framework), and also the NVMO method. Besides, HVcMO20a is contrasted with the HillVallEA19 algorithm and its predecessor, HillVallEA18, as well as with other remarkable metaheuristics such as NEA2+, proposed by M. Preuss in [52], and RS-CMSA, introduced by Ahrari et al. in [53]. The other baselines included are SDE-Ga and ANBNWI-DE, by Kushida, ranked first and second, respectively, in the last two editions of the abovementioned competition, whose results are reported in [54].

5.4. Major Performance Criteria

The following description concerns the standard measures used for comparing HVcMO20a with the group of baseline optimizers; additional criteria are later used to contrast it only with HillVallEA19, regarding their running times (Section 6.3). According to the most recent procedures indicated for the referred competition, every method is assessed by considering three scenarios defined by these scores: the peak ratio ($PR$), the (static) $F_1$ measure and the dynamic $F_1$. They are bounded in $[0, 1]$; the larger, the better. Among them, the $PR$ allows contrasting new algorithms with a larger set of earlier multimodal optimizers, even older ones, since the other two criteria were adopted only in recent years. For any specific run, the $PR$ is the fraction of the number of peaks found ($NPF$) out of the number of known peaks ($NKP$), keeping in mind that such peaks represent global optima only:
$$PR \triangleq \frac{NPF}{NKP}$$
Additionally, the well-known $F_1$ statistical measure is re-formulated in this context as follows:
$$F_1 \triangleq \frac{2 \cdot PR \cdot SR}{PR + SR}$$
where, given the set of output solutions ($OS$), i.e., the presumed global optima returned by the algorithm, the success rate $SR$ of $OS$ tells the fraction of actual global optima found with respect to the count of output solutions:
$$SR \triangleq \frac{NPF}{|OS|}$$
It is important to distinguish this $SR$ from the success rate of runs, also denoted $SR$ in [40], a formerly used measure. Furthermore, after a run is finished, the achieved $OS$ set is entirely used to compute the static $F_1$, while the dynamic $F_1$ is progressively calculated at the moments when solutions are discovered. Thus, such a calculation considers the count of function evaluations ($FE_i$) at the instant $i$, with $1 \le i \le |OS|$, which corresponds to the i-th found optimum in $OS$. The set $OS[1:i]$ then consists of the first $i$ optima located. Finally, the dynamic $F_1$ is figured as the area under the curve divided by the $MaxFE$ allotted:
$$dynF_1 \triangleq \frac{(MaxFE - FE_{|OS|}) \cdot F_1(OS) + \sum_{i=2}^{|OS|} (FE_i - FE_{i-1}) \cdot F_1(OS[1:i-1])}{MaxFE}$$
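To make the three scores concrete, the sketch below computes them for a single run, assuming we know the evaluation count at which each output solution was found and whether it matches a true global optimum (the function names are ours):

```python
def peak_ratio(npf, nkp):
    return npf / nkp

def f1_score(pr, sr):
    return 0.0 if pr + sr == 0 else 2 * pr * sr / (pr + sr)

def dynamic_f1(fe_when_found, is_true_peak, nkp, max_fe):
    """Area under the F1-over-evaluations curve, divided by the budget.

    fe_when_found: FE_i for each output solution, in discovery order.
    is_true_peak:  whether the i-th output solution is an actual global optimum.
    """
    area, n = 0.0, len(fe_when_found)
    for i in range(1, n):
        # F1 of the first i solutions, held until the (i+1)-th discovery.
        npf = sum(is_true_peak[:i])
        f1 = f1_score(peak_ratio(npf, nkp), npf / i)
        area += (fe_when_found[i] - fe_when_found[i - 1]) * f1
    # The final F1 holds from the last discovery until the budget is exhausted.
    npf = sum(is_true_peak)
    f1 = f1_score(peak_ratio(npf, nkp), npf / n if n else 0.0)
    area += (max_fe - fe_when_found[-1]) * f1 if n else 0.0
    return area / max_fe
```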

5.5. Statistical Validation

Regardless of the measures, the advice of Demšar [55] on comparing classifiers using non-parametric tests has become a universal practice for assessing methods in several fields, e.g., evolutionary computation [56]. However, as we noted in [22], such statistical comparisons are common in studies on global optimization but sporadic in those about multimodal optimizers. For the validation of the outcomes of HVcMO20a, we apply the Wilcoxon signed-ranks test to contrast pairs of algorithms, and the Friedman test to detect significant differences within a group of methods. The non-parametric analyses aim to reject the null-hypothesis (H0) that the compared algorithms perform similarly. If the Friedman test does so, we use the post-hoc Nemenyi test to identify which methods differ significantly, concerning the measure at hand. The choice of this last procedure also responds to the intention of contrasting, in unison, the earlier versions of HillVallEA with other multimodal optimizers from a statistical perspective. Here, we set the significance level at 5%, i.e., $\alpha = 0.05$.
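This validation pipeline maps directly onto standard routines; a minimal sketch with scipy, whose zero_method='zsplit' option performs the even split of tied ranks used in Section 6.1 (the data below are placeholders):

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# One row per benchmark problem, one column per algorithm (e.g., PR scores).
scores = np.random.rand(20, 7)  # placeholder data: 20 problems x 7 optimizers

# Friedman test over the whole pool of methods.
stat, p_value = friedmanchisquare(*[scores[:, j] for j in range(scores.shape[1])])
if p_value < 0.05:
    print("H0 rejected: at least one method performs differently")
    # A post-hoc Nemenyi test (e.g., via scikit-posthocs) identifies which pairs.

# Pairwise Wilcoxon signed-ranks test, splitting zero differences evenly.
w_stat, w_p = wilcoxon(scores[:, 0], scores[:, 1], zero_method="zsplit")
```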

6. Discussion of Results

The output files of HillVallEA19 that led to the statements presented in [27] are now available online [54]. However, it was necessary to compute additional raw data, e.g., the population size at the end of each iteration of the algorithm, to conduct the analysis given in Section 6.3. For that reason, we executed HillVallEA19 again using its original program [48]. Hence, for the sake of homogeneity, we calculated the $PR$, $F_1$ and dynamic $F_1$ for the new output data in order to use those values also in Section 6.2, instead of the ones published in [27]. As expected, the values of those metrics in both studies are quite similar. In view of that, if the analysis related to such measures is replicated assuming the numbers in [27], it will lead to equivalent conclusions. As Supplementary Materials, the output files of our runs of HillVallEA19 and HVcMO20a are accessible online [57], in the required format [28]; the times reported there serve no purpose other than checking the correct order of the recorded output solutions, and are not considered in the experiments of this study.

6.1. Outperforming NVMO and Earlier Instances of VMO-N

According to the availability of performance data, HVcMO20a is contrasted with the previous VMO-based multimodal optimizers by means of the peak ratio only. Table 3 shows the average assessments (at all accuracy levels) of all runs over the whole test suite, for each algorithm. In the cases of NC-VMO and VMO-ASD, the comparisons are based only on the results for 6 and 8 problems, respectively, which are reported in [22]. On the other hand, the results considered for NVMO involve the full validation suite of 20 problems, as published in [23]. The results of HVcMO20a over every single benchmark function are detailed in Table A1 in Appendix A.
Table 3. Peak ratios reached by the VMO-based multimodal optimizers, averaged at all accuracy levels over sets of 6, 8 and 20 benchmark problems.
Although the overall PR values put HVcMO20a on top of all these methods, applying the Wilcoxon test over each pair of algorithms is required to deepen the analysis. Table 4 confirms that HVcMO20a significantly outperforms both VMO-ASD and NVMO regarding the peak ratio. The remaining evaluation (HVcMO20a vs. NC-VMO) involves six problems; for two of them, the methods achieve equal outputs. Thus, there are only four relevant comparisons, a sample size that is insufficient for a reliable computation of the Wilcoxon test. Alternatively, it is possible to calculate it with all 6 comparisons treated as relevant by splitting the ranks of the two ties evenly between the sums R+ and R−. In that variant, R+ would be 19.5, while both R− and the W statistic would equal 1.5, which is greater than 0, the critical value of the Wilcoxon test for 6 paired comparisons at $\alpha = 0.05$, making the test unable to reject the null-hypothesis. Hence, no significant difference between the peak ratios scored by HVcMO20a and by NC-VMO is detected with the studied data.
Table 4. Wilcoxon test regarding PR. The sum of ranks of comparisons where HVcMO20a outdoes the other algorithm (either VMO-ASD or NVMO) is R+, and R− is the opposite sum. If the W statistic is equal to or less than the corresponding critical value, the null-hypothesis is rejected.

6.2. Further Baseline Comparison

Differently from the above, HVcMO20a is contrasted with the previous variants of HillVallEA and other outstanding metaheuristics by means of the PR and the two other scenarios, i.e., the static and the dynamic $F_1$. The mean results of our executions of HVcMO20a, HillVallEA19 and ANBNWI-DE over the benchmark problems are shown in Table A1, in Appendix A. For ANBNWI-DE, we computed the performance scores using the output data available in [54], same as for the other baseline methods: NEA2+, RS-CMSA and SDE-Ga. However, we do not report the calculations for the last three algorithms because they match those published in [27].
Table 5 exhibits the average measurements for the multimodal optimizers, considering all accuracy levels and the whole set of problems. The group of HillVallEA methods, including HVcMO20a, beats the rest of the algorithms in every scenario. Among them, HillVallEA19 shows the best performance, closely followed by HVcMO20a and HillVallEA18, in that order. However, is the advantage of the HillVallEA family over the remaining algorithms significant? And how about the differences among the three variants of HillVallEA?
Table 5. Scores averaged at all accuracy levels over the entire validation suite.
The Friedman test confirms the existence of significant differences in the pool of methods at every scenario (see Table 6). Therefore, a post-hoc examination should determine which algorithms perform significantly distinct. That is clarified in Figure 4, Figure 5 and Figure 6, the graphical representations of the implication of the Nemenyi test for the scenarios of P R , F 1 and dynamic F 1 , respectively.
Table 6. Friedman test. Since p-value < 0.05 (α), each null-hypothesis is rejected.
Figure 4. Nemenyi test regarding PR. Connected optimizers are not statistically different.
Figure 5. Nemenyi test regarding F1. Connected optimizers are not statistically different.
Figure 6. Nemenyi test regarding dynamic F1. Connected optimizers are not statistically different.
For seven algorithms and 20 comparisons (benchmark problems), the critical difference (CD) of the Nemenyi procedure at $\alpha = 0.05$ is 2.015. Any two algorithms perform significantly differently if the distance between their average ranks is at least the CD. Although HillVallEA19, HVcMO20a and HillVallEA18 indeed rank ahead of all the other methods, there is no significant difference among the three of them in any scenario. Thus, the performances of those three algorithms are statistically equivalent. Regarding the peak ratio, HillVallEA19 significantly outdoes RS-CMSA, ANBNWI-DE and NEA2+. Both HVcMO20a and HillVallEA18 are significantly better than NEA2+.
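For reference, this CD value follows from Demšar's formula [55], with $k = 7$ algorithms, $N = 20$ problems and the two-tailed critical value $q_{0.05} \approx 2.949$ for seven groups:

$$CD = q_{\alpha} \sqrt{\frac{k(k+1)}{6N}} = 2.949 \sqrt{\frac{7 \cdot 8}{6 \cdot 20}} \approx 2.015$$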
With respect to the $F_1$ measure, HillVallEA19 again statistically beats RS-CMSA, ANBNWI-DE and NEA2+. The outcomes of the HVcMO20a and HillVallEA18 algorithms are statistically better than those achieved by ANBNWI-DE and by NEA2+. Moreover, in the scenario of the dynamic $F_1$, HillVallEA19 is significantly better than NEA2+, RS-CMSA, ANBNWI-DE and SDE-Ga, while the superiority of HVcMO20a and HillVallEA18 over both ANBNWI-DE and SDE-Ga is significant.
Besides confirming HVcMO20a as the best VMO-based multimodal optimizer, these results show that it beats several prominent algorithms from the state of the art, in some cases significantly. In so doing, the third research objective (the pursuit of a competitive variant of VMO-N) is accomplished.
The application of statistical comparisons remains a research debt in the arena of multimodal optimization. The utility of the non-parametric procedures goes beyond the discovery of significant contrasts between the performances of the methods. For instance, given the mean scores in Table 5, HVcMO20a and HillVallEA18 are equal in terms of $PR$ and $F_1$, while ANBNWI-DE is better than SDE-Ga in those scenarios. However, Figure 4 and Figure 5 clarify that HVcMO20a is actually ranked ahead of HillVallEA18, and SDE-Ga is placed ahead of ANBNWI-DE, in view of the individual ranks of their values for every test problem, rather than the average achievements.

6.3. HVcMO20a or HillVallEA19? When to Apply Each?

The gain of porting the formulations of HillVallEA onto the VMO-N framework, resulting in the effective HVcMO20a algorithm, is now evident. The benefit of using the search operators of VMO in HillVallEA is not yet clear, though. In summary, that contribution comes in terms of the execution time.

6.3.1. When Counting Function Evaluations Is Not Enough

The convergence speed of an optimizer is a common way to get an idea of its rapidity. However, formulating that measure suitably is more difficult in multimodal optimization than in global optimization. In [40], it is defined as the average number of function evaluations ($FE$) needed to locate all global optima. If the optimizer cannot find all the desired optima, $MaxFE$ (the budget) is assumed as the $FE$ for that run. However, there are various situations in which this gives a wrong notion, e.g., it considers the count of optima only if all of them are located. For example, if a pair of multimodal optimizers reach the fixed budget (and stop) after finding 30% and 60% of the wanted optima, respectively, the convergence speed for both of them is read the same ($MaxFE$); yet one of them seems able to converge (to all optima) first, if the budget were larger. Though demanding, an alternative is to substantially increase the budget for a better convergence analysis, as in [24]. In addition, what does one function evaluation represent in terms of time? Is it possible that a certain optimizer burns the same budget as another, but converges more rapidly to the same optima?
Table 7 shows the mean numbers of function evaluations of both HillVallEA19 and HVcMO20a over the entire test suite, and over the problems grouped by search space volume. HillVallEA19 uses less budget than HVcMO20a, but the average count of iterations when HillVallEA19 runs is larger. Besides, HillVallEA19 fails (no new elite is found) in more iterations than HVcMO20a. Thus, HillVallEA19 increases the population more often and, more relevantly, to a larger extent. At the start, both use 64 individuals; if the current iteration fails, they become 128, then 256, and so on. Adding 703 solutions (on average) to the population in HillVallEA19 whenever it fails, but only 497 in HVcMO20a, indicates that smaller unsuccessful populations are doubled in HVcMO20a, i.e., at earlier moments. How many solutions do they handle in total? How much does that delay them?
Table 7. Mean values of budget usage, failed iterations and population increase after failure.
The convergence speed cannot answer such questions. The alternative to follow depends on the situation. In the case of HillVallEA19 and HVcMO20a, it is possible to move forward by analyzing the time complexity, which can be preliminarily understood as $O(n^2)$, where $n$ represents $N$ for HillVallEA19 and $I$ for HVcMO20a. Beyond such a brief statement, comparing the execution times of these algorithms requires an exhaustive study of their time functions, which have to be carefully defined. Since both functions are in the order of $n^2$, ascertaining their dominant coefficients is the key to comparing the algorithms with respect to the running time. Otherwise, that can be done through a vast empirical analysis of the times consumed by the programs that implement HVcMO20a and HillVallEA19, which is the way followed in this paper. Regardless of the choice taken, the primary aim is to estimate the ratio of $t_{HVcMO20a}(n)$ to $t_{HillVallEA19}(n)$, which respectively indicate the execution time of HVcMO20a and the running time of HillVallEA19. Formally, the time ratio is:
$$t_{ratio} \triangleq \frac{t_{HVcMO20a}(n)}{t_{HillVallEA19}(n)}$$
From the empirical viewpoint, a vast number of paired time comparisons is needed, and they should also be diverse, e.g., involving several test functions and varying parameters (tolerance, population size, etc.). The more contrasting cases are considered, the more reliable the estimation of the ratio is. Since a full harmonization is impossible, a range of actions should be taken to track and examine enough raw running times of the programs in a fair manner. In our analysis, that is achieved by the following (a timing sketch follows the list):
  • executing single iterations of them, instead of the whole evolutionary process,
  • conducting the experiment over populations of certain sizes, to be exact $n \in \{2^6, 2^7, 2^8, 2^9, 2^{10}\}$,
  • generating 50 distinct populations for every single sample size,
  • running the programs over each of the 20 benchmark problems using the same populations,
  • replicating the process for five levels of tolerance set equal to the accuracy levels, for a total of 25,000 runs per program (the tolerance influences AMaLGaM-Univariate and thus, the overall process),
  • performing the 50,000 runs in turn and on the same computer, i.e., using the same specifications of hardware and software,
  • operating no other computational process, apart from those controlled by the system,
  • reusing much of the source code of HillVallEA19 to program the common aspects of HVcMO20a, to reduce the influence that the programmer's skills have on the execution time,
  • estimating the ratio based not only on mean calculations but also on every single paired contrast, and
  • excluding the outliers during the examination of the resultant measurements.
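A simplified timing harness along those lines could look as follows; the iteration routines are placeholders for the actual programs, and only the pairing of identical populations is essential:

```python
import time

def time_single_iteration(run_iteration, population, problem, tolerance):
    """Wall-clock time of one iteration of a program on a fixed population."""
    start = time.perf_counter()
    run_iteration(population, problem, tolerance)
    return time.perf_counter() - start

def paired_times(run_a, run_b, populations, problems, tolerances):
    """Run both programs on the very same populations, yielding paired times."""
    for problem in problems:
        for tolerance in tolerances:
            for population in populations:
                t_a = time_single_iteration(run_a, population, problem, tolerance)
                t_b = time_single_iteration(run_b, population, problem, tolerance)
                yield t_a, t_b
```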

6.3.2. The Time Ratio

The first attempt to determine the proportion of the time used by HVcMO20a with respect to the time used by HillVallEA19 concerns the ratio of means ($RoM$) and the mean of ratios ($MoR$) metrics. In this study, they can be defined, in a broad manner, as follows:
$$RoM \triangleq \frac{\displaystyle\sum_{exp=lowerExp}^{upperExp} \sum_{pId=lowerId}^{upperId} \sum_{run=1}^{NR} t_{HVcMO20a}(2^{exp})_{pId,run}}{\displaystyle\sum_{exp=lowerExp}^{upperExp} \sum_{pId=lowerId}^{upperId} \sum_{run=1}^{NR} t_{HillVallEA19}(2^{exp})_{pId,run}}$$

$$MoR \triangleq \frac{\displaystyle\sum_{exp=lowerExp}^{upperExp} \sum_{pId=lowerId}^{upperId} \sum_{run=1}^{NR} \frac{t_{HVcMO20a}(2^{exp})_{pId,run}}{t_{HillVallEA19}(2^{exp})_{pId,run}}}{(upperExp - lowerExp + 1)(upperId - lowerId + 1) \cdot NR}$$
where $lowerExp, upperExp \in \{6, 7, 8, 9, 10\}$, with $upperExp \ge lowerExp$. Thus, each metric can be figured with respect to only one specific population size, or regarding several (or all) sizes. Besides, $pId$ denotes the test problem, with $lowerId, upperId \in \{1, 2, \dots, 19, 20\}$ and $upperId \ge lowerId$, so that the times for either only one benchmark problem or the overall test suite can be taken. Finally, $NR$ represents the number of runs for which the checked running times are considered.
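Given such paired measurements, both metrics reduce to a few lines; a sketch over an array holding one pair of times per run (the helper name is ours):

```python
import numpy as np

def rom_and_mor(paired):
    """Ratio of means and mean of ratios over paired running times.

    paired: array of shape (n_runs, 2); column 0 holds t_HVcMO20a,
            column 1 holds t_HillVallEA19, for the selected sizes/problems.
    """
    paired = np.asarray(paired, dtype=float)
    rom = paired[:, 0].sum() / paired[:, 1].sum()
    mor = (paired[:, 0] / paired[:, 1]).mean()
    return rom, mor
```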
Table A2 (Appendix B) reveals the values of $MoR$ and $RoM$ for each test problem, considering the population sizes separately and all together. They take the times for all the runs at every level of tolerance ($NR = 250$), and also for those runs of the programs adopting a tolerance of $1.0 \times 10^{-5}$ ($NR = 50$). The observations for 50 and for 250 runs coincide, evidencing the reliability of the experiment. Such a match suggests that, in terms of time, both programs respond stably in the same way to the variation of the tolerance, as long as the processed populations and the parametric setups (including the adjusted tolerance) are the same. The highest values of $MoR$ and $RoM$ for every population size are reported in Table 8, together with their overall rates, i.e., considering all the problems. We skip the lowest values as they might represent outliers. Preliminarily, $1.30 < t_{ratio} \le 1.65$, but since these are mean rates, fixing the ratio in that range might lead to an undesired favoritism toward HVcMO20a.
Table 8. General values of ratio of means (RoM) and mean of ratios (MoR).
For every single pair of runs, we find the interval its ratio belongs to, among nine possible ranges whose amplitudes were decided in the experimental phase (see Table A3, Appendix B). It shows that over 60.80% of the ratios are in the range (1.30, 1.65], confirming the previous analysis of the mean rates. What is more, setting $t_{ratio}$ equal to 1.65 covers over 94.80% of the runs (see Table 9). Although the range (1.65, 2.00] contains only about 4.20% of them, which could be interpreted as outliers too, we finally adopt $t_{ratio} = 2.00$, covering 99.06% of the runs. Such a choice does not favor HVcMO20a at all, quite the opposite. That fact supports the soundness of the later estimation of the times needed by HVcMO20a and by HillVallEA19 to execute the entire evolutionary optimization, not only particular iterations as thus far. At that point, the actual coefficients of the time functions of the algorithms are not relevant, only the $t_{ratio}$. Hence, for an input of $n$ solutions they are assumed as:
$$t_{HillVallEA19}(n) \triangleq n^2$$
$$t_{HVcMO20a}(n) \triangleq 2n^2$$
Table 9. Number and percentage of runs covered by possible ratios of time, considering 5000 and 25,000 runs: 50 per each of 20 functions with populations of 5 sizes, at 5 levels of tolerance.

6.3.3. Population Size and Execution Time

In Appendix B, Table A4 discloses the new analytical data about HVcMO20a and HillVallEA19, averaged over the same 50 runs per algorithm for each test problem, using a tolerance of $1.0 \times 10^{-5}$, whose results were studied in Section 6.1 and Section 6.2. The first of those extra metrics is the final population size, which indicates the biggest population sampled, i.e., that of the last iteration. In addition, the total population size is the sum of the sizes ($n$) of all the sampled populations:
$$TotalPopSize \triangleq \frac{\sum_{run=1}^{50} \sum_{i=1}^{\#Iterations_{run}} n_i}{50}$$
Lastly, following Equations (20) and (21), the overall execution time units are estimated as:
$$Time_{HillVallEA19} \triangleq \frac{\sum_{run=1}^{50} \sum_{i=1}^{\#Iterations_{run}} n_i^2}{50}$$

$$Time_{HVcMO20a} \triangleq \frac{\sum_{run=1}^{50} \sum_{i=1}^{\#Iterations_{run}} 2 n_i^2}{50}$$
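Given the per-iteration population sizes recorded for each run, the three metrics of Table A4 follow directly; a sketch (the names are ours, and the time coefficient instantiates Equations (20) and (21)):

```python
def population_metrics(runs, time_coefficient=1):
    """Final size, total size and estimated time units, averaged over runs.

    runs: list with one entry per run, each a list of the population
          sizes n_i sampled at the successive iterations of that run.
    time_coefficient: 1 for HillVallEA19, 2 for HVcMO20a (the adopted t_ratio).
    """
    n_runs = len(runs)
    final_size = sum(sizes[-1] for sizes in runs) / n_runs
    total_size = sum(sum(sizes) for sizes in runs) / n_runs
    time_units = sum(sum(time_coefficient * n**2 for n in sizes)
                     for sizes in runs) / n_runs
    return final_size, total_size, time_units
```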
The inspection of these scores leads to an important discovery: HillVallEA19 is faster than HVcMO20a in 9 of the 10 problems with search space volumes that are either small or large (marked in red in Table 2), while HVcMO20a performs faster than HillVallEA19 in 9 out of the 10 problems with medium volumes (indicated in blue). Such a volume thus seems key for the time demanded to approximate the problems. Figure 7 offers a graphical notion of this, considering average values.
Figure 7. Mean results over the problems grouped by the size of the search space volume: (a) Final size of the population (actual); (b) Total size of the population (actual); (c) Execution time (estimate).
This information is completed with the Wilcoxon test (see Table 10). The populations processed by HVcMO20a are significantly smaller than those handled by HillVallEA19. That confirms the first hypothesis in Section 4.2, namely that extra pre-niching search operators (like those of VMO) may reduce the total population in HillVallEA. A direct interpretation is that in HVcMO20a the size of the sampled population increases less often than in HillVallEA19. The reduction of the number of solutions is more drastic for the problems with a medium volume of the box-bounded search space. In consequence, HVcMO20a is estimated to run statistically faster than HillVallEA19 in that case.
Table 10. Wilcoxon test regarding (final and total) population sizes and execution time. R− is the sum of ranks of comparisons where HVcMO20a outdoes HillVallEA19, and R+ is the opposite sum. If the W statistic is equal to or less than the critical value, the null-hypothesis is rejected.
Moreover, considering the validation suite entirely, HVcMO20a also executes more rapidly than HillVallEA19, though not significantly. That fact is influenced by the difference between the populations when it comes to problems with either a small or a large search space volume. There, the total number of solutions in HVcMO20a is significantly smaller than in HillVallEA19; in spite of that, such a reduction is not large enough to properly decrease the overall running time of HVcMO20a. Thus, HillVallEA19 is indeed estimated to execute significantly faster than HVcMO20a over that group of problems.
The second hypothesis stated in Section 4.2 is then partially verified as well, since its core part conjectures that when the decrease of the total population size is sufficiently big, the overall running time of HillVallEA lessens. The remainder of the hypothesis claims that the multimodal power of the algorithm stays similar in those cases. As examined, that happens both over the whole test suite and over the problems having medium search space volumes only. With the comparison of HVcMO20a and HillVallEA19 in Section 6.2, it was already demonstrated that the performance of HillVallEA remains essentially the same after incorporating the VMO search mechanisms, considering all the benchmark problems. It is proved below for those with a medium-volume search space.
The fourth research objective of this paper is thus accomplished, since HVcMO20a is a still-effective version of HillVallEA that is also faster in several cases. The last goal of this investigation concerns the need for a criterion to select either HVcMO20a or HillVallEA19 to deal with new multimodal problems in the future. If the box-bounded search space of the target problem has a medium-size volume, we recommend using HVcMO20a; otherwise, HillVallEA19 is preferred. Our suggestions take into account the analyses of the execution time and of the main performance measures, which is completed (in Table 11) by verifying that in both situations the algorithms are similarly effective.
Table 11. Scores averaged at all accuracy levels on problems grouped by the search space volume.

7. Conclusions and Future Work

The VMO-N framework, with its corresponding instances, and the NVMO algorithm constitute the state of the art regarding multimodal optimization approaches for the VMO metaheuristic. The contrasts between them are examined in this study, and VMO-N is revised, turning it into a very flexible scheme that can be vastly instantiated by incorporating any niching technique and further search strategies. Indeed, its first competitive version is presented as HVcMO, specifically referred to as HVcMO20a in the current setup. This newly launched algorithm outperforms not only the former instances of VMO-N, but also the NVMO method and several prominent multimodal optimizers, in some cases in a statistically significant way.
At the same time, HVcMO20a is an extension of HillVallEA19, the latest version of the successful HillVallEA multimodal optimizer, whose main drawback is its use of very large populations. That limitation is tackled in this paper, since HVcMO20a reduces the number of solutions needed to approximate the studied benchmark problems (compared to HillVallEA19), which translates into a decrease in overall execution time whenever the population reduction is sufficiently large. The experiments confirmed the mutual benefit of VMO and HillVallEA. The recent HVcMO borrows mainly the HVC niching method and the AMaLGaM-Univariate local optimizer from HillVallEA, resulting in a competitive algorithm for multimodal optimization. Additionally, the application of the search operators of VMO within HillVallEA makes it run significantly faster on certain problems. Practical advice follows for problems whose box-bounded search spaces are not larger than 10^20. If that volume is medium-size, i.e., in (90, 10^3], we recommend employing HVcMO20a. Conversely, HillVallEA19 should be applied if the box-bounded search space is either small or large, i.e., if its volume is either in (0, 90] or in (10^3, 10^20].
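The selection rule above is simple enough to automate. The following is a minimal sketch, not part of the published software; the function name recommend_solver and its bound parameters are illustrative assumptions.

```python
# Minimal sketch of the practical selection rule stated above: compute the
# volume of the box-bounded search space and pick the recommended algorithm.
import math

def recommend_solver(lower_bounds, upper_bounds):
    """Return the recommended optimizer for a box-bounded multimodal problem."""
    volume = math.prod(u - l for l, u in zip(lower_bounds, upper_bounds))
    if volume > 1e20:
        raise ValueError("volume outside the range covered by this study")
    # Medium-size volume (90, 10^3] -> HVcMO20a; small or large -> HillVallEA19.
    return "HVcMO20a" if 90 < volume <= 1e3 else "HillVallEA19"

# Example: a 3-D box [0, 8]^3 has volume 512, which falls in (90, 10^3].
print(recommend_solver([0, 0, 0], [8, 8, 8]))  # -> HVcMO20a
```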
This study provides multiple pieces of statistical evidence for the empirical comparison of the methods. Because such a practice is frequent in other areas of artificial intelligence but not in multimodal optimization, this is also an attempt to shed light on the utility of non-parametric analysis for research in this field. Besides, this investigation opens new research avenues for both HillVallEA and the VMO-N framework, as several questions remain open. Beyond those of VMO, what advantages might other evolutionary search operators bring to HillVallEA? How beneficial is the late shrinkage of the population in HVcMO? When it comes to the local expansion within HVcMO, could other alternatives for deciding the universe produce better effects? What is the impact of the frontier operator on the performance of such a new VMO-based multimodal optimizer? How can HVcMO and HillVallEA develop into more efficient and more effective algorithms? How well do they perform in real-world scenarios, and on problems with search spaces larger than those in this study? These questions determine some of the directions of forthcoming research.

Supplementary Materials

The following are available online at https://www.mdpi.com/2227-7390/8/5/665/s1. The source code of HVcMO20a, which extends HillVallEA19, is available online [57], together with the 2000 output files (50 runs × 20 test problems × 2 algorithms), in the format requested in [28].

Author Contributions

Conceptualization, R.N. and C.H.K.; methodology, R.N. and C.H.K.; software, R.N.; validation, R.N. and C.H.K.; formal analysis, R.N.; investigation, R.N.; resources, C.H.K.; data curation, R.N.; writing—original draft preparation, R.N.; writing—review and editing, R.N. and C.H.K.; visualization, R.N.; supervision, C.H.K.; project administration, C.H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

This work was in part supported by the Japanese Government (MEXT).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. PR, F1 and dynF1

Table A1. Mean values measured over 50 runs by ANBNWI-DE, HillVallEA19 and HVcMO20a for each test problem, assessed at all levels of accuracy.
| Id | ANBNWI-DE (PR / F1 / dynF1) | HillVallEA19 (PR / F1 / dynF1) | HVcMO20a (PR / F1 / dynF1) |
| 1 | 1.000 / 1.000 / 0.974 | 1.000 / 1.000 / 0.995 | 1.000 / 1.000 / 0.993 |
| 2 | 0.998 / 0.998 / 0.972 | 1.000 / 1.000 / 0.989 | 1.000 / 1.000 / 0.986 |
| 3 | 1.000 / 0.667 / 0.650 | 1.000 / 1.000 / 0.994 | 1.000 / 1.000 / 0.991 |
| 4 | 0.996 / 0.996 / 0.952 | 1.000 / 1.000 / 0.976 | 1.000 / 1.000 / 0.973 |
| 5 | 1.000 / 1.000 / 0.968 | 1.000 / 1.000 / 0.984 | 1.000 / 1.000 / 0.983 |
| 6 | 1.000 / 1.000 / 0.625 | 1.000 / 1.000 / 0.967 | 1.000 / 1.000 / 0.956 |
| 7 | 0.965 / 0.977 / 0.871 | 1.000 / 1.000 / 0.966 | 1.000 / 1.000 / 0.963 |
| 8 | 0.925 / 0.960 / 0.587 | 0.979 / 0.989 / 0.806 | 0.974 / 0.986 / 0.797 |
| 9 | 0.692 / 0.812 / 0.555 | 0.968 / 0.984 / 0.810 | 0.951 / 0.975 / 0.803 |
| 10 | 0.999 / 0.999 / 0.981 | 1.000 / 1.000 / 0.982 | 1.000 / 1.000 / 0.980 |
| 11 | 0.995 / 0.968 / 0.885 | 1.000 / 1.000 / 0.983 | 1.000 / 1.000 / 0.981 |
| 12 | 0.925 / 0.947 / 0.649 | 1.000 / 1.000 / 0.964 | 1.000 / 1.000 / 0.958 |
| 13 | 1.000 / 0.986 / 0.866 | 1.000 / 1.000 / 0.963 | 1.000 / 1.000 / 0.948 |
| 14 | 0.889 / 0.922 / 0.787 | 0.923 / 0.958 / 0.881 | 0.847 / 0.913 / 0.851 |
| 15 | 0.738 / 0.848 / 0.623 | 0.750 / 0.857 / 0.837 | 0.750 / 0.857 / 0.829 |
| 16 | 0.683 / 0.804 / 0.662 | 0.694 / 0.817 / 0.791 | 0.667 / 0.800 / 0.783 |
| 17 | 0.666 / 0.797 / 0.486 | 0.750 / 0.857 / 0.815 | 0.750 / 0.857 / 0.803 |
| 18 | 0.667 / 0.800 / 0.633 | 0.667 / 0.800 / 0.765 | 0.667 / 0.800 / 0.762 |
| 19 | 0.550 / 0.708 / 0.464 | 0.595 / 0.743 / 0.660 | 0.608 / 0.753 / 0.659 |
| 20 | 0.402 / 0.555 / 0.347 | 0.483 / 0.650 / 0.529 | 0.488 / 0.655 / 0.528 |

Appendix B. Ratio of Time and Population Size

Table A2. Individual values of the ratio of means (RoM) and the mean of ratios (MoR), for 50 runs and 250 runs of HVcMO20a and HillVallEA19 over each test problem with populations of different sizes.
Considering 50 runs per function using a tolerance of 1.0 × 10^−5:
| Id | n = 2^6 (RoM / MoR) | n = 2^7 (RoM / MoR) | n = 2^8 (RoM / MoR) | n = 2^9 (RoM / MoR) | n = 2^10 (RoM / MoR) | All Pop Sizes (RoM / MoR) |
| 1 | 1.343 / 1.354 | 1.346 / 1.351 | 1.330 / 1.330 | 1.342 / 1.342 | 1.321 / 1.321 | 1.325 / 1.340 |
| 2 | 1.250 / 1.281 | 1.363 / 1.369 | 1.301 / 1.304 | 1.334 / 1.342 | 1.319 / 1.319 | 1.321 / 1.323 |
| 3 | 1.305 / 1.327 | 1.375 / 1.382 | 1.324 / 1.324 | 1.354 / 1.354 | 1.316 / 1.316 | 1.324 / 1.341 |
| 4 | 1.166 / 1.189 | 1.287 / 1.293 | 1.282 / 1.283 | 1.340 / 1.340 | 1.311 / 1.311 | 1.314 / 1.283 |
| 5 | 1.196 / 1.221 | 1.309 / 1.315 | 1.294 / 1.296 | 1.350 / 1.350 | 1.313 / 1.313 | 1.318 / 1.299 |
| 6 | 1.362 / 1.382 | 1.440 / 1.451 | 1.459 / 1.460 | 1.373 / 1.373 | 1.312 / 1.312 | 1.335 / 1.396 |
| 7 | 1.311 / 1.330 | 1.333 / 1.344 | 1.339 / 1.340 | 1.332 / 1.332 | 1.308 / 1.309 | 1.315 / 1.331 |
| 8 | 1.480 / 1.525 | 1.427 / 1.442 | 1.488 / 1.489 | 1.430 / 1.431 | 1.372 / 1.372 | 1.393 / 1.452 |
| 9 | 1.616 / 1.635 | 1.454 / 1.460 | 1.417 / 1.419 | 1.356 / 1.357 | 1.296 / 1.300 | 1.325 / 1.434 |
| 10 | 1.093 / 1.108 | 1.211 / 1.215 | 1.260 / 1.260 | 1.334 / 1.334 | 1.319 / 1.319 | 1.314 / 1.247 |
| 11 | 1.124 / 1.141 | 1.233 / 1.238 | 1.276 / 1.278 | 1.355 / 1.356 | 1.326 / 1.326 | 1.322 / 1.268 |
| 12 | 1.406 / 1.431 | 1.451 / 1.471 | 1.404 / 1.406 | 1.380 / 1.380 | 1.315 / 1.315 | 1.339 / 1.401 |
| 13 | 1.265 / 1.304 | 1.390 / 1.397 | 1.358 / 1.359 | 1.397 / 1.397 | 1.341 / 1.341 | 1.353 / 1.360 |
| 14 | 1.296 / 1.313 | 1.389 / 1.398 | 1.438 / 1.444 | 1.486 / 1.487 | 1.380 / 1.381 | 1.403 / 1.405 |
| 15 | 1.372 / 1.443 | 1.536 / 1.562 | 1.465 / 1.475 | 1.468 / 1.469 | 1.386 / 1.391 | 1.412 / 1.468 |
| 16 | 1.266 / 1.316 | 1.256 / 1.288 | 1.341 / 1.350 | 1.418 / 1.420 | 1.386 / 1.386 | 1.378 / 1.352 |
| 17 | 1.265 / 1.309 | 1.306 / 1.338 | 1.439 / 1.455 | 1.443 / 1.446 | 1.388 / 1.389 | 1.396 / 1.387 |
| 18 | 1.049 / 1.126 | 1.020 / 1.068 | 1.083 / 1.098 | 1.200 / 1.208 | 1.233 / 1.236 | 1.185 / 1.147 |
| 19 | 1.094 / 1.141 | 1.073 / 1.114 | 1.131 / 1.147 | 1.201 / 1.211 | 1.237 / 1.240 | 1.194 / 1.171 |
| 20 | 1.010 / 1.112 | 0.939 / 0.991 | 1.000 / 1.036 | 1.097 / 1.123 | 1.182 / 1.186 | 1.088 / 1.090 |
Considering 250 runs per function (50 runs for every mesh size using the 5 levels of tolerance):
| Id | n = 2^6 (RoM / MoR) | n = 2^7 (RoM / MoR) | n = 2^8 (RoM / MoR) | n = 2^9 (RoM / MoR) | n = 2^10 (RoM / MoR) | All Pop Sizes (RoM / MoR) |
| 1 | 1.370 / 1.387 | 1.347 / 1.352 | 1.360 / 1.360 | 1.343 / 1.343 | 1.333 / 1.334 | 1.336 / 1.355 |
| 2 | 1.318 / 1.342 | 1.351 / 1.355 | 1.339 / 1.342 | 1.332 / 1.338 | 1.341 / 1.341 | 1.339 / 1.344 |
| 3 | 1.371 / 1.394 | 1.336 / 1.343 | 1.362 / 1.363 | 1.331 / 1.332 | 1.342 / 1.342 | 1.341 / 1.355 |
| 4 | 1.236 / 1.254 | 1.287 / 1.291 | 1.329 / 1.331 | 1.328 / 1.328 | 1.337 / 1.337 | 1.333 / 1.308 |
| 5 | 1.202 / 1.229 | 1.311 / 1.316 | 1.331 / 1.335 | 1.337 / 1.337 | 1.328 / 1.328 | 1.329 / 1.309 |
| 6 | 1.394 / 1.422 | 1.438 / 1.448 | 1.470 / 1.472 | 1.364 / 1.364 | 1.335 / 1.336 | 1.350 / 1.408 |
| 7 | 1.358 / 1.378 | 1.344 / 1.352 | 1.344 / 1.345 | 1.310 / 1.312 | 1.324 / 1.326 | 1.323 / 1.343 |
| 8 | 1.480 / 1.527 | 1.451 / 1.465 | 1.492 / 1.494 | 1.414 / 1.415 | 1.385 / 1.385 | 1.400 / 1.457 |
| 9 | 1.609 / 1.626 | 1.450 / 1.457 | 1.432 / 1.434 | 1.342 / 1.343 | 1.317 / 1.318 | 1.338 / 1.436 |
| 10 | 1.142 / 1.156 | 1.200 / 1.204 | 1.287 / 1.290 | 1.316 / 1.317 | 1.331 / 1.331 | 1.321 / 1.259 |
| 11 | 1.137 / 1.158 | 1.211 / 1.217 | 1.309 / 1.311 | 1.337 / 1.338 | 1.344 / 1.344 | 1.334 / 1.273 |
| 12 | 1.408 / 1.437 | 1.445 / 1.464 | 1.446 / 1.449 | 1.360 / 1.360 | 1.343 / 1.343 | 1.356 / 1.411 |
| 13 | 1.270 / 1.308 | 1.368 / 1.374 | 1.392 / 1.394 | 1.373 / 1.375 | 1.355 / 1.356 | 1.360 / 1.361 |
| 14 | 1.269 / 1.288 | 1.389 / 1.399 | 1.474 / 1.479 | 1.464 / 1.465 | 1.395 / 1.397 | 1.411 / 1.405 |
| 15 | 1.340 / 1.392 | 1.498 / 1.523 | 1.495 / 1.503 | 1.453 / 1.454 | 1.407 / 1.409 | 1.424 / 1.456 |
| 16 | 1.254 / 1.301 | 1.249 / 1.274 | 1.355 / 1.367 | 1.399 / 1.401 | 1.388 / 1.389 | 1.376 / 1.346 |
| 17 | 1.263 / 1.308 | 1.274 / 1.301 | 1.425 / 1.443 | 1.414 / 1.418 | 1.402 / 1.402 | 1.396 / 1.374 |
| 18 | 0.956 / 1.047 | 1.029 / 1.098 | 1.103 / 1.120 | 1.198 / 1.206 | 1.232 / 1.235 | 1.182 / 1.141 |
| 19 | 1.022 / 1.051 | 1.011 / 1.049 | 1.134 / 1.150 | 1.185 / 1.195 | 1.248 / 1.250 | 1.187 / 1.139 |
| 20 | 1.002 / 1.100 | 0.945 / 0.993 | 1.023 / 1.064 | 1.079 / 1.104 | 1.187 / 1.192 | 1.089 / 1.091 |
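For reference, the two aggregation schemes reported in Table A2 can be computed as follows. This is a minimal sketch with illustrative placeholder measurements, not the experimental data; the direction of the ratio (HillVallEA19 over HVcMO20a) is an assumption made here for illustration.

```python
# Minimal sketch of the two aggregations in Table A2: the ratio of means (RoM)
# and the mean of ratios (MoR). The paired per-run times are placeholders,
# and the ratio direction is assumed for illustration.
runtimes_hillvallea19 = [8.4, 9.1, 8.8, 9.5]  # hypothetical per-run times
runtimes_hvcmo20a = [6.3, 6.9, 6.5, 7.2]      # hypothetical per-run times

# RoM divides the aggregated totals; MoR averages the per-run ratios. The two
# differ whenever per-run ratios correlate with the magnitudes of the runs.
rom = sum(runtimes_hillvallea19) / sum(runtimes_hvcmo20a)
mor = sum(a / b for a, b in zip(runtimes_hillvallea19, runtimes_hvcmo20a)) / len(runtimes_hvcmo20a)
print(f"RoM = {rom:.3f}, MoR = {mor:.3f}")
```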
Table A3. Number and percentage of runs covered by different ranges of time ratio.
For every population size: 50 runs per each of 20 functions with tolerance 1.0 × 10^−5:
| t-ratio | n = 2^6 | n = 2^7 | n = 2^8 | n = 2^9 | n = 2^10 | All Sizes | Percentage Covered |
| (0.00, 1.00] | 161 | 89 | 53 | 21 | 3 | 327 | 6.54% |
| (1.00, 1.20] | 237 | 162 | 80 | 66 | 63 | 608 | 12.16% |
| (1.20, 1.30] | 150 | 183 | 243 | 55 | 137 | 768 | 15.36% |
| (1.30, 1.40] | 113 | 223 | 352 | 601 | 722 | 2011 | 40.22% |
| (1.40, 1.65] | 206 | 258 | 243 | 251 | 73 | 1031 | 20.62% |
| (1.65, 1.75] | 44 | 44 | 13 | 6 | 0 | 107 | 2.14% |
| (1.75, 2.00] | 57 | 29 | 13 | 0 | 2 | 101 | 2.02% |
| (2.00, 2.50] | 26 | 11 | 3 | 0 | 0 | 40 | 0.80% |
| (2.50, 4.00] | 6 | 1 | 0 | 0 | 0 | 7 | 0.14% |
| #Runs | 1000 | 1000 | 1000 | 1000 | 1000 | 5000 | |
Grouped coverage: (1.00, 1.30]: 27.52%; (1.30, 1.65]: 60.84%; (1.65, 2.00]: 4.16%; (2.00, 4.00]: 0.94%.
For every population size: 50 runs per each of 20 functions using the 5 levels of tolerance:
| t-ratio | n = 2^6 | n = 2^7 | n = 2^8 | n = 2^9 | n = 2^10 | All Sizes | Percentage Covered |
| (0.00, 1.00] | 777 | 435 | 244 | 104 | 16 | 1576 | 6.30% |
| (1.00, 1.20] | 1185 | 882 | 386 | 354 | 273 | 3080 | 12.32% |
| (1.20, 1.30] | 686 | 942 | 710 | 591 | 598 | 3527 | 14.11% |
| (1.30, 1.40] | 631 | 1140 | 1958 | 2881 | 3500 | 10,110 | 40.44% |
| (1.40, 1.65] | 1035 | 1231 | 1512 | 1038 | 605 | 5421 | 21.68% |
| (1.65, 1.75] | 209 | 179 | 105 | 24 | 0 | 517 | 2.07% |
| (1.75, 2.00] | 313 | 140 | 68 | 7 | 6 | 534 | 2.14% |
| (2.00, 2.50] | 150 | 47 | 17 | 1 | 2 | 217 | 0.87% |
| (2.50, 4.00] | 14 | 4 | 0 | 0 | 0 | 18 | 0.07% |
| #Runs | 5000 | 5000 | 5000 | 5000 | 5000 | 25,000 | |
Grouped coverage: (1.00, 1.30]: 26.43%; (1.30, 1.65]: 62.12%; (1.65, 2.00]: 4.21%; (2.00, 4.00]: 0.94%.
Table A4. Mean population size and running time over 50 runs with a tolerance of 1.0 × 10^−5.
| Id | Final Pop Size (Actual): HVcMO20a / HillVallEA19 | Total Pop Size (Actual): HVcMO20a / HillVallEA19 | Time Units (Estimated): HVcMO20a / HillVallEA19 |
| 1 | 65 / 64 | 137 / 128 | 18,186 / 8192 |
| 2 | 64 / 64 | 131 / 134 | 16,712 / 8602 |
| 3 | 64 / 64 | 128 / 128 | 16,384 / 8192 |
| 4 | 68 / 77 | 159 / 197 | 22,282 / 15,892 |
| 5 | 64 / 64 | 129 / 132 | 16,548 / 8438 |
| 6 | 302 / 461 | 1437 / 2198 | 777,748 / 883,671 |
| 7 | 1546 / 2929 | 7350 / 13,545 | 18,752,143 / 32,556,974 |
| 8 | 901 / 2191 | 7578 / 16,497 | 6,688,968 / 16,971,203 |
| 9 | 7946 / 19,988 | 85,183 / 165,129 | 700,629,722 / 1,529,669,468 |
| 10 | 68 / 70 | 187 / 223 | 25,887 / 15,892 |
| 11 | 252 / 358 | 868 / 1179 | 493,158 / 539,935 |
| 12 | 316 / 637 | 1230 / 2388 | 811,500 / 1,642,824 |
| 13 | 1341 / 1920 | 4429 / 6348 | 12,453,151 / 13,276,692 |
| 14 | 5714 / 9380 | 13,559 / 24,901 | 114,926,086 / 179,571,917 |
| 15 | 4096 / 8192 | 8893 / 17,299 | 45,454,131 / 90,031,473 |
| 16 | 6636 / 8192 | 13,509 / 16,721 | 128,647,938 / 90,269,860 |
| 17 | 4096 / 8192 | 8995 / 17,318 | 45,485,425 / 89,991,086 |
| 18 | 4096 / 4588 | 8504 / 9468 | 44,906,250 / 30,494,802 |
| 19 | 8028 / 8684 | 18,258 / 19,988 | 185,760,922 / 115,860,931 |
| 20 | 19,005 / 24,576 | 50,104 / 59,748 | 1,294,673,004 / 1,010,064,589 |

References

  1. Reeves, C.R. Modern Heuristic Techniques. In Modern heuristic search methods; Rayward-Smith, V.J., Osman, I.H., Reeves, C.R., Smith, G.D., Eds.; John Wiley & Sons: New York, NY, USA, 1996; pp. 1–25. [Google Scholar]
  2. Sörensen, K.; Sevaux, M.; Glover, F. A History of Metaheuristics. In Handbook of Heuristics; Martí, R., Pardalos, P.M., Resende, M.G., Eds.; Springer: Cham, Switzerland, 2018; pp. 1–18. [Google Scholar] [CrossRef]
  3. Chica, M.; Barranquero, J.; Kajdanowicz, T.; Damas, S.; Cordón, Ó. Multimodal optimization: An effective framework for model calibration. Inf. Sci. 2017, 375, 79–97. [Google Scholar] [CrossRef]
  4. Woo, D.-K.; Choi, J.-H.; Ali, M.; Jung, H.-K. A Novel Multimodal Optimization Algorithm Applied to Electromagnetic Optimization. IEEE Trans. Magn. 2011, 47, 1667–1673. [Google Scholar] [CrossRef]
  5. Dilettoso, E.; Salerno, N. A Self-Adaptive Niching Genetic Algorithm for Multimodal Optimization of Electromagnetic Devices. IEEE Trans. Magn. 2006, 42, 1203–1206. [Google Scholar] [CrossRef]
  6. Das, S.; Maity, S.; Qu, B.-Y.; Suganthan, P.N. Real-Parameter Evolutionary Multimodal Optimization-A Survey of the State-of-the-Art. Swarm Evol. Comput. 2011, 2, 71–88. [Google Scholar] [CrossRef]
  7. Della Cioppa, A.; De Stefano, C.; Marcelli, A. Where Are the Niches? Dynamic Fitness Sharing. IEEE Trans. Evol. Comput. 2007, 11, 453–465. [Google Scholar] [CrossRef]
  8. Kamyab, S.; Eftekhari, M. Using a Self-Adaptive Neighborhood Scheme with Crowding Replacement Memory in Genetic Algorithm for Multimodal Optimization. Swarm Evol. Comput. 2013, 12, 1–17. [Google Scholar] [CrossRef]
  9. Sopov, E. Self-Configuring Ensemble of Multimodal Genetic Algorithms. In Computational Intelligence. IJCCI 2015. Studies in Computational Intelligence, Vol 669; Merelo, J.J., Rosa, A., Cadenas, J.M., Correia, A.D., Madani, K., Ruano, A., Filipe, J., Eds.; Springer: Cham, Switzerland, 2017; pp. 56–74. [Google Scholar] [CrossRef]
  10. De Magalhães, C.S.; Almeida, D.M.; Barbosa, H.J.C.; Dardenne, L.E. A Dynamic Niching Genetic Algorithm Strategy for Docking Highly Flexible Ligands. Inf. Sci. 2014, 289, 206–224. [Google Scholar] [CrossRef]
  11. Li, X. Niching Without Niching Parameters: Particle Swarm Optimization Using a Ring Topology. IEEE Trans. Evol. Comput. 2010, 14, 150–169. [Google Scholar] [CrossRef]
  12. Nápoles, G.; Grau, I.; Bello, R.; Falcon, R.; Abraham, A. Self-Adaptive Differential Particle Swarm Using a Ring Topology for Multimodal Optimization. In Proceedings of the 13th International Conference on Intelligent Systems Design and Applications (ISDA’13), Bangi, Malaysia, 8–10 December 2013; pp. 35–40. [Google Scholar] [CrossRef]
  13. Fieldsend, J.E. Running Up Those Hills: Multi-Modal Search with the Niching Migratory Multi-Swarm Optimiser. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC’14), Beijing, China, 6–11 July 2014; pp. 2593–2600. [Google Scholar] [CrossRef]
  14. Li, X. Developing Niching Algorithms in Particle Swarm Optimization. In Handbook of Swarm Intelligence. Adaptation, Learning, and Optimization; Panigrahi, B.K., Shi, Y., Lim, M.-H., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 8, pp. 67–88. [Google Scholar] [CrossRef]
  15. Qu, B.Y.; Suganthan, P.N. Novel Multimodal Problems and Differential Evolution with Ensemble of Restricted Tournament Selection. In Proceedings of the 2010 IEEE Congress on Evolutionary Computation (CEC’10), Barcelona, Spain, 18–23 July 2010; pp. 1–7. [Google Scholar] [CrossRef]
  16. Epitropakis, M.G.; Plagianakos, V.P.; Vrahatis, M.N. Finding Multiple Global Optima Exploiting Differential Evolution’s Niching Capability. In Proceedings of the 2011 IEEE Symposium on Differential Evolution (SDE’11), Paris, France, 11–15 April 2011; pp. 1–8. [Google Scholar] [CrossRef]
  17. Thomsen, R. Multimodal Optimization Using Crowding-Based Differential Evolution. In Proceedings of the 2004 IEEE Congress on Evolutionary Computation (CEC’04), Portland, OR, USA, 19–23 June 2004; Volume 2, pp. 1382–1389. [Google Scholar] [CrossRef]
  18. Epitropakis, M.G.; Li, X.; Burke, E.K. A Dynamic Archive Niching Differential Evolution Algorithm for Multimodal Optimization. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation (CEC’13), Cancún, Mexico, 20–23 June 2013; pp. 79–86. [Google Scholar] [CrossRef]
  19. Shir, O.M. Niching in Evolutionary Algorithms. In Handbook of Natural Computing; Rozenberg, G., Bäck, T., Kok, J.N., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 1035–1069. [Google Scholar] [CrossRef]
  20. Puris, A.; Bello, R.; Molina, D.; Herrera, F. Variable Mesh Optimization for Continuous Optimization Problems. Soft Comput. 2012, 16, 511–525. [Google Scholar] [CrossRef]
  21. Navarro, R.; Falcon, R.; Bello, R.; Abraham, A. Niche-Clearing-Based Variable Mesh Optimization for Multimodal Problems. In Proceedings of the 2013 World Congress on Nature and Biologically Inspired Computing (NaBIC’13), Fargo, ND, USA, 12–14 August 2013; pp. 161–168. [Google Scholar] [CrossRef]
  22. Navarro, R.; Murata, T.; Falcon, R.; Kim, C.H. A Generic Niching Framework for Variable Mesh Optimization. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC’15), Sendai, Japan, 25–28 May 2015; pp. 1994–2001. [Google Scholar] [CrossRef]
  23. Molina, D.; Puris, A.; Bello, R.; Herrera, F. Variable Mesh Optimization for the 2013 CEC Special Session Niching Methods for Multimodal Optimization. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation (CEC’13), Cancún, Mexico, 20–23 June 2013; pp. 87–94. [Google Scholar] [CrossRef]
  24. Maree, S.C.; Thierens, D.; Alderliesten, T.; Bosman, P.A.N. Real-Valued Evolutionary Multi-Modal Optimization Driven by Hill-Valley Clustering. In Proceedings of the 2018 Genetic and Evolutionary Computation Conference (GECCO’18), Kyoto, Japan, 15–19 July 2018; pp. 857–864. [Google Scholar] [CrossRef]
  25. Bosman, P.A.N.; Grahl, J.; Thierens, D. Enhancing the Performance of Maximum–Likelihood Gaussian EDAs Using Anticipated Mean Shift. In Parallel Problem Solving from Nature—PPSN X. PPSN 2008; Lecture Notes in Computer Science; Rudolph, G., Jansen, T., Beume, N., Lucas, S., Poloni, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5199, pp. 133–143. [Google Scholar] [CrossRef]
  26. Bosman, P.A.N.; Grahl, J.; Thierens, D. Benchmarking Parameter-Free AMaLGaM on Functions With and Without Noise. Evol. Comput. 2013, 21, 445–469. [Google Scholar] [CrossRef] [PubMed]
  27. Maree, S.C.; Alderliesten, T.; Bosman, P.A.N. Benchmarking HillVallEA for the GECCO 2019 Competition on Multimodal Optimization. Available online: https://arxiv.org/abs/1907.10988v1 (accessed on 25 October 2019).
  28. Competition on Niching Methods for Multimodal Optimization. Available online: http://www.epitropakis.co.uk/gecco2019/ (accessed on 23 October 2019).
  29. Kanemitsu, H.; Imai, H.; Miyaskoshi, M. Definitions and Properties of (Local) Minima and Multimodal Functions using Level Set for Continuous Optimization Problems. In Proceedings of the 2013 International Symposium on Nonlinear Theory and its Applications (NOLTA2013), Santa Fe, NM, USA, 8–11 September 2013; pp. 94–97. [Google Scholar] [CrossRef]
  30. Zhai, Z.; Li, X. A Dynamic Archive Based Niching Particle Swarm Optimizer Using a Small Population Size. In Proceedings of the Thirty-Fourth Australasian Computer Science Conference (ACSC2011), Perth, Australia, 17 January 2011; Reynolds, M., Ed.; Conferences in Research and Practice in Information Technology (CRPIT); Australian Computer Society, Inc.: Perth, Australia, 2011; Volume 113. [Google Scholar]
  31. Kronfeld, M.; Zell, A. Towards Scalability in Niching Methods. In Proceedings of the 2010 IEEE Congress on Evolutionary Computation (CEC’10), Barcelona, Spain, 18–23 July 2010; pp. 1–8. [Google Scholar] [CrossRef]
  32. Puris, A.; Bello, R.; Molina, D.; Herrera, F. Optimising Real Parameters Using the Information of a Mesh of Solutions: VMO Algorithm. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation (CEC’12), Brisbane, QLD, Australia, 10–15 June 2012; pp. 1–7. [Google Scholar] [CrossRef]
  33. Pétrowski, A. Clearing Procedure as a Niching Method for Genetic Algorithms. In Proceedings of the IEEE Conference on Evolutionary Computation, Nagoya, Japan, 20–22 May 1996; pp. 798–803. [Google Scholar] [CrossRef]
  34. Sareni, B.; Krähenbühl, L. Fitness Sharing and Niching Methods Revisited. IEEE Trans. Evol. Comput. 1998, 2, 97–106. [Google Scholar] [CrossRef]
  35. Li, J.P.; Balazs, M.E.; Parks, G.T.; Clarkson, P.J. A Species Conserving Genetic Algorithm for Multimodal Function Optimization. Evol. Comput. 2002, 10, 207–234. [Google Scholar] [CrossRef] [PubMed]
  36. Gan, J.; Warwick, K. Dynamic Niche Clustering: A Fuzzy Variable Radius Niching Technique for Multimodal Optimisation in GAs. In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No.01TH8546), Seoul, Korea, 27–30 May 2001; Volume 1, pp. 215–222. [Google Scholar] [CrossRef]
  37. Brown, M.S. A Species-Conserving Genetic Algorithm for Multimodal Optimization; Nova Southeastern University: Fort Lauderdale, FL, USA, 2010. [Google Scholar]
  38. Iwase, T.; Takano, R.; Uwano, F.; Sato, H.; Takadama, K. The Bat Algorithm with Dynamic Niche Radius for Multimodal Optimization. In Proceedings of the 2019 3rd International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence, Malé, Maldives, 24 March 2019; pp. 8–13. [Google Scholar] [CrossRef]
  39. Solis, F.J.; Wets, R.J.B. Minimization by Random Search Techniques. Math. Oper. Res. 1981, 6, 19–30. [Google Scholar] [CrossRef]
  40. Li, X.; Engelbrecht, A.; Epitropakis, M.G. Benchmark Functions for CEC’2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization. Technical Report. Evolutionary Computation and Machine Learning Group, RMIT University: Australia, 2013. Available online: https://titan.csit.rmit.edu.au/~e46507/cec13-niching/competition/cec2013-niching-benchmark-tech-report.pdf (accessed on 23 October 2019).
  41. Qu, B.Y.; Liang, J.J.; Suganthan, P.N. Niching Particle Swarm Optimization with Local Search for Multi-Modal Optimization. Inf. Sci. 2012, 197, 131–143. [Google Scholar] [CrossRef]
  42. Della Cioppa, A.; Marcelli, A.; Napoli, P. Speciation in Evolutionary Algorithms: Adaptive Species Discovery. In Proceedings of the 2011 Genetic and Evolutionary Computation Conference (GECCO’11), Dublin, Ireland, 12 July 2011; pp. 1053–1060. [Google Scholar] [CrossRef]
  43. Ursem, R.K. Multinational Evolutionary Algorithms. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; Volume 3, pp. 1633–1640. [Google Scholar] [CrossRef]
  44. Towards a New Evolutionary Computation. Studies in Fuzziness and Soft Computing; Lozano, J.A., Larrañaga, P., Inza, I., Bengoetxea, E., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 192. [Google Scholar] [CrossRef]
  45. Dong, W.; Yao, X. NichingEDA: Utilizing the Diversity inside a Population of EDAs for Continuous Optimization. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (CEC’08), Hong Kong, China, 1–6 June 2008; pp. 1260–1267. [Google Scholar] [CrossRef]
  46. Chen, B.; Hu, J. An Adaptive Niching EDA Based on Clustering Analysis. In Proceedings of the 2010 IEEE Congress on Evolutionary Computation (CEC’10), Barcelona, Spain, 18–23 July 2010; pp. 1–7. [Google Scholar] [CrossRef]
  47. Yang, P.; Tang, K.; Lu, X. Improving Estimation of Distribution Algorithm on Multimodal Problems by Detecting Promising Areas. IEEE Trans. Cybern. 2015, 45, 1438–1449. [Google Scholar] [CrossRef] [PubMed]
  48. HillVallEA. Available online: https://github.com/scmaree/HillVallEA (accessed on 28 October 2019).
  49. Rodrigues, S.; Bauer, P.; Bosman, P.A.N. A Novel Population-Based Multi-Objective CMA-ES and the Impact of Different Constraint Handling Techniques. In Proceedings of the 2014 Genetic and Evolutionary Computation Conference (GECCO’14), Vancouver, BC, Canada, 12–16 July 2014; pp. 991–998. [Google Scholar] [CrossRef]
  50. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  51. Navarro, R. Optimización Basada En Mallas Variables Con Operador de Fronteras Basado En Búsqueda Genética (Variable Mesh Optimization with Frontiers Operator Based on Genetic Search); Universidad de Holguín: Piedra Blanca, Holguín, Cuba, 2012; Available online: https://repositorio.uho.edu.cu/jspui/handle/uho/444 (accessed on 10 March 2020).
  52. Preuss, M. Improved Topological Niching for Real-Valued Global Optimization. In Applications of Evolutionary Computation. EvoApplications 2012. Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7248. [Google Scholar] [CrossRef]
  53. Ahrari, A.; Deb, K.; Preuss, M. Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations. Evol. Comput. 2017, 25, 439–471. [Google Scholar] [CrossRef] [PubMed]
  54. CEC2013. Available online: https://github.com/mikeagn/CEC2013 (accessed on 15 January 2020).
  55. Demšar, J. Statistical Comparisons of Classifiers over Multiple Data Sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
  56. Derrac, J.; García, S.; Molina, D.; Herrera, F. A Practical Tutorial on the Use of Nonparametric Statistical Tests as a Methodology for Comparing Evolutionary and Swarm Intelligence Algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  57. HVcMO. Available online: https://github.com/ricardonrcu/HVcMO (accessed on 24 March 2020).
