Article

A Stigmergy-Based Differential Evolution

by Valentín Osuna-Enciso 1,† and Elizabeth Guevara-Martínez 2,*,†
1 Department of Computer Sciences, Centro Universitario de Ciencias Exactas e Ingenierías-Universidad de Guadalajara, Av. Revolución 1500, Col. Olímpica, Guadalajara 44430, Jalisco, Mexico
2 Department of Engineering, Universidad Anáhuac México, Avenida Universidad Anáhuac 46, Col. Lomas Anáhuac, Huixquilucan 52786, Estado de Mexico, Mexico
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2022, 12(12), 6093; https://doi.org/10.3390/app12126093
Submission received: 18 May 2022 / Revised: 12 June 2022 / Accepted: 14 June 2022 / Published: 15 June 2022
(This article belongs to the Special Issue Evolutionary Computation: Theories, Techniques, and Applications)

Abstract:
Metaheuristic algorithms are techniques that have been successfully applied to solve complex optimization problems in engineering and science. Many metaheuristic approaches, such as Differential Evolution (DE), use the best individual found so far from the whole population to guide the search process. Although this approach has advantages in the algorithm’s exploitation process, it does not fully agree with the swarms found in nature, where communication among individuals is not centralized. This paper proposes the use of stigmergy as an inspiration to modify the original DE operators to simulate a decentralized information exchange, thus avoiding the use of a global best. The Stigmergy-based DE (SDE) approach was tested on a set of benchmark problems to compare its performance with DE. Even though the execution times of DE and SDE are very similar, our proposal has a slight advantage on most of the functions and can converge in fewer iterations in some cases. Its main feature, however, is the capability to maintain good convergence behavior as the dimensionality grows, so it can be a good alternative for solving complex problems.

1. Introduction

Metaheuristics (MH) are computational algorithms inspired by biological phenomena that solve a great diversity of complex scientific and engineering problems, particularly optimization. Some key features of metaheuristics are: (1) robustness, (2) self-organization, (3) adaptation and (4) decentralized control [1]. Two of the most recognized branches in MH, which group several approaches, are Evolutionary Computation (EC) and Swarm Intelligence (SI) algorithms [2]. Even though different metaphors inspire them, all have a set of parameters that the user must tune in a manual, adaptive, or self-adaptive manner to help the algorithm’s exploration/exploitation balance [3]. However, in several MH, the use of a global best candidate solution to direct the search process toward the real optimum is common.
For instance, Harmony Search [4,5,6] utilizes several individuals from the harmony memory (HM) to improvise a new harmony, and this individual is incorporated into the HM only if it is better than the worst individual in the whole population. Differential Evolution [7] uses, in most canonical variants [8], a weighted difference of at least two parents to generate a new individual, which interchanges information with the current individual and becomes the trial solution. The selection step compares the trial individual’s fitness against both the current individual’s fitness and that of the best individual found so far. Another metaheuristic approach is Particle Swarm Optimization [9], which considers the local best and the global best to build a velocity vector that is added to the current individual to generate a new one, whose fitness is compared against the local best and the global best and, if necessary, replaces the global best. The canonical Fireworks algorithm [10,11], whose population is composed of fireworks, modifies the individuals by consecutively applying the explosion, Gaussian mutation and mapping operators. After such alterations, the new candidate solution’s fitness is compared against the best individual so far, and the individual with the better fitness value is kept in memory. Based on the reproduction strategy of some plants, the authors in [12] proposed the Flower Pollination Algorithm (FPA). This metaheuristic uses the abiotic, biotic and switch operators to either locally or globally modify the current candidate solution. As in the previous algorithms, FPA keeps track of the best global individual found at each iteration. Another algorithm is the Gravitational Search algorithm [13], in which the movements of the candidate solutions (called objects) consider their masses and accelerations to update their velocities and positions. As in other metaheuristics, the algorithm updates the global worst and the global best at each iteration.
Other examples of metaheuristics that update the best solution found at each iteration are the Firefly Algorithm [14,15], Cuckoo Search [16], Artificial Bee Colony optimization [17,18] and Chicken Swarm optimization [19].
In order to solve high-dimensional problems and evaluate large amounts of data, it is necessary to improve the efficiency and execution times of evolutionary algorithms. Many authors consider parallel and distributed versions of metaheuristic algorithms as alternatives to overcome the difficulties derived from the increase in data complexity. A recent review of parallel and distributed approaches for evolutionary algorithms is presented in [20]. In addition, much research reports metaheuristics translated to parallel languages, such as CUDA (Compute Unified Device Architecture) [21], which runs on general-purpose graphics processing units (GPGPU). For example, a survey of GPU parallelization strategies for metaheuristics is conducted in [22]. The authors of [23] propose two approaches to parallelize the Gravitational Search algorithm using CUDA: one that modifies the whole population with a single kernel and another that utilizes multiple kernels to operate on the complete population. In the multiple-kernel version, the least parallelizable is the mass-update kernel: at a given moment, all threads could be trying to write the global best to the same memory location.
With respect to DE, several proposals have been made to parallelize this algorithm, such as the one presented in [24], where opposition-based learning and Differential Evolution are programmed with CUDA to optimize 19 benchmark functions. The authors cannot perform the fittest selection in parallel, and they propose a reduction approach to find the best individual at each iteration. A similar approach is found in [25], where only four functions are tested up to 100 dimensions. In this case, the authors propose the use of an adaptive scaling factor and an adaptive crossover probability; even though the authors utilize the DE/rand/1/bin version, they also keep track of the best individual so far. To represent DE variants, a general pattern DE/x/y/z is used, where x describes the base vector selection type, y represents the number of differentials and z is the type of crossover operator, and if there is an asterisk symbol (*) it means that any valid operator can be used for that process. Thus, for DE/rand/1/bin, “rand” indicates that the individuals are randomly chosen, “1” specifies that only one vector difference is used to form the mutated population and “bin” means that a binomial crossover is applied.
A survey of DE parallelized on GPGPU is found in [26], tracking the research efforts to parallelize DE from computer networks since 2004, passing through the use of GPGPU with first (and therefore, with a lack of features) CUDA SDKs and ending with a brief exploration of DE and GPU-based frameworks. It is worth mentioning that, in most of the articles, the original operators are preserved and only the parallel implementation is performed using one kernel for each process. To have a general idea about the implementation of DE on GPUs, Table 1 shows some proposals highlighting the implemented DE version, whether any operator was modified or not, the number and type of test functions and the maximum dimensionality. It can be observed that the idea of proposing modifications to differential evolution operators has not been widely investigated.
Some metaheuristics update the global best (individual and fitness) by one of two methods: (a) comparing it against the corresponding values of every individual in the population, or (b) sorting the corresponding values of the entire population. In the first approach, parallelization implies that, in the worst case, Np − 1 individuals will be trying to update the best fitness value and the best individual at the same memory locations in the current iteration. This could produce race conditions, which are undesirable because they can cause a loss of information. A better method is the sorting-based approach, which can only be partially parallelized; in fact, this DE version is the one used for comparisons in this research.
This paper considers a modification, based on the stigmergy concept, to Differential Evolution (version best/1/bin, in which the best individual of the DE population is selected as the base vector) to eliminate the use of the global best. For this purpose, the mutation and crossover operators are modified. This adjustment, which we call Stigmergic Differential Evolution (SDE), has a performance similar to the original DE with a lower computational cost in some cases when programmed on GPGPU; mainly, it preserves good convergence behavior as the dimensionality grows. Therefore, we consider that other metaheuristics that utilize the global best to guide the optimization process could benefit from the proposed methodology.
The remainder of this paper is organized as follows: Section 2 explains the concept of stigmergy and explores some of its applications to engineering problems. Section 3 describes our proposal and the benchmark optimization problems used for the comparisons. Section 4 explains the experimental setup, presents the results and briefly discusses them. Section 5 concludes this work and outlines some future avenues for stigmergic metaheuristics.

2. Stigmergy and Some Stigmergic Applications

Some authors consider insect colonies as complex adaptive systems with properties of: (1) spatial distribution; (2) decentralized control; (3) hierarchical organization of individuals; and (4) adaptation to environmental changes, among others [17,34]. The second property means that there is no global control [34], and therefore the interactions among individuals are distributed. To explain the behavior of insect colonies, researchers introduced the term stigmergy. The concept comes from entomology: Pierre-Paul Grassé proposed it to explain the coordination and collaboration achieved through the indirect communication [35,36] of social insects (particularly termites and ants) to complete complicated projects [37]. In the original paper, Grassé considers that every individual in a colony that acts (ergon) on the medium leaves a trace (stigma), which in turn stimulates another action from the same or another individual (Figure 1). Actions and marks occur on the medium, and, as mentioned, the latter only stimulate (but do not force) the action of either the same or other individuals.
The concept of stigmergy has served as an inspiration to propose algorithms utilized to solve some problems in engineering [38,39,40], and it also was used as an inspiration to propose one of the most known metaheuristic algorithms: Ant System (AS) [41], the predecessor of Ant Colony Optimization (ACO) and its variants. In these approaches, it is the pheromone-based construction of trails that represents the stigmergic mechanism [36,42,43].
ACO considers a population of artificial agents that keep some memory, in the form of pheromone, of the best solutions visited so far; those solutions are paths in a completely connected graph, constructed by selecting every vertex in the path in a probabilistic manner, the vertices with the most artificial pheromone being the most likely to be selected. As the algorithm works in the space of graphs to construct the solutions, it is natural that the first applications of this technique were in combinatorial optimization, such as the Traveling Salesman Problem or job-shop scheduling. The application of ACO to solve numerical optimization problems is not straightforward. Often, the proposals include hybridizations with other techniques, such as the continuous ACO (CACO) [44], the continuous interacting ant colony (CIAC) [45], or the continuous ant colony system (CACS) [42]; however, there are some research efforts using direct modifications of the original ACO. For instance, in his doctoral thesis, Korošec proposed two stigmergy-based modifications of the original ACO to solve numerical problems, including the Differential Ant-Stigmergy Algorithm [36]. In that proposal, the authors present a fine-grained discretization of the continuous search space to construct a directed graph, to which they apply the Gaussian pheromone aggregation, pheromone evaporation and parameter precision operators to build a solution for the continuous optimization problem at hand. Korošec and colleagues later improved the original algorithm to tackle high-complexity or even dynamic problems [43,46] by using a pheromone operator based on the Cauchy distribution.
A recently proposed stigmergic algorithm to solve numeric problems is the teaching–learning-based optimization [47]. In that paper, students’ cooperation (ergon/action) through a board (stigma/mark) simulates the stigmergic process. Algorithmically, the authors create a ranking matrix (of the same size as the original population) for every individual, and they use a roulette wheel to select a guide student. Such a guiding student serves as the mean used to generate new solutions, which take the weighted average of the remaining individuals as the standard deviation. Therefore, the guiding student directs the complete exploration–exploitation process, diluting the stigmergic process, as happens in other metaheuristics.
In the next section, we explain our approach, the Stigmergic Differential Evolution, SDE. The algorithm eliminates the use of global best. Thus, the whole population controls the exploration–exploitation process in a coordinated but decentralized manner; moreover, to explore the efficiency of the proposal and its parallelization feasibility, we programmed it on GPGPU using the CUDA language.

3. Stigmergic Differential Evolution

3.1. Differential Evolution

Differential Evolution (DE) is an algorithm proposed in 1995 to solve continuous optimization problems [7]. Even though there are several variants of DE [8], one of the more popular and powerful variants is the DE/best/1/bin, which since its development has solved several problems in science and engineering [48,49,50]. The notation in the next explanations considers lower bold letters as vectors, lower italic letters with subindexes as vector components, upper bold letters as matrices and upper or lower letters without subindexes as single real or integer numbers.
The first step in the algorithm is the initialization of a random population:
X = { x_{i,j}^k }, x_{i,j}^k = l_j + rand() · (u_j − l_j),  j = 1, …, D;  i = 1, …, Np;  k = 0    (1)
where Np is the population size, D represents the problem dimensionality, k is the current iteration, l and u are the lower and upper limits of the search space and rand() is a random number drawn from a uniform distribution. In this phase, the mutation scaling factor and the crossover rate (F, Cr) are also set. The second stage creates a mutant vector:
v^k = x_{best}^k + F · (x_{r1}^k − x_{r2}^k)    (2)
considering r1 and r2 as random integers uniformly distributed in [1, Np] with the restriction that r1 ≠ r2 ≠ i, and x_{best}^k is the best individual found so far. The next step obtains the trial vector:
u_j^k = v_j^k  if rand() < Cr or j = j_rand;  x_{i,j}^k  otherwise    (3)
where j_rand ∈ {1, 2, …, D} is a random integer taken from a uniform distribution. In the last phase, the algorithm applies the selection operation by evaluating the trial and the original individual with the objective function f:
x_i^{k+1} = u^k  if f(u^k) < f(x_i^k);  x_i^k  otherwise    (4)
The previously mentioned steps (mutation, crossover and selection) are repeated until a criterion is met, usually a maximum number of iterations.
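As a brief illustration, the four steps just described (initialization, mutation, crossover, selection) can be sketched serially in plain Python. This is a hedged, minimal sketch: the sphere objective, bounds and parameter values below are illustrative choices, not the GPU implementation compared later in the paper.

```python
import random

def de_best_1_bin(f, lower, upper, D=10, Np=30, F=0.8, Cr=0.3, iters=500):
    """Serial sketch of DE/best/1/bin; all settings are illustrative."""
    # Initialization: uniform random population inside [lower, upper]
    X = [[lower + random.random() * (upper - lower) for _ in range(D)]
         for _ in range(Np)]
    fit = [f(x) for x in X]
    for _ in range(iters):
        best = X[min(range(Np), key=lambda i: fit[i])][:]
        for i in range(Np):
            r1, r2 = random.sample([r for r in range(Np) if r != i], 2)
            # Mutation: best individual plus a weighted difference of two parents
            v = [best[j] + F * (X[r1][j] - X[r2][j]) for j in range(D)]
            # Binomial crossover: mix mutant and current individual
            jrand = random.randrange(D)
            u = [v[j] if (random.random() < Cr or j == jrand) else X[i][j]
                 for j in range(D)]
            # Greedy selection against the current individual
            fu = f(u)
            if fu < fit[i]:
                X[i], fit[i] = u, fu
    return min(fit)

# Illustrative run on the sphere function
random.seed(0)
sphere = lambda x: sum(v * v for v in x)
best = de_best_1_bin(sphere, -5.0, 5.0)
```

With these (assumed) settings, the sketch drives the sphere function close to its optimum at the origin, mirroring the greedy, elitist behavior of DE/best/1/bin.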

3.2. The Proposal–Action, Agent, Medium, Trace and Coordination

As in other bioinspired algorithms, in Stigmergic Differential Evolution we do not try to precisely copy every aspect of the stigmergy concept but only to use it as a source of inspiration. Let us consider a stigmergic process [51,52]: the nest construction/repair carried out by termites. This process is parallel, it needs only local information and it has no apparent global leadership. Considering Figure 1, for a single stigmergic loop, individual i (previously stimulated by trace i or other traces) produces an action i that leaves a trace i, which probabilistically stimulates either individual i or nearby individuals. We also consider that, as in the case of real termites, only a few individuals can share a physical space (Figure 2), and only one of them can leave a trace/mark in the medium at a time. Many termites work on the nest at once; they complete the previous steps in parallel and repeat the stigmergic cycle until the mound is complete and functional [37].
As in other metaheuristics, Stigmergic DE considers a population of candidate solutions as the agents, while the evaluations of the candidate solutions represent the traces. Like its real counterpart, the stigmergic approach considers that a few candidate solutions can share consecutive memory spaces with one another. For example, in Table 2, candidate solutions x_{i−1}, x_i and x_{i+1}, and their respective fitness function evaluations, are neighbors, as they share consecutive locations in memory.
As in other algorithms, Equation (1) serves to initialize the population of candidate solutions in SDE. Prior to the action step, we compare the traces of the current individual with those of the previous and next individuals in memory:
x_{lead}^k = argmin{ f(x_{i−1}^k), f(x_i^k), f(x_{i+1}^k) }    (5)
for i = 2, …, Np − 1; the original index of x_{lead}^k is also stored, e.g., lead = i − 1, lead = i, or lead = i + 1. The remaining two candidate solutions after applying Equation (5) are randomly labeled, with 50% probability, as x_{work1}^k and x_{work2}^k; their respective indexes are also stored. The action step considers the DE mutation operator with the differential weight F:
v_i^k = x_{lead}^k + F · (x_{work1}^k − x_{work2}^k)    (6)
together with the original trial DE operator using the crossover rate C r :
u_{i,j}^k = v_{i,j}^k  if rand() < Cr or j = j_rand;  x_{i,j}^k  otherwise    (7)
After the trial vector is complete, the next phase of stigmergic DE is the trace update:
x_{lead}^{k+1} = u_i^k  if f(u_i^k) < f(x_{lead}^k);  x_{lead}^k  otherwise    (8)
x_{work1}^{k+1} = u_i^k  if f(u_i^k) < f(x_{work1}^k);  x_{work1}^k  otherwise    (9)
x_{work2}^{k+1} = u_i^k  if f(u_i^k) < f(x_{work2}^k);  x_{work2}^k  otherwise    (10)
It is essential to clarify that Equations (8)–(10) are mutually exclusive in their application; for example, if the condition is true in Equation (8), then Equations (9) and (10) are not applied in this step of the algorithm. As in other metaheuristics, the algorithm repeats the previous steps until it reaches a specific criterion.
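One full stigmergic pass (trace comparison, action and mutually exclusive trace update) can be sketched serially in plain Python. This is a hedged illustration under assumed settings (sphere objective, small population, F = 0.8, Cr = 0.3); in the paper the same steps run in parallel on the GPU.

```python
import random

def sde_step(X, fit, f, F=0.8, Cr=0.3):
    """One serial pass of the stigmergic update over the population."""
    Np, D = len(X), len(X[0])
    for i in range(1, Np - 1):
        # Trace comparison, Eq. (5): the leader is the fittest of the trio
        trio = [i - 1, i, i + 1]
        lead = min(trio, key=lambda t: fit[t])
        w1, w2 = [t for t in trio if t != lead]
        if random.random() < 0.5:        # random labeling of the two workers
            w1, w2 = w2, w1
        # Action, Eqs. (6)-(7): mutation fused with binomial crossover
        jrand = random.randrange(D)
        u = [X[lead][j] + F * (X[w1][j] - X[w2][j])
             if (random.random() < Cr or j == jrand) else X[i][j]
             for j in range(D)]
        # Trace update, Eqs. (8)-(10): at most one trace is replaced
        fu = f(u)
        for t in (lead, w1, w2):
            if fu < fit[t]:
                X[t], fit[t] = u[:], fu
                break                    # mutual exclusivity of Eqs. (8)-(10)
    return X, fit

# Illustrative run on the sphere function (assumed settings)
random.seed(1)
sphere = lambda x: sum(v * v for v in x)
X = [[random.uniform(-5.0, 5.0) for _ in range(5)] for _ in range(20)]
fit = [sphere(x) for x in X]
start = min(fit)
for _ in range(300):
    X, fit = sde_step(X, fit, sphere)
```

Note that no global best appears anywhere in the loop: improvements spread only through overlapping three-individual neighborhoods, which is the decentralized communication the proposal aims to simulate.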
One of the main differences between Differential Evolution and Stigmergic Differential Evolution is the complete elimination of the global best. Instead, SDE uses the concept of a leader, which is a local best relative to the current individual. Another distinction is the trace-update step, which involves the previous, current and next candidate solutions. According to their fitness, SDE could update any of the three individuals in this stage, or none in the worst case. This crucial step enables the algorithm to share information among candidate solutions of the whole population via indirect communication. In that sense, we consider that SDE disseminates the improvements in the search for the optimum in a distributed manner.

3.3. Parallel DE and SDE

In the literature, several proposals for parallelization of DE exist to solve plenty of optimization problems, as reported in [24,25,53,54,55,56]. Nevertheless, we utilize a basic CUDA parallelization as mentioned in Section 3.2 of [25] to compare the two algorithms. For the DE version, we programmed three kernels:
1.
Generate mutated and trial vectors and evaluate trial vectors.
2.
Compare and update the fitness of trial vectors and the original population.
3.
Obtain the global best.
This version of DE permits a fine-grained parallelization of the first two kernels. The third kernel, however, can only be partially parallelized through a reduction, a common pattern for parallel searches over a population. The complete kernel to obtain the global best is given in Algorithm 1; it is essential to notice that this kernel operates only with population sizes that are powers of 2.
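Conceptually, the halving reduction performed by this kernel can be modeled serially in plain Python (an illustrative sketch of the tree pattern only; in CUDA, each inner-loop iteration corresponds to a thread and each outer-loop iteration to a synchronization step):

```python
def reduce_min(values, indices):
    """Pairwise-halving minimum reduction; len(values) must be a power of 2."""
    vals, idx = list(values), list(indices)
    n = len(vals)
    while n > 1:
        half = n >> 1
        for t in range(half):      # in CUDA, these iterations run as threads
            if vals[t + half] < vals[t]:
                vals[t] = vals[t + half]
                idx[t] = idx[t + half]
        n = half                   # only the first half stays "active"
    return vals[0], idx[0]         # best fitness and the index of its owner

# Hypothetical fitness array of 8 individuals
fit = [3.0, 1.5, 4.0, 0.5, 2.0, 9.0, 7.0, 8.0]
best, where = reduce_min(fit, list(range(len(fit))))  # → (0.5, 3)
```

The halving at each step is why the kernel, and hence the DE population size, is restricted to powers of 2.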
In the case of SDE, we programmed only one CUDA kernel containing the steps: comparing previous traces, action and trace updating. In that sense, the nature of Stigmergic DE permits a fine-grained parallelization, meaning all the steps of the stigmergic approach can be applied to every individual in the population in parallel. A pseudocode of the stigmergic kernel is given in Algorithm 2.
Algorithm 1 Kernel of DE to obtain the global best

procedure __global__ void getGlobalBest(double* F, double* fbest, double* xbest, double* X, int* indices)
    __shared__ double min[Np];   // Declare array in shared memory
    __shared__ int indexes[Np];  // Array of indexes in shared memory
    int blockId = blockIdx.x + blockIdx.y * gridDim.x;
    int arrayIndex = blockId * (blockDim.x * blockDim.y) + (threadIdx.y * blockDim.x) + threadIdx.x, i;
    min[threadIdx.x] = F[arrayIndex];
    indexes[threadIdx.x] = indices[arrayIndex];
    __syncthreads();
    int nTotalThreads = blockDim.x;  // Total number of active threads
    while nTotalThreads > 1 do
        int halfPoint = (nTotalThreads >> 1);
        // Only the first half of the threads will be active
        if (threadIdx.x < halfPoint) then
            // Obtain the shared value stored by another thread
            double temp = min[threadIdx.x + halfPoint];
            int temp1 = indexes[threadIdx.x + halfPoint];
            if (temp < min[threadIdx.x]) then
                min[threadIdx.x] = temp;
                indexes[threadIdx.x] = temp1;
            end if
        end if
        __syncthreads();
        nTotalThreads = (nTotalThreads >> 1);
    end while
    if (threadIdx.x == 0) then
        fbest[blockIdx.y + blockIdx.x] = min[0];
        for (i = 0; i < D; i++) xbest[i] = X[indexes[0] * D + i];
    end if
    __syncthreads();
end procedure
Algorithm 2 Stigmergy-inspired kernel of SDE

procedure __global__ void stigmergic(double* X, double* F, double f, double cr, int funcion)
    Obtain the thread identity with respect to the block in the grid
    Obtain jrand as a uniform integer random number
    if (arrayIndex > 0 and arrayIndex < Np − 1) then
        // Trace comparison:
        Obtain x_lead, x_work1, x_work2 according to their fitness
        // Action (mutant vector, trial vector):
        Calculate mutant vector with Equation (6)
        Compose trial vector with Equation (7)
        // Trace update:
        Compare and update fitness of trial vector against fitness of x_lead, x_work1, x_work2 with Equations (8)–(10)
        __syncthreads();
    end if
end procedure
A kernel that generates the initial population is the same for both algorithms; also, DE utilizes an extra kernel to evaluate the initial population’s fitness only once.

4. Experimental Results

To perform the experiments, the computer used has the following features:
  • OS: Windows 10 64 bits
  • CPU: i7-4770 running at 3.4 GHz
  • RAM: 16 GB
  • GPU: GTX 960 with 2 GB of RAM
  • Compiler: Visual Studio 2019
Both algorithms are programmed in CUDA C using SDK version 11.1, reusing some kernel functions, such as those that create or evaluate the initial population. Moreover, both implementations utilize global memory, except for the reduction used to find the global best in DE, which uses shared memory.
The DE/best/1/bin version is the algorithm programmed in this paper, and the SDE operators are a modified version of DE; the parameters are set as F = 0.8 and Cr = 0.3 in both approaches. The population size is set to 256 and 512 individuals for every experiment, whereas for every problem the dimension is 100 and 500. These population sizes were chosen because of the requirements of the kernel used to obtain the global best in DE.

4.1. Benchmark Used in the Experiments

The experimental testbed consists of 27 functions. We used 14 multidimensional objective functions (Table 3) that have different search spaces and different complexity levels. For example, the Step, Ackley, Griewank, Levy, Rastrigin, Schwefel and Schwefel 2.22 functions have many local minima near the global optimum. The Sphere, Sum Powers, Sum Squares and Trid functions have only one global optimum and are bowl-shaped. The Zakharov, Dixon and Price and Rosenbrock functions are valley-shaped; the Zakharov function has only one global optimum, whereas the other two have several optima. Moreover, 13 two-dimensional objective functions (Table 4) were tested. Some of these functions have an optimum located away from the vector (0, …, 0). This is useful for testing for a bias toward that point when new metaheuristic approaches are proposed, as suggested by [57].
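For reference, a few of the benchmark functions mentioned above can be written in plain Python (these are the standard definitions from the optimization literature; the exact search domains used in the experiments are those listed in Table 3):

```python
import math

def sphere(x):              # unimodal, bowl-shaped, minimum f(0, ..., 0) = 0
    return sum(v * v for v in x)

def rastrigin(x):           # many local minima around the global optimum at 0
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def ackley(x):              # nearly flat outer region with a deep hole at the origin
    d = len(x)
    s1 = sum(v * v for v in x) / d
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / d
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

def rosenbrock(x):          # valley-shaped, global optimum at (1, ..., 1)
    return sum(100 * (x[j + 1] - x[j] ** 2) ** 2 + (1 - x[j]) ** 2
               for j in range(len(x) - 1))
```

The contrast between, e.g., sphere (one smooth basin) and rastrigin (a lattice of local minima) is what makes the testbed span different complexity levels.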

4.2. Execution Time

To compare the computational time of DE and SDE, we stored the execution time after 5000 iterations of every algorithm and calculated the average and standard deviation over 30 runs. In this case, we measured the execution time from the creation of the initial population until the last iteration; the time counter is reset to zero only after one run is complete. The problems have either 100 or 500 dimensions, with 256 or 512 individuals; therefore, this experiment has four instances. The speedup is computed from the average time of each function as speedup = T_DE / T_SDE. The results are in Table 5 and Table 6, with the best ones in bold.
SDE has a slightly better average execution time in most of the cases, except for the Griewank function and, in some experiments, the Brent function. However, a closer review of the speedups reveals that both algorithms have a similar performance regarding the execution time. To further clarify this issue, we used the Wilcoxon test [58] to test the null hypothesis that the execution times of both algorithms come from distributions with equal medians. The p-values for DE vs. SDE are: (1) p = 0.1215 for Np = 256 and D = 100; (2) p = 0.09 for Np = 512 and D = 100; (3) p = 0.1323 for Np = 256 and D = 500; (4) p = 0.1115 for Np = 512 and D = 500. According to these results, in every case the null hypothesis cannot be rejected, suggesting (at a 5% significance level) that the execution times of both algorithms are similar for each experimental instance.

4.3. Convergence Speed

For the convergence comparison between DE and SDE, we used the same four instances as in the previous experiment: dimensions of 100 and 500 and populations of 256 and 512 individuals. In this case, the iterations per run are increased to 20,000, with the same number of runs per experimental instance; nevertheless, both algorithms iterate either until the algorithm is near the optimum (e.g., f(x*) = 0.99 · [−d(d + 4)(d − 1)/6] for Trid), or until reaching 20,000 iterations. After either of the two conditions is true, the best global fitness and the number of iterations are stored in the case of DE. However, as SDE does not utilize a global best, we use the fitness of the first individual in the population as the ‘global best’ to demonstrate the capability of the algorithm to distribute the improvements through all the candidate solutions under decentralized control. Therefore, when either the fitness of the first individual is close to the real global optimum or 20,000 iterations are reached, a run is completed in SDE. The results are in Table 7, Table 8, Table 9 and Table 10, with the best results in bold.
It can be seen from the results of these experiments (Table 7, Table 8, Table 9 and Table 10) that the proposed algorithm does not outperform DE on two-dimensional problems: although SDE reaches the optimum, it requires a higher number of iterations than DE. However, the performance of SDE in high dimensions is remarkable compared with DE.
Table 7 shows the experimental instance with D = 100 and Np = 256. In this case, both algorithms perform well on the Sphere, Step, Sum Powers, Sum Squares, Schwefel 2.22, Ackley, Griewank and Levy functions, as they both achieve a good average fitness; however, except for the Sum Powers function, where DE converges in fewer iterations (almost half), SDE is faster than DE in all other cases, with a reduction of more than 84% in the number of iterations in the best case (Levy function), as shown in Figure 3. For the Trid, Zakharov, Rastrigin and Schwefel functions, both algorithms have problems finding the global best, but sometimes SDE obtains better fitness values than DE, with fewer iterations on average.
Table 8 shows that the SDE behavior with more individuals (Np = 512) is consistent with that reported above.
From Table 9 and Table 10 (D = 500), it can be seen that DE struggles with high dimensionality, except for the Sum Powers function. In fact, DE no longer reaches the optimum in 20,000 iterations for the Sum Squares, Schwefel 2.22, Ackley, Griewank and Levy functions, whereas SDE has no problem achieving the optimum for these functions. Another important detail is that with a larger number of individuals (Np = 512), SDE finds the optimum in slightly fewer iterations than when the population is smaller (see Table 9 and Table 10). This highlights the advantage of using the stigmergy concept for indirect communication among the population.

5. Discussion

Stigmergic DE implements slight modifications to the original DE operators, so the computational time required to complete several iterations is practically the same, as suggested by the results in Table 5 and Table 6 and by the Wilcoxon test applied to such data. Figure 4 shows a comparison between the execution times of DE and SDE for each test function, considering D = 500 and Np = 512. The comparison considers the percentage difference between the execution times of the two algorithms. In most cases (with the exception of the Brent and Griewank functions in these results), the proposal has a better time performance than its DE counterpart, by about 17% in the best case. This behavior is similar in all these experiments; however, as already mentioned, SDE has only a marginal advantage.
Moreover, both algorithms have similar difficulties in finding the global optimum for some functions. However, as can be seen from Table 7 and Table 8, for the Rastrigin, Schwefel and Zakharov functions, the difference between the final fitness values obtained by SDE and DE is very large: SDE obtains fitness values lower than DE's by about two orders of magnitude. For the Trid, Dixon and Price, and Rosenbrock functions, DE performs better than SDE, but the difference is less pronounced. When the dimension increases (Table 9 and Table 10), DE outperforms SDE only on the Sum Powers function, and the margin is minimal, on the order of 0.0001. These results show the advantage of the proposal in high dimensions; empirically, we attribute this to the intrinsic parallelism of the stigmergy concept that was used as an inspiration to modify the original operators of the metaheuristic (DE, version best/1/bin in this case). The proposal avoids using the global best when constructing new candidate solutions, thus simulating indirect communication among all individuals in the population.
In summary, under the same conditions for both metaheuristics, our proposal can find results similar to DE's (and better outcomes for some objective functions) while using fewer iterations on average (Table 7, Table 8, Table 9 and Table 10), especially when the problems are high-dimensional. Even when the iterations are insufficient to find the global optimum, SDE performs better than DE, so it can be a good alternative for solving many engineering optimization problems. As mentioned in [59], applications include industrial engineering for the job-shop scheduling problem [60,61], image processing to perform multilevel thresholding on a 2D histogram for image segmentation [62,63], path planning for robotics challenges [64], and area coverage problems for wireless sensor networks (WSNs) [65], to mention a few.

6. Conclusions

In this paper, we presented a modification to Differential Evolution, which uses the global best to construct new candidate solutions. The adjustment is inspired by stigmergy, a term taken from entomology that explains the indirect communication social insects use to complete complex projects. In the proposal, we eliminate the global best and instead employ a local best. As with real ants or termites, the approach also uses two neighbors of that local leader. A new candidate solution combines the leader's information and that of its two neighbors with a crossover operation, as in the original DE. The final step is trace updating, which can modify any of the three individuals and their fitness values, simulating distributed and indirect communication among all the individuals in the artificial colony. We call this modified metaheuristic Stigmergic Differential Evolution (SDE). Because it eliminates the global best, SDE is more parallelizable than DE; therefore, both algorithms were programmed in the CUDA-C language.
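The local-leader scheme described above can be illustrated with a minimal Python sketch. This is not the paper's CUDA-C implementation, and the neighborhood definition (a ring of adjacent indices), the choice of leader (best of the three-member neighborhood), and the replacement rule in the trace update are assumptions made here for illustration:

```python
import random

def sde_step(pop, fitness, f_obj, F=0.5, CR=0.9):
    """One iteration of a Stigmergy-based DE sketch: every decision uses
    only local (neighborhood) information, never a global best."""
    Np, D = len(pop), len(pop[0])
    for i in range(Np):
        left, right = (i - 1) % Np, (i + 1) % Np
        # Local leader: the best individual in the ring neighborhood {i-1, i, i+1}.
        leader = min((left, i, right), key=lambda k: fitness[k])
        a, b = (leader - 1) % Np, (leader + 1) % Np
        # Mutation combines the leader with its two neighbors (no global best).
        mutant = [pop[leader][j] + F * (pop[a][j] - pop[b][j]) for j in range(D)]
        # Binomial crossover, as in canonical DE.
        jrand = random.randrange(D)
        trial = [mutant[j] if (random.random() < CR or j == jrand) else pop[i][j]
                 for j in range(D)]
        ft = f_obj(trial)
        # Trace update: overwrite the leader's position/fitness when improved.
        if ft < fitness[leader]:
            pop[leader], fitness[leader] = trial, ft
    return pop, fitness
```

Because each update touches only a small index neighborhood, iterations over different individuals are largely independent, which is what makes the scheme attractive for a GPU implementation.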
We compared DE and SDE on 27 objective functions under the same circumstances. The results suggest that SDE converges faster than DE as the dimensionality and the population size increase; it performs similarly to DE on the 2D test functions, although there DE converges in fewer iterations. It was demonstrated in [66] that canonical DE is adversely affected by increases in dimensionality, which degrades its convergence to the global optimum in a super-linear fashion. In that sense, even though SDE has average convergence properties (e.g., Table 5 and Table 6), we claim that its main feature is the capability to maintain a good convergence behavior as the dimensionality grows.
The proposed modification could be applied to other metaheuristics that use a global best to construct candidate solutions. Possible future work includes the stigmergy-based adaptation of the original operators of other metaheuristic algorithms and their comparison.

Author Contributions

Conceptualization, V.O.-E.; methodology, V.O.-E.; software, V.O.-E. and E.G.-M.; validation, V.O.-E. and E.G.-M.; formal analysis, V.O.-E.; investigation, V.O.-E. and E.G.-M.; resources, V.O.-E. and E.G.-M.; writing—original draft preparation, V.O.-E.; writing—review and editing, E.G.-M.; visualization, V.O.-E. and E.G.-M.; supervision, V.O.-E. and E.G.-M.; project administration, V.O.-E. and E.G.-M.; funding acquisition, V.O.-E. and E.G.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to thank the Information and Communications Technology Laboratory of the Universidad Anáhuac México for allowing the use of computer equipment.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cui, Z.; Alex, R.; Akerkar, R.; Yang, X.S. Recent Advances on Bioinspired Computation. Sci. World J. 2014, 2014, 934890. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Ser, J.D.; Osaba, E.; Molina, D.; Yang, X.S.; Salcedo-Sanz, S.; Camacho, D.; Das, S.; Suganthan, P.N.; Coello, C.A.C.; Herrera, F. Bio-inspired computation: Where we stand and what’s next. Swarm Evol. Comput. 2019, 48, 220–250. [Google Scholar] [CrossRef]
  3. Črepinšek, M.; Liu, S.H.; Mernik, M. Exploration and exploitation in evolutionary algorithms. ACM Comput. Surv. 2013, 45, 1–33. [Google Scholar] [CrossRef]
  4. Geem, Z.W.; Kim, J.H.; Loganathan, G. A New Heuristic Optimization Algorithm: Harmony Search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  5. Lee, K.S.; Geem, Z.W. A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Comput. Methods Appl. Mech. Eng. 2005, 194, 3902–3933. [Google Scholar] [CrossRef]
  6. Geem, Z.W. Music-Inspired Harmony Search Algorithm: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2009; Volume 191. [Google Scholar]
  7. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  8. Mezura-Montes, E.; Velázquez-Reyes, J.; Coello, C.A.C. A comparative study of differential evolution variants for global optimization. In Proceedings of the GECCO’06—Genetic and Evolutionary Computation Conference, Seattle, WA, USA, 8–12 July 2006; ACM Press: New York, NY, USA, 2006. [Google Scholar] [CrossRef] [Green Version]
  9. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995. [Google Scholar] [CrossRef]
  10. Tan, Y.; Zhu, Y. Fireworks Algorithm for Optimization. In LNCS; Springer: Berlin/Heidelberg, Germany, 2010; pp. 355–364. [Google Scholar] [CrossRef]
  11. Tan, Y. Fireworks Algorithm; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  12. Yang, X.S. Flower Pollination Algorithm for Global Optimization. In Unconventional Computation and Natural Computation; Springer: Berlin/Heidelberg, Germany, 2012; pp. 240–249. [Google Scholar] [CrossRef] [Green Version]
  13. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  14. Łukasik, S.; Żak, S. Firefly Algorithm for Continuous Constrained Optimization Tasks. In Computational Collective Intelligence. Semantic Web, Social Networks and Multiagent Systems; Springer: Berlin/Heidelberg, Germany, 2009; pp. 97–106. [Google Scholar] [CrossRef]
  15. Yang, X.S. Firefly Algorithms for Multimodal Optimization. In Stochastic Algorithms: Foundations and Applications; Springer: Berlin/Heidelberg, Germany, 2009; pp. 169–178. [Google Scholar] [CrossRef] [Green Version]
  16. Yang, X.S.; Deb, S. Cuckoo Search via Lévy flights. In Proceedings of the 2009 World Congress on Nature Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar] [CrossRef]
  17. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report—TR06; Erciyes University: Kayseri, Turkey, 2005. [Google Scholar]
  18. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  19. Meng, X.; Liu, Y.; Gao, X.; Zhang, H. A New Bio-inspired Algorithm: Chicken Swarm Optimization. In LNCS; Springer: Berlin/Heidelberg, Germany, 2014; pp. 86–94. [Google Scholar] [CrossRef]
  20. Raghul, S.; Jeyakumar, G. Parallel and Distributed Computing Approaches for Evolutionary Algorithms—A Review. In Advances in Intelligent Systems and Computing; Springer: Singapore, 2021; pp. 433–445. [Google Scholar] [CrossRef]
  21. Buck, I. GPU computing with NVIDIA CUDA. In ACM SIGGRAPH 2007 Courses on—SIGGRAPH’07; ACM Press: New York, NY, USA, 2007. [Google Scholar] [CrossRef]
  22. Essaid, M.; Idoumghar, L.; Lepagnot, J.; Brévilliers, M. GPU parallelization strategies for metaheuristics: A survey. Int. J. Parallel Emergent Distrib. Syst. 2018, 34, 497–522. [Google Scholar] [CrossRef] [Green Version]
  23. Zarrabi, A.; Samsudin, K.; Karuppiah, E.K. Gravitational search algorithm using CUDA: A case study in high-performance metaheuristics. J. Supercomput. 2014, 71, 1277–1296. [Google Scholar] [CrossRef]
  24. Wang, H.; Rahnamayan, S.; Wu, Z. Parallel differential evolution with self-adapting control parameters and generalized opposition-based learning for solving high-dimensional optimization problems. J. Parallel Distrib. Comput. 2013, 73, 62–73. [Google Scholar] [CrossRef]
  25. Qin, A.K.; Raimondo, F.; Forbes, F.; Ong, Y.S. An improved CUDA-based implementation of differential evolution on GPU. In Proceedings of the GECCO’12: 14th International Conference on Genetic and Evolutionary Computation, Philadelphia, PA, USA, 7–11 July 2012; ACM Press: New York, NY, USA, 2012. [Google Scholar] [CrossRef] [Green Version]
  26. Krömer, P.; Platoš, J.; Snášel, V. A brief survey of differential evolution on Graphic Processing Units. In Proceedings of the 2013 IEEE Symposium on Differential Evolution (SDE), Singapore, 16–19 April 2013; pp. 157–164. [Google Scholar] [CrossRef]
  27. de P. Veronese, L.; Krohling, R.A. Differential evolution algorithm on the GPU with C-CUDA. In Proceedings of the IEEE Congress on Evolutionary Computation, Barcelona, Spain, 18–23 July 2010; IEEE: Piscataway, NJ, USA, 2010. [Google Scholar] [CrossRef]
  28. Zhu, W. Massively parallel differential evolution—Pattern search optimization with graphics hardware acceleration: An investigation on bound constrained optimization problems. J. Glob. Optim. 2010, 50, 417–437. [Google Scholar] [CrossRef]
  29. Krömer, P.; Snášel, V.; Platoš, J.; Abraham, A. Many-threaded implementation of differential evolution for the CUDA platform. In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation—GECCO’11, Dublin, Ireland, 12–16 July 2011; ACM Press: New York, NY, USA, 2011. [Google Scholar] [CrossRef]
  30. Ugolotti, R.; Nashed, Y.S.; Mesejo, P.; Ivekovič, Š.; Mussi, L.; Cagnoni, S. Particle Swarm Optimization and Differential Evolution for model-based object detection. Appl. Soft Comput. 2013, 13, 3092–3105. [Google Scholar] [CrossRef] [Green Version]
  31. Fabris, F.; Krohling, R.A. A co-evolutionary differential evolution algorithm for solving min–max optimization problems implemented on GPU using C-CUDA. Expert Syst. Appl. 2012, 39, 10324–10333. [Google Scholar] [CrossRef]
  32. Zhou, X.; Wu, Z.; Wang, H. Elite Opposition-Based Differential Evolution for Solving Large-Scale Optimization Problems and Its Implementation on GPU. In Proceedings of the 2012 13th International Conference on Parallel and Distributed Computing, Applications and Technologies, Beijing, China, 14–16 December 2012; IEEE: Piscataway, NJ, USA, 2012. [Google Scholar] [CrossRef]
  33. Zibin, P. Performance Analysis and Improvement of Parallel Differential Evolution. arXiv 2021, arXiv:2101.06599. [Google Scholar]
  34. Bonabeau, E. Social Insect Colonies as Complex Adaptive Systems. Ecosystems 1998, 1, 437–443. [Google Scholar] [CrossRef]
  35. Feltell, D.; Bai, L.; Jensen, H.J. An individual approach to modelling emergent structure in termite swarm systems. Int. J. Model. Identif. Control 2008, 3, 29. [Google Scholar] [CrossRef]
  36. Korošec, P.; Tashkova, K.; Šilc, J. The differential Ant-Stigmergy Algorithm for large-scale global optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, Barcelona, Spain, 18–23 July 2010; pp. 1–8. [Google Scholar] [CrossRef]
  37. Oberst, S.; Lai, J.C.; Martin, R.; Halkon, B.J.; Saadatfar, M.; Evans, T.A. Revisiting stigmergy in light of multi-functional, biogenic, termite structures as communication channel. Comput. Struct. Biotechnol. J. 2020, 18, 2522–2534. [Google Scholar] [CrossRef]
  38. Cimino, M.G.; Minici, D.; Monaco, M.; Petrocchi, S.; Vaglini, G. A hyper-heuristic methodology for coordinating swarms of robots in target search. Comput. Electr. Eng. 2021, 95, 107420. [Google Scholar] [CrossRef]
  39. Amorim, K.S.; Pavani, G.S. Ant Colony Optimization-based distributed multilayer routing and restoration in IP/MPLS over optical networks. Comput. Netw. 2021, 185, 107747. [Google Scholar] [CrossRef]
  40. Upeksha, R.G.C.; Pemarathne, W.P.J. Ant Colony Optimization Algorithms for Routing in Wireless Sensor Networks: A Review. In Lecture Notes in Electrical Engineering; Springer: Singapore, 2022; pp. 47–57. [Google Scholar] [CrossRef]
  41. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B Cybern. 1996, 26, 29–41. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Karimi, A.; Nobahari, H.; Siarry, P. Continuous ant colony system and tabu search algorithms hybridized for global minimization of continuous multi-minima functions. Comput. Optim. Appl. 2008, 45, 639–661. [Google Scholar] [CrossRef]
  43. Brest, J.; Korošec, P.; Šilc, J.; Zamuda, A.; Bošković, B.; Maučec, M.S. Differential evolution and differential ant-stigmergy on dynamic optimisation problems. Int. J. Syst. Sci. 2013, 44, 663–679. [Google Scholar] [CrossRef]
  44. Bilchev, G.; Parmee, I.C. The ant colony metaphor for searching continuous design spaces. In Evolutionary Computing; Springer: Berlin/Heidelberg, Germany, 1995; pp. 25–39. [Google Scholar] [CrossRef]
  45. Dréo, J.; Siarry, P. A New Ant Colony Algorithm Using the Heterarchical Concept Aimed at Optimization of Multiminima Continuous Functions. In Ant Algorithms; Springer: Berlin/Heidelberg, Germany, 2002; pp. 216–221. [Google Scholar] [CrossRef] [Green Version]
  46. Korošec, P.; Šilc, J. A Stigmergy-Based Algorithm for Continuous Optimization Tested on Real-Life-Like Environment. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; pp. 675–684. [Google Scholar] [CrossRef]
  47. Meghdadi, A.; Akbarzadeh-T, M. A stigmergic approach to teaching-learning-based optimization for continuous domains. Swarm Evol. Comput. 2021, 62, 100826. [Google Scholar] [CrossRef]
  48. Jebaraj, L.; Venkatesan, C.; Soubache, I.; Rajan, C.C.A. Application of differential evolution algorithm in static and dynamic economic or emission dispatch problem: A review. Renew. Sustain. Energy Rev. 2017, 77, 1206–1220. [Google Scholar] [CrossRef]
  49. Mashwani, W.K. Enhanced versions of differential evolution: State-of-the-art survey. Int. J. Comput. Sci. Math. 2014, 5, 107. [Google Scholar] [CrossRef]
  50. Wang, Y.; Cai, Z. Combining Multiobjective Optimization With Differential Evolution to Solve Constrained Optimization Problems. IEEE Trans. Evol. Comput. 2012, 16, 117–134. [Google Scholar] [CrossRef]
  51. Heylighen, F. Stigmergy as a universal coordination mechanism I: Definition and components. Cogn. Syst. Res. 2016, 38, 4–13. [Google Scholar] [CrossRef]
  52. Heylighen, F. Stigmergy as a universal coordination mechanism II: Varieties and evolution. Cogn. Syst. Res. 2016, 38, 50–59. [Google Scholar] [CrossRef]
  53. Nasim, A.; Burattini, L.; Fateh, M.F.; Zameer, A. Solution of Linear and Non-Linear Boundary Value Problems Using Population-Distributed Parallel Differential Evolution. J. Artif. Intell. Soft Comput. Res. 2019, 9, 205–218. [Google Scholar] [CrossRef] [Green Version]
  54. Solis-Muñoz, F.J.; Osornio-Rios, R.A.; Romero-Troncoso, R.J.; Jaen-Cuellar, A.Y. Differential Evolution Implementation for Power Quality Disturbances Monitoring using OpenCL. Adv. Electr. Comput. Eng. 2019, 19, 13–22. [Google Scholar] [CrossRef]
  55. Zuo, L.; Liu, B.; Wen, Z.; Sun, H.; Di, R.; Wu, P. Research of Dynamic Economic Emission Dispatch Based on Parallel Molecular Differential Evolution Algorithm. IOP Conf. Ser. Earth Environ. Sci. 2018, 170, 032003. [Google Scholar] [CrossRef]
  56. Laguna-Sánchez, G.A.; Olguín-Carbajal, M.; Cruz-Cortés, N.; Barrón-Fernández, R.; Martínez, R.C. A differential evolution algorithm parallel implementation in a GPU. J. Theor. Appl. Inf. Technol. 2016, 86, 184–195. [Google Scholar]
  57. Davarynejad, M.; van den Berg, J.; Rezaei, J. Evaluating center-seeking and initialization bias: The case of particle swarm and gravitational search algorithms. Inf. Sci. 2014, 278, 802–821. [Google Scholar] [CrossRef]
  58. Hollander, M.; Wolfe, D.A.; Chicken, E. Nonparametric Statistical Methods; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  59. Ahmad, M.F.; Isa, N.A.M.; Lim, W.H.; Ang, K.M. Differential evolution: A recent review based on state-of-the-art works. Alex. Eng. J. 2022, 61, 3831–3872. [Google Scholar] [CrossRef]
  60. Wisittipanich, W.; Kachitvichyanukul, V. Differential Evolution Algorithm for Job Shop Scheduling Problem. Ind. Eng. Manag. Syst. 2011, 10, 203–208. [Google Scholar] [CrossRef] [Green Version]
  61. Santucci, V.; Baioletti, M.; Milani, A. Algebraic Differential Evolution Algorithm for the Permutation Flowshop Scheduling Problem With Total Flowtime Criterion. IEEE Trans. Evol. Comput. 2016, 20, 682–694. [Google Scholar] [CrossRef]
  62. Sarkar, S.; Das, S. Multilevel Image Thresholding Based on 2D Histogram and Maximum Tsallis Entropy—A Differential Evolution Approach. IEEE Trans. Image Process. 2013, 22, 4788–4797. [Google Scholar] [CrossRef]
  63. Osuna-Enciso, V.; Cuevas, E.; Sossa, H. A comparison of nature inspired algorithms for multi-threshold image segmentation. Expert Syst. Appl. 2013, 40, 1213–1219. [Google Scholar] [CrossRef] [Green Version]
  64. Jain, S.; Sharma, V.K.; Kumar, S. Robot Path Planning Using Differential Evolution. In Advances in Computing and Intelligent Systems; Springer: Singapore, 2020; pp. 531–537. [Google Scholar] [CrossRef]
  65. Qin, N.; Chen, J. An area coverage algorithm for wireless sensor networks based on differential evolution. Int. J. Distrib. Sens. Netw. 2018, 14, 1–11. [Google Scholar] [CrossRef] [Green Version]
  66. Chen, S.; Montgomery, J.; Bolufé-Röhler, A. Measuring the curse of dimensionality and its effects on particle swarm optimization and differential evolution. Appl. Intell. 2014, 42, 514–526. [Google Scholar] [CrossRef]
Figure 1. A simple graphic of stigmergy as a closed loop.
Figure 2. Termites sharing a working area.
Figure 3. Comparison between the number of SDE and DE iterations.
Figure 4. Percentage difference in execution times of DE and SDE for test functions with D = 500 and N p = 512 .
Table 1. GPU differential evolution implementation proposals.
| Reference/Year | Version | Modified Operators | Operators Not Modified | Type and Number of Functions | Maximum Dimensionality |
| --- | --- | --- | --- | --- | --- |
| [27]/2010 | DE/rand/1/bin | None | Mutation, Crossover, Selection | Benchmark functions, 6 | 100 |
| [28]/2010 | DE/rand/1/bin | None | Mutation, Crossover, Selection | Benchmark functions, 12 | 90 |
| [29]/2011 | DE/rand/1/bin | None | Mutation, Crossover, Selection | Task scheduling benchmark, 12 | 512 |
| [25]/2012 | DE/rand/1/bin | None | Mutation, Crossover, Selection | Benchmark functions, 4 | 100 |
| [30]/2013 | DE/rand/1/bin | None | Mutation, Crossover, Selection | Real-world object detection, 2 | 32 and 9, pertaining to each parametric model |
| [31]/2012 | CDE/rand/1/bin (Coevolution DE) | None | Mutation, Crossover, Selection | Functions with restrictions, 6 | 10 |
| [32]/2012 | EOBDE/rand/1/bin (Elite opposition-based learning DE) | None | Mutation, Crossover, Selection | Benchmark functions, 10 | 1000 |
| [24]/2013 | GOjDE/rand/1/exp (Generalized opposition jDE) | None | Mutation, Crossover, Selection | Benchmark functions, 19 | 1000 |
| [33]/2021 | DE/*/*/exp | Crossover | Mutation, Selection | Benchmark functions, 3 | 1000 |
Table 2. Population and fitness arrangement in SDE.
| Agent | Trace |
| --- | --- |
| $x_1$ | $f(x_1)$ |
| ⋮ | ⋮ |
| $x_{i-1}$ | $f(x_{i-1})$ |
| $x_i$ | $f(x_i)$ |
| $x_{i+1}$ | $f(x_{i+1})$ |
| ⋮ | ⋮ |
| $x_{N_p}$ | $f(x_{N_p})$ |
Table 3. D > 2 objective functions. The optimum value is denoted as f * and x * is the candidate solution.
| Function | $f(\mathbf{x})$ | Limits $x_i$ | $f^*$; $\mathbf{x}^*$ |
| --- | --- | --- | --- |
| F1: Sphere | $\sum_{i=1}^{d} x_i^2$ | [−5.12, 5.12] | 0; (0, …, 0) |
| F2: Step | $\sum_{i=1}^{d} \lfloor x_i + 0.5 \rfloor^2$ | [−100, 100] | 0; $x_i \in [-0.5, 0.5)$ |
| F3: Sum Powers | $\sum_{i=1}^{d} \lvert x_i \rvert^{\,i+1}$ | [−1, 1] | 0; (0, …, 0) |
| F4: Sum Squares | $\sum_{i=1}^{d} i x_i^2$ | [−10, 10] | 0; (0, …, 0) |
| F5: Dixon and Price | $(x_1 - 1)^2 + \sum_{i=2}^{d} i \left(2 x_i^2 - x_{i-1}\right)^2$ | [−10, 10] | 0; $x_i = 2^{-\frac{2^i - 2}{2^i}}$ |
| F6: Rosenbrock | $\sum_{i=1}^{d-1} \left[100 \left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2\right]$ | [−30, 30] | 0; (1, …, 1) |
| F7: Schwefel 2.22 | $\sum_{i=1}^{d} \lvert x_i \rvert + \prod_{i=1}^{d} \lvert x_i \rvert$ | [−10, 10] | 0; (0, …, 0) |
| F8: Trid | $\sum_{i=1}^{d} (x_i - 1)^2 - \sum_{i=2}^{d} x_i x_{i-1}$ | $[-d^2, d^2]$ | $-d(d+4)(d-1)/6$; $x_i = i(d+1-i)$ |
| F9: Zakharov | $\sum_{i=1}^{d} x_i^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^2 + \left(\sum_{i=1}^{d} 0.5 i x_i\right)^4$ | [−5, 10] | 0; (0, …, 0) |
| F10: Ackley | $-a \exp\left(-b \sqrt{\tfrac{1}{d} \sum_{i=1}^{d} x_i^2}\right) - \exp\left(\tfrac{1}{d} \sum_{i=1}^{d} \cos(c x_i)\right) + a + \exp(1)$, where $a = 20$, $b = 0.2$, $c = 2\pi$ | [−32.768, 32.768] | 0; (0, …, 0) |
| F11: Rastrigin | $10 d + \sum_{i=1}^{d} \left[x_i^2 - 10 \cos(2 \pi x_i)\right]$ | [−5.12, 5.12] | 0; (0, …, 0) |
| F12: Schwefel | $418.9829 d - \sum_{i=1}^{d} x_i \sin\left(\sqrt{\lvert x_i \rvert}\right)$ | [−500, 500] | 0; (420.9687, …, 420.9687) |
| F13: Griewank | $\sum_{i=1}^{d} \frac{x_i^2}{4000} - \prod_{i=1}^{d} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | [−600, 600] | 0; (0, …, 0) |
| F14: Levy | $\sin^2(\pi w_1) + \sum_{i=1}^{d-1} (w_i - 1)^2 \left[1 + 10 \sin^2(\pi w_i + 1)\right] + (w_d - 1)^2 \left[1 + \sin^2(2 \pi w_d)\right]$, where $w_i = 1 + \frac{x_i - 1}{4}$ | [−10, 10] | 0; (1, …, 1) |
Table 4. D = 2 objective functions. The optimum value is denoted as f * and x * is the candidate solution.
| Function | $f(\mathbf{x})$ | Limits $x_i$ | $f^*$; $\mathbf{x}^*$ |
| --- | --- | --- | --- |
| F15: Treccani | $x_1^4 + 4 x_1^3 + 4 x_1^2 + x_2^2$ | [−5, 5] | 0; (−2, 0), (0, 0) |
| F16: Beale | $(1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$ | [−4.5, 4.5] | 0; (3, 0.5) |
| F17: Booth | $(x_1 + 2 x_2 - 7)^2 + (2 x_1 + x_2 - 5)^2$ | [−10, 10] | 0; (1, 3) |
| F18: Brent | $(x_1 + 10)^2 + (x_2 + 10)^2 + \exp(-x_1^2 - x_2^2)$ | [−10, 10] | 0; (−10, −10) |
| F19: Cube | $100 (x_2 - x_1^3)^2 + (1 - x_1)^2$ | [−10, 10] | 0; (1, 1) |
| F20: Davis | $(x_1^2 + x_2^2)^{0.25} \left[\sin^2\left(50 (3 x_1^2 + x_2^2)^{0.1}\right) + 1\right]$ | [−100, 100] | 0; (0, 0) |
| F21: Matyas | $0.26 (x_1^2 + x_2^2) - 0.48 x_1 x_2$ | [−10, 10] | 0; (0, 0) |
| F22: Schaffer | $0.5 + \frac{\sin^2\left(\sqrt{x_1^2 + x_2^2}\right) - 0.5}{\left[1 + 0.001 (x_1^2 + x_2^2)\right]^2}$ | [−100, 100] | 0; (0, 0) |
| F23: Bohachevsky1 | $x_1^2 + 2 x_2^2 - 0.3 \cos(3 \pi x_1) - 0.4 \cos(4 \pi x_2) + 0.7$ | [−100, 100] | 0; (0, 0) |
| F24: Egg Crate | $x_1^2 + x_2^2 + 25 \left(\sin^2 x_1 + \sin^2 x_2\right)$ | [−5, 5] | 0; (0, 0) |
| F25: Bohachevsky2 | $x_1^2 + 2 x_2^2 - 0.3 \cos(3 \pi x_1) \cos(4 \pi x_2) + 0.3$ | [−100, 100] | 0; (0, 0) |
| F26: Bohachevsky3 | $x_1^2 + 2 x_2^2 - 0.3 \cos(3 \pi x_1 + 4 \pi x_2) + 0.3$ | [−100, 100] | 0; (0, 0) |
| F27: Three-Hump Camel | $2 x_1^2 - 1.05 x_1^4 + \frac{x_1^6}{6} + x_1 x_2 + x_2^2$ | [−5, 5] | 0; (0, 0) |
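Three of the Table 3 functions can be written directly as a quick sanity check of their definitions. This is an illustrative sketch only (using the conventional Ackley constant $c = 2\pi$), not the paper's CUDA-C implementation:

```python
import math

def sphere(x):
    """F1: sum of squares; global minimum 0 at the origin."""
    return sum(v * v for v in x)

def rastrigin(x):
    """F11: highly multimodal; global minimum 0 at the origin."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    """F10: nearly flat outer region, deep central basin; minimum 0 at the origin."""
    d = len(x)
    s1 = sum(v * v for v in x) / d
    s2 = sum(math.cos(c * v) for v in x) / d
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.exp(1)
```

Evaluating each function at its reported optimum $\mathbf{x}^* = (0, \ldots, 0)$ returns the expected $f^* = 0$ (up to floating-point rounding).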
Table 5. Mean and averaged execution times in seconds of DE and SDE (D = 100).
| Function | DE Time (s), N p = 256 | SDE Time (s), N p = 256 | Speedup | DE Time (s), N p = 512 | SDE Time (s), N p = 512 | Speedup |
| --- | --- | --- | --- | --- | --- | --- |
| F1: Sphere | 9.2106 (0.0782) | 8.7910 (0.0708) | 1.05 | 10.3554 (0.0606) | 9.2490 (0.0638) | 1.12 |
| F2: Step | 11.4135 (0.0608) | 10.6315 (0.0899) | 1.07 | 12.6158 (0.0241) | 11.3259 (0.0703) | 1.11 |
| F3: Sum Powers | 9.2367 (0.0949) | 8.8988 (0.0745) | 1.04 | 10.0024 (0.0833) | 9.4303 (0.0487) | 1.06 |
| F4: Sum Squares | 9.2517 (0.1358) | 8.7713 (0.0808) | 1.05 | 10.0787 (0.0611) | 9.3388 (0.0978) | 1.08 |
| F5: Dixon and Price | 14.4088 (0.1586) | 13.9450 (0.0560) | 1.03 | 15.7625 (0.1041) | 14.8474 (0.0390) | 1.06 |
| F6: Rosenbrock | 19.2443 (0.0919) | 18.8281 (0.0662) | 1.02 | 21.1980 (0.1622) | 19.9684 (0.0738) | 1.06 |
| F7: Schwefel 2.22 | 6.1812 (0.0072) | 5.7724 (0.0417) | 1.07 | 6.3052 (0.0066) | 6.0008 (0.0449) | 1.05 |
| F8: Trid | 9.6917 (0.1798) | 8.8270 (0.0528) | 1.10 | 10.3362 (0.0354) | 9.4783 (0.0824) | 1.09 |
| F9: Zakharov | 9.6774 (0.1288) | 9.1474 (0.0544) | 1.06 | 10.4172 (0.0930) | 9.7241 (0.0509) | 1.07 |
| F10: Ackley | 12.0315 (0.0585) | 11.3902 (0.0685) | 1.06 | 13.4087 (0.1139) | 12.1660 (0.0722) | 1.10 |
| F11: Rastrigin | 11.8263 (0.0988) | 11.3737 (0.1139) | 1.04 | 12.7528 (0.0724) | 12.1194 (0.1042) | 1.05 |
| F12: Schwefel | 8.1851 (0.0569) | 7.9648 (0.0890) | 1.03 | 8.7612 (0.0524) | 8.2109 (0.0296) | 1.06 |
| F13: Griewank | 13.7655 (0.0846) | 14.2824 (0.1054) | 0.96 | 15.3910 (0.0995) | 15.6216 (0.0842) | 0.98 |
| F14: Levy | 17.9642 (0.0813) | 16.7896 (0.1229) | 1.07 | 20.6549 (0.1170) | 18.4108 (0.0842) | 1.12 |
| F15: Treccani | 4.6935 (0.1371) | 4.2295 (0.0598) | 1.11 | 4.9562 (0.1360) | 4.3714 (0.0289) | 1.13 |
| F16: Beale | 4.8087 (0.0155) | 4.6405 (0.0527) | 1.04 | 5.0649 (0.0409) | 4.8098 (0.0421) | 1.05 |
| F17: Booth | 4.4926 (0.0541) | 4.4926 (0.0541) | 1.00 | 4.7859 (0.0215) | 4.6785 (0.0529) | 1.02 |
| F18: Brent | 4.5868 (0.0186) | 4.7189 (0.0327) | 0.97 | 4.9435 (0.0130) | 4.9202 (0.0297) | 1.01 |
| F19: Cube | 4.6144 (0.0115) | 4.1862 (0.0413) | 1.10 | 4.9616 (0.0120) | 4.3657 (0.0313) | 1.14 |
| F20: Davis | 5.0567 (0.0565) | 4.4333 (0.0415) | 1.14 | 5.4204 (0.0099) | 4.6649 (0.0628) | 1.16 |
| F21: Matyas | 5.0291 (0.0124) | 4.0977 (0.0641) | 1.23 | 5.2439 (0.0023) | 4.2874 (0.0416) | 1.22 |
| F22: Schaffer | 4.8074 (0.0324) | 4.4074 (0.0373) | 1.09 | 5.0514 (0.0506) | 4.6265 (0.0282) | 1.09 |
| F23: Bohachevsky1 | 4.5953 (0.0241) | 4.2020 (0.0644) | 1.09 | 4.8132 (0.0287) | 4.3676 (0.0325) | 1.10 |
| F24: Egg Crate | 4.8835 (0.0081) | 4.2803 (0.0422) | 1.14 | 5.2487 (0.0579) | 4.4989 (0.0353) | 1.17 |
| F25: Bohachevsky2 | 4.5948 (0.0202) | 4.1841 (0.0447) | 1.10 | 4.8038 (0.0175) | 4.3554 (0.0375) | 1.10 |
| F26: Bohachevsky3 | 4.6091 (0.0156) | 4.1864 (0.0448) | 1.10 | 4.8021 (0.0221) | 4.3475 (0.0505) | 1.10 |
| F27: Three-Hump Camel | 4.8569 (0.0200) | 4.1695 (0.0477) | 1.16 | 5.2030 (0.0237) | 4.3531 (0.0377) | 1.20 |
Table 6. Mean and averaged execution times in seconds of DE and SDE (D = 500).
| Function | DE Time (s), N p = 256 | SDE Time (s), N p = 256 | Speedup | DE Time (s), N p = 512 | SDE Time (s), N p = 512 | Speedup |
| --- | --- | --- | --- | --- | --- | --- |
| F1: Sphere | 45.1006 (0.1674) | 43.5942 (0.1625) | 1.03 | 47.8527 (0.0200) | 46.3508 (0.0062) | 1.03 |
| F2: Step | 55.7603 (0.0492) | 55.1497 (0.1864) | 1.01 | 59.7023 (0.0347) | 57.5648 (0.2401) | 1.04 |
| F3: Sum Powers | 47.3859 (0.1270) | 44.0992 (0.2072) | 1.07 | 51.0338 (0.0389) | 47.4896 (0.0307) | 1.07 |
| F4: Sum Squares | 45.0150 (0.1787) | 43.3325 (0.1482) | 1.04 | 48.2126 (0.0252) | 46.6274 (0.0068) | 1.03 |
| F5: Dixon and Price | 70.2937 (0.2209) | 70.2279 (0.1731) | 1.00 | 74.9822 (0.0298) | 71.0612 (0.0428) | 1.06 |
| F6: Rosenbrock | 95.6219 (0.2426) | 94.9924 (0.1544) | 1.01 | 101.3595 (0.036) | 95.4034 (0.0300) | 1.06 |
| F7: Schwefel 2.22 | 31.7049 (0.0719) | 30.0035 (0.1929) | 1.06 | 33.6836 (0.2565) | 31.4718 (0.1376) | 1.07 |
| F8: Trid | 47.2107 (0.1703) | 43.6624 (0.1782) | 1.08 | 50.2172 (0.0326) | 48.7325 (0.0232) | 1.03 |
| F9: Zakharov | 46.9736 (0.1404) | 45.1113 (0.1838) | 1.04 | 50.1629 (0.0517) | 47.0111 (0.0762) | 1.07 |
| F10: Ackley | 59.2495 (0.1712) | 56.5934 (0.1743) | 1.05 | 62.5976 (0.1638) | 60.1282 (0.0707) | 1.04 |
| F11: Rastrigin | 58.9961 (0.0955) | 56.4700 (0.1942) | 1.04 | 62.2493 (0.0957) | 59.5437 (0.3830) | 1.05 |
| F12: Schwefel | 39.6069 (0.1385) | 39.0363 (0.1085) | 1.01 | 41.7337 (0.0551) | 40.8717 (0.0721) | 1.02 |
| F13: Griewank | 67.4986 (0.1488) | 71.4827 (0.1742) | 0.94 | 71.7673 (0.0992) | 75.6721 (0.0595) | 0.95 |
| F14: Levy | 88.1310 (0.1206) | 83.0872 (0.1613) | 1.06 | 93.9297 (0.1430) | 89.1134 (0.0781) | 1.05 |
| F15: Treccani | 23.8224 (0.6603) | 21.2687 (0.2196) | 1.12 | 25.3578 (0.6672) | 22.4922 (0.2129) | 1.13 |
| F16: Beale | 23.5141 (0.0369) | 23.1105 (0.3537) | 1.02 | 25.1768 (0.0694) | 24.5135 (0.2085) | 1.03 |
| F17: Booth | 23.2668 (0.0490) | 23.0368 (0.2329) | 1.01 | 24.8573 (0.0480) | 24.4406 (0.1868) | 1.02 |
| F18: Brent | 23.1873 (0.0809) | 23.8490 (0.1661) | 0.97 | 24.7574 (0.0653) | 25.2045 (0.1205) | 0.98 |
| F19: Cube | 23.6192 (0.0523) | 21.1600 (0.2700) | 1.12 | 25.1667 (0.0651) | 22.5187 (0.2063) | 1.12 |
| F20: Davis | 24.6654 (0.0647) | 21.3285 (0.2838) | 1.16 | 26.2479 (0.0403) | 22.7148 (0.1962) | 1.16 |
| F21: Matyas | 25.3824 (0.0144) | 21.1435 (0.1936) | 1.20 | 27.0852 (0.0140) | 22.3725 (0.1781) | 1.21 |
| F22: Schaffer | 23.3038 (0.0899) | 21.3305 (0.1633) | 1.09 | 24.9515 (0.0555) | 22.7985 (0.2195) | 1.09 |
| F23: Bohachevsky1 | 23.0564 (0.0598) | 21.3298 (0.2496) | 1.08 | 24.6339 (0.0576) | 22.3473 (0.1723) | 1.10 |
| F24: Egg Crate | 24.4434 (0.0727) | 21.1914 (0.1658) | 1.15 | 26.0469 (0.0843) | 22.5681 (0.2205) | 1.15 |
| F25: Bohachevsky2 | 23.0607 (0.1270) | 21.1913 (0.2459) | 1.09 | 24.7679 (0.0580) | 22.3699 (0.1505) | 1.11 |
| F26: Bohachevsky3 | 23.1568 (0.0570) | 21.1915 (0.2585) | 1.09 | 24.8482 (0.0853) | 22.3834 (0.2782) | 1.11 |
| F27: Three-Hump Camel | 24.6552 (0.1462) | 21.1373 (0.2239) | 1.17 | 26.3881 (0.2053) | 22.2933 (0.2343) | 1.18 |
Table 7. Mean and averaged final fitness and iterations of DE and SDE ( N p = 256 ; D = 100 ).
| Function | SDE Final Fitness μ (σ) | DE Final Fitness μ (σ) | SDE Final Iterations μ (σ) | DE Final Iterations μ (σ) |
| --- | --- | --- | --- | --- |
| F1: Sphere | 0.0096 ($3.5238 \times 10^{-4}$) | 0.0096 ($3.2083 \times 10^{-4}$) | 857.26 (21.30) | 3832.63 (106.76) |
| F2: Step | 0.00 (0) | 0.00 (0) | 829.8 (28.91) | 4612.2 (143.3675) |
| F3: Sum Powers | 0.0074 (0.0022) | 0.0073 (0.0021) | 53.16 (9.05) | 29.86 (5.96) |
| F4: Sum Squares | 0.0097 ($3.0754 \times 10^{-4}$) | 0.0095 ($4.5164 \times 10^{-4}$) | 1266.10 (18.23) | 5562.43 (122.29) |
| F5: Dixon and Price | 39.2690 (17.583) | 0.6667 ($8.0515 \times 10^{-7}$) | 20,000.00 (0) | 20,000.00 (0) |
| F6: Rosenbrock | 366.7301 (130.1247) | 90.1404 (0.1371) | 20,000.00 (0) | 20,000.00 (0) |
| F7: Schwefel 2.22 | 0.0098 ($1.2780 \times 10^{-4}$) | 0.0098 ($2.2376 \times 10^{-4}$) | 1227.1 (19.36) | 6314.47 (136.45) |
| F8: Trid | $7.889 \times 10^{6}$ ($9.809 \times 10^{6}$) | $4.821 \times 10^{5}$ ($1.97 \times 10^{5}$) | 20,000.00 (0) | 20,000.00 (0) |
| F9: Zakharov | 0.2932 (0.13712) | 262.7573 (19.5592) | 20,000.00 (0) | 20,000.00 (0) |
| F10: Ackley | 0.0098 ($9.9644 \times 10^{-5}$) | 0.0097 ($1.7791 \times 10^{-4}$) | 1367.93 (24.07) | 6499.70 (131.53) |
| F11: Rastrigin | 4.3725 (2.6246) | 592.3126 (17.7795) | 18,841.96 (4407.1) | 20,000.00 (0) |
| F12: Schwefel | 55.5158 (74.2989) | 3723.5101 (5356.9) | 10,405.80 (9128.0) | 20,000.00 (0) |
| F13: Griewank | 0.0097 ($3.3617 \times 10^{-4}$) | 0.0095 ($3.5729 \times 10^{-4}$) | 1307.30 (19.59) | 5935.20 (107.78) |
| F14: Levy | 0.0097 ($2.7136 \times 10^{-4}$) | 0.0095 ($4.6494 \times 10^{-4}$) | 988.06 (18.48) | 6339.90 (175.04) |
| F15: Treccani | 0.0062 (0.0032) | 0.0048 (0.0032) | 32.53 (11.66) | 6.53 (3.53) |
| F16: Beale | 0.0058 (0.0034) | 0.0058 (0.0025) | 92.83 (49.57) | 7.73 (4.69) |
| F17: Booth | 0.0071 (0.0032) | 0.0044 (0.0032) | 89.83 (40.78) | 21.4667 (7.64) |
| F18: Brent | 0.0076 (0.0021) | 0.0048 (0.0029) | 69.4 (22.60) | 15.7 (4.66) |
| F19: Cube | 0.0062 (0.0031) | 0.0054 (0.0024) | 243.23 (179.04) | 37 (12.16) |
| F20: Davis | 0.0709 (0.0703) | 0.0064 (0.0018) | 453.33 (59.00) | 153.03 (10.76) |
| F21: Matyas | 0.0065 (0.0032) | 0.0050 (0.0034) | 47.46 (24.72) | 3.16 (2.61) |
| F22: Schaffer | 0.0046 (0.0031) | 0.0048 (0.0029) | 193.36 (97.32) | 19.07 (10.65) |
| F23: Bohachevsky1 | 0.0067 (0.0029) | 0.0049 (0.0025) | 84.83 (40.78) | 36.23 (5.48) |
| F24: Egg Crate | 0.0061 (0.0029) | 0.0039 (0.0029) | 43.86 (11.56) | 15.8 (5.67) |
| F25: Bohachevsky2 | 0.0060 (0.0033) | 0.0058 (0.0026) | 126.66 (45.06) | 37.1 (5.87) |
| F26: Bohachevsky3 | 0.0050 (0.0035) | 0.0041 (0.0029) | 122.2 (42.91) | 33.93 (5.64) |
| F27: Three-Hump Camel | 0.0058 (0.0028) | 0.0056 (0.0027) | 39.1 (17.82) | 7.5 (3.42) |
Table 8. Mean and averaged final fitness and iterations of DE and SDE ( N p = 512 ; D = 100 ).
| Function | SDE Final Fitness μ (σ) | DE Final Fitness μ (σ) | SDE Final Iterations μ (σ) | DE Final Iterations μ (σ) |
| --- | --- | --- | --- | --- |
| F1: Sphere | 0.0096 ($3.1580 \times 10^{-4}$) | 0.0095 ($4.3491 \times 10^{-4}$) | 861.00 (20.67) | 4344.16 (114.73) |
| F2: Step | 0.0 (0) | 0.0 (0) | 828.56 (24.44) | 5486.53 (193.96) |
| F3: Sum Powers | 0.0077 (0.0020) | 0.0070 (0.0023) | 53.00 (8.47) | 26.86 (6.14) |
| F4: Sum Squares | 0.0097 ($2.0231 \times 10^{-4}$) | 0.0095 ($4.1157 \times 10^{-4}$) | 1263.26 (17.43) | 6216.26 (113.12) |
| F5: Dixon and Price | 36.3413 (16.9494) | 0.6667 ($1.8257 \times 10^{-7}$) | 20,000.00 (0) | 20,000.00 (0) |
| F6: Rosenbrock | 348.7744 (127.6737) | 90.7098 (0.1733) | 20,000.00 (0) | 20,000.00 (0) |
| F7: Schwefel 2.22 | 0.0098 ($1.3374 \times 10^{-4}$) | 0.0098 ($2.0745 \times 10^{-4}$) | 1227.43 (17.91) | 6962.76 (111.36) |
| F8: Trid | $7.746 \times 10^{6}$ ($6.448 \times 10^{6}$) | $3.087 \times 10^{6}$ ($8.365 \times 10^{5}$) | 20,000.00 (0) | 20,000.00 (0) |
| F9: Zakharov | 0.2058 (0.0850) | 257.4497 (14.3462) | 20,000.00 (0) | 20,000.00 (0) |
| F10: Ackley | 0.0098 ($2.0868 \times 10^{-4}$) | 0.0097 ($2.2258 \times 10^{-4}$) | 1373.53 (21.39) | 7401.03 (159.97) |
| F11: Rastrigin | 0.4355 (0.6197) | 600.0198 (16.0965) | 9035.86 (8486.8) | 20,000.00 (0) |
| F12: Schwefel | 52.4709 (65.0721) | 5852.3797 (5544.0) | 1981.50 (32.62) | 20,000.00 (0) |
| F13: Griewank | 0.0097 ($2.8136 \times 10^{-4}$) | 0.0095 ($3.6761 \times 10^{-4}$) | 1303.60 (22.27) | 6690.20 (152.02) |
| F14: Levy | 0.0097 ($2.1518 \times 10^{-4}$) | 0.0096 ($2.4228 \times 10^{-4}$) | 989.70 (25.23) | 7744.56 (209.23) |
| F15: Treccani | 0.0059 (0.0031) | 0.0049 (0.0028) | 33.5 (14.83) | 3.8 (3.14) |
| F16: Beale | 0.0066 (0.0032) | 0.0048 (0.0031) | 76.16 (36.74) | 4.73 (2.99) |
| F17: Booth | 0.0068 (0.0029) | 0.0051 (0.0031) | 82.66 (34.35) | 15.86 (6.83) |
| F18: Brent | 0.0068 (0.0025) | 0.0044 (0.0026) | 61.4 (21.82) | 13.63 (4.77) |
| F19: Cube | 0.0051 (0.0040) | 0.0048 (0.0028) | 329.83 (205.35) | 27.47 (11.98) |
| F20: Davis | 0.0038 (0.0346) | 0.0065 (0.0017) | 587.36 (99.63) | 149.76 (9.49) |
| F21: Matyas | 0.0059 (0.0035) | 0.0038 (0.0026) | 48.13 (23.34) | 2.1 (1.56) |
| F22: Schaffer | 0.0048 (0.0033) | 0.0048 (0.0027) | 258.2 (140.22) | 16.56 (8.22) |
| F23: Bohachevsky1 | 0.0070 (0.0028) | 0.0055 (0.0031) | 88.13 (24.36) | 33.8 (5.88) |
| F24: Egg Crate | 0.0067 (0.0029) | 0.0053 (0.0028) | 45.67 (15.39) | 13.23 (4.78) |
| F25: Bohachevsky2 | 0.0053 (0.0033) | 0.0049 (0.0029) | 108.23 (61.14) | 36.1 (4.79) |
| F26: Bohachevsky3 | 0.0059 (0.0032) | 0.0039 (0.0026) | 102.3 (34.34) | 30.86 (3.88) |
| F27: Three-Hump Camel | 0.0053 (0.0033) | 0.0070 (0.0023) | 53.00 (8.47) | 26.86 (6.14) |
Table 9. Mean (standard deviation) of final fitness and final iterations for DE and SDE (Np = 256; D = 500).

| Function | SDE Final Fitness μ (σ) | DE Final Fitness μ (σ) | SDE Final Iterations μ (σ) | DE Final Iterations μ (σ) |
|---|---|---|---|---|
| F1: Sphere | **0.0099 (7.1685 × 10⁻⁵)** | 1.5596 (0.1287) | 3915.83 (37.75) | 20,000.00 (0) |
| F2: Step | 0.00 (0) | 648.46 (49.02) | 3796.33 (208.28) | 20,000.00 (0) |
| F3: Sum Powers | 0.0073 (0.0018) | 0.0070 (0.0022) | 68.66 (14.84) | 65.60 (15.11) |
| F4: Sum Squares | **0.0099 (6.1043 × 10⁻⁵)** | 1114.1061 (97.5739) | 6257.00 (48.87) | 20,000.00 (0) |
| F5: Dixon and Price | 1116.7 (411.7089) | 1.1704 × 10⁴ (2084.0) | 20,000.00 (0) | 20,000.00 (0) |
| F6: Rosenbrock | 2015.3 (675.7089) | 3656.1 (321.2327) | 20,000.00 (0) | 20,000.00 (0) |
| F7: Schwefel 2.22 | **0.0099 (5.7235 × 10⁻⁵)** | 39.2896 (2.1531) | 5416.53 (37.09) | 20,000.00 (0) |
| F8: Trid | **1.4111 × 10¹⁰ (1.662 × 10¹⁰)** | 3.6207 × 10¹⁰ (4.049 × 10⁹) | 20,000.00 (0) | 20,000.00 (0) |
| F9: Zakharov | 3137.0 (397.5563) | 4175.8 (112.141) | 20,000.00 (0) | 20,000.00 (0) |
| F10: Ackley | **0.0099 (4.3961 × 10⁻⁵)** | 3.1094 (0.0798) | 5680.50 (117.0) | 20,000.00 (0) |
| F11: Rastrigin | 1568.87 (252.57) | 4620.0471 (101.5107) | 20,000.00 (0) | 20,000.00 (0) |
| F12: Schwefel | **5.0077 × 10⁴ (1.9810 × 10⁴)** | 1.7129 × 10⁵ (776.5318) | 20,000.00 (0) | 20,000.00 (0) |
| F13: Griewank | **0.0099 (7.9325 × 10⁻⁵)** | 6.3713 (0.5136) | 5270.16 (42.23) | 20,000.00 (0) |
| F14: Levy | 0.0247 (0.0811) | 17.0589 (1.7703) | 5233.73 (2780.0) | 20,000.00 (0) |
| F15: Treccani | 0.0049 (0.0034) | 0.0039 (0.0026) | 33.13 (7.89) | 6.86 (3.77) |
| F16: Beale | 0.0057 (0.0032) | 0.0052 (0.0029) | 67.76 (35.51) | 8.26 (4.40) |
| F17: Booth | 0.0064 (0.0036) | 0.0051 (0.0030) | 80.5 (41.65) | 21.63 (10.99) |
| F18: Brent | 0.0074 (0.0021) | 0.0049 (0.0029) | 63.8 (17.98) | 14.43 (5.87) |
| F19: Cube | 0.0049 (0.0038) | 0.0048 (0.0031) | 271.63 (117.09) | 38.03 (15.18) |
| F20: Davis | 0.0069 (0.0033) | 0.0065 (0.0018) | 584.75 (133.71) | 154.3 (9.18) |
| F21: Matyas | 0.0064 (0.0027) | 0.0047 (0.0028) | 41.63 (19.11) | 4 (3.36) |
| F22: Schaffer | 0.0047 (0.0029) | 0.0047 (0.0031) | 214.08 (107.21) | 21.7 (11.65) |
| F23: Bohachevsky1 | 0.0074 (0.0025) | 0.0047 (0.0028) | 94.2 (22.08) | 35.93 (5.19) |
| F24: Egg Crate | 0.0057 (0.0026) | 0.0060 (0.0027) | 48.23 (14.61) | 16.36 (3.92) |
| F25: Bohachevsky2 | 0.0047 (0.0034) | 0.0045 (0.0027) | 145.2 (68.44) | 37.3 (5.75) |
| F26: Bohachevsky3 | 0.0061 (0.0031) | 0.0048 (0.0026) | 123.07 (42.16) | 34.3 (4.06) |
| F27: Three-Hump Camel | 0.0070 (0.0026) | 0.0042 (0.0026) | 41.03 (21.40) | 7.4 (4.61) |
Table 10. Mean (standard deviation) of final fitness and final iterations for DE and SDE (Np = 512; D = 500).

| Function | SDE Final Fitness μ (σ) | DE Final Fitness μ (σ) | SDE Final Iterations μ (σ) | DE Final Iterations μ (σ) |
|---|---|---|---|---|
| F1: Sphere | **0.0099 (6.0612 × 10⁻⁵)** | 27.3649 (1.4603) | 3685.16 (30.18) | 20,000.00 (0) |
| F2: Step | 0.0 (0) | 1.0338 × 10⁴ (696.03) | 3311.96 (107.64) | 20,000.00 (0) |
| F3: Sum Powers | 0.0077 (0.0017) | 0.0075 (0.0019) | 68.86 (13.06) | 62.63 (13.32) |
| F4: Sum Squares | **0.0099 (1.0435 × 10⁻⁴)** | 1.9638 × 10⁴ (1214.0749) | 5884.63 (30.66) | 20,000.00 (0) |
| F5: Dixon and Price | 1186.7 (455.0704) | 1.1704 × 10⁴ (2084.0) | 20,000.00 (0) | 20,000.00 (0) |
| F6: Rosenbrock | 2266.0 (683.9547) | 3656.1 (321.2327) | 20,000.00 (0) | 20,000.00 (0) |
| F7: Schwefel 2.22 | **0.0099 (4.9013 × 10⁻⁵)** | 202.2585 (8.5413) | 5181.37 (27.57) | 20,000.00 (0) |
| F8: Trid | **1.4072 × 10¹⁰ (1.775 × 10¹⁰)** | 1.8198 × 10¹¹ (1.070 × 10¹⁰) | 20,000.00 (0) | 20,000.00 (0) |
| F9: Zakharov | 3109.1 (348.8842) | 4175.8 (112.1416) | 20,000.00 (0) | 20,000.00 (0) |
| F10: Ackley | **0.0099 (3.4051 × 10⁻⁵)** | 7.0142 (0.1369) | 5229.16 (48.38) | 20,000.00 (0) |
| F11: Rastrigin | 1567.5246 (388.10) | 4990.5538 (64.7224) | 20,000.00 (0) | 20,000.00 (0) |
| F12: Schwefel | **4.0269 × 10⁴ (2.1107 × 10⁴)** | 1.7120 × 10⁵ (760.9840) | 20,000.00 (0) | 20,000.00 (0) |
| F13: Griewank | **0.0098 (1.2733 × 10⁻⁴)** | 94.8377 (5.1547) | 4964.13 (42.25) | 20,000.00 (0) |
| F14: Levy | **0.0099 (8.3354 × 10⁻⁵)** | 96.7644 (6.7583) | 4355.60 (48.12) | 20,000.00 (0) |
| F15: Treccani | 0.0061 (0.0024) | 0.0049 (0.0032) | 33.16 (13.69) | 4.1 (2.23) |
| F16: Beale | 0.0063 (0.0032) | 0.0044 (0.0025) | 69.66 (31.39) | 7.4 (3.59) |
| F17: Booth | 0.0075 (0.0026) | 0.0056 (0.0029) | 87.63 (36.26) | 16.56 (9.58) |
| F18: Brent | 0.0075 (0.0024) | 0.0048 (0.0029) | 65.4 (19.48) | 14.23 (4.27) |
| F19: Cube | 0.0058 (0.0037) | 0.0059 (0.0030) | 252.03 (168.68) | 26.67 (11.38) |
| F20: Davis | 0.0037 (0.0034) | 0.0067 (0.0015) | 571.11 (133.95) | 149.2 (12.54) |
| F21: Matyas | 0.0049 (0.0032) | 0.0035 (0.0028) | 47 (16.06) | 2.13 (1.63) |
| F22: Schaffer | 0.0051 (0.0031) | 0.0055 (0.0031) | 218.36 (168.65) | 14.96 (7.87) |
| F23: Bohachevsky1 | 0.0072 (0.0027) | 0.0043 (0.0032) | 92.13 (27.13) | 32.63 (6.32) |
| F24: Egg Crate | 0.0063 (0.0027) | 0.0050 (0.0031) | 53.67 (17.51) | 13.4 (3.63) |
| F25: Bohachevsky2 | 0.0058 (0.0034) | 0.0043 (0.0031) | 120.43 (52.24) | 31.83 (6.90) |
| F26: Bohachevsky3 | 0.0073 (0.0027) | 0.0050 (0.0032) | 140.73 (64.64) | 30.9 (5.54) |
| F27: Three-Hump Camel | 0.0062 (0.0028) | 0.0051 (0.0030) | 49.9 (19.74) | 4.93 (3.31) |
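For readers who wish to reproduce statistics in this format, the sketch below evaluates two of the benchmark functions listed in the tables, Sphere (F1) and Rastrigin (F11), in their standard textbook definitions, and formats a set of final fitness values as the "mean (standard deviation)" entries used above. The helper name `summarize` and the sample values are illustrative only; the search domains, stopping criteria, and number of runs of the original experiments are not restated here.

```python
import math
import statistics

def sphere(x):
    """Sphere (F1): sum of squares; global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def rastrigin(x):
    """Rastrigin (F11): highly multimodal; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def summarize(values):
    """Format a list of per-run final fitness values as 'mean (std)'."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values) if len(values) > 1 else 0.0
    return f"{mu:.4f} ({sigma:.4f})"

if __name__ == "__main__":
    # Both functions are zero at their known optimum, for any dimensionality.
    print(sphere([0.0] * 100))     # 0.0
    print(rastrigin([0.0] * 100))  # 0.0
    # Hypothetical final fitness values from five runs, reported as mu (sigma).
    print(summarize([0.0096, 0.0095, 0.0098, 0.0094, 0.0097]))
```

The same reporting convention applies to the iteration counts: each table cell is the mean over independent runs with the sample standard deviation in parentheses.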

Osuna-Enciso, V.; Guevara-Martínez, E. A Stigmergy-Based Differential Evolution. Appl. Sci. 2022, 12, 6093. https://doi.org/10.3390/app12126093
