Article

Refining the Eel and Grouper Optimizer with Intelligent Modifications for Global Optimization

by Glykeria Kyrou, Vasileios Charilogis and Ioannis G. Tsoulos *
Department of Informatics and Telecommunications, University of Ioannina, 47150 Kostaki Artas, Greece
*
Author to whom correspondence should be addressed.
Computation 2024, 12(10), 205; https://doi.org/10.3390/computation12100205
Submission received: 8 September 2024 / Revised: 9 October 2024 / Accepted: 12 October 2024 / Published: 14 October 2024

Abstract

Global optimization is used in many practical and scientific problems. For this reason, various computational techniques have been developed. Particularly important are the evolutionary techniques, which simulate natural phenomena with the aim of detecting the global minimum of complex problems. A new evolutionary method is the Eel and Grouper Optimization (EGO) algorithm, inspired by the symbiotic relationship and foraging strategy of eels and groupers in marine ecosystems. In the present work, a series of improvements is proposed, aimed both at the ability of the algorithm to discover the global minimum of multidimensional functions and at the reduction in the required execution time through an effective reduction in the number of function evaluations. These modifications include the incorporation of a stochastic termination technique as well as an improved sampling technique. The proposed modifications are tested on multidimensional functions available from the relevant literature and compared with other evolutionary methods.


1. Introduction

The goal of a global optimization method is to discover the global minimum of a continuous multidimensional function, defined as

$$x^{*} = \arg\min_{x \in S} f(x)$$

with $S$ given by:

$$S = [a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_n, b_n]$$
The function $f(x)$ is defined as $f: S \to \mathbb{R}$, $S \subset \mathbb{R}^n$, and the set $S$ denotes the bounds of $x$. In recent years, many researchers have published important reviews on global optimization [1,2,3]. Global optimization is a technique of vital importance in many fields of science and applications, as it allows us to find the optimal solution to problems with multiple local solutions. In mathematics [4,5,6,7], it is used to solve complex mathematical problems; in physics [8,9,10], it is used to analyze and improve models that describe natural phenomena; in chemistry [11,12,13], it analyzes and designs molecules and chemical diagnostic tools; and in medicine [14], it analyzes and designs therapeutic strategies and diagnostic tools.
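To make the notation above concrete, the following C++ sketch defines a box-bounded objective together with its bounds $S$ and performs a crude uniform random search inside $S$. The two-dimensional sphere objective and all parameter values used here are illustrative assumptions, not part of the experimental setup of this paper.

```cpp
#include <algorithm>
#include <iostream>
#include <limits>
#include <random>
#include <vector>

// Illustrative sketch only: a box-bounded objective f over S = [a_1,b_1] x ... x [a_n,b_n],
// together with a crude uniform random search inside S.
struct Problem {
    std::vector<double> a, b;                        // lower and upper bounds defining S
    double f(const std::vector<double>& x) const {   // the objective function (toy sphere)
        double s = 0.0;
        for (double xi : x) s += xi * xi;
        return s;
    }
};

int main() {
    Problem p{{-5.0, -5.0}, {5.0, 5.0}};             // S = [-5,5] x [-5,5]
    std::mt19937 gen(42);
    double fbest = std::numeric_limits<double>::max();
    for (int k = 0; k < 1000; k++) {                 // baseline: uniform random sampling in S
        std::vector<double> x(p.a.size());
        for (size_t i = 0; i < x.size(); i++)
            x[i] = std::uniform_real_distribution<double>(p.a[i], p.b[i])(gen);
        fbest = std::min(fbest, p.f(x));
    }
    std::cout << "best value found: " << fbest << "\n";
}
```

The evolutionary methods discussed below can be viewed as more intelligent replacements for the random sampling loop in this sketch.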
The methods that aim to discover the global minimum fall into two main categories, deterministic [15,16,17] and stochastic [18,19,20]. The first category contains techniques that aim to identify the global minimum with some certainty, such as interval methods [21,22], and these are usually distinguished by their complex implementation. The vast majority of global optimization algorithms belong to the stochastic methods, which have simpler implementations and can also be applied to large-scale problems. Recently, Sergeyev et al. [23] published a systematic comparison between deterministic and stochastic methods for global optimization problems.
An important branch of the stochastic methods is the evolutionary methods, which attempt to mimic a series of natural processes. Among these methods, one can find the Differential Evolution method [24,25], Particle Swarm Optimization (PSO) methods [26,27,28], Ant Colony Optimization methods [29,30], Genetic Algorithms [31,32], the Exponential Distribution Optimizer [33], the Brain Storm Optimization method [34], etc. Additionally, since parallel computing units have become extremely widespread in recent years, many research studies have proposed evolutionary methods that exploit modern parallel processing units [35,36,37].
Among the evolutionary techniques, one finds a large group of methods that have been explored intensively in recent years, the so-called swarm intelligence algorithms. These methods [38,39,40] are inspired by the collective behavior of swarms. They mimic systems in which candidate solutions interact locally and cooperate globally to discover the global minimum of a problem. Such algorithms are very important tools for dealing with complex optimization problems in many applications.
In addition to the previously mentioned Particle Swarm Optimization and Ant Colony techniques, which also belong to the swarm intelligence family, other methods in this category are the Fast Bacterial Swarming Algorithm (FBSA) [41], the Fish Swarm Algorithm [42], the Dolphin Swarm Algorithm [43], the Whale Optimization Algorithm (WOA) [44,45,46,47], the Tunicate Swarm Algorithm [48], the Salp Swarm Algorithm (SSA) [49,50,51,52], the Artificial Bee Colony algorithm [53], etc. These methods simulate a series of complex interactions between biological species [54], such as the following:
1. Neutralism, where two species can live without affecting each other.
2. Predation, where one organism feeds on and kills another.
3. Parasitism, where one species can cause harm to another.
4. Competition, where the same or different organisms compete for resources.
5. Mutualism [55,56,57], where two organisms have a mutually beneficial interaction.
Among swarm intelligence algorithms, one can find the Eel and Grouper Optimization (EGO) algorithm, which is inspired by the symbiotic interaction and foraging strategy of eels and groupers in marine ecosystems. Bshary et al. [58] documented interspecific communicative and coordinated hunting between groupers and giant moray eels, in which cooperation increases the hunting efficiency of both predators. According to Mohammadzadeh and Mirjalili [59], the EGO algorithm generates a set of random candidate solutions, stores the best solutions found so far, treats the best one as the target point, and updates the remaining solutions toward it. As the number of iterations increases, the range of the sine term is adjusted to enhance the exploitation of the best solution found, and the process stops when the iteration counter exceeds a predefined maximum. Because EGO generates and improves a collection of random solutions, it discovers and escapes local optima more effectively than single-solution methods. According to Mohammadzadeh and Mirjalili, the algorithm's capabilities extend to NP-hard problems in wireless sensor networks [60], IoT [61], logistics [62], smart agriculture [63], bioinformatics [64], and machine learning [65], as well as to fields such as programming, image segmentation [66], electrical circuit design [67], feature selection, and 3D path planning in robotics [68].
This paper introduces some modifications to the EGO algorithm in order to improve its efficiency. The proposed amendments are presented below:
  • The addition of a sampling technique based on the K-means method [69,70,71]. The produced sample points help the algorithm locate the global minimum of the function more efficiently, and points that lie close to each other are discarded. The initialization of the population in evolutionary techniques is a crucial factor that can push these techniques to locate the global minimum more efficiently, and in this direction a multitude of research works have been presented in recent years, such as the work of Maaranen et al. [72], who applied quasi-random sequences to the initial population of a genetic algorithm. Likewise, Paul et al. [73] suggested a method for the initialization of the population of genetic algorithms using a Vari-begin and Vari-diversity (VV) seeding method. Ali et al. proposed a series of initialization methods for the Differential Evolution method [74]. A novel method that initializes the population of evolutionary algorithms using clustering and Cauchy deviates is suggested in the work of Bajer et al. [75]. A systematic review of initialization techniques for evolutionary algorithms can be found in the work of Kazimipour et al. [76].
  • The use of a stochastic termination technique. At each iteration of the algorithm, the best located minimum is recorded; when this value remains unchanged for a predefined number of iterations, the process is terminated. Therefore, the method terminates without wasting execution time on unnecessary iterations and avoids needless consumption of computing resources. Several methods for terminating optimization procedures can be found in the recent bibliography. An overview of methods used to terminate evolutionary algorithms can be found in the work of Jain et al. [77]. Also, Zielinski et al. outlined some stopping rules used particularly in the Differential Evolution method [78]. Recently, Ghoreishi et al. published a literature study concerning various termination criteria for evolutionary algorithms [79]. Moreover, Ravber et al. performed extended research on the impact of the maximum number of iterations on the effectiveness of evolutionary algorithms [80].
  • The application of randomness in the definition of the range of the positions of candidate solutions.
The rest of this paper is organized as follows: in Section 2, the proposed method is fully described; in Section 3, the experimental results and statistical comparisons are outlined; and finally, in Section 4, some conclusions and guidelines for future improvements are discussed.

2. The Proposed Method

The main steps of the proposed algorithm are discussed in this section. Also, the mentioned modifications are fully described.

2.1. The Main Steps of the Algorithm

The EGO optimization algorithm starts by initializing a population of "search agents" that search for the optimal solution. At each iteration, the position of the "prey" (the best solution found so far) is calculated. Agent positions are then adjusted based on random variables and on their distance from this best position. At the end of each iteration, the current solutions are compared, and it is decided whether the algorithm should continue or terminate. The steps of the proposed method are provided in Algorithm 1, and the algorithm is also presented as a flowchart in Figure 1. Using a flowchart and an algorithm listing together enhances visual perception and provides a detailed view of the logic of the process.
Algorithm 1 EGO algorithm
  • Initialization step.
    1. Define as $N_c$ the number of elements in the search agents.
    2. Define as $N_g$ the maximum number of allowed iterations.
    3. Initialize randomly the search agents $x_i,\ i = 1, \dots, N_c$ in the set $S$.
    4. Set $t = 0$, the iteration counter.
    5. Set $s_r = 0$, the starvation rate of the algorithm.
    6. Set $m = 1$; this parameter influences how the variables $f_1, f_2$ are defined, which in turn affects the calculation of new positions. When $m = 2$, it introduces randomness into the range of positions before the update, while in the inactive state ($m = 1$) the range remains fixed.
  • Calculation step.
    1. While termination criteria are not met do
      (a) Update the variables $a$ and $s_r$:
        • $a = 2 - 2\frac{t}{N_g}$
        • $s_r = 100\frac{t}{N_g}$
      (b) Compute the fitness of each search agent.
      (c) Sort all solutions according to their fitness values.
      (d) Set $X_P$, the estimated position of the prey.
      (e) For $i = 1, \dots, N_c$ do
        i. Update the random variables $r_1, r_2, r_3, r_4, C_1, C_2, b$:
          • $r_1$ and $r_2$ are random numbers in $[0, 1]$
          • $r_3 = (a - 2) r_1 + 2$
          • $r_4 = 100 r_2$
          • $C_1 = 2 a r_1 - a$
          • $C_2 = 2 r_1$
          • $b = a r_2$
          • Select a random position index $v \in \{1, \dots, N_c\}$
          • If $r_4 \le s_r$ then set $X_E = C_2 X_P$, else set $X_E$ to the position of the randomly selected agent $v$
        ii. Create the vector $y = (y_1, y_2, \dots, y_n)$ with the following procedure:
        iii. For $j = 1, \dots, n$ do
          A. Set $X_1 = e^{b r_3} \sin\left(2\pi r_3\right) \left| C_1 X_E - X_P \right| + X_E$, where $X_E = C_2 x_{i,j}$
          B. Set $X_2 = x_{i,j} + C_1 \left| x_{i,j} - X_P \right|$
          C. If $m = 1$ then set $f_1 = 0.8$, $f_2 = 0.2$; else set $f_1$ to a random number in $[0, 2]$ and $f_2$ to a random number in $[-2, 2]$
          D. Set $p \in [0, 1]$, a random number.
          E. If $p \le 0.5$ then set $x_{i,j} = \frac{f_1 X_1 + f_2 X_2}{2}$, else set $x_{i,j} = \frac{f_2 X_1 + f_1 X_2}{2}$
        iv. End For
      (f) End For
      • Termination check step
        (a) Set $t = t + 1$
        (b) If $t \ge N_g$, terminate.
        (c) Else
        (d) Calculate the stopping criterion proposed in the work of Charilogis [81]. In the similarity stopping rule, at every iteration $t$, the absolute difference between the currently located global minimum $f_{\min}(t)$ and the previous best value $f_{\min}(t-1)$ is calculated:
          $$\delta_t = \left| f_{\min}(t) - f_{\min}(t-1) \right|$$
        • The algorithm terminates when $\delta_t \le \epsilon$ for $N_k$ consecutive iterations, where $\epsilon$ is a small positive value.
    2. End While
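To make the inner loop of Algorithm 1 concrete, the following C++ sketch implements one reading of the position update described above. It is illustrative code written from the reconstructed formulas, not the OPTIMUS implementation; the function and variable names, the population size, and the small driver in main are assumptions made only for this example.

```cpp
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

// Minimal sketch of the per-agent position update of Algorithm 1 (one reading of
// the formulas above). Illustrative only; all identifiers are placeholders.
void egoUpdate(std::vector<std::vector<double>>& pop, const std::vector<double>& xp,
               int t, int Ng, int m, std::mt19937& gen) {
    const double PI = std::acos(-1.0);
    std::uniform_real_distribution<double> U01(0.0, 1.0);
    std::uniform_int_distribution<int> pick(0, (int)pop.size() - 1);
    const double a  = 2.0 - 2.0 * t / Ng;   // decreases linearly with the iterations
    const double sr = 100.0 * t / Ng;       // starvation rate

    for (auto& xi : pop) {
        const double r1 = U01(gen), r2 = U01(gen);
        const double r3 = (a - 2.0) * r1 + 2.0;
        const double r4 = 100.0 * r2;
        const double C1 = 2.0 * a * r1 - a;
        const double C2 = 2.0 * r1;
        const double b  = a * r2;
        const int v = pick(gen);            // random agent used as an alternative eel position

        for (size_t j = 0; j < xi.size(); j++) {
            const double XE = (r4 <= sr) ? C2 * xp[j] : pop[v][j];
            // Spiral (eel-like) move around XE and direct (grouper-like) move toward the prey XP.
            const double X1 = std::exp(b * r3) * std::sin(2.0 * PI * r3)
                              * std::fabs(C1 * XE - xp[j]) + XE;
            const double X2 = xi[j] + C1 * std::fabs(xi[j] - xp[j]);
            // Weights f1, f2: fixed when m = 1, randomized when m = 2 (the proposed modification).
            double f1 = 0.8, f2 = 0.2;
            if (m == 2) { f1 = 2.0 * U01(gen); f2 = -2.0 + 4.0 * U01(gen); }
            xi[j] = (U01(gen) <= 0.5) ? (f1 * X1 + f2 * X2) / 2.0
                                      : (f2 * X1 + f1 * X2) / 2.0;
        }
    }
}

int main() {
    std::mt19937 gen(1);
    std::uniform_real_distribution<double> U(-5.0, 5.0);
    std::vector<std::vector<double>> pop(10, std::vector<double>(2));
    for (auto& x : pop) for (auto& v : x) v = U(gen);
    std::vector<double> xp = pop[0];        // pretend the first agent is the current prey
    egoUpdate(pop, xp, /*t=*/1, /*Ng=*/100, /*m=*/2, gen);
    std::cout << "first agent after update: " << pop[0][0] << ", " << pop[0][1] << "\n";
}
```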
The main modifications introduced by the proposed algorithm are the following:
1. The members of the population are initialized using a procedure that incorporates the K-means algorithm. This procedure is fully described in Section 2.2. The purpose of this sampling technique is to produce points that are close to the local minima of the objective problem through systematic clustering, which can significantly reduce the time required by the technique. The samples used in the proposed algorithm are the computed centers of the K-means algorithm. This sampling technique has also been utilized in some recent papers on global optimization techniques, such as the extended version of the Optimal Foraging algorithm [82] or the new genetic algorithm proposed by Charilogis et al. [83].
2. The second modification is the stopping rule invoked at every step of the algorithm. This rule measures the similarity between the best fitness values obtained in consecutive iterations of the algorithm. If this difference stays low for a consecutive number of iterations, the algorithm is unlikely to find a lower value for the global minimum and should stop. This stopping rule has been applied in recent years in various methods, such as the improved parallel PSO method of Charilogis and Tsoulos [84], the improved version of the Giant Armadillo optimization method by Kyrou et al. [85], and the extended version of the Optimal Foraging Algorithm by Kyrou et al. [82]. Of course, this rule is general enough to be applied in any global optimization procedure. A minimal code sketch of this rule is given after this list.
3. The third modification is the m flag, which controls the randomness in the range of candidate solutions. When this value is set to 2, the critical parameters $f_1, f_2$ are calculated using random numbers.
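As mentioned in the second item above, a minimal C++ sketch of the similarity stopping rule is shown below. It illustrates the rule of [81] as described in this paper and is not code taken from the original implementation; the values of eps and Nk in the example are arbitrary.

```cpp
#include <cmath>
#include <iostream>

// Similarity stopping rule: stop when the best value has not changed by more than
// eps for Nk consecutive iterations (illustrative sketch only).
class SimilarityStop {
public:
    SimilarityStop(double eps, int Nk) : eps_(eps), Nk_(Nk) {}
    // Call once per iteration with the current best value; returns true when the rule fires.
    bool shouldStop(double fmin) {
        if (hasPrev_ && std::fabs(fmin - prev_) <= eps_) count_++;
        else count_ = 0;
        prev_ = fmin;
        hasPrev_ = true;
        return count_ >= Nk_;
    }
private:
    double eps_;
    int Nk_;
    double prev_ = 0.0;
    bool hasPrev_ = false;
    int count_ = 0;
};

int main() {
    SimilarityStop stop(1e-8, 5);
    double best = 10.0;
    for (int t = 1; t <= 100; t++) {
        if (t < 4) best -= 1.0;              // a few early improvements, then stagnation
        if (stop.shouldStop(best)) {
            std::cout << "terminated at iteration " << t << "\n";
            break;
        }
    }
}
```

In practice, shouldStop would be called once per iteration of Algorithm 1 with the current best fitness value.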
The overall procedure that outlines the basic steps of the method and the added modifications is shown in Algorithm 2.
Algorithm 2 The basic steps of the algorithm accompanied by the proposed modifications
1. Initialize the population members using a procedure incorporating the K-means algorithm, shown in Section 2.2.
2. Compute the fitness of each search agent.
3. Sort all solutions according to their fitness.
4. Calculate the similarity stopping rule proposed in [81].
  • If the termination criteria are not satisfied, then go to step 2;
  • Else terminate and return the best solution.
  • End if

2.2. The Used Sampling Procedure

The sampling procedure incorporated in this work initially generates samples for the objective problem. Then, using the K-means method, only the estimated centers are retained as samples for the proposed algorithm. This technique, due to James MacQueen [71], is one of the best-known clustering algorithms in the broad research community, both in data analysis and in machine learning [86] and pattern recognition [87]. The algorithm aims to divide a data set into $k$ clusters in such a way that the points inside each cluster are as close as possible to each other, while the center of each cluster is placed in a position that is as representative as possible of the points of its cluster. Over the years, a series of variants of this algorithm have been proposed, such as the genetic K-means algorithm [88], the unsupervised K-means algorithm [89], the fixed-centered K-means algorithm [90], etc. A review of K-means clustering algorithms can be found in the work of Oti et al. [91]. Next, the basic steps of the algorithm are provided in Algorithm 3. A flowchart of the K-means procedure is also depicted in Figure 2.
Algorithm 3 K-means algorithm
1. Initialization
  (a) Set $k$, the number of clusters.
  (b) Obtain randomly the initial samples $x_i,\ i = 1, \dots, N_m$.
  (c) Set $S_j = \{\}$, for $j = 1, \dots, k$.
2. Repeat
  (a) For every point $x_i,\ i = 1, \dots, N_m$ do
    i. Set $j = \arg\min_{m=1,\dots,k} D\left(x_i, c_m\right)$, i.e., $c_j$ is the nearest center to $x_i$.
    ii. Set $S_j = S_j \cup \{ x_i \}$.
  (b) End For
  (c) For every center $c_j,\ j = 1, \dots, k$ do
    i. Set $M_j$ as the number of samples in $S_j$.
    ii. Update the center $c_j$ as
      $$c_j = \frac{1}{M_j} \sum_{x_i \in S_j} x_i$$
  (d) End For
3. Terminate when the centers $c_j$ no longer change.
4. The final samples of the algorithm are the centers $c_j$.
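The following C++ sketch illustrates Algorithm 3 as used for population initialization: it draws a set of samples inside the bounds of $S$ and returns the $k$ computed centers as the initial population. Drawing the initial $N_m$ samples uniformly, as well as the concrete parameter values in main, are assumptions made for the example and are not prescribed by the paper.

```cpp
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

using Point = std::vector<double>;

// K-means based sampling (sketch of Algorithm 3): draw Nm uniform samples inside S
// and return the k cluster centers as the initial population.
std::vector<Point> kmeansSampling(const Point& a, const Point& b,
                                  int Nm, int k, std::mt19937& gen, int maxIter = 100) {
    const size_t n = a.size();
    std::uniform_real_distribution<double> U01(0.0, 1.0);
    // 1. Uniform samples inside S.
    std::vector<Point> samples(Nm, Point(n));
    for (auto& s : samples)
        for (size_t j = 0; j < n; j++) s[j] = a[j] + (b[j] - a[j]) * U01(gen);
    // 2. Initial centers: the first k samples.
    std::vector<Point> centers(samples.begin(), samples.begin() + k);
    std::vector<int> assign(Nm, 0);
    for (int it = 0; it < maxIter; it++) {
        bool changed = false;
        // Assignment step: nearest center by squared Euclidean distance.
        for (int i = 0; i < Nm; i++) {
            int best = 0; double bestD = 1e300;
            for (int c = 0; c < k; c++) {
                double d = 0.0;
                for (size_t j = 0; j < n; j++) {
                    const double diff = samples[i][j] - centers[c][j];
                    d += diff * diff;
                }
                if (d < bestD) { bestD = d; best = c; }
            }
            if (assign[i] != best) { assign[i] = best; changed = true; }
        }
        // Update step: each center becomes the mean of its cluster.
        for (int c = 0; c < k; c++) {
            Point mean(n, 0.0); int M = 0;
            for (int i = 0; i < Nm; i++)
                if (assign[i] == c) { M++; for (size_t j = 0; j < n; j++) mean[j] += samples[i][j]; }
            if (M > 0) for (size_t j = 0; j < n; j++) centers[c][j] = mean[j] / M;
        }
        if (!changed) break;   // assignments, and hence centers, no longer change
    }
    return centers;
}

int main() {
    std::mt19937 gen(1);
    auto pop = kmeansSampling({-5.0, -5.0}, {5.0, 5.0}, 200, 20, gen);
    std::cout << "initial population size: " << pop.size() << "\n";
}
```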

3. Results

This section begins with a detailed description of the functions that were used in the experiments, followed by an analysis of the experiments performed and comparisons with other global optimization techniques.

3.1. Test Functions

The test functions used in the experiments were suggested in a series of related works [92,93], and they originate from a variety of scientific fields. These objective functions have also been studied in various publications [94,95,96,97,98]. In addition, a series of functions found in [99] were used as test functions. The functions are defined as follows:
  • Ackley's function:
    $$f(x) = -a \exp\left(-b\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos\left(c\, x_i\right)\right) + a + \exp(1)$$
    with $a = 20.0$.
  • Bf1 (Bohachevsky 1) function:
    $$f(x) = x_1^2 + 2x_2^2 - \frac{3}{10}\cos\left(3\pi x_1\right) - \frac{4}{10}\cos\left(4\pi x_2\right) + \frac{7}{10}$$
  • Bf2 (Bohachevsky 2) function:
    $$f(x) = x_1^2 + 2x_2^2 - \frac{3}{10}\cos\left(3\pi x_1\right)\cos\left(4\pi x_2\right) + \frac{3}{10}$$
  • Bf3 (Bohachevsky 3) function:
    $$f(x) = x_1^2 + 2x_2^2 - \frac{3}{10}\cos\left(3\pi x_1 + 4\pi x_2\right) + \frac{3}{10}$$
  • Branin function:
    $$f(x) = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos(x_1) + 10$$
    with $-5 \le x_1 \le 10$, $0 \le x_2 \le 15$.
  • Camel function:
    $$f(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4, \quad x \in [-5, 5]^2$$
  • Easom function:
    $$f(x) = -\cos(x_1)\cos(x_2)\exp\left(-\left(x_2 - \pi\right)^2 - \left(x_1 - \pi\right)^2\right)$$
    with $x \in [-100, 100]^2$.
  • Equal maxima function, defined as:
    $$f(x) = \sin^6\left(5\pi x\right)$$
  • Exponential function, with the following definition:
    $$f(x) = -\exp\left(-0.5\sum_{i=1}^{n} x_i^2\right), \quad -1 \le x_i \le 1$$
    The values $n = 4, 8, 16, 32$ were used in the conducted experiments.
  • F9 test function:
    $$f(x) = \sum_{i=1}^{n}\left(10 + 9\cos\left(2\pi k_i x_i\right)\right)$$
    with $x \in [0, 1]^n$.
  • Extended F10 function:
    $$f(x) = \sum_{i=1}^{n}\left(x_i^2 - 10\cos\left(2\pi x_i\right)\right)$$
  • F14 function:
    $$f(x) = \left(\frac{1}{500} + \sum_{j=1}^{25}\frac{1}{j + \sum_{i=1}^{2}\left(x_i - a_{ij}\right)^6}\right)^{-1}$$
  • F15 function:
    $$f(x) = \sum_{i=1}^{11}\left(a_i - \frac{x_1\left(b_i^2 + b_i x_2\right)}{b_i^2 + b_i x_3 + x_4}\right)^2$$
  • F17 function:
    $$f(x) = \left[1 + \left(x_1 + x_2 + 1\right)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right] \times \left[30 + \left(2x_1 - 3x_2\right)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right]$$
  • Five-uneven-peak trap function:
    $$f(x) = \begin{cases} 80\,(2.5 - x) & 0 \le x < 2.5 \\ 64\,(x - 2.5) & 2.5 \le x < 5.0 \\ 64\,(7.5 - x) & 5.0 \le x < 7.5 \\ 28\,(x - 7.5) & 7.5 \le x < 12.5 \\ 28\,(17.5 - x) & 12.5 \le x < 17.5 \\ 32\,(x - 17.5) & 17.5 \le x < 22.5 \\ 32\,(27.5 - x) & 22.5 \le x < 27.5 \\ 80\,(x - 27.5) & 27.5 \le x \le 30 \end{cases}$$
  • Himmelblau's function:
    $$f(x) = 200 - \left(x_1^2 + x_2 - 11\right)^2 - \left(x_1 + x_2^2 - 7\right)^2$$
    with $x \in [-6, 6]^2$.
  • Griewank2 function:
    $$f(x) = 1 + \frac{1}{200}\sum_{i=1}^{2} x_i^2 - \prod_{i=1}^{2}\frac{\cos(x_i)}{\sqrt{i}}, \quad x \in [-100, 100]^2$$
  • Griewank10 function, given by the equation
    $$f(x) = \sum_{i=1}^{n}\frac{x_i^2}{4000} - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$$
    with $n = 10$.
  • Gkls function [100]: the function $f(x) = \mathrm{Gkls}(x, n, w)$ is a test function proposed in [100] with $w$ local minima. The values $n = 2, 3$ and $w = 50$ were used in the conducted experiments.
  • Goldstein and Price's function:
    $$f(x) = \left[1 + \left(x_1 + x_2 + 1\right)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right] \times \left[30 + \left(2x_1 - 3x_2\right)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right]$$
    with $x \in [-2, 2]^2$.
  • Hansen's function:
    $$f(x) = \sum_{i=1}^{5} i\cos\left((i-1)x_1 + i\right)\sum_{j=1}^{5} j\cos\left((j+1)x_2 + j\right), \quad x \in [-10, 10]^2$$
  • Hartman 3 function:
    $$f(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{3} a_{ij}\left(x_j - p_{ij}\right)^2\right)$$
    with $x \in [0, 1]^3$ and
    $$a = \begin{pmatrix} 3 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ 1.2 \\ 3 \\ 3.2 \end{pmatrix}, \quad p = \begin{pmatrix} 0.3689 & 0.117 & 0.2673 \\ 0.4699 & 0.4387 & 0.747 \\ 0.1091 & 0.8732 & 0.5547 \\ 0.03815 & 0.5743 & 0.8828 \end{pmatrix}$$
  • Hartman 6 function:
    $$f(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{6} a_{ij}\left(x_j - p_{ij}\right)^2\right)$$
    with $x \in [0, 1]^6$ and
    $$a = \begin{pmatrix} 10 & 3 & 17 & 3.5 & 1.7 & 8 \\ 0.05 & 10 & 17 & 0.1 & 8 & 14 \\ 3 & 3.5 & 1.7 & 10 & 17 & 8 \\ 17 & 8 & 0.05 & 10 & 0.1 & 14 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ 1.2 \\ 3 \\ 3.2 \end{pmatrix},$$
    $$p = \begin{pmatrix} 0.1312 & 0.1696 & 0.5569 & 0.0124 & 0.8283 & 0.5886 \\ 0.2329 & 0.4135 & 0.8307 & 0.3736 & 0.1004 & 0.9991 \\ 0.2348 & 0.1451 & 0.3522 & 0.2883 & 0.3047 & 0.6650 \\ 0.4047 & 0.8828 & 0.8732 & 0.5743 & 0.1091 & 0.0381 \end{pmatrix}$$
  • Potential function: this function represents the energy of a molecular conformation of $N$ atoms, whose interaction is described by the Lennard–Jones potential [101]:
    $$V_{LJ}(r) = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]$$
    For the conducted experiments, the values $N = 3, 5$ were used.
  • Rastrigin's function:
    $$f(x) = x_1^2 + x_2^2 - \cos(18x_1) - \cos(18x_2), \quad x \in [-1, 1]^2$$
  • Rosenbrock function:
    $$f(x) = \sum_{i=1}^{n-1}\left(100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2\right), \quad -30 \le x_i \le 30$$
    The values $n = 4, 8, 16$ were incorporated in the conducted experiments.
  • Shekel 5 function:
    $$f(x) = -\sum_{i=1}^{5}\frac{1}{\left(x - a_i\right)\left(x - a_i\right)^T + c_i}$$
    with $x \in [0, 10]^4$ and
    $$a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \end{pmatrix}, \quad c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \end{pmatrix}$$
  • Shekel 7 function:
    $$f(x) = -\sum_{i=1}^{7}\frac{1}{\left(x - a_i\right)\left(x - a_i\right)^T + c_i}$$
    with $x \in [0, 10]^4$ and
    $$a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \\ 2 & 9 & 2 & 9 \\ 5 & 3 & 5 & 3 \end{pmatrix}, \quad c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \\ 0.6 \\ 0.3 \end{pmatrix}$$
  • Shekel 10 function:
    $$f(x) = -\sum_{i=1}^{10}\frac{1}{\left(x - a_i\right)\left(x - a_i\right)^T + c_i}$$
    with $x \in [0, 10]^4$ and
    $$a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \\ 2 & 9 & 2 & 9 \\ 5 & 5 & 3 & 3 \\ 8 & 1 & 8 & 1 \\ 6 & 2 & 6 & 2 \\ 7 & 3.6 & 7 & 3.6 \end{pmatrix}, \quad c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \\ 0.6 \\ 0.3 \\ 0.7 \\ 0.5 \\ 0.6 \end{pmatrix}$$
  • Sinusoidal function, defined as:
    $$f(x) = -\left(2.5\prod_{i=1}^{n}\sin\left(x_i - z\right) + \prod_{i=1}^{n}\sin\left(5\left(x_i - z\right)\right)\right), \quad 0 \le x_i \le \pi$$
    The values $n = 4, 8, 16$ were incorporated in the conducted experiments.
  • Schaffer's function:
    $$f(x) = \sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_j\right)^2$$
  • Schwefel221 function:
    $$f(x) = 418.9829\,n - \sum_{i=1}^{n} x_i \sin\left(\sqrt{\left|x_i\right|}\right)$$
  • Schwefel222 function:
    $$f(x) = \sum_{i=1}^{n}\left|x_i\right| + \prod_{i=1}^{n}\left|x_i\right|$$
  • Shubert function:
    $$f(x) = \prod_{i=1}^{n}\sum_{j=1}^{5} j\cos\left((j+1)x_i + j\right)$$
    with $x \in [-10, 10]^n$.
  • Sphere function:
    $$f(x) = \sum_{i=1}^{n} x_i^2$$
  • Test2N function:
    $$f(x) = \frac{1}{2}\sum_{i=1}^{n}\left(x_i^4 - 16x_i^2 + 5x_i\right), \quad x_i \in [-5, 5]$$
    The values $n = 4, 5, 6, 7$ were incorporated in the conducted experiments.
  • Test30N function:
    $$f(x) = \frac{1}{10}\left(\sin^2\left(3\pi x_1\right) + \sum_{i=2}^{n-1}\left(x_i - 1\right)^2\left(1 + \sin^2\left(3\pi x_{i+1}\right)\right) + \left(x_n - 1\right)^2\left(1 + \sin^2\left(2\pi x_n\right)\right)\right)$$
    For the conducted experiments, the values $n = 3, 4$ were used.
  • Uneven decreasing maxima function:
    $$f(x) = \exp\left(-2\log(2)\left(\frac{x - 0.08}{0.854}\right)^2\right)\sin^6\left(5\pi\left(x^{3/4} - 0.05\right)\right)$$
  • Vincent's function:
    $$f(x) = \frac{1}{n}\sum_{i=1}^{n}\sin\left(10\log\left(x_i\right)\right)$$
    with $x \in [0.25, 10]^n$.
Also, the dimension for each problem, as used in the experiments, is shown in Table 1.
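As an illustration of how the benchmark definitions above translate into code, the following C++ sketch implements two of them, Bf1 and Rosenbrock, directly from their formulas. It is a standalone example and not part of the OPTIMUS benchmark suite.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Two benchmark functions of Section 3.1, written directly from the definitions above.
static const double PI = std::acos(-1.0);

// Bohachevsky 1 (Bf1), two-dimensional; global minimum 0 at (0, 0).
double bf1(const std::vector<double>& x) {
    return x[0] * x[0] + 2.0 * x[1] * x[1]
         - 0.3 * std::cos(3.0 * PI * x[0])
         - 0.4 * std::cos(4.0 * PI * x[1]) + 0.7;
}

// Rosenbrock, n-dimensional; global minimum 0 at (1, ..., 1).
double rosenbrock(const std::vector<double>& x) {
    double s = 0.0;
    for (size_t i = 0; i + 1 < x.size(); i++) {
        const double t1 = x[i + 1] - x[i] * x[i];
        const double t2 = x[i] - 1.0;
        s += 100.0 * t1 * t1 + t2 * t2;
    }
    return s;
}

int main() {
    std::cout << "Bf1(0,0)            = " << bf1({0.0, 0.0}) << "\n";
    std::cout << "Rosenbrock(1,1,1,1) = " << rosenbrock({1.0, 1.0, 1.0, 1.0}) << "\n";
}
```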

3.2. Experimental Results

A series of global optimization methods was applied to the mentioned test functions. All experiments were repeated 30 times using different seeds for the random number generator, and the average number of function calls was measured. The software was coded in ANSI C++ using the freely available OPTIMUS optimization environment v1.0, available from https://github.com/itsoulos/OPTIMUS (accessed on 27 September 2024). The experiments were executed on a Debian Linux system with an AMD Ryzen 5950X processor and 128 GB of RAM. In all cases, the BFGS [102] local optimization method was applied to the outcome of each global optimization technique to ensure that an actual local minimum was reached. The values of all experimental parameters are shown in Table 2.
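A hypothetical sketch of this measurement protocol is shown below: an objective wrapper counts function calls, the experiment is repeated 30 times with different seeds, and the average number of calls is reported. The random-search stub stands in for the global optimizers of Table 3 and the final BFGS refinement, which are not reproduced here.

```cpp
#include <algorithm>
#include <functional>
#include <iostream>
#include <random>
#include <vector>

// Objective wrapper that counts how many times the objective is evaluated.
struct CountedObjective {
    std::function<double(const std::vector<double>&)> f;
    long calls = 0;
    double operator()(const std::vector<double>& x) { calls++; return f(x); }
};

int main() {
    const int runs = 30;
    long totalCalls = 0;
    for (int r = 0; r < runs; r++) {
        std::mt19937 gen(r + 1);                         // a different seed per run
        CountedObjective obj{[](const std::vector<double>& x) {
            double s = 0.0; for (double v : x) s += v * v; return s; }};
        // --- place the global optimizer (and the final BFGS step) here ---
        std::uniform_real_distribution<double> U(-5.0, 5.0);
        double best = 1e300;
        for (int k = 0; k < 100; k++) {                  // stub: plain random search
            std::vector<double> x{U(gen), U(gen)};
            best = std::min(best, obj(x));
        }
        totalCalls += obj.calls;
    }
    std::cout << "average function calls: " << totalCalls / (double)runs << "\n";
}
```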
The experimental results for the test functions and a series of optimization methods are shown in Table 3, where the following applies to this table:
  • The column function represents the used test function.
  • The column genetic represents the application of a genetic algorithm [31,32] to each test function. This genetic algorithm was equipped with $N_c$ chromosomes, and the maximum number of generations was set to $N_g$. The modified version of Tsoulos [103] was used as the genetic algorithm here.
  • The column PSO represents the application of a Particle Swarm Optimizer [27,28] to each test function. This algorithm had $N_c$ particles, and the maximum number of allowed iterations was set to $N_g$. For the conducted experiments, the improved PSO method proposed by Charilogis and Tsoulos was used [81].
  • The column DE refers to the Differential Evolution method [24,25].
  • The column EGO represents the initial method without the modifications suggested in this work.
  • The column EEGO represents the usage of the proposed method. The corresponding settings are shown in Table 2.
  • The row sum is used to measure the total function calls for all problems.
  • If a method failed to find the global minimum over all runs, this was noted in the corresponding table with a percentage enclosed in parentheses next to the average function calls.
The same termination method, as detailed in Section 2.1, was used in all global optimization techniques so that all techniques could be evaluated fairly under the same rules. In Figure 3, the total function calls of every optimization method are presented graphically. According to the experiments conducted, the proposed method achieved excellent results compared to the other optimization techniques; as can be observed, it required the lowest number of function calls among all techniques.
Furthermore, as the experimental results clearly indicate, the modified version outperformed the original EGO method in terms of average function calls; this is depicted in Figure 4, which outlines box plots for the two methods.
A box plot between all the used methods is depicted in Figure 5.
Moreover, a statistical comparison of all used methods is outlined in Figure 6.
According to Table 3 and Figure 6, the EEGO method proved to be more efficient, as it consistently required fewer function calls compared to the DE, PSO, genetic, and EGO methods, and these differences were statistically significant (p < 0.05). For example, for the Ackley function, EEGO required 4199 calls, while DE required 10,220, PSO 6885, and the genetic algorithm 6749, with a p-value < 0.05, showing that EEGO is significantly more efficient. Similar results are observed for the BF1 function, where EEGO needed 3228 calls while DE required 8268 (p < 0.05), and for the Branin function, where EEGO required 1684 calls compared to 4101 for DE (p < 0.05). Overall, EEGO showed the best performance in most cases, with statistically significant differences in function calls, as it effectively reduced the number of calls compared to the other methods. While PSO and the genetic algorithm performed better than DE in some instances, they still lagged significantly behind EEGO. The p-values confirm that these differences were statistically significant, indicating that EEGO was the most efficient method overall.
One more experiment was performed with the ultimate goal of measuring the importance of K-means sampling in the proposed method. The results for this experiment are outlined in Table 4 and the following sampling methods were used:
1.
The column uniform represents the usage of uniform sampling in the current method.
2.
The column triangular stands for the usage of the triangular distribution [104] for sampling.
3.
The column Maxwell represents the application of the Maxwell distribution [105] to produce initial samples for the used method.
4.
The column K-means represents the usage of the method described in Section 2.2 to produce initial samples for the used method.
The initial distributions play a critical role in a wide range of applications, including optimization, statistical analysis, and machine learning. Table 4 presents the proposed distribution alongside other established distributions. The uniform distribution is widely used due to its ability to evenly cover the search space, making it suitable for initializing optimization algorithms [106]. The triangular distribution is applied in scenarios where there is knowledge of the bounds and the most probable value of a phenomenon, making it useful in risk management models [107]. The Maxwell distribution, although originating from physics, finds applications in simulating communication networks, where data transfer speeds can be modeled as random variables [108]. Finally, the K-means method is used for data clustering, with K-means++ initialization offering improved performance compared to random distributions, particularly in high-dimensional problems [109]. As observed in Table 4, the choice of an appropriate initial distribution can significantly affect the performance of the algorithms that utilize them.
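For completeness, the following C++ sketch shows one way to draw initial samples from the three alternative distributions of Table 4: uniform sampling, triangular sampling via the inverse CDF, and Maxwell sampling as the norm of three independent Gaussian draws. The bounds, mode, and scale values in the example are illustrative assumptions, not settings taken from the paper.

```cpp
#include <cmath>
#include <iostream>
#include <random>

// Uniform sample in [a, b].
double sampleUniform(double a, double b, std::mt19937& g) {
    return std::uniform_real_distribution<double>(a, b)(g);
}

// Triangular sample in [a, b] with mode c, via the inverse CDF.
double sampleTriangular(double a, double b, double c, std::mt19937& g) {
    const double u = std::uniform_real_distribution<double>(0.0, 1.0)(g);
    const double F = (c - a) / (b - a);
    if (u < F) return a + std::sqrt(u * (b - a) * (c - a));
    return b - std::sqrt((1.0 - u) * (b - a) * (b - c));
}

// Maxwell sample with scale sigma: the norm of three independent N(0, sigma^2) draws.
double sampleMaxwell(double sigma, std::mt19937& g) {
    std::normal_distribution<double> N(0.0, sigma);
    const double x = N(g), y = N(g), z = N(g);
    return std::sqrt(x * x + y * y + z * z);
}

int main() {
    std::mt19937 gen(7);
    // Illustrative parameters only: bounds [-5, 5], mode 0, scale 1.
    std::cout << "uniform:    " << sampleUniform(-5.0, 5.0, gen) << "\n";
    std::cout << "triangular: " << sampleTriangular(-5.0, 5.0, 0.0, gen) << "\n";
    std::cout << "maxwell:    " << sampleMaxwell(1.0, gen) << "\n";
}
```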
In the scatter plot of Figure 7, the critical p-value was found to be very small, leading to the rejection of the null hypothesis and indicating that the differences between the experimental results were highly significant.
In addition, in order to investigate the impact that the choice of sampling method had on the optimization method, an additional experiment was performed, in which the execution time of the proposed method was recorded with different sampling techniques for the ELP function, whose dimension was varied between 5 and 50. This function is defined as:
$$f(x) = \sum_{i=1}^{n}\left(10^6\right)^{\frac{i-1}{n-1}} x_i^2$$
where the parameter n defines the dimension of the function. The results from this experiment are graphically outlined in Figure 8.
The proposed sampling method significantly increased the required execution time, since an iterative clustering process must be executed before the algorithm starts. The next most expensive sampling procedure in terms of execution time appeared to be Maxwell sampling, although it did not differ significantly from uniform sampling.

4. Conclusions

This article proposed some modifications to the EGO optimization method, aimed at improving its overall performance and reducing the number of function calls needed to discover the global minimum. The first modification concerned the application of a sampling technique based on the K-means method. This technique significantly reduced the number of function calls needed to find the global minimum and further improved the accuracy with which it is located. In particular, the application of the K-means method accelerated the discovery of a solution, as it located the points of interest in the search space more efficiently and led to faster convergence toward the global minimum. Compared to other methods based on random distributions, the proposed technique proved its superiority, especially on multidimensional and complex functions.
The second proposed amendment concerned the termination rule based on the similarity of solutions during iterations. The main purpose of this rule was to stop the optimization process when the iterated solutions were too close to each other, thus preventing pointless iterations that did not provide any significant improvement. It therefore avoided wasting computing time in cases where the process was already very close to the desired result. The use of the termination rule significantly improved the efficiency of the algorithm.
Furthermore, one more improvement was suggested in this research paper. This modification added randomness to the generation of new candidate solutions from the old ones, aiming to better explore the search space of the objective problem in search of the global minimum.
To verify the effectiveness of the new method, a series of experiments was performed on a large group of objective problems from the recent literature. In these experiments, both the efficiency and the speed improvement over the original technique were measured, and the speed of the new method was compared with that of other known techniques from the relevant literature. Furthermore, a series of extended experiments was carried out to study the behavior of the proposed initialization technique as well as the additional time required by its implementation. A number of useful conclusions were drawn from these experiments. First of all, the new method significantly improved the efficiency and speed of the original method. Furthermore, the experiments revealed that the new method required, on average, a significantly lower number of function calls than other global optimization methods. In addition, the sampling method proved to be highly efficient in finding the global minimum and significantly reduced the required number of function calls compared to other initialization techniques. The additional time required by the new initialization method was noticeable compared to other techniques, but the gains it brought were equally significant.
Future extensions of the proposed technique could be the use of parallel programming techniques to speed up the overall process, such as the MPI programming technique [110] or the integration of the OpenMP library [111], as well as the use of other termination techniques that could potentially speed up the termination of the method.

Author Contributions

G.K., V.C. and I.G.T. created the software. G.K. conducted the experiments, using a series of objective functions from various sources. V.C. conducted the needed statistical tests. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been financed by the European Union: Next Generation EU through the Program Greece 2.0 National Recovery and Resilience Plan, under the call RESEARCH–CREATE–INNOVATE, project name “iCREW: Intelligent small craft simulator for advanced crew training using Virtual Reality techniques” (project code: TAEDK-06195).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Törn, A.; Ali, M.M.; Viitanen, S. Stochastic global optimization: Problem classes and solution techniques. J. Glob. Optim. 1999, 14, 437–447. [Google Scholar] [CrossRef]
  2. Floudas, C.A.; Pardalos, P.M. (Eds.) State of the Art in Global Optimization: Computational Methods and Applications; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  3. Horst, R.; Pardalos, P.M. Handbook of Global Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 2. [Google Scholar]
  4. Intriligator, M.D. Mathematical Optimization and Economic Theory; Society for Industrial and Applied Mathematics, SIAM: New Delhi, India, 2002. [Google Scholar]
  5. Cánovas, M.J.; Kruger, A.; Phu, H.X.; Théra, M. Marco A. López, a Pioneer of Continuous Optimization in Spain. Vietnam. J. Math. 2020, 48, 211–219. [Google Scholar] [CrossRef]
  6. Mahmoodabadi, M.J.; Nemati, A.R. A novel adaptive genetic algorithm for global optimization of mathematical test functions and real-world problems. Eng. Sci. Technol. Int. J. 2016, 19, 2002–2021. [Google Scholar] [CrossRef]
  7. Li, J.; Xiao, X.; Boukouvala, F.; Floudas, C.A.; Zhao, B.; Du, G.; Su, X.; Liu, H. Data-driven mathematical modeling and global optimization framework for entire petrochemical planning operations. AIChE J. 2016, 62, 3020–3040. [Google Scholar] [CrossRef]
  8. Iuliano, E. Global optimization of benchmark aerodynamic cases using physics-based surrogate models. Aerosp. Sci. Technol. 2017, 67, 273–286. [Google Scholar] [CrossRef]
  9. Duan, Q.; Sorooshian, S.; Gupta, V. Effective and efficient global optimization for conceptual rainfall-runoff models. Water Resour. Res. 1992, 28, 1015–1031. [Google Scholar] [CrossRef]
  10. Yang, L.; Robin, D.; Sannibale, F.; Steier, C.; Wan, W. Global optimization of an accelerator lattice using multiobjective genetic algorithms. Nucl. Instrum. Methods Phys. Res. Sect. Accel. Spectrom. Detect. Assoc. Equip. 2009, 609, 50–57. [Google Scholar] [CrossRef]
  11. Heiles, S.; Johnston, R.L. Global optimization of clusters using electronic structure methods. Int. J. Quantum Chem. 2013, 113, 2091–2109. [Google Scholar] [CrossRef]
  12. Shin, W.H.; Kim, J.K.; Kim, D.S.; Seok, C. GalaxyDock2: Protein–ligand docking using beta-complex and global optimization. J. Comput. Chem. 2013, 34, 2647–2656. [Google Scholar] [CrossRef] [PubMed]
  13. Liwo, A.; Lee, J.; Ripoll, D.R.; Pillardy, J.; Scheraga, H.A. Protein structure prediction by global optimization of a potential energy function. Proc. Natl. Acad. Sci. USA 1999, 96, 5482–5485. [Google Scholar] [CrossRef]
  14. Houssein, E.H.; Hosney, M.E.; Mohamed, W.M.; Ali, A.A.; Younis, E.M. Fuzzy-based hunger games search algorithm for global optimization and feature selection using medical data. Neural Comput. Appl. 2023, 35, 5251–5275. [Google Scholar] [CrossRef]
  15. Ion, I.G.; Bontinck, Z.; Loukrezis, D.; Römer, U.; Lass, O.; Ulbrich, S.; Schöps, S.; De Gersem, H. Robust shape optimization of electric devices based on deterministic optimization methods and finite-element analysis with affine parametrization and design elements. Electr. Eng. 2018, 100, 2635–2647. [Google Scholar] [CrossRef]
  16. Cuevas-Velásquez, V.; Sordo-Ward, A.; García-Palacios, J.H.; Bianucci, P.; Garrote, L. Probabilistic model for real-time flood operation of a dam based on a deterministic optimization model. Water 2020, 12, 3206. [Google Scholar] [CrossRef]
  17. Pereyra, M.; Schniter, P.; Chouzenoux, E.; Pesquet, J.C.; Tourneret, J.Y.; Hero, A.O.; McLaughlin, S. A survey of stochastic simulation and optimization methods in signal processing. IEEE J. Sel. Top. Signal Process. 2015, 10, 224–241. [Google Scholar] [CrossRef]
  18. Hannah, L.A. Stochastic optimization. Int. Encycl. Soc. Behav. Sci. 2015, 2, 473–481. [Google Scholar]
  19. Kizielewicz, B.; Sałabun, W. A new approach to identifying a multi-criteria decision model based on stochastic optimization techniques. Symmetry 2020, 12, 1551. [Google Scholar] [CrossRef]
  20. Chen, T.; Sun, Y.; Yin, W. Solving stochastic compositional optimization is nearly as easy as solving stochastic optimization. IEEE Trans. Signal Process. 2021, 69, 4937–4948. [Google Scholar] [CrossRef]
  21. Wolfe, M.A. Interval methods for global optimization. Appl. Math. Comput. 1996, 75, 179–206. [Google Scholar]
  22. Csendes, T.; Ratz, D. Subdivision direction selection in interval methods for global optimization. SIAM J. Numer. Anal. 1997, 34, 922–938. [Google Scholar] [CrossRef]
  23. Sergeyev, Y.D.; Kvasov, D.E.; Mukhametzhanov, M.S. On the efficiency of nature-inspired metaheuristics in expensive global optimization with limited budget. Sci. Rep. 2018, 8, 453. [Google Scholar] [CrossRef]
  24. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  25. Liu, J.; Lampinen, J. A fuzzy adaptive differential evolution algorithm. Soft Comput. 2005, 9, 448–462. [Google Scholar] [CrossRef]
  26. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, USA, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  27. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization: An overview. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  28. Trelea, I.C. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Inf. Process. Lett. 2003, 85, 317–325. [Google Scholar] [CrossRef]
  29. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  30. Socha, K.; Dorigo, M. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 2008, 185, 1155–1173. [Google Scholar] [CrossRef]
  31. Goldberg, D. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley Publishing Company: Reading, MA, USA, 1989. [Google Scholar]
  32. Michalewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  33. Abdel-Basset, M.; El-Shahat, D.; Jameel, M.; Abouhawwash, M. Exponential distribution optimizer (EDO): A novel math-inspired algorithm for global optimization and engineering problems. Artif. Intell. Rev. 2023, 56, 9329–9400. [Google Scholar] [CrossRef]
  34. Ma, L.; Cheng, S.; Shi, Y. Enhancing learning efficiency of brain storm optimization via orthogonal learning design. IEEE Trans. Syst. Man Cybern. Syst. 2020, 51, 6723–6742. [Google Scholar] [CrossRef]
  35. Zhou, Y.; Tan, Y. GPU-based parallel particle swarm optimization. In Proceedings of the 2009 IEEE Congress on Evolutionary Computation, Trondheim, Norway, 18–21 May 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 1493–1500. [Google Scholar]
  36. Dawson, L.; Stewart, I. Improving Ant Colony Optimization performance on the GPU using CUDA. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1901–1908. [Google Scholar]
  37. Barkalov, K.; Gergel, V. Parallel global optimization on GPU. J. Glob. Optim. 2016, 66, 3–20. [Google Scholar] [CrossRef]
  38. Hassanien, A.E.; Emary, E. Swarm Intelligence: Principles, Advances, and Applications; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  39. Tang, J.; Liu, G.; Pan, Q. A review on representative swarm intelligence algorithms for solving optimization problems: Applications and trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643. [Google Scholar] [CrossRef]
  40. Brezočnik, L.; Fister, I., Jr.; Podgorelec, V. Swarm intelligence algorithms for feature selection: A review. Appl. Sci. 2018, 8, 1521. [Google Scholar] [CrossRef]
  41. Chu, Y.; Mi, H.; Liao, H.; Ji, Z.; Wu, Q.H. A fast bacterial swarming algorithm for high-dimensional function optimization. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 3135–3140. [Google Scholar]
  42. Neshat, M.; Sepidnam, G.; Sargolzaei, M.; Toosi, A.N. Artificial fish swarm algorithm: A survey of the state-of-the-art, hybridization, combinatorial and indicative applications. Artif. Intell. Rev. 2014, 42, 965–997. [Google Scholar] [CrossRef]
  43. Wu, T.Q.; Yao, M.; Yang, J.H. Dolphin swarm algorithm. Front. Inf. Technol. Electron. Eng. 2016, 17, 717–729. [Google Scholar] [CrossRef]
  44. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  45. Nasiri, J.; Khiyabani, F.M. A whale optimization algorithm (WOA) approach for clustering. Cogent Math. Stat. 2018, 5, 1483565. [Google Scholar] [CrossRef]
  46. Gharehchopogh, F.S.; Gholizadeh, H. A comprehensive survey: Whale Optimization Algorithm and its applications. Swarm Evol. Comput. 2019, 48, 1–24. [Google Scholar] [CrossRef]
  47. Wang, J.; Bei, J.; Song, H.; Zhang, H.; Zhang, P. A whale optimization algorithm with combined mutation and removing similarity for global optimization and multilevel thresholding image segmentation. Appl. Soft Comput. 2023, 137, 110130. [Google Scholar] [CrossRef]
  48. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  49. Wan, Y.; Mao, M.; Zhou, L.; Zhang, Q.; Xi, X.; Zheng, C. A novel nature-inspired maximum power point tracking (MPPT) controller based on SSA-GWO algorithm for partially shaded photovoltaic systems. Electronics 2019, 8, 680. [Google Scholar] [CrossRef]
  50. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  51. Bairathi, D.; Gopalani, D. Salp swarm algorithm (SSA) for training feed-forward neural networks. In Proceedings of the Soft Computing for Problem Solving: SocProS, Bhubaneswar, India, 23–24 December 2017; Springer: Singapore, 2019; Volume 1, pp. 521–534. [Google Scholar]
  52. Abualigah, L.; Shehab, M.; Alshinwan, M.; Alabool, H. Salp swarm algorithm: A comprehensive survey. Neural Comput. Appl. 2020, 32, 11195–11215. [Google Scholar] [CrossRef]
  53. Karaboga, D.; Akay, B. A comparative study of artificial bee colony algorithm. Appl. Math. Comput. 2009, 214, 108–132. [Google Scholar] [CrossRef]
  54. Abdullahi, M.; Ngadi, M.A.; Dishing, S.I.; Abdulhamid, S.I.M.; Usman, M.J. A survey of symbiotic organisms search algorithms and applications. Neural Comput. Appl. 2020, 32, 547–566. [Google Scholar] [CrossRef]
  55. Wang, Y.; DeAngelis, D.L. A mutualism-parasitism system modeling host and parasite with mutualism at low density. Math. Biosci. Eng. 2012, 9, 431–444. [Google Scholar]
  56. Aubier, T.G.; Joron, M.; Sherratt, T.N. Mimicry among unequally defended prey should be mutualistic when predators sample optimally. Am. Nat. 2017, 189, 267–282. [Google Scholar] [CrossRef]
  57. Addicott, J.F. Competition in mutualistic systems. In The biology of Mutualism: Ecology and Evolution; Croom Helm: London, UK, 1985; pp. 217–247. [Google Scholar]
  58. Bshary, R.; Hohner, A.; Ait-el-Djoudi, K.; Fricke, H. Interspecific communicative and coordinated hunting between groupers and giant moray eels in the Red Sea. Plos Biol. 2006, 4, e431. [Google Scholar] [CrossRef]
  59. Mohammadzadeh, A.; Mirjalili, S. Eel and Grouper Optimizer: A Nature-Inspired Optimization Algorithm; Springer Science+Business Media, LLC.: Berlin/Heidelberg, Germany, 2024. [Google Scholar]
  60. Gogu, A.; Nace, D.; Dilo, A.; Meratnia, N.; Ortiz, J.H. Review of optimization problems in wireless sensor networks. In Telecommunications Networks—Current Status and Future Trends; BoD: Norderstedt, Germany, 2012; pp. 153–180. [Google Scholar]
  61. Goudos, S.K.; Boursianis, A.D.; Mohamed, A.W.; Wan, S.; Sarigiannidis, P.; Karagiannidis, G.K.; Suganthan, P.N. Large Scale Global Optimization Algorithms for IoT Networks: A Comparative Study. In Proceedings of the 2021 17th International Conference on Distributed Computing in Sensor Systems (DCOSS), Pafos, Cyprus, 14–16 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 272–279. [Google Scholar]
  62. Arayapan, K.; Warunyuwong, P. Logistics Optimization: Application of Optimization Modeling in Inbound Logistics. Master’s Thesis, Malardalen University, Västerås, Sweden, 2009. [Google Scholar]
  63. Singh, S.P.; Dhiman, G.; Juneja, S.; Viriyasitavat, W.; Singal, G.; Kumar, N.; Johri, P. A New QoS Optimization in IoT-Smart Agriculture Using Rapid Adaption Based Nature-Inspired Approach. IEEE Internet Things J. 2023, 11, 5417–5426. [Google Scholar] [CrossRef]
  64. Wang, H.; Ersoy, O.K. A novel evolutionary global optimization algorithm and its application in bioinformatics. ECE Tech. Rep. 2005, 65. Available online: https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1065&context=ecetr (accessed on 7 September 2024).
  65. Cassioli, A.; Di Lorenzo, D.; Locatelli, M.; Schoen, F.; Sciandrone, M. Machine learning for global optimization. Comput. Optim. Appl. 2012, 51, 279–303. [Google Scholar] [CrossRef]
  66. Houssein, E.H.; Helmy, B.E.D.; Elngar, A.A.; Abdelminaam, D.S.; Shaban, H. An improved tunicate swarm algorithm for global optimization and image segmentation. IEEE Access 2021, 9, 56066–56092. [Google Scholar] [CrossRef]
  67. Torun, H.M.; Swaminathan, M. High-dimensional global optimization method for high-frequency electronic design. IEEE Trans. Microw. Theory Tech. 2019, 67, 2128–2142. [Google Scholar] [CrossRef]
  68. Wang, L.; Kan, J.; Guo, J.; Wang, C. 3D path planning for the ground robot with improved ant colony optimization. Sensors 2019, 19, 815. [Google Scholar] [CrossRef] [PubMed]
  69. Arora, P.; Varshney, S. Analysis of k-means and k-medoids algorithm for big data. Procedia Comput. Sci. 2016, 78, 507–512. [Google Scholar] [CrossRef]
  70. Ahmed, M.; Seraj, R.; Islam, S.M.S. The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 2020, 9, 1295. [Google Scholar] [CrossRef]
  71. MacQueen, J.B. Some Methods for classification and Analysis of Multivariate Observations. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Oakland, CA, USA, 21 June–18 July 1967; pp. 281–297. [Google Scholar]
  72. Maaranen, H.; Miettinen, K.; Mäkelä, M.M. Quasi-random initial population for genetic algorithms. Comput. Math. Appl. 2004, 47, 1885–1895. [Google Scholar] [CrossRef]
  73. Paul, P.V.; Dhavachelvan, P.; Baskaran, R. A novel population initialization technique for genetic algorithm. In Proceedings of the 2013 International Conference on Circuits, Power and Computing Technologies (ICCPCT), Nagercoil, India, 20–21 March 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1235–1238. [Google Scholar]
  74. Ali, M.; Pant, M.; Abraham, A. Unconventional initialization methods for differential evolution. Appl. Math. Comput. 2013, 219, 4474–4494. [Google Scholar] [CrossRef]
  75. Bajer, D.; Martinović, G.; Brest, J. A population initialization method for evolutionary algorithms based on clustering and Cauchy deviates. Expert Syst. Appl. 2016, 60, 294–310. [Google Scholar] [CrossRef]
  76. Kazimipour, B.; Li, X.; Qin, A.K. A review of population initialization techniques for evolutionary algorithms. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 2585–2592. [Google Scholar]
  77. Jain, B.J.; Pohlheim, H.; Wegener, J. On termination criteria of evolutionary algorithms. In Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation, San Francisco, CA, USA, 7–11 July 2001; p. 768. [Google Scholar]
  78. Zielinski, K.; Weitkemper, P.; Laur, R.; Kammeyer, K.D. Examination of stopping criteria for differential evolution based on a power allocation problem. In Proceedings of the 10th International Conference on Optimization of Electrical and Electronic Equipment, Brasov, Romania, 18–20 May 2006; Volume 3, pp. 149–156. [Google Scholar]
  79. Ghoreishi, S.N.; Clausen, A.; Jørgensen, B.N. Termination Criteria in Evolutionary Algorithms: A Survey. In Proceedings of the IJCCI, Funchal, Portugal, 1–3 November 2017; pp. 373–384. [Google Scholar]
  80. Ravber, M.; Liu, S.H.; Mernik, M.; Črepinšek, M. Maximum number of generations as a stopping criterion considered harmful. Appl. Soft Comput. 2022, 128, 109478. [Google Scholar] [CrossRef]
  81. Charilogis, V.; Tsoulos, I.G. Toward an ideal particle swarm optimizer for multidimensional functions. Information 2022, 13, 217. [Google Scholar] [CrossRef]
  82. Kyrou, G.; Charilogis, V.; Tsoulos, I.G. EOFA: An Extended Version of the Optimal Foraging Algorithm for Global Optimization Problems. Computation 2024, 12, 158. [Google Scholar] [CrossRef]
  83. Charilogis, V.; Tsoulos, I.G.; Stavrou, V.N. An Intelligent Technique for Initial Distribution of Genetic Algorithms. Axioms 2023, 12, 980. [Google Scholar] [CrossRef]
  84. Charilogis, V.; Tsoulos, I.G.; Tzallas, A. An improved parallel particle swarm optimization. SN Comput. Sci. 2023, 4, 766. [Google Scholar] [CrossRef]
  85. Kyrou, G.; Charilogis, V.; Tsoulos, I.G. Improving the Giant-Armadillo Optimization Method. Analytics 2024, 3, 225–240. [Google Scholar] [CrossRef]
  86. Li, Y.; Wu, H. A clustering method based on K-means algorithm. Phys. Procedia 2012, 25, 1104–1109. [Google Scholar] [CrossRef]
  87. Ali, H.H.; Kadhum, L.E. K-means clustering algorithm applications in data mining and pattern recognition. Int. J. Sci. Res. (IJSR) 2017, 6, 1577–1584. [Google Scholar]
  88. Krishna, K.; Murty, M.N. Genetic K-means algorithm. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 1999, 29, 433–439. [Google Scholar] [CrossRef]
  89. Sinaga, K.P.; Yang, M.S. Unsupervised K-means clustering algorithm. IEEE Access 2020, 8, 80716–80727. [Google Scholar] [CrossRef]
  90. Ay, M.; Özbakır, L.; Kulluk, S.; Gülmez, B.; Öztürk, G.; Özer, S. FC-Kmeans: Fixed-centered K-means algorithm. Expert Syst. Appl. 2023, 211, 118656. [Google Scholar] [CrossRef]
  91. Oti, E.U.; Olusola, M.O.; Eze, F.C.; Enogwe, S.U. Comprehensive review of K-Means clustering algorithms. Criterion 2021, 12, 22–23. [Google Scholar] [CrossRef]
  92. Ali, M.M.; Khompatraporn, C.; Zabinsky, Z.B. A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems. J. Glob. Optim. 2005, 31, 635–672. [Google Scholar] [CrossRef]
  93. Floudas, C.A.; Pardalos, P.M.; Adjiman, C.; Esposito, W.R.; Gümüs, Z.H.; Harding, S.T.; Klepeis, J.L.; Meyer, C.A.; Schweiger, C.A. Handbook of Test Problems in Local and Global Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 33. [Google Scholar]
  94. Ali, M.M.; Kaelo, P. Improved particle swarm algorithms for global optimization. Appl. Math. Comput. 2008, 196, 578–593. [Google Scholar] [CrossRef]
  95. Koyuncu, H.; Ceylan, R. A PSO based approach: Scout particle swarm algorithm for continuous global optimization problems. J. Comput. Des. Eng. 2019, 6, 129–142. [Google Scholar] [CrossRef]
  96. Siarry, P.; Berthiau, G.; Durdin, F.; Haussy, J. Enhanced simulated annealing for globally minimizing functions of many-continuous variables. ACM Trans. Math. Softw. (TOMS) 1997, 23, 209–228. [Google Scholar] [CrossRef]
  97. Tsoulos, I.G.; Lagaris, I.E. GenMin: An enhanced genetic algorithm for global optimization. Comput. Phys. Commun. 2008, 178, 843–851. [Google Scholar] [CrossRef]
  98. LaTorre, A.; Molina, D.; Osaba, E.; Poyatos, J.; Del Ser, J.; Herrera, F. A prescription of methodological guidelines for comparing bio-inspired optimization algorithms. Swarm Evol. Comput. 2021, 67, 100973. [Google Scholar] [CrossRef]
  99. Li, X.; Engelbrecht, A.; Epitropakis, M.G. Benchmark Functions for CEC’2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization; Technical Report; RMIT University, Evolutionary Computation and Machine Learning Group: Melbourne, Australia, 2013. [Google Scholar]
  100. Gaviano, M.; Kvasov, D.E.; Lera, D.; Sergeyev, Y.D. Algorithm 829: Software for generation of classes of test functions with known local and global minima for global optimization. ACM Trans. Math. Softw. (TOMS) 2003, 29, 469–480. [Google Scholar] [CrossRef]
  101. Jones, J.E. On the Determination of Molecular Fields.—II. From the Equation of State of a Gas; Series A, Containing Papers of a Mathematical and Physical Character; Royal Society: London, UK, 1924; Volume 106, pp. 463–477. [Google Scholar]
  102. Powell, M.J.D. A Tolerant Algorithm for Linearly Constrained Optimization Calculations. Math. Program 1989, 45, 547–566. [Google Scholar] [CrossRef]
  103. Tsoulos, I.G. Modifications of real code genetic algorithm for global optimization. Appl. Math. Comput. 2008, 203, 598–607. [Google Scholar] [CrossRef]
  104. Stein, W.E.; Keblis, M.F. A new method to simulate the triangular distribution. Math. Comput. Model. 2009, 49, 1143–1147. [Google Scholar] [CrossRef]
  105. Sharma, V.K.; Bakouch, H.S.; Suthar, K. An extended Maxwell distribution: Properties and applications. Commun. Stat. Simul. Comput. 2017, 46, 6982–7007. [Google Scholar] [CrossRef]
  106. Sengupta, R.; Pal, M.; Saha, S.; Bandyopadhyay, S. Uniform distribution driven adaptive differential evolution. Appl. Intell. 2020, 50, 3638–3659. [Google Scholar] [CrossRef]
  107. Glickman, T.S.; Xu, F. Practical risk assessment with triangular distributions. Int. J. Risk Assess. Manag. 2009, 13, 313–327. [Google Scholar] [CrossRef]
  108. Ishaq, A.I.; Abiodun, A.A. The Maxwell–Weibull distribution in modeling lifetime datasets. Ann. Data Sci. 2020, 7, 639–662. [Google Scholar] [CrossRef]
  109. Beretta, L.; Cohen-Addad, V.; Lattanzi, S.; Parotsidis, N. Multi-swap k-means++. Adv. Neural Inf. Process. Syst. 2023, 36, 26069–26091. [Google Scholar]
  110. Gropp, W.; Lusk, E.; Doss, N.; Skjellum, A. A high-performance, portable implementation of the MPI message passing interface standard. Parallel Comput. 1996, 22, 789–828. [Google Scholar] [CrossRef]
  111. Chandra, R. Parallel Programming in OpenMP; Academic Press: Cambridge, MA, USA, 2001. [Google Scholar]
Figure 1. Flowchart of the suggested global optimization procedure.
Figure 2. The flowchart of the K-means procedure.
Figure 3. Total function calls for the considered optimization methods. The numbers represent the sum of function calls for each method.
Figure 4. Box plot comparing the original EGO method with the modified version proposed in the current work.
Figure 5. Comparison of average function calls for the incorporated optimization methods, using the proposed initial distribution.
Figure 6. Statistical comparison of all the methods used.
Figure 7. Scatter plot for the different initial distributions.
Figure 8. Time comparison for the ELP function and the proposed optimization method using the four sampling techniques mentioned before. The time depicted is the sum of the execution times for 30 independent runs.
Table 1. The dimension for every function used in the experiments.
Function | Dimension
Ackley | n = 2
Bf1 | n = 2
Bf2 | n = 2
Bf3 | n = 2
Branin | n = 2
Camel | n = 2
Easom | n = 2
EQUAL_MAXIMA | n = 1
EXP | n = 4, 8, 16, 32
EXTENDEDF10 | n = 2
FIVE_UNEVEN | n = 1
F9 | n = 2
F14 | n = 2
F15 | n = 4
F17 | n = 2
HIMMELBLAU | n = 2
GKLS | n = 2, 3
GRIEWANK2 | n = 2
GRIEWANK10 | n = 10
HANSEN | n = 2
HARTMAN3 | n = 3
HARTMAN6 | n = 6
POTENTIAL | n = 9, 15
RASTRIGIN | n = 2
ROSENBROCK | n = 4, 8, 16
SHEKEL5 | n = 4
SHEKEL7 | n = 4
SHEKEL10 | n = 4
SHUBERT | n = 2, 4, 8
SCHWEFEL221 | n = 2
SCHWEFEL222 | n = 2
SPHERE | n = 2
TEST2N | n = 4, 5, 6, 7
SINU | n = 4, 8, 16
TEST30N | n = 3, 4
UNEVEN_MAXIMA | n = 1
VINCENT | n = 2, 4, 8
Table 2. The values used for every parameter of the used algorithms.
Parameter | Meaning | Value
N_c | Number of chromosomes/particles | 200
N_g | Maximum number of allowed iterations | 200
N_m | Number of samples for the K-means | 10 × N_c
N_k | Number of maximum iterations for the stopping rule | 5
p_s | Selection rate of the genetic algorithm | 0.1
p_m | Mutation rate of the genetic algorithm | 0.05
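To make the sampling settings of Table 2 concrete, the short listing below gives a minimal, illustrative sketch of K-means-based initial sampling: N_m = 10 × N_c candidate points are drawn uniformly inside the bounds S, grouped into N_c clusters, and the resulting cluster centers are used as the initial population. The listing is not the authors' implementation; all function and variable names are hypothetical, and it assumes NumPy and scikit-learn for the clustering step.

import numpy as np
from sklearn.cluster import KMeans

def kmeans_initial_population(lower, upper, n_members=200, oversample=10, seed=0):
    # Draw N_m = oversample * n_members uniform candidates inside the box [lower, upper]
    # and keep the n_members K-means cluster centers as the initial population.
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    candidates = rng.uniform(lower, upper, size=(oversample * n_members, lower.size))
    km = KMeans(n_clusters=n_members, n_init=10, random_state=seed).fit(candidates)
    # Clip the centers so that rounding effects cannot push them outside the bounds.
    return np.clip(km.cluster_centers_, lower, upper)

# Example: an initial population of 200 members for the two-dimensional box [-5.12, 5.12]^2.
population = kmeans_initial_population([-5.12, -5.12], [5.12, 5.12])
print(population.shape)  # (200, 2)

Using the cluster centers instead of the raw uniform sample spreads the initial members more evenly over S, which is consistent with the lower function-call counts reported for the K-means column of Table 4.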
Table 3. Experimental results using the incorporated optimization methods. The numbers in parentheses denote the standard deviation for the number of function calls.
Function | Genetic | PSO | DE | EGO | EEGO
ACKLEY | 6749 (869) | 6885 (1108) | 10,220 (1342) | 8714 (1009) | 4199 (768)
BF1 | 4007 (308) | 4142 (397) | 8268 (299) | 4762 (379) | 3228 (356)
BF2 | 3794 (350) | 3752 (302) | 7913 (320) | 4299 (325) | 2815 (310)
BF3 | 3480 (266) | 3306 (261) | 10,270 (294) | 3747 (303) | 2501 (218)
BRANIN | 2376 (136) | 2548 (142) | 4101 (559) | 2659 (150) | 1684 (180)
CAMEL | 2869 (195) | 2933 (180) | 5609 (530) | 3317 (229) | 2262 (295)
EASOM | 1958 (67) | 1982 (92) | 2978 (344) | 2235 (196) | 1334 (133)
EQUAL_MAXIMA | 2651 (131) | 1499 (176) | 2374 (117) | 2013 (157) | 1286 (121)
EXP4 | 2946 (196) | 3404 (201) | 5166 (430) | 3392 (313) | 2166 (243)
EXP8 | 3120 (187) | 3585 (212) | 5895 (615) | 3347 (288) | 2802 (304)
EXP16 | 3250 (234) | 3735 (206) | 6498 (784) | 3345 (233) | 3279 (416)
EXP32 | 3561 (233) | 3902 (233) | 7606 (475) | 3332 (250) | 3430 (401)
EXTENDEDF10 | 4862 (959) | 3653 (329) | 5728 (976) (0.87) | 4737 (428) | 2609 (321)
FIVE_UNEVEN | 3412 (422) (0.67) | 3913 (378) (0.87) | 4042 (391) (0.14) | 5006 (519) (0.97) | 3849 (388) (0.90)
F9 | 2604 (197) | 1888 (306) | 2271 (364) | 2748 (333) | 1439 (229)
F14 | 6686 (1503) | 5498 (550) | 5279 (1988) (0.63) | 9228 (3725) (0.94) | 6063 (1835)
F15 | 4373 (624) | 6696 (856) | 5874 (1880) (0.80) | 7342 (1441) | 4397 (1031)
F17 | 3667 (267) | 3805 (333) | 10,441 (1435) | 4057 (339) | 2766 (368)
HIMMELBLAU | 2481 (27) | 1013 (34) | 6636 (282) | 1718 (84) | 1119 (81)
GKLS250 | 2280 (184) | 2411 (158) | 3834 (416) | 3332 (203) | 1603 (150)
GKLS350 | 2613 (269) | 2234 (225) | 3919 (469) | 2493 (233) | 1298 (178)
GOLDSTEIN | 3687 (278) | 3865 (356) | 6781 (483) | 4015 (323) | 2784 (290)
GRIEWANK2 | 4501 (918) | 3076 (218) (0.73) | 7429 (1472) | 4682 (586) | 2589 (532) (0.96)
GRIEWANK10 | 6410 (1264) (0.97) | 8006 (732) | 18,490 (1716) | 8772 (1138) | 7435 (900)
HANSEN | 3210 (458) | 2856 (208) | 4185 (627) | 3789 (864) | 2484 (417)
HARTMAN3 | 2752 (188) | 3140 (215) | 5190 (294) | 3078 (267) | 1793 (231)
HARTMAN6 | 3219 (217) | 3710 (235) | 5968 (548) | 3583 (309) | 2478 (282)
POTENTIAL3 | 4352 (461) | 4865 (468) | 6118 (1047) | 6027 (793) | 4081 (391)
POTENTIAL5 | 7705 (892) | 9183 (1180) | 9119 (570) | 9968 (1171) | 8886 (1146)
RASTRIGIN | 4107 (729) | 3477 (348) | 6216 (428) | 4201 (401) | 2304 (302)
ROSENBROCK4 | 3679 (393) | 6372 (549) | 8452 (643) | 6137 (720) | 4019 (324)
ROSENBROCK8 | 5270 (514) | 8284 (849) | 11,530 (1632) | 8569 (872) | 6801 (633)
ROSENBROCK16 | 8509 (1026) | 11,872 (1018) | 17,432 (1738) | 11,777 (1268) | 11,996 (1331)
SHEKEL5 | 3325 (227) | 4259 (305) | 6662 (974) | 3948 (340) | 2495 (310)
SHEKEL7 | 3360 (283) | 4241 (300) | 6967 (1035) | 4043 (379) | 2432 (240)
SHEKEL10 | 3488 (240) | 4237 (268) | 6757 (897) | 3932 (355) | 2516 (326)
SHUBERT2 | 3567 (413) | 2123 (188) | 3526 (885) | 3622 (446) | 2300 (527)
SHUBERT4 | 3358 (380) | 1823 (166) | 3067 (699) | 3593 (674) | 1967 (323) (0.97)
SHUBERT8 | 3569 (357) | 2348 (203) | 3120 (452) (0.94) | 2862 (383) | 2267 (296)
SCHAFFER | 18,787 (3105) | 15,176 (2401) | 6315 (1548) | 28,679 (6267) | 23,531 (5904)
SCHWEFEL221 | 2667 (416) | 2529 (163) | 5415 (1161) | 3426 (787) | 2203 (478)
SCHWEFEL222 | 33,725 (4809) | 42,898 (6978) | 12,200 (1737) | 51,654 (7150) | 38,876 (5659)
SPHERE | 1588 (23) | 1521 (12) | 3503 (199) | 1642 (38) | 1162 (84)
TEST2N4 | 3331 (392) | 3437 (223) | 6396 (999) | 3695 (411) | 2277 (464)
TEST2N5 | 4000 (815) | 3683 (287) | 6271 (1017) | 4234 (636) | 2734 (479) (0.96)
TEST2N6 | 4312 (901) (0.93) | 3781 (241) | 5410 (822) (0.93) | 4599 (589) | 2905 (832) (0.86)
TEST2N7 | 4775 (905) (0.90) | 4060 (312) | 7074 (1523) (0.97) | 5146 (767) | 3559 (763) (0.73)
SINU4 | 2991 (298) | 3504 (231) | 5953 (1509) | 3478 (463) | 2005 (332)
SINU8 | 3442 (293) | 4213 (309) | 6973 (1637) | 4420 (560) | 3158 (467)
SINU16 | 4320 (458) | 5019 (312) | 6979 (634) | 7033 (1017) | 5891 (553)
TEST30N3 | 3211 (1207) | 4610 (1615) | 6168 (1297) | 3971 (1348) | 2362 (884)
TEST30N4 | 3679 (1435) | 4629 (1708) | 7006 (2745) | 4908 (1333) | 2978 (1719)
UNEVEN_MAXIMA | 2969 (283) | 2729 (292) | 2393 (319) | 2972 (288) | 1560 (208)
VINCENT2 | 12,779 (563) | 1797 (140) (0.87) | 6216 (1152) | 2094 (148) | 1834 (388) (0.90)
VINCENT4 | 19,385 (1179) | 1830 (190) (0.67) | 4691 (1019) (0.83) | 2674 (244) | 2697 (1443) (0.64)
VINCENT8 | 19,882 (2548) | 2717 (256) | 4417 (1118) (0.77) | 3423 (358) | 4368 (2253) (0.77)
SUM | 297,650 | 268,654 | 288,413 | 320,469 | 231,856
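Summing the last row of Table 3, the modified algorithm (EEGO column) requires 231,856 function calls in total, compared with 320,469 for the original EGO, a reduction of roughly 28%; the total is also about 14% lower than that of PSO (268,654 calls), the strongest of the remaining methods.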
Table 4. Experimental results for the proposed method using a series of sampling techniques. The numbers in parentheses denote the standard deviation for the number of function calls.
Function | Uniform | Triangular | Maxwell | K-means
ACKLEY | 6118 (736) | 5912 (741) | 5986 (845) | 4199 (768)
BF1 | 4513 (423) | 4318 (368) | 4055 (346) | 3228 (356)
BF2 | 3959 (333) | 3879 (363) | 3587 (339) | 2815 (310)
BF3 | 3506 (286) | 3344 (254) | 3129 (298) | 2501 (218)
BRANIN | 2282 (162) | 2131 (174) | 2066 (167) | 1684 (180)
CAMEL | 3156 (251) | 2919 (244) | 2848 (261) | 2262 (295)
EASOM | 1756 (138) | 1650 (140) | 1321 (106) | 1334 (133)
EQUAL_MAXIMA | 1445 (130) | 1293 (115) | 1165 (139) | 1286 (121)
EXP4 | 3438 (335) | 3273 (232) | 3194 (330) | 2166 (243)
EXP8 | 3432 (284) | 3387 (332) | 3152 (266) | 2802 (304)
EXP16 | 3369 (349) | 3326 (329) | 3291 (224) | 3279 (416)
EXP32 | 3216 (274) | 3225 (207) | 3344 (374) | 3430 (401)
EXTENDEDF10 | 4304 (683) | 3913 (596) | 3718 (673) | 2609 (321)
FIVE_UNEVEN | 4685 (427) | 4330 (474) | 4358 (544) | 3849 (388)
F9 | 1958 (168) | 1681 (188) | 1878 (186) | 1439 (229)
F14 | 7552 (1377) | 7573 (2049) | 6148 (724) | 6063 (1835)
F15 | 6806 (1065) | 6466 (722) | 6543 (974) | 4397 (1031)
F17 | 3805 (326) | 3700 (404) | 3543 (236) | 2766 (368)
HIMMELBLAU | 1333 (91) | 1173 (72) | 1114 (102) | 1119 (81)
GKLS250 | 2268 (177) | 2023 (170) | 1778 (208) | 1603 (150)
GKLS350 | 2151 (291) | 1841 (190) | 2069 (893) | 1298 (178)
GOLDSTEIN | 3855 (291) | 3731 (335) | 3530 (309) | 2784 (290)
GRIEWANK2 | 4310 (1205) | 4510 (1310) | 4035 (995) | 2589 (532)
GRIEWANK10 | 8640 (1106) | 8773 (1265) | 8232 (911) | 7435 (900)
HANSEN | 3329 (653) | 3071 (466) | 2734 (521) | 2484 (417)
HARTMAN3 | 2849 (240) | 2673 (246) | 2678 (317) | 1793 (231)
HARTMAN6 | 3456 (344) | 3249 (330) | 3119 (299) | 2478 (282)
POTENTIAL3 | 4554 (511) | 5095 (469) | 3928 (509) | 4081 (391)
POTENTIAL5 | 8356 (702) | 10,032 (1078) | 7504 (996) | 8886 (1146)
RASTRIGIN | 3310 (414) | 3187 (336) | 2751 (600) | 2304 (302)
ROSENBROCK4 | 6566 (719) | 6353 (742) | 5588 (599) | 4019 (324)
ROSENBROCK8 | 8379 (864) | 8717 (882) | 7783 (782) | 6801 (633)
ROSENBROCK16 | 11,921 (1389) | 12,471 (1025) | 11,677 (1212) | 11,996 (1331)
SHEKEL5 | 3946 (333) | 3731 (398) | 3859 (460) | 2495 (310)
SHEKEL7 | 3990 (361) | 3646 (442) | 3944 (326) | 2432 (240)
SHEKEL10 | 3836 (316) | 3630 (385) | 3694 (379) | 2516 (326)
SHUBERT2 | 3288 (562) | 3212 (728) | 2631 (373) | 2300 (527)
SHUBERT4 | 3116 (548) | 2919 (514) | 2499 (422) | 1967 (323)
SHUBERT8 | 2815 (500) | 2810 (513) | 2337 (296) | 2267 (296)
SCHAFFER | 55,131 (10,062) | 40,715 (9866) | 60,717 (9950) | 23,531 (5904)
SCHWEFEL221 | 2724 (406) | 2901 (547) | 2799 (505) | 2203 (478)
SCHWEFEL222 | 53,118 (7011) | 54,593 (8763) | 55,354 (7330) | 38,876 (5659)
SPHERE | 1346 (68) | 1188 (72) | 1084 (82) | 1162 (84)
TEST2N4 | 3345 (416) | 3233 (401) | 2867 (253) | 2277 (464)
TEST2N5 | 3937 (757) | 3742 (584) | 3094 (278) | 2734 (479)
TEST2N6 | 4008 (718) | 4473 (1060) | 3266 (297) | 2905 (832)
TEST2N7 | 4545 (1169) | 4612 (1046) | 3549 (421) | 3559 (763)
SINU4 | 3128 (410) | 2879 (240) | 3559 (750) | 2005 (332)
SINU8 | 4126 (339) | 3767 (410) | 5637 (964) | 3158 (467)
SINU16 | 6774 (1166) | 5977 (601) | 7739 (1791) | 5891 (553)
TEST30N3 | 3704 (1289) | 3384 (1083) | 3175 (1012) | 2362 (884)
TEST30N4 | 4262 (1805) | 4327 (1838) | 3491 (920) | 2978 (1719)
UNEVEN_MAXIMA | 1877 (236) | 1810 (239) | 1666 (189) | 1560 (208)
VINCENT2 | 1598 (119) | 1482 (133) | 1849 (121) | 1834 (388)
VINCENT4 | 2471 (229) | 2282 (167) | 2296 (206) | 2697 (1443)
VINCENT8 | 3074 (524) | 2797 (201) | 2883 (423) | 4368 (2253)
SUM | 324,736 | 307,329 | 315,835 | 231,856
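Read together with the last row, the proposed K-means sampling lowers the total to 231,856 function calls, against 324,736 for uniform, 307,329 for triangular, and 315,835 for Maxwell sampling, i.e., a reduction of roughly 25–29% relative to the alternative distributions.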