Article

EOFA: An Extended Version of the Optimal Foraging Algorithm for Global Optimization Problems

by Glykeria Kyrou, Vasileios Charilogis and Ioannis G. Tsoulos *
Department of Informatics and Telecommunications, University of Ioannina, 47150 Kostaki Artas, Greece
* Author to whom correspondence should be addressed.
Computation 2024, 12(8), 158; https://doi.org/10.3390/computation12080158
Submission received: 4 July 2024 / Revised: 1 August 2024 / Accepted: 2 August 2024 / Published: 5 August 2024

Abstract

The problem of finding the global minimum of a function is applicable to a multitude of real-world problems and, hence, a variety of computational techniques have been developed to efficiently locate it. Among these techniques, evolutionary techniques, which seek, through the imitation of natural processes, to efficiently obtain the global minimum of multidimensional functions, play a central role. An evolutionary technique that has recently been introduced is the Optimal Foraging Algorithm, which is a swarm-based algorithm, and it is notable for its reliability in locating the global minimum. In this work, a series of modifications are proposed that aim to improve the reliability and speed of the above technique, such as a termination technique based on stochastic observations, an innovative sampling method and a technique to improve the generation of offspring. The new method was tested on a series of problems from the relevant literature and a comparative study was conducted against other global optimization techniques with promising results.

1. Introduction

The basic goal of global optimization is to locate the global minimum of a function within the feasible domain of the problem at hand. Formally, a global optimization method aims to discover the global minimum of a continuous function f, defined as
$$x^* = \arg\min_{x \in S} f(x)$$
with S defined as
$$S = \left[a_1, b_1\right] \times \left[a_2, b_2\right] \times \cdots \times \left[a_n, b_n\right]$$
Global optimization is a core part of applied mathematics [1] and computer science [2]. Additionally, it finds application in various fields such as physics [3,4,5], chemistry [6,7,8], economics [9,10], medicine [11,12], etc. Optimization methods are divided into deterministic and stochastic [13]. Deterministic methods guarantee finding the globally optimal solution and are usually applied to small problems [14]. The most commonly used techniques in the first category are the so-called interval techniques [15,16], which divide the initial domain of the function until they discover a promising subset that may contain the global minimum. On the other hand, stochastic methods do not guarantee finding the global minimum and are mainly applied to large problems, but they are inefficient when nonlinear equality constraints are involved. Stochastic methods include a variety of techniques, such as Controlled Random Search methods [17,18,19], Simulated Annealing methods [20,21,22], methods based on the Multistart technique [23,24], etc. Furthermore, in the recent literature, a variety of hybrid methods have appeared that combine different optimization techniques [25,26]. In addition, due to the widespread usage of parallel programming architectures, a variety of methods have been proposed that take advantage of such systems [27,28,29,30].
The above techniques present various issues that researchers have had to address, such as the initialization of the candidate solutions, the termination rules, etc. A number of researchers have dealt with the topic of the initialization of candidate solutions and have proposed techniques such as the repulsion method [31], an adaptive sampling method using the Kriging model [32], Monte Carlo-based sampling methods [33], a method that utilizes a Radial Basis Function (RBF) network [34] to obtain samples [35], etc. Additionally, in the direction of termination methods, a series of techniques have been proposed, such as sequential stopping rules [36], a stopping rule based on asymptotic considerations [37], a stopping rule that utilizes the likelihood of obtaining the specific set of solutions [38], etc.
An important group of stochastic techniques that has attracted the interest of many researchers is the group of evolutionary techniques. The branch of evolutionary computing includes the study of the foundations and applications of computational techniques based on the principles of natural evolution. Evolution in nature is responsible for the “design” of all living things on earth and the strategies they use to interact with each other. This branch of methods includes Differential Evolution methods [39,40], Particle Swarm Optimization (PSO) methods [41,42,43], Ant Colony Optimization methods [44,45], genetic algorithms [46,47], etc. Zhu and Zhang proposed a novel method in this area that tackles optimization problems with a strategy based on animal foraging behavior [48]. This Optimal Foraging Algorithm (OFA) is a swarm-based algorithm motivated by animal behavioral ecology. Furthermore, Fu et al. proposed a variation of the OFA algorithm with the incorporation of the Differential Evolution technique [49]. Additionally, Jian and Zhu proposed a modification of the OFA algorithm with direction prediction [50] in order to enhance the performance of the original algorithm. Recently, Ding and Zhu introduced an improved optimal foraging algorithm based on quasi-opposition-based learning (QOS-OFA) to handle optimization problems [51].
The current work introduces a series of modifications to the OFA algorithm in order to speed up the process and to increase the effectiveness of the algorithm. These modifications include the following:
  • The incorporation of a sampling technique that utilizes the K-means method [52]. The sampled points lead to the global minimum of the function more accurately and more quickly, while additional points that would have been kept close to them by another sampling technique are discarded.
  • The use of a termination technique based on stochastic observations. At each iteration of the algorithm, the smallest function value is recorded; if it remains constant for a predetermined number of iterations, the method terminates. In this way, the method stops in time, without needlessly wasting computing time on iterations that would simply reproduce the same result.
  • The improvement of the offspring produced by the method using a few steps of a local minimization method. In the current work, a BFGS variant [53] was used as a local search procedure.
The rest of this paper is organized as follows: in Section 2, the proposed method is fully described; in Section 3, the benchmark functions and the experimental results are presented; and finally, in Section 4, some conclusions and guidelines for future work are discussed.

2. The Proposed Method

The OFA algorithm uses nature as a source of inspiration; its main purpose is the evolution of a population through a series of iterative steps, which are presented in more detail below. The quality of the initial population can be improved, allowing the search space to be explored more efficiently. The first major modification introduced to the above technique is the use of a more sophisticated technique for sampling points of the objective function. This sampling is based on the K-means technique and is presented in detail in Section 2.2. The second modification is to apply a few steps of a local search procedure during offspring creation whenever an offspring is not consistent with the bounds $[a, b]$ of the objective function $f(x)$. With this modification, it is ensured that a new point will lie within the domain of the function and, at the same time, will have a slightly lower function value than its predecessor. The third modification is the use of an efficient termination rule that stops the global optimization method in time. This termination rule is outlined in Section 2.3.

2.1. The Main Steps of the Algorithm

The main steps of the proposed global optimization method are the following:
  • Initialization step.
    (a)
    Define as N c the number of elements in the population.
    (b)
    Define as N g the maximum number of allowed iterations.
    (c)
    Initialize randomly the members of the population in set S. This forms the initial population denoted as P. The sampling is performed using the procedure in Section 2.2.
    (d)
    Create the QOP population, which stands for the quasi-opposite population of P, by calculating the quasi-opposite positions of P.
    (e)
    Set  t = 0 , the iteration number.
    (f)
    Select the $N_c$ fittest solutions from the combined set $P \cup \mathrm{QOP}$ using the QOBL method proposed in [51].
  • Calculation step.
    (a)
    Produce the new offspring solutions $x^{t+1}$ based on the ranking order using the following equation:
    $$x_j^{t+1} = \begin{cases} x_j^t + K(t)\left(r_1 - r_2\right)\left(x_j^t - x_b^t\right) + \left(1 - K(t)\right)\left(x_1^t - x_r^t\right), & j = 2, \dots, N_c \\ x_1^t + K(t)\left(r_1 - r_2\right)\left(x_1^t - x_{N_c}^t\right), & j = 1 \end{cases}$$
    where $r_1, r_2$ are random numbers in $[0, 1]$ and $x_r^t$ is a randomly selected solution from the population. The function $K(t)$ is defined as
    $$K(t) = \cos\left(\frac{\pi t}{2 N_g}\right)$$
    (b)
    For $j = 1, \dots, N_c$, do
    i.
    If $x^{t+1} \notin S$, then set $x^{t+1} = \mathrm{LS}\left(x^t, B_i\right)$. This step applies a limited number of steps of a local search procedure $\mathrm{LS}(x)$ to the current point. The parameter $B_i$ denotes the number of steps for the local search procedure. In the current work, the BFGS local optimization procedure is used, but other local search methods could be employed, such as Gradient Descent [54], Steepest Descent [55], etc.
    ii.
    If $\frac{\lambda_j^{t+1} f\left(x^{t+1}\right)}{1 + (t+1)\,\lambda_j^{t+1}} < \frac{f\left(x^t\right)}{t}$, then select $x^{t+1}$ for the next population; otherwise, keep $x^t$. The values $\lambda_j^{t+1}$ are uniformly distributed random numbers in $[0, 1]$.
    (c)
    End For.
    (d)
    Sort all solutions in the current population from the best to worst according to the function value.
  • Termination check step.
    (a)
    Set  t = t + 1 .
    (b)
    Apply the stopping rule proposed in the work of Charilogis [56]. This termination method is described in Section 2.3.
    (c)
    If the termination criteria are not met then go to Calculation step, else terminate.
The steps of the proposed method are also graphically shown in Figure 1.
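To make the calculation step above more concrete, the following is a minimal C++ sketch of the offspring update and the weighting factor $K(t)$. It is an illustrative reimplementation under the equations of Section 2.1, not code taken from the OPTIMUS package; the names produceOffspring and rand01, the fixed random seed, and the treatment of $x_b^t$ and $x_r^t$ as caller-supplied points are assumptions made for the example.

```cpp
#include <cmath>
#include <random>
#include <vector>

typedef std::vector<double> Point;

// Uniform random helper in [0,1] (illustrative only, fixed seed for reproducibility).
static std::mt19937 rng(12345);
static double rand01() {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng);
}

// Weighting factor K(t) = cos(pi * t / (2 * Ng)).
double K(int t, int Ng) {
    const double PI = std::acos(-1.0);
    return std::cos(PI * t / (2.0 * Ng));
}

// Produce offspring j (0-based, so j == 0 corresponds to j = 1 in the paper) at
// iteration t, following the update equation of Section 2.1. The population is
// assumed to be sorted from best (index 0) to worst; xb stands for the reference
// point x_b^t and xr for a randomly selected member of the population.
Point produceOffspring(const std::vector<Point> &pop, int j, int t, int Ng,
                       const Point &xb, const Point &xr)
{
    int Nc = (int)pop.size();
    int n  = (int)pop[0].size();
    double r1 = rand01(), r2 = rand01(), k = K(t, Ng);
    Point child(n);
    for (int d = 0; d < n; d++) {
        if (j == 0)
            child[d] = pop[0][d] + k * (r1 - r2) * (pop[0][d] - pop[Nc - 1][d]);
        else
            child[d] = pop[j][d] + k * (r1 - r2) * (pop[j][d] - xb[d])
                     + (1.0 - k) * (pop[0][d] - xr[d]);
    }
    return child;
}
```

A candidate produced in this way is then either repaired by a few local search steps, if it falls outside $S$, or compared against its parent through the acceptance test of the calculation step.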

2.2. The Proposed Sampling Procedure

The proposed sampling technique initially produces a series of samples from the objective function and, subsequently, through the application of the K-means technique, only the located centers are retained as the final samples. K-means was introduced by James MacQueen [57]; it is a clustering algorithm that has been used in data analysis and machine learning in a variety of research papers [58,59]. The algorithm seeks to estimate the centers of possible groups in a set of samples, and its main steps are listed below; a short code sketch follows the list.
  • Define as k the number of clusters.
  • Draw randomly $N_m$ initial points $x_i, \; i = 1, \dots, N_m$ from the domain of the objective function.
  • Assign randomly each point $x_i, \; i = 1, \dots, N_m$ to a cluster $S_j, \; j = 1, \dots, k$.
  • For every cluster $j = 1, \dots, k$ do
    (a)
    Set as $M_j$ the number of points in $S_j$.
    (b)
    Compute the center $c_j$ of the cluster as
    $$c_j = \frac{1}{M_j} \sum_{x_i \in S_j} x_i$$
  • EndFor.
  • Repeat
    (a)
    Set $S_j = \left\{\right\}, \; j = 1, \dots, k$.
    (b)
    For each point $x_i, \; i = 1, \dots, N_m$ do
    i.
    Set $j^* = \arg\min_{m=1,\dots,k} D\left(x_i, c_m\right)$, where $D(x, y)$ denotes the Euclidean distance between the points $x$ and $y$.
    ii.
    Set $S_{j^*} = S_{j^*} \cup \left\{ x_i \right\}$.
    (c)
    EndFor.
    (d)
    For each center $c_j, \; j = 1, \dots, k$ do
    i.
    Update the center $c_j$ as
    $$c_j = \frac{1}{M_j} \sum_{x_i \in S_j} x_i$$
    (e)
    EndFor.
  • If there is no significant change in the centers $c_j$, terminate the algorithm and return the $k$ centers as the final set of samples.
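The sketch below is a compact, stand-alone C++ version of this sampling procedure; it is illustrative only (the name kmeansSample, the bound vectors a and b, and the initialization of the centers from the first k samples instead of the random assignment of the pseudocode are assumptions), and it simply returns the k cluster centers as the initial samples.

```cpp
#include <cfloat>
#include <cstdlib>
#include <vector>

typedef std::vector<double> Point;

// Squared Euclidean distance between two points.
static double dist2(const Point &a, const Point &b) {
    double s = 0.0;
    for (size_t i = 0; i < a.size(); i++) s += (a[i] - b[i]) * (a[i] - b[i]);
    return s;
}

// Draw Nm random points in S = [a1,b1] x ... x [an,bn], run K-means and
// return the k centers as the final set of samples (Section 2.2).
std::vector<Point> kmeansSample(int k, int Nm, const Point &a, const Point &b,
                                int maxIters = 100)
{
    int n = (int)a.size();
    std::vector<Point> x(Nm, Point(n));
    for (int i = 0; i < Nm; i++)                 // random initial points in S
        for (int d = 0; d < n; d++)
            x[i][d] = a[d] + (b[d] - a[d]) * (rand() / (double)RAND_MAX);

    std::vector<Point> c(k);                     // centers, seeded from the first k samples
    for (int j = 0; j < k; j++) c[j] = x[j];

    std::vector<int> assign(Nm, 0);
    for (int iter = 0; iter < maxIters; iter++) {
        bool changed = false;
        for (int i = 0; i < Nm; i++) {           // assignment step: nearest center
            int best = 0; double bd = DBL_MAX;
            for (int j = 0; j < k; j++) {
                double d = dist2(x[i], c[j]);
                if (d < bd) { bd = d; best = j; }
            }
            if (assign[i] != best) { assign[i] = best; changed = true; }
        }
        for (int j = 0; j < k; j++) {            // update step: recompute each center
            Point sum(n, 0.0); int count = 0;
            for (int i = 0; i < Nm; i++)
                if (assign[i] == j) { for (int d = 0; d < n; d++) sum[d] += x[i][d]; count++; }
            if (count > 0) for (int d = 0; d < n; d++) c[j][d] = sum[d] / count;
        }
        if (!changed) break;                     // no change in assignments: stop
    }
    return c;
}
```

In the proposed method, the returned centers play the role of the initial population of Section 2.1.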

2.3. The Termination Rule Used

At every iteration $t$, the difference between the current best value $f_{\min}(t)$ and the best value of the previous iteration $f_{\min}(t-1)$ is computed:
$$\delta(t) = \left| f_{\min}(t) - f_{\min}(t-1) \right|$$
The algorithm terminates when $\delta(t) \le \epsilon$ holds for a predefined number $N_k$ of consecutive iterations, where $\epsilon$ is a small positive number, for example $10^{-6}$.
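A minimal C++ sketch of this rule is given below, assuming the caller records the best value $f_{\min}(t)$ once per iteration; the class name StoppingRule is illustrative and not part of the OPTIMUS package.

```cpp
#include <cmath>

// Stopping rule of Section 2.3: terminate when the best function value has not
// changed by more than eps for Nk consecutive iterations.
class StoppingRule {
public:
    StoppingRule(int Nk, double eps = 1e-6)
        : Nk_(Nk), eps_(eps), counter_(0), lastBest_(0.0), first_(true) {}

    // Call once per iteration with the current best value; returns true when
    // the algorithm should terminate.
    bool shouldStop(double fmin) {
        if (first_) { lastBest_ = fmin; first_ = false; return false; }
        double delta = std::fabs(fmin - lastBest_);
        counter_ = (delta <= eps_) ? counter_ + 1 : 0;
        lastBest_ = fmin;
        return counter_ >= Nk_;
    }

private:
    int Nk_;
    double eps_;
    int counter_;
    double lastBest_;
    bool first_;
};
```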

3. Results

This section will begin with a detailed description of the functions that will be used in the experiments, followed by an analysis of the experiments performed and comparisons with other global optimization techniques.

3.1. Test Functions

The functions used in the experiments have been proposed in a series of related works [60,61] and they cover various scientific fields, such as medicine, physics, engineering, etc. Furthermore, these objective functions have been used by many researchers in a variety of publications [62,63,64,65,66]. The definitions of these functions are given below:
  • Ackley function:
    $$f(x) = -a \exp\left(-b \sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n} \cos\left(c x_i\right)\right) + a + \exp(1)$$
    where $a = 20.0$.
  • Bf1 (Bohachevsky 1) function:
    $$f(x) = x_1^2 + 2 x_2^2 - \frac{3}{10}\cos\left(3\pi x_1\right) - \frac{4}{10}\cos\left(4\pi x_2\right) + \frac{7}{10}$$
  • Bf2 (Bohachevsky 2) function:
    $$f(x) = x_1^2 + 2 x_2^2 - \frac{3}{10}\cos\left(3\pi x_1\right)\cos\left(4\pi x_2\right) + \frac{3}{10}$$
  • Bf3 (Bohachevsky 3) function:
    $$f(x) = x_1^2 + 2 x_2^2 - \frac{3}{10}\cos\left(3\pi x_1 + 4\pi x_2\right) + \frac{3}{10}$$
  • Branin function: $f(x) = \left(x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos\left(x_1\right) + 10$ with $-5 \le x_1 \le 10$, $0 \le x_2 \le 15$.
  • Camel function:
    $$f(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4, \quad x \in [-5, 5]^2$$
  • Easom function:
    $$f(x) = -\cos\left(x_1\right)\cos\left(x_2\right)\exp\left(-\left(x_2 - \pi\right)^2 - \left(x_1 - \pi\right)^2\right)$$
    with $x \in [-100, 100]^2$.
  • Exponential function, defined as
    $$f(x) = -\exp\left(-0.5 \sum_{i=1}^{n} x_i^2\right), \quad -1 \le x_i \le 1$$
    In the conducted experiments, the values $n = 4, 8, 16, 32$ were used.
  • ExtendedF10 function:
    $$f(x) = \sum_{i=1}^{n}\left(x_i^2 - 10\cos\left(2\pi x_i\right)\right)$$
  • F12 function:
    $$f(x) = \frac{\pi}{n}\left(10\sin^2\left(\pi y_1\right) + \sum_{i=1}^{n-1}\left(y_i - 1\right)^2\left(1 + 10\sin^2\left(\pi y_{i+1}\right)\right) + \left(y_n - 1\right)^2\right) + \sum_{i=1}^{n} u\left(x_i, 10, 100, 4\right)$$
    where
    $$y_i = 1 + \frac{x_i + 1}{4}$$
    and
    $$u\left(x_i, a, k, m\right) = \begin{cases} k\left(x_i - a\right)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k\left(-x_i - a\right)^m, & x_i < -a \end{cases}$$
  • F14 function:
    $$f(x) = \left(\frac{1}{500} + \sum_{j=1}^{25}\frac{1}{j + \sum_{i=1}^{2}\left(x_i - a_{ij}\right)^6}\right)^{-1}$$
  • F15 function:
    $$f(x) = \sum_{i=1}^{11}\left(a_i - \frac{x_1\left(b_i^2 + b_i x_2\right)}{b_i^2 + b_i x_3 + x_4}\right)^2$$
  • F17 function:
    $$f(x) = \left(1 + \left(x_1 + x_2 + 1\right)^2\left(19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2\right)\right) \times \left(30 + \left(2 x_1 - 3 x_2\right)^2\left(18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2\right)\right)$$
  • Griewank2 function:
    $$f(x) = 1 + \frac{1}{200}\sum_{i=1}^{2} x_i^2 - \prod_{i=1}^{2}\frac{\cos\left(x_i\right)}{\sqrt{i}}, \quad x \in [-100, 100]^2$$
  • Griewank10 function: the function is given by the equation
    $$f(x) = \sum_{i=1}^{n}\frac{x_i^2}{4000} - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$$
    with $n = 10$.
  • Gkls function: $f(x) = \mathrm{Gkls}(x, n, w)$ is a constructed function with $w$ local minima, presented in [67], with $x \in [-1, 1]^n$. For the conducted experiments, the values $n = 2, 3$ and $w = 50$ were utilized.
  • Goldstein and Price function:
    $$f(x) = \left[1 + \left(x_1 + x_2 + 1\right)^2\left(19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2\right)\right] \times \left[30 + \left(2 x_1 - 3 x_2\right)^2\left(18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2\right)\right]$$
    with $x \in [-2, 2]^2$.
  • Hansen function: $f(x) = \sum_{i=1}^{5} i\cos\left((i-1) x_1 + i\right)\,\sum_{j=1}^{5} j\cos\left((j+1) x_2 + j\right)$, $x \in [-10, 10]^2$.
  • Hartman 3 function:
    $$f(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{3} a_{ij}\left(x_j - p_{ij}\right)^2\right)$$
    with $x \in [0, 1]^3$ and
    $$a = \begin{pmatrix} 3 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ 1.2 \\ 3 \\ 3.2 \end{pmatrix}, \quad p = \begin{pmatrix} 0.3689 & 0.117 & 0.2673 \\ 0.4699 & 0.4387 & 0.747 \\ 0.1091 & 0.8732 & 0.5547 \\ 0.03815 & 0.5743 & 0.8828 \end{pmatrix}$$
  • Hartman 6 function:
    $$f(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{6} a_{ij}\left(x_j - p_{ij}\right)^2\right)$$
    with $x \in [0, 1]^6$ and
    $$a = \begin{pmatrix} 10 & 3 & 17 & 3.5 & 1.7 & 8 \\ 0.05 & 10 & 17 & 0.1 & 8 & 14 \\ 3 & 3.5 & 1.7 & 10 & 17 & 8 \\ 17 & 8 & 0.05 & 10 & 0.1 & 14 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ 1.2 \\ 3 \\ 3.2 \end{pmatrix},$$
    $$p = \begin{pmatrix} 0.1312 & 0.1696 & 0.5569 & 0.0124 & 0.8283 & 0.5886 \\ 0.2329 & 0.4135 & 0.8307 & 0.3736 & 0.1004 & 0.9991 \\ 0.2348 & 0.1451 & 0.3522 & 0.2883 & 0.3047 & 0.6650 \\ 0.4047 & 0.8828 & 0.8732 & 0.5743 & 0.1091 & 0.0381 \end{pmatrix}$$
  • Potential function: This function stands for the energy of a molecular conformation of N atoms that interacts via the Lennard-Jones potential [68]. The function is defined as
    $$V_{LJ}(r) = 4\epsilon\left(\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right)$$
    For the conducted experiments, the values N = 3 , 5 were used.
  • Rastrigin function:
    $$f(x) = x_1^2 + x_2^2 - \cos\left(18 x_1\right) - \cos\left(18 x_2\right), \quad x \in [-1, 1]^2$$
  • Rosenbrock function:
    $$f(x) = \sum_{i=1}^{n-1}\left(100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2\right), \quad -30 \le x_i \le 30$$
    The values $n = 4, 8, 16$ were used in the conducted experiments (a C++ sketch of this function is given after the list).
  • Shekel 5 function:
    $$f(x) = -\sum_{i=1}^{5}\frac{1}{\left(x - a_i\right)\left(x - a_i\right)^T + c_i}$$
    with $x \in [0, 10]^4$ and $a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \end{pmatrix}$, $c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \end{pmatrix}$.
  • Shekel 7 function:
    $$f(x) = -\sum_{i=1}^{7}\frac{1}{\left(x - a_i\right)\left(x - a_i\right)^T + c_i}$$
    with $x \in [0, 10]^4$ and $a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \\ 2 & 9 & 2 & 9 \\ 5 & 3 & 5 & 3 \end{pmatrix}$, $c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \\ 0.6 \\ 0.3 \end{pmatrix}$.
  • Shekel 10 function:
    $$f(x) = -\sum_{i=1}^{10}\frac{1}{\left(x - a_i\right)\left(x - a_i\right)^T + c_i}$$
    with $x \in [0, 10]^4$ and $a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \\ 2 & 9 & 2 & 9 \\ 5 & 5 & 3 & 3 \\ 8 & 1 & 8 & 1 \\ 6 & 2 & 6 & 2 \\ 7 & 3.6 & 7 & 3.6 \end{pmatrix}$, $c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \\ 0.6 \\ 0.3 \\ 0.7 \\ 0.5 \\ 0.6 \end{pmatrix}$.
  • Sinusoidal function, defined as
    $$f(x) = -\left(2.5\prod_{i=1}^{n}\sin\left(x_i - z\right) + \prod_{i=1}^{n}\sin\left(5\left(x_i - z\right)\right)\right), \quad 0 \le x_i \le \pi$$
    The values $n = 4, 8, 16$ were used in the conducted experiments.
  • Sphere function:
    $$f(x) = \sum_{i=1}^{n} x_i^2$$
  • Schwefel function:
    $$f(x) = \sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_j\right)^2$$
  • Schwefel 2.21 function:
    $$f(x) = 418.9829\,n - \sum_{i=1}^{n} x_i \sin\left(\sqrt{\left|x_i\right|}\right)$$
  • Schwefel 2.22 function:
    $$f(x) = \sum_{i=1}^{n}\left|x_i\right| + \prod_{i=1}^{n}\left|x_i\right|$$
  • Test2N function:
    $$f(x) = \frac{1}{2}\sum_{i=1}^{n}\left(x_i^4 - 16 x_i^2 + 5 x_i\right), \quad x_i \in [-5, 5]$$
    For the conducted experiments, the values $n = 4, 5, 6, 7$ were used.
  • Test30N function:
    $$f(x) = \frac{1}{10}\sin^2\left(3\pi x_1\right)\sum_{i=2}^{n-1}\left(\left(x_i - 1\right)^2\left(1 + \sin^2\left(3\pi x_{i+1}\right)\right)\right) + \left(x_n - 1\right)^2\left(1 + \sin^2\left(2\pi x_n\right)\right)$$
    The values $n = 3, 4$ were used in the conducted experiments.
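As an illustration of how such benchmarks can be coded for the experiments, a plain C++ version of the Rosenbrock function defined above is shown below; the signature is a generic assumption and not the interface of the OPTIMUS package.

```cpp
#include <vector>

// Rosenbrock function: f(x) = sum_{i=1}^{n-1} [ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 ].
double rosenbrock(const std::vector<double> &x)
{
    double sum = 0.0;
    for (size_t i = 0; i + 1 < x.size(); i++) {
        double t1 = x[i + 1] - x[i] * x[i];
        double t2 = x[i] - 1.0;
        sum += 100.0 * t1 * t1 + t2 * t2;
    }
    return sum;
}
```

The remaining benchmarks can be coded in the same way, with the bounds of each function enforced by the optimizer rather than by the function itself.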

3.2. Experimental Results

A series of experiments was performed using the test functions presented previously. Every experiment was conducted 30 times using different seeds for the random generator each time. The average numbers of function calls were measured and are presented in the following tables. The software was coded in ANSI C++ with the assistance of the OPTIMUS optimization package, which is freely available from https://github.com/itsoulos/OPTIMUS (accessed on 2 July 2024). The values for the experimental parameters are shown in Table 1. Generally, the region of attraction $A\left(x^*\right)$ of a local minimum $x^*$ is defined as
$$A\left(x^*\right) = \left\{ x : x \in S, \; L(x) = x^* \right\}$$
where $L(x)$ is a local search procedure, such as BFGS. Global optimization methods usually find points that lie in the region of attraction $A\left(x^*\right)$ of a local minimum $x^*$ but not necessarily the local minimum itself. For this reason, at the end of their execution, a local minimization method is applied in order to locate the local minimum exactly. Of course, this does not imply that the minimum located will be the global minimum of the function, since a function can contain tens or even hundreds of local minima.
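The final refinement step can be sketched as follows; for brevity, a few steps of gradient descent with a numerical gradient are used here as a simple stand-in for the BFGS variant [53] employed in the paper, and the names numGrad and localPolish are illustrative.

```cpp
#include <vector>

typedef std::vector<double> Point;

// Central-difference numerical gradient (illustrative helper, not from OPTIMUS).
static Point numGrad(double (*f)(const Point &), const Point &x, double h = 1e-6)
{
    Point g(x.size());
    for (size_t i = 0; i < x.size(); i++) {
        Point xp = x, xm = x;
        xp[i] += h;
        xm[i] -= h;
        g[i] = (f(xp) - f(xm)) / (2.0 * h);
    }
    return g;
}

// A few descent steps applied to the best point returned by the global phase:
// the point lies in the region of attraction A(x*), and this local polish moves
// it closer to the minimizer x* itself.
Point localPolish(double (*f)(const Point &), Point x, int steps = 50, double lr = 1e-3)
{
    for (int s = 0; s < steps; s++) {
        Point g = numGrad(f, x);
        for (size_t i = 0; i < x.size(); i++)
            x[i] -= lr * g[i];
    }
    return x;
}
```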
The following applies to Table 2:
  • The column FUNCTION denotes the name of the objective problem.
  • The column GENETIC denotes the application of a genetic algorithm to the objective problem. The genetic algorithm has N c chromosomes and the maximum number of allowed generations was set to N g .
  • The column PSO stands for the application of Particle Swarm Optimizer to every objective problem. The number of particles was set to N c and the maximum number of allowed iterations was set to N g .
  • The column EOFA represents the application of the proposed method using the values for the parameters shown in Table 1.
  • The row SUM represents the sum of function calls for all test functions.
A statistical comparison of these results is also outlined in Figure 2.
As can be seen, the present method drastically reduces the required number of function calls for almost all objective functions. This, of course, also results in a significant reduction in the computing time required to find the global minimum. Furthermore, a scatter plot for the used optimization methods is shown in Figure 3, which also depicts the superiority of the current method.
Furthermore, an additional experiment was conducted in order to measure the importance of the K-means sampling in the proposed method. In this test, three different sampling methods were used in the proposed method:
  • The column UNIFORM stands for the application of uniform sampling in the proposed method.
  • The column TRIANGULAR represents the application of the triangular distribution [69] to sample the initial points of the proposed method.
  • The column KMEANS represents the sampling method presented in the current work.
The experimental results for this test are shown in Table 3.
The statistical comparison for the previous experiment is graphically outlined in Figure 4. The proposed sampling method significantly outperforms the remaining sampling techniques in almost all problems. Furthermore, the proposed global optimization technique outperforms the rest of the compared techniques, even if a different sampling technique is used.
Moreover, a scatter plot representation of the used distributions with Kruskal–Wallis Ranking is depicted in Figure 5.
In this test, the p-value is very low, below any common level of significance. Consequently, the null hypothesis is rejected, indicating that the observed differences between the sampling techniques are statistically significant.
Furthermore, an additional test was carried out, in which the ELP function was used to measure the effectiveness of the proposed method as the dimension increases. This function is defined as
$$f(x) = \sum_{i=1}^{n}\left(10^6\right)^{\frac{i-1}{n-1}} x_i^2$$
where the parameter n defines the dimension of the function. A comparison between the genetic algorithm and the proposed method as n increases is shown in Figure 6.
The required number of calls for the current method increases at a much lower rate than in the case of the genetic algorithm. This means that the method can cope with large-scale problems without significantly increasing the required computing time.
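For completeness, a direct C++ implementation of the ELP function, as it might be coded for such a scaling test, is sketched below; the function name elp is an assumption.

```cpp
#include <cmath>
#include <vector>

// ELP (high-conditioned elliptic) function: f(x) = sum_i (10^6)^((i-1)/(n-1)) * x_i^2.
double elp(const std::vector<double> &x)
{
    int n = (int)x.size();
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double exponent = (n > 1) ? (double)i / (n - 1) : 0.0;
        sum += std::pow(1.0e6, exponent) * x[i] * x[i];
    }
    return sum;
}
```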

4. Conclusions

Three modifications of the OFA optimization method were proposed in this article, resulting in the EOFA method. The modifications were primarily aimed at improving the efficiency and speed of the global optimization algorithm. The first modification is the application of a sampling technique that incorporates the K-means method [52]. With this modification, the sampled points helped to locate the global minimum with greater accuracy and in less time. The second modification concerned the termination rule, which stops the algorithm promptly without unnecessarily wasting computational time on redundant iterations. The third modification concerned the refinement of the produced offspring using the BFGS method [53] as a local search procedure. In the experiments conducted, different sampling techniques were used for the proposed method; more specifically, the Uniform, Triangular and K-means techniques were compared. It was found that the K-means sampling technique yields much better results than the other two techniques, and the total number of function calls is considerably lower.
Experiments were also performed using different optimization methods; more specifically, a genetic algorithm, PSO and the proposed EOFA were compared. It was observed that the average number of function calls required by EOFA is far lower than that of the other two methods.
Since the experimental results are extremely promising, further efforts can be made to develop the technique in various directions. Future extensions may include the use of parallel computing techniques to speed up the optimization process, such as the integration of MPI [70] or the OpenMP library [71].

Author Contributions

G.K., V.C. and I.G.T. conceived of the idea and the methodology, and G.K. and V.C. implemented the corresponding software. G.K. conducted the experiments, employing objective functions as test cases, and provided the comparative experiments. I.G.T. performed the necessary statistical tests. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

This research was financed by the European Union: Next Generation EU through the Program Greece 2.0 National Recovery and Resilience Plan, under the call RESEARCH–CREATE–INNOVATE, project name “iCREW: Intelligent small craft simulator for advanced crew training using Virtual Reality techniques” (project code: TAEDK-06195).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Intriligator, M.D. Mathematical Optimization and Economic Theory; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2002. [Google Scholar]
  2. Törn, A.; Ali, M.M.; Viitanen, S. Stochastic global optimization: Problem classes and solution techniques. J. Glob. Optim. 1999, 14, 437–447. [Google Scholar] [CrossRef]
  3. Yang, L.; Robin, D.; Sannibale, F.; Steier, C.; Wan, W. Global optimization of an accelerator lattice using multiobjective genetic algorithms. Nucl. Instrum. Methods Phys. Res. Accel. Spectrometers Detect. Assoc. Equip. 2009, 609, 50–57. [Google Scholar] [CrossRef]
  4. Iuliano, E. Global optimization of benchmark aerodynamic cases using physics-based surrogate models. Aerosp. Sci. Technol. 2017, 67, 273–286. [Google Scholar] [CrossRef]
  5. Duan, Q.; Sorooshian, S.; Gupta, V. Effective and efficient global optimization for conceptual rainfall-runoff models. Water Resour. Res. 1992, 28, 1015–1031. [Google Scholar] [CrossRef]
  6. Heiles, S.; Johnston, R.L. Global optimization of clusters using electronic structure methods. Int. J. Quantum Chem. 2013, 113, 2091–2109. [Google Scholar] [CrossRef]
  7. Shin, W.H.; Kim, J.K.; Kim, D.S.; Seok, C. GalaxyDock2: Protein–ligand docking using beta-complex and global optimization. J. Comput. Chem. 2013, 34, 2647–2656. [Google Scholar] [CrossRef] [PubMed]
  8. Liwo, A.; Lee, J.; Ripoll, D.R.; Pillardy, J.; Scheraga, H.A. Protein structure prediction by global optimization of a potential energy function. Biophysics 1999, 96, 5482–5485. [Google Scholar] [CrossRef] [PubMed]
  9. Gaing, Z.-L. Particle swarm optimization to solving the economic dispatch considering the generator constraints. IEEE Trans. Power Syst. 2003, 18, 1187–1195. [Google Scholar] [CrossRef]
  10. Maranas, C.D.; Androulakis, I.P.; Floudas, C.A.; Berger, A.J.; Mulvey, J.M. Solving long-term financial planning problems via global optimization. J. Econ. Dyn. Control 1997, 21, 1405–1425. [Google Scholar] [CrossRef]
  11. Lee, E.K. Large-Scale Optimization-Based Classification Models in Medicine and Biology. Ann. Biomed. Eng. 2007, 35, 1095–1109. [Google Scholar] [CrossRef]
  12. Cherruault, Y. Global optimization in biology and medicine. Math. Comput. Model. 1994, 20, 119–132. [Google Scholar] [CrossRef]
  13. Liberti, L.; Kucherenko, S. Comparison of deterministic and stochastic approaches to global optimization. Int. Trans. Oper. Res. 2005, 12, 263–285. [Google Scholar] [CrossRef]
  14. Choi, S.H.; Manousiouthakis, V. Global optimization methods for chemical process design: Deterministic and stochastic approaches. Korean J. Chem. Eng. 2002, 19, 227–232. [Google Scholar] [CrossRef]
  15. Wolfe, M.A. Interval methods for global optimization. Appl. Math. Comput. 1996, 75, 179–206. [Google Scholar]
  16. Csendes, T.; Ratz, D. Subdivision Direction Selection in Interval Methods for Global Optimization. SIAM J. Numer. Anal. 1997, 34, 922–938. [Google Scholar] [CrossRef]
  17. Price, W.L. Global optimization by controlled random search. J. Optim. Theory Appl. 1983, 40, 333–348. [Google Scholar] [CrossRef]
  18. Krivy, I.; Tvrdik, J. The controlled random search algorithm in optimizing regression models. Comput. Stat. Data Anal. 1995, 20, 229–234. [Google Scholar] [CrossRef]
  19. Ali, M.M.; Torn, A.; Viitanen, S. A Numerical Comparison of Some Modified Controlled Random Search Algorithms. J. Glob. Optim. 1997, 11, 377–385. [Google Scholar] [CrossRef]
  20. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  21. Ingber, L. Very fast simulated re-annealing. Math. Comput. Model. 1989, 12, 967–973. [Google Scholar] [CrossRef]
  22. Eglese, R.W. Simulated annealing: A tool for operational research. Simulated Annealing Tool Oper. Res. 1990, 46, 271–281. [Google Scholar] [CrossRef]
  23. Tsoulos, I.G.; Lagaris, I.E. MinFinder: Locating all the local minima of a function. Comput. Phys. Commun. 2006, 174, 166–179. [Google Scholar] [CrossRef]
  24. Liu, Y.; Tian, P. A multi-start central force optimization for global optimization. Appl. Soft Comput. 2015, 27, 92–98. [Google Scholar] [CrossRef]
  25. Perez, M.; Almeida, F.; Moreno-Vega, J.M. Genetic algorithm with multistart search for the p-Hub median problem. In Proceedings of the 24th EUROMICRO Conference (Cat. No. 98EX204), Vasteras, Sweden, 27 August 1998; Volume 2, pp. 702–707. [Google Scholar]
  26. Oliveira, H.C.B.d.; Vasconcelos, G.C.; Alvarenga, G. A Multi-Start Simulated Annealing Algorithm for the Vehicle Routing Problem with Time Windows. In Proceedings of the 2006 Ninth Brazilian Symposium on Neural Networks (SBRN’06), Ribeirao Preto, Brazil, 23–27 October 2006; pp. 137–142. [Google Scholar]
  27. Larson, J.; Wild, S.M. Asynchronously parallel optimization solver for finding multiple minima. Math. Program. Comput. 2018, 10, 303–332. [Google Scholar] [CrossRef]
  28. Bolton, H.P.J.; Schutte, J.F.; Groenwold, A.A. Multiple Parallel Local Searches in Global Optimization. In Recent Advances in Parallel Virtual Machine and Message Passing Interface; EuroPVM/MPI 2000. Lecture Notes in Computer Science; Dongarra, J., Kacsuk, P., Podhorszki, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2000; Volume 1908. [Google Scholar]
  29. Kamil, R.; Reiji, S. An Efficient GPU Implementation of a Multi-Start TSP Solver for Large Problem Instances. In Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary Computation, Philadelphia, PA, USA, 7–11 July 2012; pp. 1441–1442. [Google Scholar]
  30. Van Luong, T.; Melab, N.; Talbi, E.G. GPU-Based Multi-start Local Search Algorithms. In Learning and Intelligent Optimization. LION 2011; Lecture Notes in Computer Science; Coello, C.A.C., Ed.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6683. [Google Scholar]
  31. Sepulveda, A.E.; Epstein, L. The repulsion algorithm, a new multistart method for global optimization. Struct. Optim. 1996, 11, 145–152. [Google Scholar] [CrossRef]
  32. Chen, Z.; Qiu, H.; Gao, L.; Li, X.; Li, P. A local adaptive sampling method for reliability-based design optimization using Kriging model. Struct. Multidisc. Optim. 2014, 49, 401–416. [Google Scholar] [CrossRef]
  33. Homem-de-Mello, T.; Bayraksan, G. Monte Carlo sampling-based methods for stochastic optimization. Surv. Oper. Res. Manag. Sci. 2014, 19, 56–85. [Google Scholar] [CrossRef]
  34. Park, J.; Sandberg, I.W. Universal Approximation Using Radial-Basis-Function Networks. Neural Comput. 1991, 3, 246–257. [Google Scholar] [CrossRef]
  35. Tsoulos, I.G.; Tzallas, A.; Tsalikakis, D. Use RBF as a Sampling Method in Multistart Global Optimization Method. Signals 2022, 3, 857–874. [Google Scholar] [CrossRef]
  36. Hart, W.E. Sequential stopping rules for random optimization methods with applications to multistart local search. SIAM J. Optim. 1998, 9, 270–290. [Google Scholar] [CrossRef]
  37. Tsoulos, I.G. Modifications of real code genetic algorithm for global optimization. Appl. Math. Comput. 2008, 203, 598–607. [Google Scholar] [CrossRef]
  38. Ohsaki, M.; Yamakawa, M. Stopping rule of multi-start local search for structural optimization. Struct. Multidisc. Optim. 2018, 57, 595–603. [Google Scholar] [CrossRef]
  39. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  40. Liu, J.; Lampinen, J. A Fuzzy Adaptive Differential Evolution Algorithm. Soft Comput. 2005, 9, 448–462. [Google Scholar] [CrossRef]
  41. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  42. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization: An overview. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  43. Trelea, I.C. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Inf. Process. Lett. 2003, 85, 317–325. [Google Scholar] [CrossRef]
  44. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  45. Socha, K.; Dorigo, M. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 2008, 185, 1155–1173. [Google Scholar] [CrossRef]
  46. Goldberg, D. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley Publishing Company: Reading, MA, USA, 1989. [Google Scholar]
  47. Michaelewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
  48. Zhu, G.Y.; Zhang, W.B. Optimal foraging algorithm for global optimization. Appl. Soft Comput. 2017, 51, 294–313. [Google Scholar] [CrossRef]
  49. Fu, Y.; Zhang, W.; Qu, C.; Huang, B. Optimal Foraging Algorithm Based on Differential Evolution. IEEE Access 2020, 8, 19657–19678. [Google Scholar] [CrossRef]
  50. Jian, Z.; Zhu, G. Optimal foraging algorithm with direction prediction. Appl. Soft Comput. 2021, 111, 107660. [Google Scholar] [CrossRef]
  51. Ding, C.; Zhu, G. Improved optimal foraging algorithm for global optimization. Computing 2024, 106, 2293–2319. [Google Scholar] [CrossRef]
  52. Ahmed, M.; Seraj, R.; Islam, S.M.S. The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 2020, 9, 1295. [Google Scholar] [CrossRef]
  53. Powell, M.J.D. A Tolerant Algorithm for Linearly Constrained Optimization Calculations. Math. Program. 1989, 45, 547–566. [Google Scholar] [CrossRef]
  54. Amari, S.I. Backpropagation and stochastic gradient descent method. Neurocomputing 1993, 5, 185–196. [Google Scholar] [CrossRef]
  55. Meza, J.C. Steepest descent. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 719–722. [Google Scholar] [CrossRef]
  56. Charilogis, V.; Tsoulos, I.G. Toward an Ideal Particle Swarm Optimizer for Multidimensional Functions. Information 2022, 13, 217. [Google Scholar] [CrossRef]
  57. MacQueen, J.B. Some Methods for classification and Analysis of Multivariate Observations. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Oakland, CA, USA, 21 June–18 July 1967; pp. 281–297. [Google Scholar]
  58. Li, Y.; Wu, H. A clustering method based on K-means algorithm. Phys. Procedia 2012, 25, 1104–1109. [Google Scholar] [CrossRef]
  59. Arora, P.; Varshney, S. Analysis of k-means and k-medoids algorithm for big data. Procedia Comput. Sci. 2016, 78, 507–512. [Google Scholar] [CrossRef]
  60. Ali, M.M.; Khompatraporn, C.; Zabinsky, Z.B. A Numerical Evaluation of Several Stochastic Algorithms on Selected Continuous Global Optimization Test Problems. J. Glob. Optim. 2005, 31, 635–672. [Google Scholar]
  61. Floudas, C.A.; Pardalos, P.M.; Adjiman, C.; Esposoto, W.; Gümüs, Z.; Harding, S.; Klepeis, J.; Meyer, C.; Schweiger, C. Handbook of Test Problems in Local and Global Optimization; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999. [Google Scholar]
  62. Ali, M.M.; Kaelo, P. Improved particle swarm algorithms for global optimization. Appl. Math. Comput. 2008, 196, 578–593. [Google Scholar] [CrossRef]
  63. Koyuncu, H.; Ceylan, R. A PSO based approach: Scout particle swarm algorithm for continuous global optimization problems. J. Comput. Des. Eng. 2019, 6, 129–142. [Google Scholar] [CrossRef]
  64. Siarry, P.; Berthiau, G.; Durdin, F.; Haussy, J. Enhanced simulated annealing for globally minimizing functions of many-continuous variables. ACM Trans. Math. Softw. 1997, 23, 209–228. [Google Scholar] [CrossRef]
  65. Tsoulos, I.G.; Lagaris, I.E. GenMin: An enhanced genetic algorithm for global optimization. Comput. Phys. Commun. 2008, 178, 843–851. [Google Scholar] [CrossRef]
  66. LaTorre, A.; Molina, D.; Osaba, E.; Poyatos, J.; Del Ser, J.; Herrera, F. A prescription of methodological guidelines for comparing bio-inspired optimization algorithms. Swarm Evol. Comput. 2021, 67, 100973. [Google Scholar] [CrossRef]
  67. Gaviano, M.; Kvasov, D.E.; Lera, D.; Sergeyev, Y.D. Software for generation of classes of test functions with known local and global minima for global optimization. ACM Trans. Math. Softw. 2003, 29, 469–480. [Google Scholar] [CrossRef]
  68. Lennard-Jones, J.E. On the Determination of Molecular Fields. Proc. R. Soc. Lond. A 1924, 106, 463–477. [Google Scholar]
  69. Stein, W.E.; Keblis, M.F. A new method to simulate the triangular distribution. Math. Comput. Model. 2009, 49, 1143–1147. [Google Scholar] [CrossRef]
  70. Gropp, W.; Lusk, E.; Doss, N.; Skjellum, A. A high-performance, portable implementation of the MPI message passing interface standard. Parallel Comput. 1996, 22, 789–828. [Google Scholar] [CrossRef]
  71. Chandra, R. Parallel Programming in OpenMP; Morgan Kaufmann: Cambridge, MA, USA, 2001. [Google Scholar]
Figure 1. The steps of the proposed method.
Figure 2. Statistical representation of the function calls for different optimization methods.
Figure 3. Scatter plot representation of optimization methods with Kruskal–Wallis Ranking.
Figure 4. Statistical representation for different sampling methods.
Figure 5. Scatter plot representation of the used distributions with Kruskal–Wallis Ranking.
Figure 6. Different variations of the ELP problem.
Table 1. Experimental settings. The numbers in cells denote the values used in the experiments for all parameters.

Parameter | Meaning | Value
$N_c$ | Number of chromosomes/particles | 500
$N_g$ | Maximum number of allowed iterations | 200
$N_m$ | Number of initial samples for K-means | $10 \times N_c$
$N_k$ | Number of iterations for stopping rule | 5
$p_s$ | Selection rate for the genetic algorithm | 0.1
$p_m$ | Mutation rate for the genetic algorithm | 0.05
$B_i$ | Number of iterations for BFGS | 3
Table 2. Experimental results using different optimization methods. Numbers in cells represent average function calls.

FUNCTION | GENETIC | PSO | EOFA
ACKLEY | 16,754 | 24,518 | 1346
BF1 | 10,466 | 9912 | 1856
BF2 | 10,059 | 9364 | 1738
BF3 | 7290 | 9847 | 1408
BRANIN | 10,032 | 5940 | 1479
CAMEL | 11,069 | 7132 | 1540
EASOM | 10,587 | 4922 | 1304
EXP4 | 10,231 | 7382 | 1651
EXP8 | 10,622 | 7644 | 1891
EXP16 | 10,458 | 8050 | 2145
EXP32 | 10,202 | 8800 | 2165
EXTENDEDF10 | 10,739 | 14,363 | 1361
F12 | 31,277 | 23,705 | 4340
F14 | 14,598 | 26,296 | 1617
F15 | 8215 | 13,599 | 1620
F17 | 9202 | 11,851 | 1438
GKLS250 | 10,198 | 5488 | 1404
GKLS350 | 9861 | 6029 | 1325
GOLDSTEIN | 11,901 | 9244 | 1751
GRIEWANK2 | 13,612 | 10,315 | 1602
GRIEWANK10 | 14,750 | 15,721 | 3092
HANSEN | 13,053 | 7636 | 1559
HARTMAN3 | 10,066 | 6897 | 1664
HARTMAN6 | 11,119 | 8061 | 1873
POTENTIAL3 | 16,325 | 10,728 | 2093
POTENTIAL5 | 34,284 | 19,307 | 3397
RASTRIGIN | 13,354 | 9783 | 1528
ROSENBROCK4 | 12,618 | 9266 | 2134
ROSENBROCK8 | 15,019 | 12,854 | 2985
ROSENBROCK16 | 17,150 | 21,074 | 4151
SHEKEL5 | 13,927 | 8383 | 2021
SHEKEL7 | 13,688 | 8491 | 2029
SHEKEL10 | 13,722 | 8746 | 2120
TEST2N4 | 10,522 | 7815 | 1608
TEST2N5 | 10,847 | 8393 | 1658
TEST2N6 | 11,180 | 9385 | 1782
TEST2N7 | 11,485 | 10,561 | 1876
SPHERE | 3942 | 8330 | 1199
SCHWEFEL | 4688 | 9037 | 1242
SCHWEFEL2.21 | 6233 | 7872 | 1286
SCHWEFEL2.22 | 81,980 | 88,227 | 5309
SINU4 | 12,920 | 7250 | 1700
SINU8 | 12,703 | 8202 | 2202
SINU16 | 12,404 | 10,640 | 2188
TEST30N3 | 16,692 | 7750 | 1912
TEST30N4 | 19,159 | 10,036 | 1820
SUM | 651,203 | 564,846 | 91,409
Table 3. Experiments using different sampling techniques for the proposed method.

FUNCTION | UNIFORM | TRIANGULAR | KMEANS
ACKLEY | 1803 | 1330 | 1346
BF1 | 2637 | 2146 | 1856
BF2 | 2470 | 2011 | 1738
BF3 | 1863 | 1366 | 1408
BRANIN | 2023 | 1518 | 1479
CAMEL | 2174 | 1671 | 1540
EASOM | 1755 | 1264 | 1304
EXP4 | 2428 | 1844 | 1651
EXP8 | 2500 | 1939 | 1891
EXP16 | 2568 | 2026 | 2145
EXP32 | 2534 | 1976 | 2165
EXTENDEDF10 | 1812 | 1360 | 1361
F12 | 6085 | 4196 | 4340
F14 | 2225 | 1691 | 1617
F15 | 1950 | 1512 | 1620
F17 | 1932 | 1424 | 1438
GKLS250 | 1895 | 1403 | 1404
GKLS350 | 1789 | 1249 | 1325
GOLDSTEIN | 2516 | 2013 | 1751
GRIEWANK2 | 2242 | 1711 | 1602
GRIEWANK10 | 3776 | 3416 | 3092
HANSEN | 2163 | 1678 | 1559
HARTMAN3 | 2289 | 1757 | 1664
HARTMAN6 | 2541 | 2065 | 1873
POTENTIAL3 | 2841 | 2366 | 2093
POTENTIAL5 | 4029 | 3960 | 3397
RASTRIGIN | 2105 | 1779 | 1528
ROSENBROCK4 | 3255 | 2949 | 2134
ROSENBROCK8 | 4059 | 3468 | 2985
ROSENBROCK16 | 4963 | 4391 | 4151
SPHERE | 1533 | 1033 | 1199
SCHWEFEL | 1635 | 1139 | 1242
SCHWEFEL2.21 | 1724 | 1230 | 1286
SCHWEFEL2.22 | 8363 | 8663 | 5309
SHEKEL5 | 2945 | 2515 | 2021
SHEKEL7 | 3008 | 2567 | 2029
SHEKEL10 | 3160 | 2683 | 2120
TEST2N4 | 2339 | 1804 | 1608
TEST2N5 | 2388 | 1847 | 1658
TEST2N6 | 2478 | 1962 | 1782
TEST2N7 | 2552 | 2044 | 1876
SINU4 | 2444 | 1954 | 1700
SINU8 | 2881 | 2388 | 2202
SINU16 | 3694 | 3300 | 2188
TEST30N3 | 2189 | 1654 | 1912
TEST30N4 | 2282 | 2374 | 1820
SUM | 124,837 | 102,636 | 91,309