EOFA: An Extended Version of the Optimal Foraging Algorithm for Global Optimization Problems

Abstract: The problem of finding the global minimum of a function arises in a multitude of real-world applications and, hence, a variety of computational techniques have been developed to locate it efficiently. Among these techniques, evolutionary techniques, which seek to obtain the global minimum of multidimensional functions efficiently through the imitation of natural processes, play a central role. An evolutionary technique that has recently been introduced is the Optimal Foraging Algorithm, a swarm-based algorithm notable for its reliability in locating the global minimum. In this work, a series of modifications is proposed that aims to improve the reliability and speed of the above technique, such as a termination technique based on stochastic observations, an innovative sampling method and a technique to improve the generation of offspring. The new method was tested on a series of problems from the relevant literature, and a comparative study against other global optimization techniques was conducted, with promising results.


Introduction
The basic goal of global optimization is to locate the global minimum of a function within the appropriate search domain of each problem. Formally, a global optimization method aims to discover the global minimum of a continuous function f, defined as $x^* = \arg\min_{x \in S} f(x)$, with S defined as $S = [a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_n, b_n]$. Global optimization is a core part of applied mathematics [1] and computer science [2]. Additionally, it finds application in various fields such as physics [3][4][5], chemistry [6][7][8], economics [9,10], medicine [11,12], etc. Optimization methods are divided into deterministic and stochastic ones [13]. Deterministic methods guarantee finding the global optimal solution and are usually applied to small problems [14]. The most commonly used techniques in this category are the so-called interval techniques [15,16], which subdivide the initial domain of the function until they discover a promising subset that may contain the global minimum. On the other hand, stochastic methods do not guarantee finding the global minimum and are mainly applied to large problems, but they can be inefficient when nonlinear equality constraints are involved. Stochastic methods include a variety of techniques, such as Controlled Random Search methods [17][18][19], Simulated Annealing methods [20][21][22], methods based on the Multistart technique [23,24], etc. Furthermore, in the recent literature, a variety of hybrid methods have appeared that combine different optimization techniques [25,26]. Moreover, due to the widespread usage of parallel programming architectures, a variety of methods have been proposed that take advantage of such systems [27][28][29][30].
The above techniques present various problems that have had to be addressed by researchers, such as, for example, the initialization of the candidate solutions, the termination rules, etc. A number of researchers have dealt with the topic of the initialization of candidate solutions and have proposed techniques such as the repulsion method [31], an adaptive sampling method using the Kriging model [32], Monte Carlo-based sampling methods [33], a method which utilizes a Radial Basis Function (RBF) network [34] to obtain samples [35], etc. Additionally, in the direction of termination methods, a series of techniques have been proposed, such as sequential stopping rules [36], a stopping rule based on asymptotic considerations [37], a stopping rule that utilizes the likelihood of obtaining the specific set of solutions [38], etc.
An important group of stochastic techniques that has attracted the interest of many researchers is the group of evolutionary techniques. The branch of evolutionary computing includes the study of the foundations and applications of computational techniques based on the principles of natural evolution. Evolution in nature is responsible for the "design" of all living things on earth and the strategies they use to interact with each other. This branch of methods includes Differential Evolution methods [39,40], Particle Swarm Optimization (PSO) methods [41][42][43], Ant Colony Optimization methods [44,45], genetic algorithms [46,47], etc. Zhu and Zhang proposed a novel method in this area that tackles optimization problems with a strategy based on animal foraging behavior [48]. This Optimal Foraging Algorithm (OFA) is a swarm-based algorithm motivated by animal behavioral ecology. Furthermore, Fu et al. proposed a variation of the OFA algorithm with the incorporation of the Differential Evolution technique [49]. Additionally, Jian and Zhu proposed a modification of the OFA algorithm with direction prediction [50] in order to enhance the performance of the original algorithm. Recently, Ding and Zhu introduced an improved optimal foraging algorithm based on quasi-opposition-based learning (QOS-OFA) to handle optimization problems [51].
The current work introduces a series of modifications to the OFA algorithm in order to speed up the convergence of the method and to increase its effectiveness. These modifications include the following:

• The incorporation of a sampling technique that utilizes the K-means method [52]. The sampled points lead to the global minimum of the function more accurately and quickly, while redundant points that other sampling techniques would have retained are discarded.
• The use of a termination technique based on stochastic observations. At each iteration of the algorithm, the smallest function value is recorded. If this value remains constant for a predetermined number of iterations, then the method terminates. This way, the method terminates in time, without needlessly wasting computing time on iterations that would simply reproduce the same result.
• The improvement of the offspring produced by the method through a few steps of a local minimization method. In the current work, a BFGS variant [53] was used as the local search procedure.
The rest of this paper is divided into the following sections: in Section 2, the proposed method is fully described; in Section 3, the benchmark functions are presented, as well as the experimental results; and finally, in Section 4, some conclusions and guidelines for future work are discussed.

2. The Proposed Method
The OFA algorithm uses nature as a source of inspiration. Its main purpose is the evolution of a population of candidate solutions through a series of iterative steps, which are presented in more detail below. The quality of the initial population can be improved, allowing the search space to be explored more efficiently. The first major modification introduced to the above technique is the use of a sophisticated technique for sampling points of the objective function. This sampling is based on the K-means technique and is presented in detail in Section 2.2. The second modification is to apply a few steps of a local search procedure during offspring creation whenever an offspring is not consistent with the bounds [a, b] of the objective function f(x). This modification ensures that the new point will lie within the domain of the function and, moreover, that it will have a slightly lower function value than its predecessor. The third modification is the use of an efficient termination rule, so that the global optimization method stops effectively. This termination rule is outlined in Section 2.3.

2.1. The Main Steps of the Algorithm
The main steps of the proposed global optimization method are the following:

1. Initialization step.
   (a) Define as N_c the number of elements in the population.
   (b) Define as N_g the maximum number of allowed iterations.
   (c) Initialize randomly the members of the population in the set S. This forms the initial population, denoted as P. The sampling is performed using the procedure of Section 2.2.
   (d) Create the QOP population, which stands for the quasi-opposite population of P, by calculating the quasi-opposite positions of the members of P.
   (e) Set t = 0, the iteration number.
   (f) Select the N_c fittest solutions from the joint population {P, QOP}, using the QOBL method proposed in [51].
2. Calculation step.
   (a) Produce the new offspring solutions x^{t+1} based on the ranking order, using the recruitment equation of the OFA algorithm, in which r_1, r_2 are random numbers in [0, 1], x_r^t is a randomly selected solution from the population, and K(t) is a scaling function of the iteration number t adopted from the original OFA algorithm [48].
   (b) If an offspring is not consistent with the bounds [a, b], apply a limited number of steps of a local search procedure LS(x) to the current point; the parameter B_i denotes the number of steps of the local search procedure (a code sketch of this step is given below). In the current work, the BFGS local optimization procedure is used, but other local search methods could be employed, such as Gradient Descent [54], Steepest Descent [55], etc.
   (c) Sort all solutions in the current population from best to worst according to the function value.
3. Termination check step.
   (a) Evaluate the stopping rule from the work of Charilogis [56]. This termination method is described in Section 2.3.
   (b) Set t = t + 1.
   (c) If the termination criteria are not met, then go to the Calculation step; else terminate.
The steps of the proposed method are also graphically shown in Figure 1.
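To make step 2(b) concrete, the following is a minimal C++ sketch of the offspring-repair logic; C++ is the language of the accompanying OPTIMUS software. All names here (insideBounds, localSearchSteps, the parameters step and h) are illustrative and not taken from the paper's code. Since the paper uses a BFGS variant, the finite-difference gradient descent below is only a self-contained stand-in for the local search procedure LS(x); likewise, the exact point from which the local search starts is an implementation detail not fixed by the paper.

```cpp
#include <vector>
#include <functional>

using Point = std::vector<double>;
using Objective = std::function<double(const Point &)>;

// Check whether a candidate point lies inside the bounds [a, b].
bool insideBounds(const Point &x, const Point &a, const Point &b) {
    for (size_t d = 0; d < x.size(); d++)
        if (x[d] < a[d] || x[d] > b[d]) return false;
    return true;
}

// Apply Bi steps of a simple descent method, clamping each iterate to
// [a, b] so that the returned point is always feasible. The paper uses
// a BFGS variant here; this finite-difference descent is a stand-in.
Point localSearchSteps(const Objective &f, Point x,
                       const Point &a, const Point &b,
                       int Bi, double step = 1e-2, double h = 1e-6) {
    for (int it = 0; it < Bi; it++) {
        double fx = f(x);
        Point g(x.size());
        for (size_t d = 0; d < x.size(); d++) {  // forward-difference gradient
            Point xp = x;
            xp[d] += h;
            g[d] = (f(xp) - fx) / h;
        }
        for (size_t d = 0; d < x.size(); d++) {
            x[d] -= step * g[d];
            if (x[d] < a[d]) x[d] = a[d];        // keep inside the domain
            if (x[d] > b[d]) x[d] = b[d];
        }
    }
    return x;
}
```

In the main loop, an offspring that fails insideBounds() would then be replaced by the feasible point returned by localSearchSteps(), which also has a lower or equal function value after the descent steps.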

2.2. The Proposed Sampling Procedure
The proposed sampling technique initially produces a series of samples from the objective function and, subsequently, through the application of the K-means technique, only the located centers are retained as the final samples. K-means was introduced by James MacQueen [57] and is a clustering algorithm that has been used in data analysis and machine learning in a variety of research papers [58,59]. The algorithm seeks to estimate the centers of possible groups in a set of samples, and its main steps are listed as follows:

1. Define as k the number of clusters.
2. Draw randomly N_m initial points x_i, i = 1, ..., N_m from the objective function.
3. Select k of these points as the initial centers c_j, j = 1, ..., k.
4. Assign each point x_i to the cluster S_j with the nearest center c_j, where proximity is measured with D(x, y), the Euclidean distance of the points x, y.
5. For every cluster j = 1, ..., k do:
   (a) Set as M_j the number of points in S_j.
   (b) Update the center of the cluster c_j as $c_j = \frac{1}{M_j} \sum_{x_i \in S_j} x_i$.
6. If there is no significant change in the centers c_j, terminate the algorithm and return the k centers as the final set of samples; otherwise, go to step 4.
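A compact C++ sketch of the above procedure follows. It is a generic K-means over points drawn uniformly from the search domain; the function names and the choice of the first k samples as initial centers are assumptions made for illustration, not details of the OPTIMUS implementation.

```cpp
#include <vector>
#include <random>
#include <limits>

using Point = std::vector<double>;

// Squared Euclidean distance; sufficient for nearest-center assignments.
static double dist2(const Point &x, const Point &y) {
    double s = 0.0;
    for (size_t i = 0; i < x.size(); i++) s += (x[i] - y[i]) * (x[i] - y[i]);
    return s;
}

// Draw Nm uniform samples from S = [a, b] and return the k cluster
// centers located by K-means as the final set of samples.
std::vector<Point> kmeansSample(const Point &a, const Point &b,
                                int Nm, int k, std::mt19937 &rng) {
    size_t dim = a.size();
    std::uniform_real_distribution<double> u(0.0, 1.0);
    // Steps 1-2: draw the Nm initial points.
    std::vector<Point> samples(Nm, Point(dim));
    for (auto &p : samples)
        for (size_t d = 0; d < dim; d++) p[d] = a[d] + u(rng) * (b[d] - a[d]);
    // Step 3: use the first k samples as the initial centers.
    std::vector<Point> centers(samples.begin(), samples.begin() + k);
    std::vector<int> label(Nm, -1);
    bool changed = true;
    while (changed) {
        changed = false;
        // Step 4: assign every sample to its nearest center.
        for (int i = 0; i < Nm; i++) {
            int best = 0;
            double bd = std::numeric_limits<double>::max();
            for (int j = 0; j < k; j++) {
                double d = dist2(samples[i], centers[j]);
                if (d < bd) { bd = d; best = j; }
            }
            if (best != label[i]) { label[i] = best; changed = true; }
        }
        // Step 5: recompute every center as the mean of its cluster.
        for (int j = 0; j < k; j++) {
            Point c(dim, 0.0);
            int Mj = 0;
            for (int i = 0; i < Nm; i++)
                if (label[i] == j) {
                    Mj++;
                    for (size_t d = 0; d < dim; d++) c[d] += samples[i][d];
                }
            if (Mj > 0) {
                for (size_t d = 0; d < dim; d++) c[d] /= Mj;
                centers[j] = c;
            }
        }
    }
    // Step 6: assignments (and hence centers) have stabilized.
    return centers;
}
```

Calling kmeansSample(a, b, N_m, k, rng), with k set, for example, to the desired population size, would then supply the initial population P of step 1(c).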

2.3. The Termination Rule Used
At every iteration t, the difference between the current best function value and the best value of the previous iteration is computed as $\delta^{(t)} = \left| f_{\min}^{(t)} - f_{\min}^{(t-1)} \right|$. The algorithm terminates when $\delta^{(t)} \leq \epsilon$ holds for a predefined number N_k of consecutive iterations, where $\epsilon$ is a small positive number, for example $10^{-6}$.
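The rule translates directly into a few lines of C++. The class and member names below are illustrative, and the default values of $\epsilon$ and N_k are assumptions made for the sketch; the settings actually used in the experiments are those of Table 1.

```cpp
#include <cmath>

// Stop when the best value has remained unchanged (within eps) for
// Nk consecutive iterations.
class TerminationRule {
    double lastBest = 0.0;
    int stable = 0;
    bool hasLast = false;
    double eps;
    int Nk;
public:
    TerminationRule(double eps_ = 1e-6, int Nk_ = 15) : eps(eps_), Nk(Nk_) {}
    // Call once per iteration with the current best function value.
    bool shouldStop(double fmin) {
        if (hasLast && std::fabs(fmin - lastBest) <= eps) stable++;
        else stable = 0;            // progress was made; reset the counter
        hasLast = true;
        lastBest = fmin;
        return stable >= Nk;
    }
};
```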

3. Results
This section will begin with a detailed description of the functions that will be used in the experiments, followed by an analysis of the experiments performed and comparisons with other global optimization techniques.

3.1. Test Functions
The functions used in the experiments have been proposed in a series of related works [60,61], and they cover various scientific fields, such as medicine, physics, engineering, etc. Furthermore, these objective functions have been used by many researchers in a variety of publications [62][63][64][65][66]. The definitions of these functions are given below:

• Goldstein and Price function, defined as
$f(x) = \left[ 1 + (x_1 + x_2 + 1)^2 \left( 19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2 \right) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 \left( 18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2 \right) \right]$
• Potential function: This function stands for the energy of a molecular conformation of N atoms that interact via the Lennard-Jones potential [68]. The function is defined as
$V_{LJ}(x) = 4 \sum_{i=1}^{N} \sum_{j=i+1}^{N} \left( \frac{1}{r_{ij}^{12}} - \frac{1}{r_{ij}^{6}} \right)$
where $r_{ij}$ denotes the distance between atoms i and j. For the conducted experiments, the values N = 3, 5 were used.
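For reference, the two functions above can be coded directly. The following C++ sketch follows the standard definitions; the function and variable names are, of course, illustrative.

```cpp
#include <vector>
#include <cmath>

// Goldstein and Price function on [-2, 2]^2; global minimum f = 3 at (0, -1).
double goldsteinPrice(double x1, double x2) {
    double A = 1.0 + std::pow(x1 + x2 + 1.0, 2.0) *
               (19.0 - 14.0 * x1 + 3.0 * x1 * x1 - 14.0 * x2 +
                6.0 * x1 * x2 + 3.0 * x2 * x2);
    double B = 30.0 + std::pow(2.0 * x1 - 3.0 * x2, 2.0) *
               (18.0 - 32.0 * x1 + 12.0 * x1 * x1 + 48.0 * x2 -
                36.0 * x1 * x2 + 27.0 * x2 * x2);
    return A * B;
}

// Lennard-Jones potential energy of N atoms; x holds the 3N coordinates.
double lennardJones(const std::vector<double> &x) {
    int N = static_cast<int>(x.size()) / 3;
    double E = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++) {
            double r2 = 0.0;
            for (int d = 0; d < 3; d++) {
                double diff = x[3 * i + d] - x[3 * j + d];
                r2 += diff * diff;
            }
            double inv6 = 1.0 / (r2 * r2 * r2);  // r^{-6}
            E += 4.0 * (inv6 * inv6 - inv6);     // 4 (r^{-12} - r^{-6})
        }
    return E;
}
```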

3.2. Experimental Results
A series of experiments was performed using the test functions presented previously. Every experiment was conducted 30 times, using a different seed for the random generator each time. The averages of function calls were measured and are presented in the following tables. The software was coded in ANSI C++ with the assistance of the OPTIMUS optimization package, which is freely available from https://github.com/itsoulos/OPTIMUS (accessed on 2 July 2024). The values for the experimental parameters are shown in Table 1. Generally, the region of attraction A(x*) of a local minimum x* is defined as $A(x^*) = \{ x \in S : L(x) = x^* \}$, where L(x) is a local search procedure, such as BFGS. Global optimization methods usually find points which are in the region of attraction A(x*) of a local minimum x*, but not necessarily the local minimum itself. For this reason, at the end of their execution, a local minimization method is applied in order to ensure the exact location of a local minimum. Of course, this does not imply that the minimum located will be the global minimum of the function, since a function may contain tens or even hundreds of local minima. The following applies to Table 2:

1. The column FUNCTION denotes the name of the objective problem.

2. The column GENETIC denotes the application of a genetic algorithm to the objective problem. The genetic algorithm has N_c chromosomes, and the maximum number of allowed generations was set to N_g.
3. The column PSO stands for the application of the Particle Swarm Optimizer to every objective problem. The number of particles was set to N_c and the maximum number of allowed iterations was set to N_g.
4. The column EOFA represents the application of the proposed method, using the values for the parameters shown in Table 1.
5. The row SUM represents the sum of function calls for all test functions.

A statistical comparison of these results is also outlined in Figure 2. As can be seen, the present method drastically reduces the required number of function calls for almost all objective functions. This, of course, also results in a significant reduction in the computing time required to find the global minimum. Furthermore, a scatter plot for the used optimization methods is shown in Figure 3, which also depicts the superiority of the current method. Additionally, an experiment was conducted in order to measure the importance of the K-means sampling in the proposed method. In this test, three different sampling methods were used in the proposed method:

1. The column UNIFORM stands for the application of uniform sampling in the proposed method.
2. The column TRIANGULAR represents the application of the triangular distribution [69] to sample the initial points of the proposed method.
3. The column KMEANS represents the sampling method presented in the current work.
The experimental results for this test are shown in Table 3. The statistical comparison for the previous experiment is graphically outlined in Figure 4. The proposed sampling method significantly outperforms the remaining sampling techniques in almost all problems. Furthermore, the proposed global optimization technique outperforms the rest of the compared techniques even when a different sampling technique is used. Moreover, a scatter plot representation of the used distributions with Kruskal-Wallis ranking is depicted in Figure 5. In this test, the p-value remains very low at any level of significance. Consequently, the null hypothesis is rejected, indicating that the observed differences between the techniques are statistically significant.
Furthermore, an additional test was carried out, in which the function ELP was used to measure the effectiveness of the proposed method as the dimension increases. This function is defined as $f(x) = \sum_{i=1}^{n} \left( 10^6 \right)^{\frac{i-1}{n-1}} x_i^2$, where the parameter n defines the dimension of the function. A comparison between the genetic algorithm and the proposed method as n increases is shown in Figure 6. The required number of calls for the current method increases at a much lower rate than in the case of the genetic algorithm. This means that the method can cope with large-scale problems without a significant increase in the required computing time.
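Assuming the high-conditioned elliptic form given above, the ELP function is straightforward to implement for any dimension n; the following sketch is for illustration only.

```cpp
#include <vector>
#include <cmath>

// ELP (high-conditioned elliptic) function for arbitrary dimension n.
double elp(const std::vector<double> &x) {
    const size_t n = x.size();
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        // Weight (10^6)^{(i-1)/(n-1)} with 1-based indexing, i.e. i/(n-1)
        // for the 0-based index used here; guard the n = 1 case.
        double w = (n > 1) ? std::pow(1.0e6, double(i) / double(n - 1)) : 1.0;
        sum += w * x[i] * x[i];
    }
    return sum;
}
```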

4. Conclusions
Three modifications to the OFA optimization method were proposed in this article, yielding the EOFA method. The modifications were primarily aimed at improving the efficiency and speed of the global optimization algorithm. The first modification was the application of a sampling technique which incorporates the K-means method [52]. With this modification, the sampled points helped to locate the global minimum with greater accuracy and in less time. The second modification concerned the termination rule, which stops the algorithm promptly, without unnecessarily wasting computational time on iterations that do not improve the result. The third modification concerned the refinement of the produced offspring, using the BFGS method [53] as a local search procedure. In the experiments conducted, different sampling techniques were tested with the proposed method.
More specifically, the following techniques were used: Uniform, Triangular and K-means. It was found that the K-means sampling technique yields much better results than the other two techniques, with a substantially lower total number of function calls.
Experiments were also performed using different optimization methods, namely a genetic algorithm, PSO and EOFA. It was observed that the average number of function calls for EOFA is very low compared to the other two methods.
Since the experimental results have proven extremely promising, further efforts can be made to develop the technique in various directions. Future extensions may include the use of parallel computing techniques to speed up the optimization process, such as the integration of MPI [70] or of the OpenMP library [71].

Figure 1. The steps of the proposed method.

Figure 2. Statistical representation of the function calls for different optimization methods.

Figure 3. Scatter plot representation of the optimization methods with Kruskal-Wallis ranking.

Figure 4. Statistical representation for different sampling methods.

Figure 5. Scatter plot representation of the used distributions with Kruskal-Wallis ranking.

Figure 6. Different variations of the ELP problem.

Table 1. Experimental settings. The numbers in cells denote the values used in the experiments for all parameters.

Table 2. Experimental results using different optimization methods. Numbers in cells represent average function calls.

Table 3. Experiments using different sampling techniques for the proposed method.