Improving the Giant-Armadillo Optimization Method

Abstract: Global optimization is widely used today in a variety of practical and scientific problems. In this context, a widely used group of techniques is that of evolutionary techniques. A relatively new evolutionary technique in this direction is Giant-Armadillo Optimization, which is based on the hunting strategy of giant armadillos. In this paper, modifications to this technique are proposed, such as the periodic application of a local minimization method as well as the use of modern termination techniques based on statistical observations. The proposed modifications have been tested on a wide series of test functions available from the relevant literature and compared against other evolutionary methods.


Introduction
Global optimization aims to discover the global minimum of an optimization problem by searching the domain range of the problem. Typically, a global optimization method aims to discover the global minimum of a continuous function f : S → R, S ⊂ R^n, and hence the global optimization problem is formulated as follows:

x* = arg min_{x ∈ S} f(x). (1)

The set S is defined as:

S = [a_1, b_1] × [a_2, b_2] × ... × [a_n, b_n]

The vectors a and b stand for the left and right bounds, respectively, for the point x.
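As a concrete illustration of this formulation, the following sketch performs pure random search over a box S, returning the best sample as an estimate of x*. The objective function and bounds in the example are illustrative assumptions, not taken from the paper.

```python
import random

def random_search(f, a, b, budget=10000, seed=42):
    # draw `budget` uniform samples from S = prod_i [a_i, b_i]
    # and keep the sample with the lowest objective value
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(budget):
        x = [rng.uniform(a[i], b[i]) for i in range(len(a))]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# example: f(x) = x_1^2 + x_2^2 on S = [-5, 5]^2 (global minimum 0 at the origin)
x_star, f_star = random_search(lambda x: sum(t * t for t in x), [-5.0, -5.0], [5.0, 5.0])
```

Stochastic methods such as this one make no assumptions about f beyond the ability to evaluate it, which is the property the evolutionary techniques discussed below also exploit.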
A review of the optimization procedure can be found in the paper by Rothlauf [1]. Global optimization refers to techniques that seek the optimal solution to a problem, mainly using traditional mathematical methods, for example, methods that try to locate either maxima or minima [2][3][4]. Every optimization problem contains its decision variables, a possible series of constraints, and the definition of the objective function [5]. Every optimization method targets the discovery of appropriate values for the decision variables that minimize the objective function. Optimization methods are commonly divided into deterministic and stochastic approaches [6]. The techniques used in most cases for the first category are the interval methods [7,8]. In interval methods, the set S is divided through several iterations into subregions that may contain the global minimum, using some criteria. On the other hand, stochastic optimization methods are used in most cases because they can be programmed faster than deterministic ones and do not depend on any previously defined information about the objective function. Such techniques include Controlled Random Search methods [9][10][11], Simulated Annealing methods [12,13], Clustering methods [14][15][16], etc. Systematic reviews of stochastic methods can be found in the papers by Pardalos et al. [17] and Fouskakis et al. [18]. Also, due to the widespread use of parallel computing, a series of methods have been presented that exploit such architectures [19,20].
Metaheuristic algorithms have seen significant developments from their appearance in the early 1970s through the late 1990s. They have gained much attention in solving difficult optimization problems and are paradigms of computational intelligence [46][47][48]. Metaheuristic algorithms are grouped into four categories based on their behavior: evolutionary algorithms, algorithms based on considerations derived from physics, algorithms based on swarms, and human-based algorithms [49].
Recently, Alsayyed et al. [50] introduced a new bio-inspired algorithm that belongs to the group of metaheuristic algorithms. This new algorithm is called Giant-Armadillo Optimization (GAO) and aims to replicate the behavior of giant armadillos in the real world [51]. The new algorithm is based on the giant armadillo's hunting strategy of heading toward prey and digging termite mounds.
Owaid et al. presented a method [52] concerning the decision-making process in organizational and technical systems management problems, which also uses giant-armadillo agents. The article presents a method for maximizing decision-making capacity in organizational and technical systems using artificial intelligence. The research is based on giant-armadillo agents that are trained with the help of artificial neural networks [53,54]; in addition, a genetic algorithm is used to select the best one.
The GAO optimizer can also be considered a method based on Swarm Intelligence [55]. Some of the reasons why methods based on Swarm Intelligence are used in optimization problems are their robustness, scalability, and flexibility. With the help of simple rules, simple reactive agents such as fish and birds exchange information with the basic purpose of finding an optimal solution [56,57]. This article focuses on enhancing the effectiveness and the speed of the GAO algorithm by proposing the following modifications:

• The application of termination rules, which are based on asymptotic considerations and are defined in the recent literature. This addition achieves early termination of the method without wasting computational time on iterations that do not yield a better estimate of the global minimum of the objective function.

• The periodic application of a local search procedure. Using local optimization, the local minima of the objective function are located more efficiently, which also leads to a faster discovery of the global minimum.
The current method was applied to a series of objective problems found in the global optimization literature, and it was compared against an implemented Genetic Algorithm and a variant of the PSO technique.
The rest of this paper is divided into the following sections: in Section 2, the proposed modifications are fully described. In Section 3, the benchmark functions are listed, accompanied by the experimental results, and finally, in Section 4, some conclusions and guidelines for future work are provided.

The Proposed Method
The GAO algorithm is based on processes inspired by nature and initially generates a population of candidate solutions to the objective problem. The GAO algorithm aims to evolve this population through iterative steps. The algorithm has two major phases: the exploration phase, where the candidate solutions are updated with a process that mimics the attack of armadillos on termite mounds, and the exploitation phase, where the solutions are updated in a way that resembles the digging in termite mounds. The basic steps of the GAO algorithm are presented below:

1.

Initialization step
• Define as N_c the number of armadillos in the population.
• Initialize the position g_i of every armadillo i randomly inside the set S.

2.

Fitness calculation step
• Calculate the fitness f_i = f(g_i) of every armadillo i.

3.

Update step
• For every armadillo i do
- Create the set of termite mounds TM_i = {g_k : f_k < f_i and k ≠ i}.
- Select a termite mound STM_i for armadillo i.
- Create a new position g_i^P1 for the armadillo according to the exploration formula of the original GAO method [50], where r_i,j are random numbers in [0, 1], I_i,j are random numbers from the set {1, 2}, and j = 1, . . ., n.
- Update the position of armadillo i, if g_i^P1 improves the fitness.
- Create a new position g_i^P2 according to the exploitation (digging) formula, where r_i,j are random numbers in [0, 1].
- Update the position of armadillo i, if g_i^P2 improves the fitness.
- Draw a random number r in [0, 1]. If r ≤ p_l, where p_l denotes the local search rate, then a local optimization algorithm is applied to g_i. Some local search procedures found in the optimization literature are the BFGS method [58], the Steepest Descent method [59], the L-BFGS method [60] for large-scale optimization, etc. A BFGS modification proposed by Powell [61] was used in the current work as the local search optimizer. Using the local optimization technique ensures that the outcome of the global optimization method will be one of the local minima of the objective function, which ensures maximum accuracy in the end result.
• end for
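The main loop above, including the proposed periodic local search, can be sketched as follows. The exploration and exploitation update rules below are simplified stand-ins for the exact GAO formulas, and the coordinate-descent routine is a stand-in for the Powell BFGS variant used in the paper; the names pop_size and local_rate are illustrative assumptions.

```python
import random

def sphere(x):
    # toy objective function for the sketch
    return sum(t * t for t in x)

def local_search(f, x, steps=60, h=0.5):
    # crude derivative-free coordinate descent; a stand-in for the
    # Powell/BFGS local optimizer used in the paper
    x = list(x)
    fx = f(x)
    for _ in range(steps):
        improved = False
        for j in range(len(x)):
            for s in (h, -h):
                y = list(x)
                y[j] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            h *= 0.5  # shrink the step once no move helps
    return x, fx

def gao_sketch(f, dim=2, pop_size=20, iters=60, local_rate=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(p) for p in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # exploration: move toward a better solution (a "termite mound")
            mounds = [k for k in range(pop_size) if fit[k] < fit[i]]
            target = pop[rng.choice(mounds)] if mounds else pop[i]
            trial = [pop[i][j] + rng.random() * (target[j] - rng.choice((1, 2)) * pop[i][j])
                     for j in range(dim)]
            ft = f(trial)
            if ft < fit[i]:
                pop[i], fit[i] = trial, ft
            # exploitation: a small local perturbation ("digging")
            trial = [pop[i][j] + rng.random() - 0.5 for j in range(dim)]
            ft = f(trial)
            if ft < fit[i]:
                pop[i], fit[i] = trial, ft
            # proposed modification: periodic application of local search
            if rng.random() < local_rate:
                pop[i], fit[i] = local_search(f, pop[i])
    return min(fit)
```

Note how the local search is applied with probability local_rate to each armadillo, mirroring the random selection described above.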

4.

Termination check step
For the valid termination of the method, two termination rules that have recently appeared in the literature are proposed here; both are based on stochastic considerations. The first stopping rule will be called DoubleBox in the conducted experiments, and it was introduced in the work of Tsoulos in 2008 [62]. The steps for this termination rule are as follows:
(a) Define as σ_iter the variance of the located global minimum at iteration iter.
(b) Terminate the method when σ_iter ≤ σ_(k_T)/2, where k_T is the iteration where a new and better estimation for the global minimum was first found.
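A minimal sketch of a DoubleBox-style check follows, under the assumption (our reading of [62]) that the running variance of the best located value is tracked online and the method halts once it drops to half of its value at the iteration k_T where the best value last improved.

```python
class DoubleBox:
    """Stop once the variance of the f_min series falls to half of its
    value at the iteration k_T of the latest improvement (assumed
    reading of the rule in [62])."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0
        self.best = float("inf")
        self.sigma_kt = None

    def update(self, fmin):
        # Welford's online update of the variance of the f_min series
        self.n += 1
        d = fmin - self.mean
        self.mean += d / self.n
        self.m2 += d * (fmin - self.mean)
        if fmin < self.best:
            self.best = fmin
            self.sigma_kt = self.m2 / self.n  # variance at iteration k_T

    def should_stop(self):
        if self.sigma_kt is None or self.n < 2:
            return False
        return self.m2 / self.n <= self.sigma_kt / 2.0
```

In use, update() is called once per generation with the current best value, and the optimizer exits its main loop as soon as should_stop() returns True.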
The second termination rule was introduced in the work of Charilogis et al. [63] and will be called Similarity in the experiments. In the Similarity stopping rule, at every iteration k, the absolute difference between the currently located global minimum f_min^(k) and the previous best value f_min^(k−1) is computed. If this difference is zero for a predefined number N_k of consecutive generations, the method terminates.
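The Similarity rule amounts to a counter over consecutive generations without improvement; a minimal sketch (the parameter name Nk is an illustrative assumption):

```python
class Similarity:
    """Similarity rule: terminate once the best located value has not
    changed for N_k consecutive generations."""

    def __init__(self, Nk=8):
        self.Nk = Nk
        self.last = None
        self.count = 0

    def update(self, fmin):
        # count consecutive generations with zero change in f_min;
        # returns True when the method should terminate
        if self.last is not None and abs(fmin - self.last) == 0.0:
            self.count += 1
        else:
            self.count = 0
        self.last = fmin
        return self.count >= self.Nk
```

The counter resets whenever any improvement is observed, so the rule only fires after N_k genuinely stagnant generations in a row.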

• If the termination criteria do not hold, then go to step 3.
The steps of the proposed method are also outlined in Figure 1.

Experiments
This section begins by detailing the functions used in the experiments. These functions are widespread in the modern global optimization literature and have been used in many research works. Next, the experiments performed using the current method are presented, together with a comparison against methods commonly used in the global optimization literature.

Experimental Functions
The functions used in the conducted experiments can be found in the related literature [64,65].The definitions for the functions are listed as follows.

• Bf1 function, defined as:
f(x) = x_1^2 + 2x_2^2 − (3/10)cos(3πx_1) − (4/10)cos(4πx_2) + 7/10.
• Bf2 function, defined as:
f(x) = x_1^2 + 2x_2^2 − (3/10)cos(3πx_1)cos(4πx_2) + 3/10.
• Branin function, defined as:
f(x) = (x_2 − (5.1/(4π^2))x_1^2 + (5/π)x_1 − 6)^2 + 10(1 − 1/(8π))cos(x_1) + 10.
• Camel function, defined as:
f(x) = 4x_1^2 − 2.1x_1^4 + (1/3)x_1^6 + x_1x_2 − 4x_2^2 + 4x_2^4.
• Easom function, defined as:
f(x) = −cos(x_1)cos(x_2)exp(−(x_1 − π)^2 − (x_2 − π)^2).
• Exponential function, defined as:
f(x) = −exp(−(1/2) Σ_{i=1}^n x_i^2), −1 ≤ x_i ≤ 1.
In the current work, the following values were used for the conducted experiments: n = 4, 8, 16, 32.

• Gkls function [66]. f(x) = Gkls(x, n, w) is defined as a function with w local minima, where n is the dimension of the function. For the conducted experiments, the cases n = 2, 3 and w = 50 were used.

• Goldstein and Price function, defined as:
f(x) = [1 + (x_1 + x_2 + 1)^2 (19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1x_2 + 3x_2^2)] × [30 + (2x_1 − 3x_2)^2 (18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1x_2 + 27x_2^2)].
• Griewank2 function, which has the following definition:
f(x) = 1 + (1/200) Σ_{i=1}^2 x_i^2 − Π_{i=1}^2 cos(x_i)/√i.
• Griewank10 function, the ten-dimensional version of the Griewank function.
• Hartman 3 function, the three-dimensional Hartman test function, defined as:
f(x) = −Σ_{i=1}^4 c_i exp(−Σ_{j=1}^3 a_ij (x_j − p_ij)^2),
with the standard values for the parameters a, c, and p [64].
• Potential function: the molecular conformation problem of N atoms interacting through the Lennard-Jones potential is adopted as a test case here with N = 3, 5.
• Sinusoidal function, defined as:
f(x) = −(2.5 Π_{i=1}^n sin(x_i − z) + Π_{i=1}^n sin(5(x_i − z))), 0 ≤ x_i ≤ π.
For the current series of experiments, the values n = 4, 8, 16 and z = π/6 were used.

• Test2N function, defined as:
f(x) = (1/2) Σ_{i=1}^n (x_i^4 − 16x_i^2 + 5x_i), x_i ∈ [−5, 5].
The function has 2^n local minima, and for the conducted experiments, the cases n = 4, 5, 6, 7 were used.

• Test30N function, with x ∈ [−10, 10]^n; its full definition can be found in the relevant literature [64,65]. This function has 30^n local minima, and for the conducted experiments, the cases n = 3, 4 were used.

Experimental Results
The software used in the experiments was coded in ANSI C++, incorporating the freely available Optimus optimization environment. The software can be downloaded from https://github.com/itsoulos/GlobalOptimus/ (accessed on 14 April 2024). Optimus is entirely written in ANSI C++ and was prepared using the freely available QT library. All the experiments were executed on an AMD Ryzen 5950X with 128 GB of RAM, running Debian Linux. In all experimental tables, the numbers in cells denote average function calls over 30 runs. In each run, a different seed for the random number generator was used. The decimal numbers enclosed in parentheses represent the success rate of the method in finding the global minimum of the corresponding function. If this number does not appear, then the method managed to discover the global minimum in every run. The simulation parameters for the used optimization techniques are listed in Table 1. The values for these parameters were chosen to strike a balance between the expected efficiency of the optimization methods and their speed. All techniques used a uniform distribution to initialize the corresponding population. The results from the conducted experiments are outlined in Table 2. The following applies to this table:

1.

The column PROBLEM denotes the objective problem.

2.
The column GENETIC stands for the average function calls for the Genetic Algorithm. The same number of armadillos, chromosomes, and particles was used in the conducted experiments to make a fair comparison between the algorithms. Also, the same maximum number of generations and the same stopping criteria were utilized among the different optimization methods.

3.
The column PSO stands for the application of a Particle Swarm Optimization method to the objective problem. The number of particles and the stopping rule in the PSO method are the same as in the proposed method.

4.
The column GWO stands for the application of the Grey Wolf Optimizer [68] to the benchmark functions.

5.
The column PROPOSED represents the experimental results for the GAO method with the suggested modifications.

6.

The final row, denoted as SUM, stands for the sum of the function calls and the average success rate over all the objective functions used.
The statistical comparison for the previous experimental results is depicted in Figure 2. The previous experiments and their subsequent statistical processing demonstrate that the proposed method significantly outperforms Particle Swarm Optimization in terms of the average number of function calls, since it requires 20% fewer function calls on average to find the global minimum efficiently. In addition, the proposed method appears to have similar efficiency in terms of required function calls to that of the Genetic Algorithm. The reliability of the termination techniques was tested with one more experiment, in which both proposed termination rules were used, and the produced results for the benchmark functions are presented in Table 3. Also, the statistical comparison for this experiment is shown graphically in Figure 3.
From the statistical processing of the experimental results, one can see that termination using the Similarity criterion demands a lower number of function calls than the DoubleBox stopping rule to achieve the goal, which is to find the global minimum effectively. Furthermore, there is no significant difference between the two termination techniques in terms of the success rate in finding the global minimum, which remains high for both (around 97%).
Moreover, the effect of the application of the local search technique is explored in the experiments shown in Table 4, where the local search rate increases from 0.5% to 5%. As expected, the success rate in discovering the global minimum increases as the rate of application of the local minimization technique increases. For the current method, this rate increases from 92% to 97% in the experimental results. This finding demonstrates that combining this method with effective local minimization techniques can lead to a more efficient discovery of the global minimum of the objective function. Also, to measure the time complexity of the proposed work, the ELP (High Elliptic) function was employed with arbitrary dimensions. The function is defined as:

f(x) = Σ_{i=1}^n (10^6)^((i−1)/(n−1)) x_i^2

In this test, the dimension n of the function was increased from 1 to 15, and the average execution time was measured. The results obtained for the Similarity termination rule are outlined in Figure 4, and the results for the DoubleBox termination rule are shown graphically in Figure 5. As expected, the execution time increases with the dimension of the function, but there are no significant differences between the execution times of the three optimization methods.
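A sketch of the ELP test function and a simple timing loop mirroring this scaling experiment follows; the conditioning constant 10^6 is an assumption based on the common high-conditioned elliptic definition from the literature.

```python
import math
import time

def elp(x):
    # high-conditioned elliptic function; the constant 10^6 is an
    # assumption based on the common literature definition
    n = len(x)
    if n == 1:
        return x[0] * x[0]
    return sum((10.0 ** 6) ** (i / (n - 1)) * x[i] * x[i] for i in range(n))

def time_elp(n, evals=10000):
    # wall-clock cost of `evals` evaluations at dimension n
    start = time.perf_counter()
    for k in range(evals):
        elp([math.sin(k + j) for j in range(n)])
    return time.perf_counter() - start
```

Calling time_elp for n = 1, ..., 15 reproduces the shape of the scaling measurement: evaluation cost grows roughly linearly with the dimension.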
Furthermore, as a practical application, consider the training of an artificial neural network for classification or data fitting problems [69,70]. Neural networks are non-linear parametric tools with many applications in real-world problems [71][72][73]. Neural networks can be defined as functions N(x, w), where the vector x represents the input pattern and the vector w represents the weight vector of the neural network that should be estimated. Optimization methods can be used to estimate the set of weights by minimizing the following equation:

E(w) = Σ_{i=1}^M (N(x_i, w) − y_i)^2 (3)

where the pairs (x_i, y_i), i = 1, ..., M denote the training set. The quantity of Equation (3) was minimized using the algorithm of this work for the BK dataset [74], which is used to estimate the points scored in a basketball game. The average test error using the four methods presented in this article is shown graphically in Figure 6. To validate the results, the well-known ten-fold cross-validation method was applied. The current work has the same performance as the PSO algorithm and significantly outperforms the Genetic Algorithm.
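Training as optimization can be sketched as follows: a one-hidden-layer tanh network N(x, w) whose weight vector is fitted by minimizing the summed squared error of Equation (3). The dataset here is a toy sine-curve sample (the BK dataset is not reproduced), and the tiny hill climber merely stands in for the global optimizers compared in the paper.

```python
import math
import random

def net(x, w, hidden=3):
    # N(x, w) for a scalar input: a one-hidden-layer tanh network
    out = 0.0
    for k in range(hidden):
        a, b, v = w[3 * k], w[3 * k + 1], w[3 * k + 2]
        out += v * math.tanh(a * x + b)
    return out

def train_error(w, data):
    # Equation (3): E(w) = sum_i (N(x_i, w) - y_i)^2
    return sum((net(x, w) - y) ** 2 for x, y in data)

def fit(data, dim=9, iters=4000, seed=3):
    # a simple hill climber standing in for the global optimizers
    rng = random.Random(seed)
    w = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    e = train_error(w, data)
    for _ in range(iters):
        cand = [wi + rng.gauss(0.0, 0.1) for wi in w]
        ec = train_error(cand, data)
        if ec < e:
            w, e = cand, ec
    return w, e

# toy dataset sampled from y = sin(x)
data = [(k / 5.0, math.sin(k / 5.0)) for k in range(-10, 11)]
w, e = fit(data)
```

Any of the population-based methods of this paper can replace fit() unchanged, since they only require the ability to evaluate E(w) for a candidate weight vector.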

Conclusions
Two modifications for the Giant-Armadillo Optimization method were suggested in this article. These modifications aimed to improve the efficiency and the speed of the underlying global optimization algorithm. The first modification suggested the periodic application of a local optimization procedure to randomly selected armadillos from the current population. The second modification utilized some stopping rules from the recent literature to terminate the optimization method more efficiently and to avoid unnecessary iterations once the global minimum has already been discovered. The modified global optimization method was tested against two other global optimization methods from the relevant literature, more specifically, an implementation of the Genetic Algorithm and a Particle Swarm Optimization variant, on a series of well-known test functions. To make a fair comparison between these methods, the same number of test solutions (armadillos or chromosomes) and the same termination rule were used. After comparing the experimental results, the present technique clearly outperforms Particle Swarm Optimization and behaves similarly to the Genetic Algorithm. Also, a series of experiments showed that the Similarity termination rule outperforms the DoubleBox termination rule in terms of function calls without reducing the effectiveness of the proposed method in the task of locating the global minimum.
Since the experimental results have been shown to be extremely promising, further efforts can be made to develop the technique in various directions. For example, one extension could be a termination rule that exploits the particularities of this specific global optimization technique. Future extensions may also include the use of parallel computing techniques to speed up the optimization process, such as the incorporation of MPI [75] or the OpenMP library [76]. In this direction, parallelizing the technique in a manner similar to genetic algorithms with islands [77,78] could be investigated.

Figure 1. A schematic representation of the current method.

Figure 2. A statistical comparison using the number of function calls. The test was performed for three different optimization methods.

Figure 3. Comparison of the GAO algorithm with two termination rules.

Figure 4. Representation of the average execution times for the ELP objective function, using the Similarity stopping rule.

Figure 5. Representation of the average execution times for the ELP objective function, using the DoubleBox stopping rule.

Figure 6. Comparison of test error between the mentioned global optimization algorithms for the BK dataset.

Table 1. The experimental values for each parameter used in the conducted experiments.

Table 2. Experimental results and comparison against other methods. The stopping rule used is the Similarity stopping rule.

Table 3. Average number of function calls for the proposed method using the two suggested termination rules.

Table 4. Experimental results using different values for the local search rate with the proposed method.