ANA: Ant Nesting Algorithm for Optimizing Real-World Problems

In this paper, a novel swarm intelligent algorithm is proposed called ant nesting algorithm (ANA). The algorithm is inspired by Leptothorax ants and mimics the behavior of ants searching for positions to deposit grains while building a new nest. Although the algorithm is inspired by the swarming behavior of ants, it does not have any algorithmic similarity with the ant colony optimization (ACO) algorithm. It is worth mentioning that ANA is considered a continuous algorithm that updates the search agent position by adding a rate of change (e.g., step or velocity). ANA computes the rate of change differently, as it uses the previous and current solutions and their fitness values during the optimization process to generate weights by utilizing the Pythagorean theorem. These weights drive the search agents during the exploration and exploitation phases. The ANA algorithm is benchmarked on 26 well-known test functions, and the results are verified by a comparative study with the genetic algorithm (GA), particle swarm optimization (PSO), dragonfly algorithm (DA), five modified versions of PSO, whale optimization algorithm (WOA), salp swarm algorithm (SSA), and fitness dependent optimizer (FDO). ANA outperforms these prominent metaheuristic algorithms on several test cases and provides quite competitive results. Finally, the algorithm is employed for optimizing two well-known real-world engineering problems: antenna array design and frequency-modulated synthesis. The results on the engineering case studies demonstrate the proposed algorithm's capability in optimizing real-world problems.


Introduction
Both our professional and private lives are sequences of decisions, and each decision involves selecting between at least two options (it would not be considered a decision otherwise); the fact is, we are always in search of the best option. Every question that needs a superlative answer is an optimization problem. Finding the shortest path to a destination, constructing the fastest car, hiring the best job applicant, prescribing the best medicine for a patient, making the best vaccine for a virus, finding the best way to discover diseases and viruses as early and reliably as possible, and finding the best strategy for overcoming financial and economic crises are a few examples of optimization problems.
Simple exhaustive search methods [1,2] are rarely sufficient for most real-world problems, as they lead to too slow or incomplete searches when the search space (the number of options) grows dramatically. That means that finding the best solution for a problem is not usually an easy task and requires a long time and sometimes an enormous amount of resources. Optimization can be defined as the art and science of making good decisions, and optimization algorithms are meant to solve optimization problems by trading solution quality for runtime. Optimization algorithms provide us with a set of tools and techniques, mainly from mathematics and computer science, to select the best solution amongst the possible choices [3].
There are many ways in which optimization algorithms can be classified; the simplest way is to sort them as deterministic or stochastic [3]. Deterministic algorithms [3,4] such as linear programming, nonlinear programming, and mixed-integer nonlinear programming guarantee optimal or near-optimal solutions by rigorously following a repeatable procedure over the design variables and functions. They have a fast convergence rate and are simple and easy to implement and understand. Despite their efficiency [5], they are deterministic, i.e., for the same inputs, the same output is obtained consistently.
On the other hand, stochastic algorithms are more flexible and efficient than deterministic algorithms [3,6,7]; they all have some level of randomness, so for the same set of inputs, the same output is not always obtained. They are considered quite efficient in obtaining near-optimal solutions to all types of problems because they make no assumptions about the underlying fitness landscape. Stochastic algorithms include heuristic and metaheuristic algorithms. A metaheuristic can be seen as a "master strategy that guides and modifies other heuristics to produce solutions beyond those that are normally generated in a quest for local optimality" [8]. Metaheuristics are considered to perform better than heuristics, though the names are used interchangeably [3].
Time, cost, and resource limitations make searching every single solution for a problem to find the optimal one an impossible task. Therefore, researchers have started observing and studying the behavior of animals and natural phenomena to develop algorithms for solving optimization problems. They have developed algorithms based on swarm intelligence, biological systems, physical and chemical systems. These types of algorithms are called nature-inspired, and they contain a big set of novel problem-solving methodologies and approaches. They have been used to solve many real-world problems, and they comprise a large portion of stochastic algorithms.
Since their advent, nature-inspired optimization algorithms have received great attention, and their number is growing very rapidly. According to a research report [9], there are more than 200 nature-inspired algorithms presented in the literature. Despite the considerable number of algorithms and developments, there is always room for presenting a new algorithm, as long as the newly proposed algorithm presents better or comparable performance to the previous ones, as justified by the no free lunch (NFL) theorem [10]. The theorem states that if an algorithm A outperforms another algorithm B in the search for the optimum of one set of objective functions, then algorithm B outperforms A on other objective functions. In other words, all optimization algorithms give the same performance when averaged over all functions. This motivates developing more and more algorithms for solving diverse and complex real-world optimization problems.
This paper proposes a new algorithm under the name ant nesting algorithm, abbreviated ANA. It is inspired by the swarming behavior of Leptothorax ants during nest construction. The ant colony optimization (ACO) algorithm is also a nature-inspired metaheuristic algorithm that mimics the behavior of ants; however, it is very different from our algorithm, as it models different ant stigmergy and behaviors.
The major contribution of this work is the proposal of a new swarm intelligent algorithm for optimizing single-objective problems (SOPs) with a good level of exploration and exploitation. It is noted that single-objective optimization problems are those that require a solution for a single criterion or metric of the problem.
The main contributions of this work are outlined as follows:
(1) Proposing a novel metaheuristic algorithm for solving SOPs.
(2) Integrating the Pythagorean theorem into the ant nesting model for generating convenient weights that assist the algorithm in both the exploration and exploitation phases.
(3) Utilizing a quite different approach from PSO for updating search agent positions, testing the algorithm on several optimization benchmark functions, and comparing it to the most well-known and outstanding metaheuristic algorithms, namely the genetic algorithm (GA), particle swarm optimization (PSO), five modified versions of PSO, the dragonfly algorithm (DA), the whale optimization algorithm (WOA), the salp swarm algorithm (SSA), and the fitness dependent optimizer (FDO).
(4) Applying the ANA algorithm for optimizing two real-world engineering problems: antenna array design and frequency-modulated synthesis.
The rest of the paper contains a brief history of the most prominent swarm algorithms in the literature, the inspiration for the algorithm with its modelling, the testing and evaluation of the algorithm, and the conclusion with recommendations for future work.

Nature-Inspired Metaheuristic Algorithms in Literature
Metaheuristics as a scientific method for solving problems are a quite new phenomenon in comparison to their ubiquitous presence in nature. Although it is difficult to pinpoint the first use of metaheuristics, the mathematician Alan Turing is known to be the first to have used heuristic algorithms for breaking the Enigma ciphers at Bletchley Park during World War II [3].
To date, over 200 nature-inspired metaheuristic algorithms for optimization exist in the literature. This section presents the most well-known of them. The genetic algorithm was developed by Holland and his colleagues at the University of Michigan in the 1960s and 1970s [11]. It is based on the abstraction of Darwinian evolution and the natural selection of biological systems. GA has proven extremely successful in solving a wide range of optimization problems, and hundreds of books and thousands of research papers have been published about it. In addition, some studies show its usage in solving combinatorial optimization problems like scheduling, planning, and data mining in companies [12].
Ant colony optimization was developed by Dorigo in 1992, and it is inspired by the foraging behavior of social ants, i.e., how ants find the shortest path to a destination [13]. ACO has been applied to various problems such as scheduling [14], vehicle routing [15], assignment [16], and set problems [17].
Particle swarm optimization was developed by Kennedy and Eberhart in 1995, and it is inspired by the movement behavior of bird flocks and fish schools [18]. The algorithm's exploitation and exploration abilities are considered to be quite efficient, and it has been used for solving a very large number of real-world optimization problems [19].
The artificial bee colony algorithm was developed by Karaboga in 2005, and it is inspired by the intelligent foraging behavior of honeybee swarms [20]. ABC has high exploration ability and comparatively low exploitation ability. Although the performance of the algorithm depends on applications, ABC has been proven to be more efficient than GA and PSO in solving certain problems [21][22][23]. Examples of applications are in solving the set covering problem (SCP) [24] and optimum reservoir release [25].
The firefly algorithm was developed by Yang in 2008, and it is inspired by the flashing patterns of fireflies [26]. It has been used for solving a variety of optimization problems in computer science and engineering, and it has been proven to outperform some other metaheuristic algorithms. However, despite its applications and efficiency, FA has been criticized as differing from PSO only in a negligible way [27][28][29].
Cuckoo search was proposed by Yang and Deb in 2009, and it is inspired by the brood parasitism of cuckoo species [30]. CS uses a switching parameter to balance between local and global random walks. It outperforms the PSO and GA algorithms and has been used for solving several real-world problems like power network planning [31], series system [32], and engineering design optimization problems [33].
The bat algorithm was developed by Yang in 2010, and it is based on the echolocation behavior of microbats [34]. BA has been applied in several areas like image processing [35] and scheduling [36].
The grey wolf optimizer was proposed by Mirjalili et al. in 2014 and is inspired by the searching and hunting behavior of grey wolves. The algorithm is implemented in three steps: searching for, encircling, and attacking prey. GWO has proven to be very efficient and outperforms several well-known algorithms like the gravitational search algorithm (GSA), PSO, differential evolution (DE), evolutionary programming (EP), and evolution strategy (ES) on several benchmark test functions. It has been applied for solving classical engineering design problems and optical engineering real-world problems [37].
The dragonfly algorithm was developed by Mirjalili in 2015 and is inspired by the static and dynamic swarming behaviors of dragonflies. For simulating the swarming behavior, the five swarming principles of insects are utilized: alignment, cohesion, separation, attraction to a food source, and distraction from enemies. DA can be used for solving single-objective, discrete, and multi-objective optimization problems [38].
Ant lion optimizer was proposed by Mirjalili in 2015 and is inspired by the hunting mechanism of antlions in nature. Five main steps of hunting prey like the random walk of ants, building traps, entrapment of ants in traps, catching prey, and re-building traps are implemented. ALO has been used for solving three-bar truss design, cantilever beam design, and gear train design and optimizing the shapes of two ship propellers [39].
The whale optimization algorithm (WOA) was developed by Mirjalili and Lewis in 2016 [40]. WOA mimics the hunting mechanism of humpback whales and has three phases: encircling prey, the bubble-net attacking method, and searching for prey. It has been applied to several optimization problems with outstanding results, such as the economic dispatch problem [41], breast cancer diagnosis [42], global MPP tracking of a photovoltaic system [43], and a handful of other significant problems [44][45][46].
The salp swarm algorithm was developed by Mirjalili et al. in 2017. It is inspired by the behavior of salp swarms when navigating and foraging in oceans. SSA has a single decreasing parameter to balance diversification and intensification. It can be used for solving both single-objective and multi-objective optimization problems and has been applied to several challenging engineering designs [47].
The donkey and smuggler optimization algorithm was proposed by Shamsaldin et al. in 2019. DSO mimics the searching behavior of donkeys while transporting goods, and it consists of two modes: the smuggler mode and the donkey mode. In the smuggler mode, all the possible solutions are discovered and the best one is identified, while in the donkey mode, several donkey behaviors are utilized for finding the optimal option among the possible solutions. The algorithm has been applied to solve three well-known real-world optimization problems, namely the traveling salesman problem, packet routing, and ambulance routing, and significant results were produced [48].
Fitness dependent optimizer was developed by Abdullah and Rashid in 2019 [49]. It is a PSO-based algorithm and is inspired by the foraging behavior of honeybees whilst selecting a hive. It has been improved and applied by Muhammed et al. in 2020 to develop a model for pedestrian evacuation [50].
In addition to the standard versions of the nature-inspired metaheuristic algorithms, there are many modified and enhanced versions of them, like GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO, which are modified versions of the PSO algorithm [51], and DCSO, which is a modified version of the CWO algorithm [52], as well as hybridized algorithms like WOAGWO [53] that combine the behavior of more than one algorithm.

Ant Swarming
Ants are remarkable tiny insects with an enormous population and over 10,000 species all over the world, and they have always been a source of wonder and fascination to human beings. A considerable number of books and research papers have been published about all aspects of their life, ranging from pure literature [54,55] to detailed scientific accounts [56,57]. What is most striking is their fascinating, high social organization in comparison to their limited individual capabilities. They have been an inspiration to researchers and scientists for years. This section shortly presents Leptothorax ant behavior while building a new nest.
Leptothorax ant colonies prefer horizontal crevices and build nests within flat crevices in rocks that provide the roof and floor of their dwelling place to defend themselves against physical factors and biological enemies. They wall themselves into their chosen crevice by encircling themselves with a border of debris or grains, which are particles of earth or small fragments of stones. The circumference of the wall is appropriate to comfortably house their population which consists of a single queen, broods, and up to 500 workers. Worker ants are responsible for building the nest, and each is 2 to 3 mm long [58,59]. Figure 2 demonstrates ant nesting.
Leptothorax worker ants do not build a roof or floor for their nest. They build their nest by only constructing a wall around their queen and broods that consist of eggs, larvae, and pupae. The cluster of worker ants around the brood cluster serves as a mechanical template for determining where the nest wall should be built [58,59].
The nest construction starts with the worker ants in the colony departing from their cluster, collecting a single grain of building material, and dropping it at a random distance from the cluster. Then, the ants lean towards the area with the most dropped stones and start bulldozing stones into other stones from that area. The process of selecting an area to start bulldozing is very important for the consolidation of the wall. The bulldozing process continues until a tightly packed and densely consolidated wall is formed around the queen ant in the center [58,59].
What inspired the development of this algorithm is the worker ants' individual decision to drop grains at a fixed distance from the center of the nest, with the stigmergic interaction of depositing where others have already deposited. The wall originates from the combination of each worker ant's decisions. In other words, when constructing a new nest, worker ants select an area around the queen, the area with the most grains, and start bulldozing from that area. A decision is made when the majority of the ants are bulldozing at a potential area [58,59].
The Leptothorax worker ants' building behavior that inspired the algorithm is summarized as the following:

• Worker ants are responsible for building new nests by collecting building materials, transporting them to the nest site, and releasing them in an area around the queen ant [58,59].
• Worker ants make a random walk within the nest until they face their nestmates or stationary building materials to deposit; the latter is the major cue for the deposition of another building material [58].
• Each worker ant makes an independent decision about which direction to take around the queen ant for depositing [59].
• Worker ants lean towards the area with the most dropped building material to deposit [58].
• Each worker ant selects an area around the queen ant to start the bulldozing process. A decision is made when all the worker ants in the colony are bulldozing at a potential area, i.e., deposit grain in that area [58].

Ant Nesting Algorithm
In this section, the modelling process including the entities, mathematical representations, working mechanism, and analysis of the algorithm is provided.

Entities
Algorithmically modelling the entities of the Leptothorax ant behavior while building a new nest, the worker ants represent artificial search agents; each position around the queen ant that a worker ant exploits to drop grain represents a potential solution exploited by an artificial search agent, and the best position to deposit among all the possible positions exploited by all the worker ants represents the global optimum solution. Accordingly, the deposition position is determined through the worker ants' positions. The deposition position's specifications, such as its influence on the consolidation of the wall and its closeness to other stones, can be considered as fitness functions of the algorithm. Each worker ant's decision factor to deposit grain at a specific position is represented by the deposition weight (dw) in the algorithm. dw is a random weight for a worker ant to deposit grain at a specific position, computed using the solution and fitness information of the previous and current deposition positions; it is discussed further in the next section. Table 1 summarizes the main elements of the algorithm.
The stationary stones and nestmates that are encountered by the worker ants while performing random walks within the nest to find new better positions to drop grain are modelled using the worker ant's previous deposition position Xt,iprevious. That is to say that the current worker ant's previous deposition position represents the stationary stone and/or nestmate the current worker ant faces in the algorithm.

Mathematical Modelling
The algorithm mimics what a swarm of worker ants is doing during nesting. The main part of the algorithm is taken from the process of worker ants searching for a suitable position among many potential positions to deposit grain. Every worker ant that searches for deposition positions represents a potential solution in this algorithm; furthermore, selecting the best deposition position among several good deposition positions is considered as converging to optimality.
It is noteworthy that in nature, worker ants collect grains, transport them to the nest site, and search for deposition positions continuously, as a cycle, until a tightly packed wall is formed around the queen ant. However, only a single cycle of dropping grain is modelled in the algorithm, i.e., the algorithm only mimics the worker ants' search for the deposition position of a single grain during nesting, not the continuous search for deposition positions for each grain collected by each worker ant. In addition, the first building workers, who use the brood cluster as a mechanical template for determining where the nest wall should be built, are ignored in the algorithm. Rather, the grains dropped after the first depositions are considered in modelling the algorithm.
The algorithm begins by randomly initializing the artificial worker ant population in the search space Xi (i = 1, 2, 3, …, n); each worker ant position represents a newly discovered deposition position (solution). Worker ants try to find better deposition positions by randomly searching more positions. Each time a better deposition position is found, the newly discovered solution becomes the optimum solution. However, if the new solution is not better than the current one, the algorithm retains the current solution, which is the best solution discovered up to that point.
In nature, worker ants search for deposition positions randomly. In this algorithm, artificial worker ants search the landscape randomly using a deposition weight mechanism. Accordingly, every artificial worker ant obtains a new deposition position Xt+1,i (t = 1, 2, 3, …, m) (i = 1, 2, 3, …, n) by adding the deposition position's rate of change, denoted by ∆Xt+1,i, to its current deposition position Xt,i. The deposition position of the artificial worker ant is updated with the following expression:

Xt+1,i = Xt,i + ∆Xt+1,i (1)

where i represents the current worker ant, t represents the current iteration, X represents the artificial worker ant's deposition position, and ∆Xt+1,i represents the deposition position's rate of change. Table 2 summarizes all the mathematical notations used in the algorithm.
The deposition position's rate of change ∆Xt+1,i depends on the deposition weight dw and the difference between the local best-known worker ant's position Xt,ibest and the current worker ant's deposition position Xt,i; the latter is the mathematical modelling of the behavior of leaning towards the most dropped building material. Thus, each worker ant tends to improve its deposition position (potential solution) by moving towards the best-known worker ant (the best potential solution discovered so far). Thereby, ∆Xt+1,i is calculated as follows:

∆Xt+1,i = dw × (Xt,ibest − Xt,i) (2)

Two special rules are followed for calculating ∆Xt+1,i. When the current worker ant is the local best-known ant:

∆Xt+1,i = Xt,i × r (3)

When the current deposition position is equal to the previous deposition position:

∆Xt+1,i = (Xt,ibest − Xt,i) × r (4)

The deposition weight (dw) is the mathematical representation of the random walk performed by the worker ant, and it depends on the artificial worker ant's previous (Tprevious) and current (T) tendency rates to deposit grain at a specific position. T and Tprevious are computed with the Pythagorean theorem, taking the difference between the worker ant's current (respectively, previous) deposition position and the best deposition position discovered so far as one side, and the corresponding fitness difference as the other side. Figure 3a,b explicitly demonstrates how T and Tprevious are obtained for a single ant, respectively. Thus, dw for minimization problems is calculated as follows:

dw = (T / Tprevious) × r (5)

where r is a random number in the [−1, 1] range that works as a deposition factor for controlling dw. There are different mechanisms for generating random numbers; Levy flight has been selected because it provides more stable movements due to its good distribution curve [26].
The worker ant's tendency rate of deposition (T) is calculated as the following:

T = √((Xt,ibest − Xt,i)² + (f(Xt,ibest) − f(Xt,i))²) (6)

The worker ant's previous tendency rate of deposition (Tprevious) is calculated as the following:

Tprevious = √((Xt,ibest − Xt,iprevious)² + (f(Xt,ibest) − f(Xt,iprevious))²) (7)
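The Pythagorean construction of the tendency rates can be sketched for a single one-dimensional ant as follows. This is a minimal illustration, not the paper's implementation: the ratio form dw = (T / Tprevious) · r and the example fitness function f(x) = x² are assumptions made here for demonstration.

```python
import math

def tendency(x, x_best, fx, fx_best):
    # Hypotenuse of the position difference and the fitness difference,
    # following the Pythagorean construction of T and T_previous.
    return math.hypot(x_best - x, fx_best - fx)

f = lambda x: x * x  # illustrative fitness function (an assumption)
x_best, x_cur, x_prev = 0.5, 2.0, 3.0
T = tendency(x_cur, x_best, f(x_cur), f(x_best))
T_prev = tendency(x_prev, x_best, f(x_prev), f(x_best))
r = 0.7  # deposition factor, drawn from [-1, 1] in the algorithm
dw = (T / T_prev) * r  # assumed minimization form of the deposition weight
```

Since the previous position is farther from the best one here, T < Tprevious and the resulting deposition weight is a damped version of r.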

Working Mechanism
The algorithm starts by initializing random deposition positions Xt,i (t = 1, 2, 3, …, m) (i = 1, 2, 3, …, n) for each artificial worker ant using the lower and upper boundaries. Initially, the previous deposition position Xt,iprevious of each artificial worker ant is assigned to Xt,i as it is the first generation. Then, for each iteration, the global best deposition position Xt,ibest is selected, a random number r in the range [−1, 1] is generated, and for each artificial worker ant, Xt,i is compared to Xt,ibest. If the current worker ant's deposition position is the global best solution discovered so far, i.e., Xt,i is equal to Xt,ibest, ∆Xt+1,i is calculated using Equation (3); if the previous deposition position is equal to the current deposition position, ∆Xt+1,i is calculated using Equation (4). Otherwise, T, Tprevious, dw, and ∆Xt+1,i are calculated using Equations (6), (7), (5), and (2), respectively.
After that, a new solution Xt+1,i is obtained through Equation (1). Each time an artificial worker ant finds a new solution, it checks whether the new solution is better than the current solution using the fitness function. If the new solution is fitter, it is accepted and the old solution is saved to Xt,iprevious. However, if the new solution is not fitter, the algorithm maintains the current solution until the next iteration. To further elucidate the working mechanism of the ANA algorithm, both a pseudocode and a flowchart are developed. Figure 4 shows the pseudocode of the ANA algorithm, and Figure 5 presents the flowchart.
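The working mechanism above can be sketched in Python as follows. This is a hedged, minimal reading of the described steps, not a reference implementation: the rule for the best ant (∆X = X · r), the ratio form of dw, and the boundary handling by clipping are assumptions where the paper's equations are not reproduced here.

```python
import math
import random

def ana_minimize(f, dim, lo, hi, n_ants=30, max_iter=500, seed=1):
    rnd = random.Random(seed)
    # Random initialization; the previous position initially equals the current.
    X = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_ants)]
    X_prev = [row[:] for row in X]
    fit = [f(x) for x in X]
    fit_prev = fit[:]
    for _ in range(max_iter):
        best = min(range(n_ants), key=fit.__getitem__)
        r = rnd.uniform(-1.0, 1.0)  # deposition factor in [-1, 1]
        for i in range(n_ants):
            if i == best:
                # Current ant is the best-known ant (assumed rule).
                dX = [X[i][d] * r for d in range(dim)]
            elif X[i] == X_prev[i]:
                # Current position equals the previous one.
                dX = [(X[best][d] - X[i][d]) * r for d in range(dim)]
            else:
                # Tendency rates via the Pythagorean theorem: Euclidean position
                # difference as one side, fitness difference as the other.
                T = math.hypot(math.dist(X[best], X[i]), fit[best] - fit[i])
                T_p = math.hypot(math.dist(X[best], X_prev[i]),
                                 fit[best] - fit_prev[i])
                dw = (T / T_p) * r if T_p else r  # assumed minimization form
                dX = [dw * (X[best][d] - X[i][d]) for d in range(dim)]
            new = [min(hi, max(lo, X[i][d] + dX[d])) for d in range(dim)]
            f_new = f(new)
            if f_new < fit[i]:  # greedy acceptance; old solution becomes previous
                X_prev[i], fit_prev[i] = X[i], fit[i]
                X[i], fit[i] = new, f_new
    best = min(range(n_ants), key=fit.__getitem__)
    return X[best], fit[best]
```

For example, calling `ana_minimize(lambda x: sum(v * v for v in x), dim=2, lo=-5.0, hi=5.0)` drives the best fitness towards 0.0 on a sphere-like landscape.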
When implementing the ANA algorithm for maximization problems, two minor changes are needed. First, Equation (5) must be replaced by Equation (8), which is simply the inverse of Equation (5):

dw = (Tprevious / T) × r (8)

Second, the condition for selecting a better (fitter) solution must be reversed.
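The two maximization changes can be expressed as small helper functions. The reciprocal relation between the two deposition weight forms is an assumption here, based on the statement that one equation is the inverse of the other.

```python
def deposition_weight(T, T_prev, r, minimize=True):
    # Assumed forms: (T / T_prev) * r for minimization and its inverse,
    # (T_prev / T) * r, for maximization.
    if minimize:
        return (T / T_prev) * r if T_prev else r
    return (T_prev / T) * r if T else r

def is_fitter(f_new, f_cur, minimize=True):
    # The acceptance condition flips direction for maximization.
    return f_new < f_cur if minimize else f_new > f_cur
```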

Testing and Evaluation
For measuring and evaluating the performance and feasibility of an algorithm, several techniques exist in the literature, including test functions and real-world applications. This section presents the test mechanisms used along with the results and their analysis.

Standard Benchmark Functions
A considerable number of common standard benchmark or test functions for testing the reliability, efficiency, and validity of optimization algorithms exist. However, the effectiveness of an algorithm against others cannot be measured by the problems it solves if the selected set of problems is too specific and lacks varied properties. Therefore, a set of 16 common standard benchmark functions with diverse characteristics is selected for testing the performance of the algorithm. The set consists of unimodal, multimodal, and composite test functions. Tables A1-A3 in Appendix A demonstrate the selected standard benchmark functions for benchmarking performance.
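As a concrete illustration, two staples of such suites, the unimodal sphere function and the multimodal Rastrigin function, can be written as follows; which F-number each carries varies between suites, so no specific numbering is assumed here.

```python
import math

def sphere(x):
    # Unimodal: a single global optimum at the origin with f = 0.
    return sum(v * v for v in x)

def rastrigin(x):
    # Multimodal: many local optima; global optimum at the origin with f = 0.
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)
```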
The unimodal test functions have only a single optimum. They are used for testing exploitation ability, and they allow focusing more on the convergence rate of the algorithm rather than the final results. Table A1 in Appendix A presents the six unimodal test functions selected for testing the ANA algorithm, namely F1, F2, F3, F4, F5, and F7. F6 is not selected as a benchmark function for testing the algorithm but is mentioned in the table because it is utilized in the composite functions [38].
The multimodal test functions have more than one optimum, and the number of local optima usually increases exponentially with the number of problem dimensions. They are used for testing exploration ability, which allows the algorithm to avoid local optima. Table A2 in Appendix A presents the four multimodal test functions selected for testing the ANA algorithm, namely F9, F10, F11, and F12. F8 is not selected as a benchmark function for testing the ANA algorithm; however, it is mentioned in the table because it is utilized in the composite functions [38].
The composite test functions, as the name implies, are combined, shifted, rotated, and biased versions of the other test functions. They provide a lot of varied shapes with several local optima and allow measuring the exploitation and exploration balance of the algorithm. Table A3 in Appendix A presents the six composite test functions selected for testing the algorithm [38]. For verification and analysis purposes, our proposed ANA algorithm is compared to two sets of competitive algorithms with different parameter settings for each set.

DA, PSO, and GA
In the first set, the standard GA, PSO, and DA algorithms are selected as reference algorithms for comparison to ANA on the 16 selected standard benchmark functions. GA and PSO are among the earliest well-known and efficient algorithms in literature [60], and DA is a recently developed, promising algorithm with a high number of successful applications [61]. The reference [38] contains the test results of DA, PSO, and GA algorithms on the 16 selected standard benchmark functions including the parameter settings in detail.
Regarding the common parameter sets in all the cases, the population size is set to 30, and the dimension of the benchmark functions is equal to 10. The maximum number of iterations is set as the stopping criterion, equal to 500. The algorithm is tested 30 times, and the average and standard deviation are calculated. Table 3 presents the test results of the ANA, DA, PSO, and GA algorithms on the 16 standard benchmark functions [38].
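The evaluation protocol (independent repeated runs, reporting the average and standard deviation of the final fitness) can be sketched as follows. The `random_search` function is a hypothetical stand-in optimizer used only to demonstrate the protocol; it is not ANA or any algorithm from the comparison.

```python
import random
import statistics

def evaluate(optimizer, func, runs=30):
    # Repeat the optimization with different seeds and report the mean and
    # standard deviation of the final fitness, as in the result tables.
    finals = [optimizer(func, seed=s) for s in range(runs)]
    return statistics.mean(finals), statistics.stdev(finals)

def random_search(func, seed, dim=10, evals=500, lo=-100.0, hi=100.0):
    # Hypothetical baseline: sample random points and keep the best fitness.
    rnd = random.Random(seed)
    return min(func([rnd.uniform(lo, hi) for _ in range(dim)])
               for _ in range(evals))
```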
Each test function from the standard benchmark functions is minimized towards 0.0, as shown in Table 3. Comparing the test results of ANA to the other algorithms presented in the table, ANA outperformed the most famous algorithms, DA, PSO, and GA, on seven test cases, namely F4, F10, F13-F16, and F18; only on F7 did the other algorithms provide better results than ANA. However, the result on F7 was not poor, only not better than those of the other algorithms. On the rest of the benchmark functions, the algorithm provided results comparable to the others.
According to the results in Table 3, it can be noted that F13 to F18 are composite test functions and are suitable for measuring the local minima avoidance of an algorithm. The ANA algorithm outperformed DA, PSO, and GA on all these test functions except F17, on which it ranked third, being outperformed by the DA algorithm. From that, it can be concluded that ANA is quite effective in avoiding local minima and balancing exploitation and exploration levels.

In the second set, the standard PSO and several modified versions of the PSO algorithm, namely GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO, are selected as reference algorithms for comparison with ANA on six selected standard benchmark functions, namely F1, F5, F7, F9, F10, and F11. This work [51] contains the details of the modifications in the GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO algorithms, the parameter settings, and the test results of the algorithms on the selected functions.
Regarding the common parameter settings in all cases, the population size is set to 30, and the dimension of the benchmark functions is 20. The maximum number of iterations, set to 10,000, is used as the stopping criterion. It is worth mentioning that the functions are used without shift and with the same ranges as in the first set, except for F1, whose range is reduced to [−5.12, 5.12]. Each algorithm is run 100 times, and the average and standard deviation are calculated. Table 4 presents the test results of the ANA, PSO, GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO algorithms on the six selected standard benchmark functions [51].
As shown in Table 4, ANA minimizes each of the standard benchmark functions towards 0.0. Comparing the results of ANA to those of the other algorithms presented in the table, ANA outperformed the PSO, GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO algorithms on two test cases, namely F1 and F9. On the rest of the benchmark functions, the ANA algorithm provided results quite comparable to the others.

CEC-C06 2019 Benchmark Functions
In addition to the standard benchmark functions, a set of 10 modern CEC benchmark functions is used as an extra evaluation of the ANA algorithm, and the results are compared to three other remarkable metaheuristic algorithms: DA, WOA, and SSA. These test functions were developed for benchmarking single-objective optimization problems and are known as "the 100-digit challenge", intended for use in annual optimization competitions [62]. Table A4 in Appendix A presents the CEC-C06 2019 test functions used for benchmarking the ANA algorithm [62].
All the test functions from CEC-C06 2019 are scalable; functions CEC01 to CEC03 are not shifted or rotated, while functions CEC04 to CEC10 are. The default test function parameters set by the CEC benchmark developers are used for testing ANA. As can be seen in Table A4 in Appendix A, function CEC01 is set as a 9-dimensional minimization problem in the [−8192, 8192] boundary range, function CEC02 is 16-dimensional in the [−16,384, 16,384] range, and function CEC03 is 18-dimensional in the [−4, 4] range, while the remaining functions, CEC04 to CEC10, are set as 10-dimensional minimization problems in the [−100, 100] boundary range. The global optima of all the CEC functions are unified to 1.0 for convenience.
The test results of the ANA algorithm are compared to those of three other modern optimization algorithms, DA, WOA, and SSA, taken from Abdullah and Rashid [49]. The same common parameter settings as previously used in [49] are applied, with 500 iterations and 30 agents. The algorithm is run 30 times, and the average and standard deviation on each test function are computed. Table 5 presents the test results of ANA, DA, WOA, and SSA on the CEC-C06 2019 test functions [49].
From Table 5, it can be concluded that ANA minimizes each of the CEC functions towards one, except CEC01 and CEC06, which did not yield usable results; the runtime for these two functions was too long to obtain them. ANA outperformed all the other algorithms on all the remaining test cases. This is another indicator of the ANA algorithm's outstanding performance and efficiency. It is worth mentioning that the WOA algorithm has the same result as ANA on the CEC03 function, but the value of WOA in Table 5 is reported to only 4 decimal places. Moreover, the standard deviation of WOA on CEC03 is 0.0, which means WOA produces the same result on every run, with no chance of further improvement.

Comparative Study
There are several measures and techniques for comparing the performance of algorithms. Considering the importance of reaching optimality in optimization, this part presents a comparative study on the averages of the global best solutions of the ANA, DA, PSO, and GA algorithms and of the ANA, PSO, GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO algorithms on the standard benchmark functions used for testing ANA. The algorithms' average global best solutions on the standard benchmark functions are taken from Tables 3 and 4. For Table 3, each algorithm is ranked from 1 to 4 on each function: 1 for the algorithm that provided the minimum value/result on the function and 4 for the algorithm that provided the maximum result. For Table 4, the ranking runs from 1 to 7 in the same manner. Table 6 presents the ranking of the ANA, DA, PSO, and GA algorithms on the 16 standard benchmark functions, and Table 7 presents the total number of first, second, third, and fourth rankings of the algorithms. Moreover, Table 8 demonstrates the ranking of ANA, PSO, GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO on the six standard benchmark functions, and Table 9 presents the total number of first, second, third, fourth, fifth, sixth, and seventh rankings of the algorithms.

Table 6. ANA, DA, PSO, and GA ranking on the standard benchmark functions.

Function  ANA  DA  PSO  GA
F1          3   1    2   4
F2          2   1    3   4
F3          3   1    2   4
F4          1   2    3   4
F5          2   1    3   4
F7          4   2    1   3
F9          3   2    1   4
F10         1   2    3   4
F11         3   2    1   4
F12         3   2    1   4
F13         1   2    4   3
F14         1   4    3   2
F15         1   4    2   3
F16         1   4    2   3
F17         3   4    2   1
F18         1   3    4   2

Table 7. ANA, DA, PSO, and GA total number of rankings on the standard benchmark functions.

Table 9. ANA, PSO, GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO total number of rankings on the standard benchmark functions.
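The per-function ranking used to build Tables 6-9 is simple to mechanize: sort the algorithms by their average global best on each function and assign ranks starting at 1, then tally how often each algorithm takes each rank. A minimal sketch (the example values are illustrative, not taken from the tables; ties are broken arbitrarily by sort order):

```python
from collections import Counter, defaultdict

def rank_algorithms(averages):
    # Rank 1 goes to the smallest average global best (minimization);
    # ties, if any, are broken arbitrarily by the sort
    order = sorted(averages, key=averages.get)
    return {alg: pos + 1 for pos, alg in enumerate(order)}

def tally_ranks(per_function_ranks):
    # Count how often each algorithm achieves each rank, as in Tables 7 and 9
    counts = defaultdict(Counter)
    for ranks in per_function_ranks:
        for alg, r in ranks.items():
            counts[alg][r] += 1
    return counts
```

Applying `rank_algorithms` to each row of Table 3 or Table 4 and feeding the results to `tally_ranks` reproduces the totals reported in Tables 7 and 9.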
Tables 6 and 7 demonstrate that ANA has the highest number of first rankings, with a total of seven, and the lowest number of fourth rankings, with a total of only one, in comparison to the famous DA, PSO, and GA algorithms. Furthermore, as can be seen from Tables 8 and 9, ANA demonstrates its efficiency once again by achieving the highest number of first rankings on the six standard benchmark functions in comparison to the PSO, GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO algorithms.

Rank
For a more detailed evaluation of the algorithm, ANA is ranked on the standard benchmark functions both by type of benchmark function and overall, in comparison to the DA, PSO, and GA algorithms. Table 10 presents the ANA rankings on the standard benchmark functions by type and in total. As demonstrated, the average ranking by benchmark function type is 2.50 for both the unimodal and multimodal test functions and 1.33 for the composite test functions. Furthermore, if the global average performance of ANA is rounded to the nearest integer, ANA ranks second amongst the four algorithms over the 16 evaluated benchmark functions. This demonstrates the algorithm's great performance compared to these famous algorithms. However, it is worth noting that no algorithm can perform best on all optimization problems: some algorithms will perform very well on some problems, while others will not perform as well [63].
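The by-type averages in Table 10 follow from grouping the per-function ranks of Table 6 by benchmark category and averaging within each group. The sketch below uses ANA's composite-function ranks (F13-F18) from Table 6 and reproduces the 1.33 figure quoted above:

```python
from statistics import mean

def average_rank_by_type(ranks, types):
    # Group per-function ranks by benchmark category, then average each group
    grouped = {}
    for fn, r in ranks.items():
        grouped.setdefault(types[fn], []).append(r)
    return {t: mean(rs) for t, rs in grouped.items()}

# ANA's composite-function ranks, taken from Table 6
ana_ranks = {"F13": 1, "F14": 1, "F15": 1, "F16": 1, "F17": 3, "F18": 1}
kinds = {f: "composite" for f in ana_ranks}
```

Here `average_rank_by_type(ana_ranks, kinds)` gives 8/6 ≈ 1.33 for the composite category, matching Table 10.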

ANA versus FDO
To provide more detailed insight into the ANA algorithm and demonstrate its significant performance, it has been compared to another novel algorithm with outstanding performance. The 16 standard benchmark functions used for testing ANA have been selected, and the results have been compared to those of the FDO algorithm. FDO was selected for this comparison for three main reasons. First, FDO, like ANA, is a PSO-based algorithm. Second, FDO's outstanding performance, outperforming the standard GA, PSO, DA, WOA, and SSA algorithms, has been proven in [49]. Third, the algorithm's implementation is publicly provided by its authors.
Regarding the parameter settings, both algorithms have been run 30 times with 30 search agents and 500 iterations. The weight factor wf, FDO's single parameter, has been set to 0 in all test cases. The global best agent for each run has been recorded, and the results are demonstrated in box and whisker plots in Figure 6a. Table A5 in Appendix A contains the results of the 30 runs of the ANA and FDO algorithms on the standard benchmark functions.

Statistical Test
To show that the results presented in the previous section are statistically significant, the p values of Student's t-test, Welch's t-test, and the Wilcoxon signed-rank test are computed for all the standard benchmark functions, and the results are shown in Table 11. In Table 11, the comparison is conducted only between the ANA and FDO algorithms because FDO was already tested against the DA, PSO, and GA algorithms in [49], where the FDO results were proven statistically significant compared to those algorithms. According to Table 11, the ANA results are considered significant (p < 0.05) in 10 cases of Student's t-test, namely F3, F4, F5, F7, F9, F11, F12, F13, F16, and F18; in 7 cases of Welch's t-test, namely F1, F2, F4, F5, F10, F15, and F17; and in 3 cases of the Wilcoxon signed-rank test, namely F2, F5, and F17. According to [64], the Wilcoxon signed-rank test results should be relied on for ensuring the statistical significance of the ANA algorithm in comparison to FDO, as the data is neither normally distributed nor homoscedastic. Tables A6 and A7 in Appendix A contain the normality test (Shapiro-Wilk) and the homoscedasticity test (Levene's test) of ANA and FDO on the standard benchmark functions.
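Of the three tests, the paper relies on the Wilcoxon signed-rank test because it makes no normality assumption. Its test statistic is straightforward to compute by hand: discard zero paired differences, rank the remaining differences by absolute value (averaging ranks across ties), and sum the ranks of the positive and negative differences separately. A sketch of the statistic only; converting it to a p-value (via tables or a normal approximation) is omitted:

```python
def wilcoxon_signed_rank(a, b):
    # Paired samples a, b; zero differences are discarded, and tied
    # absolute differences receive their average rank
    diffs = sorted((x - y for x, y in zip(a, b) if x != y), key=abs)
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(diffs):
        j = i
        while j < len(diffs) and abs(diffs[j]) == abs(diffs[i]):
            j += 1
        for k in range(i, j):
            ranks[k] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)  # smaller W means stronger evidence
```

In the paper's setting, `a` and `b` would be the 30 recorded global best values of ANA and FDO on one benchmark function.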

Real-World Applications of ANA
To prove the feasibility of the algorithm and evaluate its performance, ANA has been applied to two different real-world engineering problems.

ANA on Aperiodic Antenna Array Design
The first problem investigated for measuring the feasibility of the ANA algorithm is aperiodic antenna array design. In today's technological society, developing products that are more efficient and economical than their predecessors is crucial and in high demand. The development of radar technology is one of several reasons for the huge demand for innovation in antenna array design [65]. Over the years, various antennas have been designed to accommodate the industry's growing needs. One major requirement in antenna design has been the ability to position the elements of non-uniform arrays so as to minimize the peak sidelobe level (SLL).
Design techniques for array element placement include thinning, numerical optimization, and other methods. However, even with modern tools, the design problem remains computationally challenging. To reduce the complexity of the design, an optimization algorithm is used to optimize the element positions and minimize the peak sidelobe level [66].
In the non-uniform isotropic array considered here, there are 10 elements, and only four of them need to be optimized. Thus, the application is a four-dimensional optimization problem. It is worth noting that designing the aperiodic antenna array is a convex problem, since every line joining two elements/points lies in the set; minimization algorithms are therefore considered quite efficient at optimizing this problem, and reaching the optimum is quite likely [67]. For more details on this problem, interested readers may refer to [68]. Figure 7 demonstrates an array configuration for 10 elements.
Consider θs = 90° [68]. To achieve an improved peak SLL in non-uniform arrays, the element positions need to be optimized as a vector of real numbers. In addition, to mitigate the grating lobe level, a certain element spacing limit is required. Equation (11) shows the constraints of the problem.
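To make the objective concrete, the peak SLL of a symmetric linear array of isotropic elements can be evaluated from its array factor. The sketch below is an illustrative formulation rather than the exact objective of Equation (11): element positions are given in wavelengths from the array centre, the mainlobe is assumed at broadside (θs = 90°), and the fixed mainlobe-exclusion region `u_min` is an assumption:

```python
import math

def peak_sll_db(half_positions, u_min=0.2, samples=1000):
    # half_positions: element distances (in wavelengths) from the array
    # centre for one half of a symmetric array of isotropic elements
    def af(u):
        # Array factor magnitude at u = cos(theta)
        return abs(sum(math.cos(2 * math.pi * x * u) for x in half_positions))

    main = af(0.0)  # mainlobe peak at broadside
    # Scan outside the (assumed) mainlobe region for the largest sidelobe
    side = max(af(u_min + (1 - u_min) * i / samples) for i in range(samples + 1))
    return 20 * math.log10(side / main)
```

For the uniform half-wavelength-spaced 10-element array (half positions 0.25λ, 0.75λ, ..., 2.25λ), this yields the familiar first-sidelobe level of roughly −13 dB; optimizing the four free element positions aims to push this figure lower.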

ANA on Frequency-Modulated Synthesis
Frequency-modulated (FM) synthesis is a form of sound synthesis in which the frequency of a waveform is changed by modulating it with another waveform. It is a complex real-world engineering optimization problem that plays a fundamental role in several modern music systems. For FM synthesis to create a harmonic sound, six parameters need to be optimized. Thus, the parameter optimization of an FM synthesizer is a six-dimensional optimization problem in which the vector to be optimized is X = {a1, ω1, a2, ω2, a3, ω3} for the sound wave given in Equation (12). The objective of this problem is to generate the sound in Equation (12) such that it is similar to the target sound in Equation (13).
respectively, where the parameters are defined in the range [−6.4, 6.35] and θ = 2π/100. The fitness function can be calculated using Equation (14), which is the summation of squared errors between the estimated wave, i.e., the result of Equation (12), and the target wave in Equation (13), over t = 100 samples. Interested readers can find more details on this problem in [69].
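Under the usual formulation of this benchmark, the sound wave of Equation (12) is a triply nested sinusoid and Equation (14) is the sum of squared errors against the target wave. The sketch below assumes the target parameters {1.0, 5.0, −1.5, 4.8, 2.0, 4.9} commonly used for this problem in the literature, since Equation (13) is not reproduced above:

```python
import math

THETA = 2 * math.pi / 100
TARGET = (1.0, 5.0, -1.5, 4.8, 2.0, 4.9)  # assumed target parameters

def fm_wave(params, t, theta=THETA):
    # Triply nested frequency-modulated sinusoid (the form of Equation (12))
    a1, w1, a2, w2, a3, w3 = params
    return a1 * math.sin(w1 * t * theta
                         + a2 * math.sin(w2 * t * theta
                                         + a3 * math.sin(w3 * t * theta)))

def fm_fitness(params, turns=100):
    # Sum of squared errors over t = 0 .. turns (the form of Equation (14))
    return sum((fm_wave(params, t) - fm_wave(TARGET, t)) ** 2
               for t in range(turns + 1))
```

`fm_fitness` is the objective the optimizer minimizes; it vanishes exactly when the candidate parameters reproduce the target wave.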

Conclusions
A novel swarm intelligence algorithm for optimizing single-objective problems, called the ant nesting algorithm, was proposed. The proposed algorithm is inspired by the behavior of Leptothorax ants when dropping grains to construct a new nest. The algorithm mimics what a swarm of worker ants does while searching for positions to drop grains around their queen, and also models the ants' random walk during the search. ANA applies the Pythagorean theorem to generate weights that drive the search agents towards optimality: the difference between the previous and current deposition positions relative to the best local deposition position forms one side, and the difference between their fitness values forms the other. In addition, ANA relies on a randomization mechanism in all phases of the search, including initialization, exploration, and exploitation.
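One plausible reading of the weight construction described above, for a single coordinate, treats the position change and the fitness change as the two legs of a right triangle and takes the hypotenuse as the weight. This is an illustrative interpretation, not the paper's exact update rule, and the uniform random scaling of the step is likewise an assumption:

```python
import math
import random

def pythagorean_weight(x_curr, x_prev, f_curr, f_prev):
    # Hypotenuse of (change in deposition position, change in fitness);
    # an illustrative reading of ANA's weight, not its exact formula
    return math.hypot(x_curr - x_prev, f_curr - f_prev)

def step(x_curr, x_prev, f_curr, f_prev):
    # Rate of change: the weight scaled by a random factor in [-1, 1]
    # (the random scaling is an assumption for illustration)
    return pythagorean_weight(x_curr, x_prev, f_curr, f_prev) * random.uniform(-1, 1)
```

The agent's next position would then be obtained by adding this rate of change to its current position, consistent with ANA being a continuous, step-based algorithm.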
Regarding ANA's performance after testing on several standard and modern test functions and on real-world applications, it has been observed that the common parameter settings, such as the number of search agents and iterations, have a great impact on the algorithm's performance, as is the case with other metaheuristic algorithms. Increasing the number of search agents and iterations may yield better and more accurate results, and the reverse is also true. On the other hand, increasing the number of search agents and/or iterations means utilizing more resources, so a wise choice must be made when setting the parameters. This paper only provides a new method for reaching optimality in single-objective problems, and several directions can be suggested for further study. The development of binary and multi-objective versions of the ANA algorithm is one recommendation for solving a greater range of diverse optimization problems. Another recommendation is integrating more evolutionary operators into the algorithm to improve performance and/or make more effective use of resources. In addition, hybridizing the algorithm with other algorithms and exploiting their features is another promising direction. Finally, and most importantly, the algorithm should be applied to real-world problems; it is strongly recommended that it be used to solve problems in practice.

Funding:
This research received no external funding.

Conflicts of Interest:
The authors declare no conflict of interest.