
ANA: Ant Nesting Algorithm for Optimizing Real-World Problems

by
Deeam Najmadeen Hama Rashid
1,*,
Tarik A. Rashid
1,* and
Seyedali Mirjalili
2,3
1
Department of Computer Science and Engineering, School of Science and Engineering, University of Kurdistan Hewler, Erbil 44001, Iraq
2
Centre for Artificial Intelligence Research and Optimisation, Torrens University, Brisbane, QLD 4006, Australia
3
Yonsei Frontier Lab, Yonsei University, Seoul 03722, Korea
*
Authors to whom correspondence should be addressed.
Mathematics 2021, 9(23), 3111; https://doi.org/10.3390/math9233111
Submission received: 20 October 2021 / Revised: 23 November 2021 / Accepted: 27 November 2021 / Published: 2 December 2021

Abstract:
In this paper, a novel swarm intelligence algorithm called the ant nesting algorithm (ANA) is proposed. The algorithm is inspired by Leptothorax ants and mimics the behavior of ants searching for positions to deposit grains while building a new nest. Although the algorithm is inspired by the swarming behavior of ants, it has no algorithmic similarity with the ant colony optimization (ACO) algorithm. It is worth mentioning that ANA is a continuous algorithm that updates the search agent position by adding a rate of change (e.g., step or velocity). ANA computes the rate of change differently, as it uses the previous and current solutions and their fitness values during the optimization process to generate weights by utilizing the Pythagorean theorem. These weights drive the search agents during the exploration and exploitation phases. The ANA algorithm is benchmarked on 26 well-known test functions, and the results are verified by a comparative study with the genetic algorithm (GA), particle swarm optimization (PSO), dragonfly algorithm (DA), five modified versions of PSO, whale optimization algorithm (WOA), salp swarm algorithm (SSA), and fitness dependent optimizer (FDO). ANA outperforms these prominent metaheuristic algorithms on several test cases and provides quite competitive results. Finally, the algorithm is employed for optimizing two well-known real-world engineering problems: antenna array design and frequency-modulated synthesis. The results on the engineering case studies demonstrate the proposed algorithm’s capability in optimizing real-world problems.

1. Introduction

Both our professional and private lives are sequences of decisions, and each decision involves selecting between at least two options (it would not be considered a decision otherwise); the fact is, we are always in search of the best option. Every question that needs a superlative answer is an optimization problem. Finding the shortest path to a destination, constructing the fastest car, hiring the best job applicant, prescribing the best medicine for a patient, making the best vaccine for a virus, finding the best way to detect diseases and viruses as early and reliably as possible, and finding the best strategy for overcoming financial and economic crises are a few examples of optimization problems.
Simple exhaustive search methods [1,2] are rarely sufficient for most real-world problems, and they lead to slow or incomplete searches as the search space (the number of options) increases dramatically. That means that finding the best solution for a problem is not usually an easy task; it requires a long time and sometimes an enormous amount of resources. Optimization can be defined as the art and science of making good decisions, and optimization algorithms are meant to solve optimization problems by trading solution quality for runtime. Optimization algorithms provide us with a set of tools and techniques, mainly from mathematics and computer science, for selecting the best solution among the possible choices [3].
There are many ways in which optimization algorithms can be classified; the simplest is to sort them as deterministic or stochastic [3]. Deterministic algorithms [3,4] such as linear programming, nonlinear programming, and mixed-integer nonlinear programming guarantee optimal or near-optimal solutions by rigorously and repeatably applying the design variables and functions. They have a fast convergence rate and are simple and easy to implement and understand. Despite their efficiency [5], they are deterministic, i.e., for the same inputs, the same output is obtained consistently.
On the other hand, stochastic algorithms are more flexible and efficient than deterministic algorithms [3,6,7] as they are stochastic, i.e., they all have some level of randomness; for the same set of inputs, the same output is not always obtained. They are considered quite efficient in obtaining near-optimal solutions to all types of problems because they make no assumptions about the underlying fitness landscape. Stochastic algorithms include heuristic and metaheuristic algorithms. A metaheuristic can be seen as a “master strategy that guides and modifies other heuristics to produce solutions beyond those that are normally generated in a quest for local optimality” [8]. Metaheuristics are considered to perform better than heuristics, though the names are used interchangeably [3].
Time, cost, and resource limitations make searching every single solution for a problem to find the optimal one an impossible task. Therefore, researchers have started observing and studying the behavior of animals and natural phenomena to develop algorithms for solving optimization problems. They have developed algorithms based on swarm intelligence, biological systems, physical and chemical systems. These types of algorithms are called nature-inspired, and they contain a big set of novel problem-solving methodologies and approaches. They have been used to solve many real-world problems, and they comprise a large portion of stochastic algorithms.
Since their advent, nature-inspired optimization algorithms have received great attention, and their number is growing very rapidly. According to a research report [9], there are more than 200 nature-inspired algorithms presented in the literature. Despite the considerable number of algorithms and developments, there is always room for a new algorithm, as long as it presents better or comparable performance to the previous ones, as established by the no free lunch (NFL) theorem [10]. The theorem states that if an algorithm A outperforms another algorithm B in the search for the optimum of an objective function, then algorithm B outperforms A on other objective functions. In other words, all optimization algorithms give the same performance when averaged over all functions. This motivates developing more and more algorithms for solving diverse and complex real-world optimization problems.
This paper proposes a new algorithm under the name ant nesting algorithm, abbreviated ANA. It is inspired by the swarming behavior of Leptothorax ants during nest construction. The ant colony optimization (ACO) algorithm is also a nature-inspired metaheuristic algorithm that mimics the behavior of ants; however, it is very different from our algorithm, as it models different ant behaviors and stigmergy.
The major contribution of this work is the proposal of a new swarm intelligence algorithm for optimizing single-objective problems that has a good level of exploration and exploitation. It is noted that single-objective optimization problems are those that require a solution for a single criterion or metric of the problem.
The main contributions of this work are outlined as follows:
(1)
Proposing a novel metaheuristic algorithm for solving single-objective problems (SOPs).
(2)
Integrating Pythagorean theorem into the ant nesting model for generating convenient weights that assist the algorithm in both exploration and exploitation phases.
(3)
Utilizing an approach quite different from PSO’s for updating search agent positions, testing the algorithm on several optimization benchmark functions, and comparing it to well-known and outstanding metaheuristic algorithms such as the genetic algorithm (GA), particle swarm optimization (PSO), five modified versions of PSO, dragonfly algorithm (DA), whale optimization algorithm (WOA), salp swarm algorithm (SSA), and fitness dependent optimizer (FDO).
(4)
Applying ANA algorithm for optimizing two real-world engineering problems that are antenna array design and frequency-modulated synthesis.
The rest of the paper contains a brief history of the most prominent swarm algorithms in the literature, the inspiration of the algorithm with its modelling, testing and evaluation of the algorithm, and the conclusion and recommendation of a few future works.

2. Nature-Inspired Metaheuristic Algorithms in Literature

The metaheuristic as a scientific method for solving problems is quite a new phenomenon in comparison to its ubiquity in nature. Although it is difficult to pinpoint the first use of metaheuristics, the mathematician Alan Turing is known to be the first to have used heuristic algorithms, for breaking the Enigma ciphers at Bletchley Park during World War II [3].
To date, over 200 nature-inspired metaheuristic algorithms for optimization exist in the literature. This section presents the most well-known algorithms in the literature. Figure 1 highlights the year of development of each algorithm mentioned.
A genetic algorithm was developed by Holland and his colleagues at the University of Michigan in the 1960s and 1970s [11]. It is based on the abstraction of Darwinian evolution and the natural selection of biological systems. GA is proven to be extremely successful in solving a wide range of optimization problems, and hundreds and thousands of books and research papers have been published about it. In addition to that, some studies show its usage in solving combinatorial optimization problems like scheduling, planning, and data mining in companies [12].
Ant colony optimization was developed by Dorigo in 1992, and it is inspired by the foraging behavior of social ants; how ants find the shortest path to a destination [13]. ACO has been applied for solving various problems such as scheduling [14], vehicle routing [15], assignment [16], and set [17].
Particle swarm optimization was developed by Kennedy and Eberhart in 1995, and it is inspired by the movement behavior of bird flocks and fish schools [18]. The algorithm’s exploitation and exploration abilities are considered to be quite efficient, and it has been used for solving a very large number of real-world optimization problems [19].
The artificial bee colony algorithm was developed by Karaboga in 2005, and it is inspired by the intelligent foraging behavior of honeybee swarms [20]. ABC has high exploration ability and comparatively low exploitation ability. Although the performance of the algorithm depends on applications, ABC has been proven to be more efficient than GA and PSO in solving certain problems [21,22,23]. Examples of applications are in solving the set covering problem (SCP) [24] and optimum reservoir release [25].
Firefly algorithm was developed by Yang in 2008, and it is inspired by the behavior of the flashing pattern of fireflies [26]. It has been used for solving a variety of optimization problems in computer science and engineering, and it has been proven to outperform some other metaheuristic algorithms. However, despite the application and efficiency of FA, it has been criticized as differing from PSO only in a negligible way [27,28,29].
The cuckoo search was proposed by Yang and Deb in 2009, and it is inspired by the brood parasitism of cuckoo species [30]. CS uses a switching parameter to balance between local and global random walks. It outperforms PSO and GA algorithms and is used for solving several real-world problems like power network planning [31], series system [32], and engineering design optimization problems [33].
Bat algorithm was developed by Yang in 2010, and it is based on the echolocation behavior of microbats [34]. BA has been applied in several areas like image processing [35] and scheduling [36].
The grey wolf optimizer was proposed by Mirjalili et al. in 2014 and is inspired by the searching and hunting behavior of grey wolves. The algorithm is implemented in three steps: searching for, encircling, and attacking prey. GWO has been proven to be very efficient and outperforms several well-known algorithms, such as the gravitational search algorithm (GSA), PSO, differential evolution (DE), evolutionary programming (EP), and evolution strategy (ES), on several benchmark test functions. It has been applied for solving classical engineering design problems and optical engineering real-world problems [37].
The dragonfly algorithm was developed by Mirjalili in 2015 and is inspired by the static and dynamic swarming behaviors of dragonflies. For simulating the swarming behavior, the five swarming principles of insects are utilized: alignment, cohesion, separation, attraction to a food source, and distraction from the enemy. DA can be used for solving single-objective, discrete, and multi-objective optimization problems [38].
Ant lion optimizer was proposed by Mirjalili in 2015 and is inspired by the hunting mechanism of antlions in nature. Five main steps of hunting prey like the random walk of ants, building traps, entrapment of ants in traps, catching prey, and re-building traps are implemented. ALO has been used for solving three-bar truss design, cantilever beam design, and gear train design and optimizing the shapes of two ship propellers [39].
The whale optimization algorithm (WOA) was developed by Mirjalili and Lewis in 2016 [40]. WOA mimics the hunting mechanism of humpback whales and has three phases, which are encircling prey, bubble-net attacking method, and search for prey. It has been applied for solving several optimization problems and provided outstanding results like economic dispatch problem [41], breast cancer diagnosis [42], global MPP tracking of a photovoltaic system [43], and a handful number of other significant problems [44,45,46].
Salp swarm algorithm was developed by Mirjalili et al. in 2017. It is inspired by the behavior of salp swarms when navigating and foraging in oceans. SSA has a single decreasing parameter to make the balance between diversification and intensification. It can be used for solving both single-objective and multi-objective optimization problems. It is applied for solving several challenging engineering designs [47].
The donkey and smuggler optimization algorithm was proposed by Shamsaldin et al. in 2019. DSO mimics the searching behavior of donkeys while transporting goods, and it consists of two modes: the donkey mode and the smuggler mode. In the smuggler mode, all the possible solutions are discovered and the best one is identified, while in the donkey mode, a couple of donkey behaviors are utilized for finding the optimal option among the possible solutions. The algorithm has been applied to three well-known real-world optimization problems, namely the traveling salesman problem, packet routing, and ambulance routing, and produced significant results [48].
Fitness dependent optimizer was developed by Abdullah and Rashid in 2019 [49]. It is a PSO-based algorithm and is inspired by the foraging behavior of honeybees whilst selecting a hive. It has been improved and applied by Muhammed et al. in 2020 to develop a model for pedestrian evacuation [50].
In addition to the standard versions of the nature-inspired metaheuristic algorithms, there are many modified and enhanced versions of them, such as GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO, which are modified versions of the PSO algorithm [51], and DCSO, which is a modified version of the CWO algorithm [52], as well as hybridized algorithms like WOAGWO [53], which compose the behavior of more than one algorithm.

3. Ant Swarming

Ants are remarkable tiny insects with an enormous population and over 10,000 species all over the world, and they have always been objects of wonder and fascination to human beings. A considerable number of books and research papers have been published about all aspects of their life, ranging from pure literature [54,55] to detailed scientific accounts [56,57]. What is most striking is their fascinating social organization in comparison to their limited individual capabilities. They have been an inspiration to researchers and scientists for years. This section briefly presents Leptothorax ant behavior while building a new nest.
Leptothorax ant colonies prefer horizontal crevices and build nests within flat crevices in rocks that provide the roof and floor of their dwelling place to defend themselves against physical factors and biological enemies. They wall themselves into their chosen crevice by encircling themselves with a border of debris or grains, which are particles of earth or small fragments of stones. The circumference of the wall is appropriate to comfortably house their population which consists of a single queen, broods, and up to 500 workers. Worker ants are responsible for building the nest, and each is 2 to 3 mm long [58,59]. Figure 2 demonstrates ant nesting.
Leptothorax worker ants do not build a roof or floor for their nest. They build their nest by only constructing a wall around their queen and broods that consist of eggs, larvae, and pupae. The cluster of worker ants around the brood cluster serves as a mechanical template for determining where the nest wall should be built [58,59].
The nest construction starts with the worker ants in the colony departing from their cluster, collecting a single grain of building material, and dropping it at a random distance from the cluster. Then, the ants lean towards the area with the most dropped stones and start bulldozing stones into other stones from that area. The process of selecting an area to start bulldozing is very important for the consolidation of the wall. The bulldozing process continues until a tightly packed and densely consolidated wall is formed around the queen ant in the center [58,59].
What inspired the development of this algorithm is the worker ants’ individual decision to drop grains at a fixed distance from the center of the nest, with the stigmergic interaction of depositing where others have already deposited. The wall originates from a combination of each worker ant’s decisions. In other words, when constructing a new nest, worker ants select an area around the queen, the area with the most grains, and start bulldozing from that area. A decision is made when the majority of the ants are bulldozing at a potential area [58,59].
The Leptothorax worker ants’ building behavior that inspired the algorithm is summarized as the following:
  • Worker ants are responsible for building new nests by collecting building materials, transporting them into the nest site, and releasing them in an area around the queen ant [58,59].
  • Worker ants make a random walk within the nest until they face their nestmates or stationary building materials to deposit; the latter is the major cue for the deposition of another building material [58].
  • Each worker ant makes an independent decision about which direction to take around the queen ant for depositing [59].
  • Worker ants lean towards the area with the most dropped building material to deposit [58].
  • Each worker ant selects an area around the queen ant to start the bulldozing process. A decision is made when all the worker ants in the colony are bulldozing at a potential area, i.e., deposit grain in that area [58].

4. Ant Nesting Algorithm

In this section, the modelling process including the entities, mathematical representations, working mechanism, and analysis of the algorithm is provided.

4.1. Entities

Algorithmically modelling the entities of the Leptothorax ant behavior while building a new nest: the worker ants represent artificial search agents; each position around the queen ant that a worker ant exploits to drop grain represents a potential solution exploited by an artificial search agent; and the best position to deposit among all the possible positions exploited by all the worker ants represents the global optimum solution. Accordingly, the deposition position is determined through the worker ant’s position. The deposition position’s specifications, such as its influence on the consolidation of the wall and its closeness to other stones, can be considered as fitness functions of the algorithm. Each worker ant’s decision factor for depositing grain at a specific position is represented by the deposition weight (dw) in the algorithm; dw is a random weight, computed using the solution and fitness information of the previous and current deposition positions, and is discussed further in the next section. Table 1 summarizes the main elements of the algorithm.
The stationary stones and nestmates that are encountered by the worker ants while performing random walks within the nest to find new better positions to drop grain are modelled using the worker ant’s previous deposition position Xt,iprevious. That is to say that the current worker ant’s previous deposition position represents the stationary stone and/or nestmate the current worker ant faces in the algorithm.
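The entity mapping above can be sketched as a small data structure. This is an illustrative sketch of ours, not the authors’ implementation; the field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class WorkerAnt:
    """One artificial search agent in ANA (illustrative sketch).

    position          -- current deposition position Xt,i (a candidate solution)
    previous_position -- previous deposition position Xt,i_previous, modelling the
                         stationary stone/nestmate the ant encountered earlier
    fitness           -- objective value of `position`
    previous_fitness  -- objective value of `previous_position`
    """
    position: list
    previous_position: list
    fitness: float = float("inf")
    previous_fitness: float = float("inf")
```

At initialization, `previous_position` is simply a copy of `position`, mirroring the first-generation assignment described in Section 4.3.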

4.2. Mathematical Modelling

The algorithm mimics what a swarm of worker ants does during nesting. The main part of the algorithm is taken from the process of worker ants searching for a suitable position among many potential positions to deposit grain. Each deposition position searched by a worker ant represents a potential solution in this algorithm; furthermore, selecting the best deposition position among several good deposition positions is considered as converging to optimality.
It is noteworthy that in nature, worker ants collect grains, transport them to the nest site, and search for deposition positions continuously, as a cycle, until a tightly packed wall is formed around the queen ant. However, only a single cycle of dropping grain is modelled in the algorithm, i.e., the algorithm only mimics the worker ants’ search for the deposition position of a single grain during nesting, not the continuous search for deposition positions for each grain collected by each worker ant. In addition, the first building workers, who use the brood cluster as a mechanical template for determining where the nest wall should be built, are ignored in the algorithm. Rather, the grains dropped after the first depositions are considered in modelling the algorithm.
The algorithm begins by randomly initializing the artificial worker ant population in the search space Xi (i = 1, 2, 3, …, n); each worker ant position represents a newly discovered deposition position (solution). Worker ants try to find better deposition positions by randomly searching more positions. Each time a better deposition position is found, the newly discovered solution becomes the optimum solution. However, if the new solution is not better than the current one, the algorithm retains the current solution, which is the best solution discovered up to that point.
In nature, worker ants search for deposition positions randomly. In this algorithm, artificial worker ants search the landscape randomly using a deposition weight mechanism. Accordingly, an artificial worker ant obtains a new deposition position Xt+1,i (t = 1, 2, 3, …, m) (i = 1, 2, 3, …, n) by adding the deposition position’s rate of change, denoted by ΔXt+1,i, to its current deposition position Xt,i. The deposition position of the artificial worker ant is updated with the following expression:
Xt+1,i = Xt,i + ΔXt+1,i    (1)
where i represents the current worker ant, t represents the current iteration, X represents the artificial worker ant’s deposition position, and ΔXt+1,i represents the deposition position’s rate of change. Table 2 summarizes all the mathematical notations used in the algorithm.
The deposition position’s rate of change, ΔXt+1,i, depends on the deposition weight dw and the difference between the local best-known worker ant Xt,ibest and the deposition position of the current worker ant Xt,i; the latter is the mathematical modelling of the behavior of leaning towards the most dropped building material. Thus, each worker ant tends to improve its deposition position (potential solution) by moving towards the best-known worker ant (the best potential solution discovered so far). Thereby, ΔXt+1,i is calculated as follows:
ΔXt+1,i = dw × (Xt,ibest − Xt,i)    (2)
The following rules are used for calculating ΔXt+1,i when:
the current worker ant is the local best-known ant:
ΔXt+1,i = r × Xt,i    (3)
the current deposition position is equal to the previous deposition position:
ΔXt+1,i = r × (Xt,ibest − Xt,i)    (4)
The deposition weight (dw) is the mathematical representation of the random walk performed by the worker ant, and it depends on the artificial worker ant’s previous (Tprevious) and current (T) tendency rates to deposit grain at a specific position. T and Tprevious are computed via the Pythagorean theorem as hypotenuses: one side is the difference between the worker ant’s current (respectively previous) deposition position and the best deposition position discovered so far, and the other side is the corresponding fitness difference. Figure 3a,b explicitly demonstrate how T and Tprevious are obtained for a single ant, respectively. Thus, dw for minimization problems can be calculated as follows:
dw = r × (T / Tprevious)    (5)
where r is a random number in the [−1, 1] range that works as a deposition factor for controlling dw. There are different mechanisms for generating random numbers; the Levy flight has been selected because it provides more stable movements due to its good distribution curve [26].
The worker ant’s tendency rate of deposition (T) is calculated as follows:
T = √((Xt,ibest − Xt,i)^2 + (Xt,ibest fitness − Xt,i fitness)^2)    (6)
The worker ant’s previous tendency rate of deposition (Tprevious) is calculated as follows:
Tprevious = √((Xt,ibest − Xt,iprevious)^2 + (Xt,ibest fitness − Xt,iprevious fitness)^2)    (7)
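The tendency rates and deposition weight described by Equations (5)–(7) can be sketched as small helper functions. This is a one-dimensional reconstruction of ours (function and parameter names are hypothetical), treating T and Tprevious as Pythagorean hypotenuses in the (position, fitness) plane; the caller must ensure the previous tendency rate is nonzero before dividing.

```python
import math
import random

def tendency(best_pos, best_fit, pos, fit):
    """Tendency rate of deposition: the hypotenuse of a right triangle whose
    legs are the position difference and the fitness difference between a
    deposition position and the best-known deposition position."""
    return math.hypot(best_pos - pos, best_fit - fit)

def deposition_weight(t_current, t_previous, r=None, minimization=True):
    """Deposition weight dw: r times T/Tprevious for minimization, or the
    inverted ratio Tprevious/T for maximization. r is the random deposition
    factor in [-1, 1] (the paper draws it from a Levy flight; plain uniform
    sampling is used here as a simplification)."""
    if r is None:
        r = random.uniform(-1.0, 1.0)
    ratio = t_current / t_previous if minimization else t_previous / t_current
    return r * ratio
```

For example, an ant at position 0 with fitness 0, whose best-known position is 3 with fitness 4, has a tendency rate of 5 (a 3-4-5 triangle).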

4.3. Working Mechanism

The algorithm starts by initializing a random deposition position Xt,i (t = 1, 2, 3, …, m) (i = 1, 2, 3, …, n) for each artificial worker ant using the lower and upper boundaries. Initially, the previous deposition position Xt,iprevious of each artificial worker ant is assigned to Xt,i, as it is the first generation. Then, for each iteration, the global best deposition position Xt,ibest is selected and a random number r in the range [−1, 1] is generated; for each artificial worker ant, Xt,i is compared to Xt,ibest. If the current worker ant’s deposition position is the global best solution discovered so far, i.e., Xt,i is equal to Xt,ibest, ΔXt+1,i is calculated using Equation (3); if the previous deposition position is equal to the current deposition position, ΔXt+1,i is calculated using Equation (4). Otherwise, T, Tprevious, dw, and ΔXt+1,i are calculated using Equations (6), (7), (5), and (2), respectively.
After that, a new solution Xt+1,i is obtained through Equation (1). Each time the artificial worker ant finds a new solution, it checks whether the new solution is better than the current one using the fitness function. If the new solution is fitter, it is accepted and the old solution is saved to Xt,iprevious. However, if the new solution is not fitter, the algorithm maintains the current solution until the next iteration. For further elucidating the working mechanism of the ANA algorithm, both a pseudocode and a flowchart are developed. Algorithm 1 shows the pseudocode of the ANA algorithm, and Figure 4 presents the flowchart.
When implementing the ANA algorithm for maximization problems, two minor changes are needed. First, Equation (5) must be replaced by Equation (8), as Equation (8) is simply an inverse version of Equation (5).
dw = r × (Tprevious / T)    (8)
Second, the condition for selecting a better (fitter) solution should be changed.
Algorithm 1. Pseudocode of ANA for a minimization problem without the loss of generality
Initialize worker ant population Xi (i = 1, 2, 3, …, n)
Initialize worker ant previous position Xiprevious
while iteration (t) limit not reached
    for each artificial worker ant Xt,i
        find best artificial worker ant Xt,ibest
        generate random walk r in [−1, 1] range
        if (Xt,i == Xt,ibest)
            calculate ΔXt+1,i using Equation (3)
        else if (Xt,i == Xt,iprevious)
            calculate ΔXt+1,i using Equation (4)
        else
            calculate T using Equation (6)
            calculate Tprevious using Equation (7)
            calculate dw using Equation (5)   //for minimization
            calculate ΔXt+1,i using Equation (2)
        end if
        calculate Xt+1,i using Equation (1)
        if (Xt+1,i fitness < Xt,i fitness)   //for minimization
            accept move and assign Xt,i to Xt,iprevious
        else
            maintain current position
        end if
    end for
end while
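The pseudocode above can be turned into a short runnable sketch. This is our own minimal interpretation, not the authors’ reference implementation, and it rests on several assumptions: the update is applied per dimension, r is drawn uniformly from [−1, 1] rather than from a Levy flight, a small epsilon guards the division in Equation (5), and positions are clipped to the search bounds.

```python
import math
import random

def ana_minimize(fitness, dim, lower, upper, n_ants=30, iterations=500, seed=None):
    """Minimal sketch of the ANA loop for a minimization problem."""
    rng = random.Random(seed)
    eps = 1e-12  # guard for Equation (5)'s division (our assumption)
    # Random initial deposition positions; previous positions start equal to
    # the current ones, as in the first generation.
    X = [[rng.uniform(lower, upper) for _ in range(dim)] for _ in range(n_ants)]
    X_prev = [row[:] for row in X]
    fit = [fitness(x) for x in X]
    fit_prev = fit[:]

    for _ in range(iterations):
        best_i = min(range(n_ants), key=lambda i: fit[i])
        best, best_fit = X[best_i][:], fit[best_i]
        for i in range(n_ants):
            r = rng.uniform(-1.0, 1.0)
            new = []
            for d in range(dim):
                if i == best_i:                     # Equation (3)
                    dx = r * X[i][d]
                elif X[i][d] == X_prev[i][d]:       # Equation (4)
                    dx = r * (best[d] - X[i][d])
                else:
                    # Tendency rates as Pythagorean hypotenuses, Eqs. (6)-(7)
                    t = math.hypot(best[d] - X[i][d], best_fit - fit[i])
                    t_prev = math.hypot(best[d] - X_prev[i][d], best_fit - fit_prev[i])
                    dw = r * t / (t_prev + eps)     # Equation (5)
                    dx = dw * (best[d] - X[i][d])   # Equation (2)
                # Equation (1), clipped to the search bounds
                new.append(min(max(X[i][d] + dx, lower), upper))
            new_fit = fitness(new)
            if new_fit < fit[i]:                    # accept only fitter moves
                X_prev[i], fit_prev[i] = X[i], fit[i]
                X[i], fit[i] = new, new_fit
    best_i = min(range(n_ants), key=lambda i: fit[i])
    return X[best_i], fit[best_i]
```

For a maximization problem, as noted above, the dw ratio is inverted per Equation (8) and the acceptance comparison is reversed.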

5. Testing and Evaluation

For measuring and evaluating the performance and feasibility of an algorithm, several techniques exist in the literature, including test functions and real-world applications. This section presents the test mechanisms used, along with the results and their analysis.

5.1. Standard Benchmark Functions

A considerable number of common standard benchmark or test functions exist for testing the reliability, efficiency, and validity of optimization algorithms. However, the effectiveness of an algorithm against others cannot be measured by the problems it solves if the set of problems selected is too specific and does not have varied properties. Therefore, a set of 16 common standard benchmark functions with diverse characteristics is selected for testing the performance of the algorithm. The set consists of unimodal, multimodal, and composite test functions. Table A1, Table A2 and Table A3 in Appendix A present the selected standard benchmark functions.
The unimodal test functions have only a single optimum. They are used for testing exploitation ability, and they allow focusing on the convergence rate of the algorithm rather than the final results. Table A1 in Appendix A presents the six unimodal test functions selected for testing the ANA algorithm, namely F1, F2, F3, F4, F5, and F7. F6 is not selected as a benchmark function for testing the algorithm but is mentioned in the table because it is utilized in the composite functions [38].
The multimodal test functions have more than one optimum, and the number of local optima usually increases exponentially with the number of problem dimensions. They are used for testing exploration ability, which can make the algorithm avoid local optima. Table A2 in Appendix A presents the four multimodal test functions selected for testing the ANA algorithm, namely F9, F10, F11, and F12. F8 is not selected as a benchmark function for testing the ANA algorithm; however, it is mentioned in the table because it is utilized in the composite functions [38].
The composite test functions, as the name implies, are combined, shifted, rotated, and biased versions of the other test functions. They provide many varied shapes with several local optima, and they allow measuring the exploitation and exploration balance of the algorithm. Table A3 in Appendix A presents the six composite test functions selected for testing the algorithm [38]. For verification and analysis purposes, the proposed ANA algorithm is compared to two sets of competitive algorithms with different parameter settings for each set.
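Two of the standard functions in these families can be sketched to make the unimodal/multimodal distinction concrete. These are the common textbook forms of the sphere (a typical F1) and Rastrigin (a typical F9) functions, not definitions copied from the paper’s appendix.

```python
import math

def sphere(x):
    """Unimodal: a single global minimum f(0) = 0.
    Tests exploitation ability and convergence rate."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multimodal: global minimum f(0) = 0 surrounded by a number of local
    optima that grows exponentially with the dimension.
    Tests exploration ability and local-optima avoidance."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)
```

An optimizer that converges quickly on `sphere` but stalls above zero on `rastrigin` is exploiting well yet exploring poorly; the composite functions probe the balance between the two.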

5.1.1. DA, PSO, and GA

In the first set, the standard GA, PSO, and DA algorithms are selected as reference algorithms for comparison with ANA on the 16 selected standard benchmark functions. GA and PSO are among the earliest well-known and efficient algorithms in the literature [60], and DA is a recently developed, promising algorithm with a high number of successful applications [61]. The test results of the DA, PSO, and GA algorithms on the 16 selected standard benchmark functions, including detailed parameter settings, are given in [38].
Regarding the common parameter settings in all cases, the population size is set to 30, and the dimension of the benchmark functions is 10. The stopping criterion is a maximum of 500 iterations. Each algorithm is run 30 times, and the average and standard deviation are calculated. Table 3 presents the test results of the ANA, DA, PSO, and GA algorithms on the 16 standard benchmark functions [38].
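The evaluation protocol above can be sketched as follows. Note the inner optimizer here is a plain random search standing in for ANA (whose implementation is not reproduced in this sketch), and only 5 of the paper's 30 repetitions are run to keep the example fast:

```python
import random
import statistics

def random_search(objective, dim, low, high, agents=30, iterations=500):
    """Stand-in optimizer (pure random search) used only to illustrate the
    benchmarking protocol; it is NOT the ANA algorithm itself."""
    best = float("inf")
    for _ in range(iterations):
        for _ in range(agents):
            candidate = [random.uniform(low, high) for _ in range(dim)]
            best = min(best, objective(candidate))
    return best

sphere = lambda x: sum(v * v for v in x)

# The paper repeats each test 30 times; 5 repetitions here for speed.
runs = [random_search(sphere, dim=10, low=-100.0, high=100.0) for _ in range(5)]
avg = statistics.mean(runs)
std = statistics.pstdev(runs)
```

The reported mean measures typical solution quality, while the standard deviation measures run-to-run stability of the stochastic search.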
Each test function from the standard benchmark set is minimized towards 0.0, as shown in Table 3. Comparing the test results of ANA to those of the other algorithms in the table, ANA outperformed the well-known DA, PSO, and GA algorithms on seven test cases, namely F4, F10, F13–F16, and F18; only on F7 did the other algorithms provide better results than ANA. Even there, ANA's result was not poor, merely not the best. On the remaining benchmark functions, the algorithm provided results comparable to the others.
According to the results in Table 3, it can be noted that F13 to F18 are composite test functions and are suitable for measuring an algorithm's ability to avoid local minima. The ANA algorithm outperformed DA, PSO, and GA on all of these test functions except F17, on which it ranked third, with DA performing best. From this, it can be concluded that ANA is quite effective at avoiding local minima and at balancing exploitation and exploration.

5.1.2. PSO, GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO

In the second set, the standard PSO algorithm and several modified versions of it, namely GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO, are selected as reference algorithms for comparison with ANA on six selected standard benchmark functions, namely F1, F5, F7, F9, F10, and F11. The details of the modifications in GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO, the parameter settings, and the test results of these algorithms on the selected functions are given in [51].
Regarding the common parameter settings in all cases, the population size is set to 30, and the dimension of the benchmark functions is 20. The stopping criterion is a maximum of 10,000 iterations. It is worth mentioning that the functions are used without shift and with the same ranges as in the first set, except for F1, whose range is reduced to [−5.12, 5.12]. Each algorithm is run 100 times, and the average and standard deviation are calculated. Table 4 presents the test results of the ANA, PSO, GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO algorithms on the six selected standard benchmark functions [51].
Each test function from the standard benchmark set is minimized markedly towards 0.0, as shown in Table 4. Comparing the test results of ANA to those of the other algorithms in the table, ANA outperformed PSO, GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO on two test cases, namely F1 and F9. On the remaining benchmark functions, the ANA algorithm provided results quite comparable to the others.

5.2. CEC-C06 2019 Benchmark Functions

In addition to the standard benchmark functions, a set of 10 modern CEC benchmark functions is used as an extra evaluation of the ANA algorithm, and the results are compared to three other remarkable metaheuristic algorithms: DA, WOA, and SSA. These test functions were developed for benchmarking single-objective optimization problems and are known as "the 100-digit challenge", intended for use in annual optimization competitions [62]. Table A4 in Appendix A presents the CEC-C06 2019 test functions used for benchmarking the ANA algorithm [62].
All the CEC-C06 2019 test functions are scalable; functions CEC04 to CEC10 are shifted and rotated, whereas CEC01 to CEC03 are not. The default test function parameters set by the CEC benchmark developers are used for testing ANA. As can be seen in Table A4 in Appendix A, function CEC01 is a 9-dimensional minimization problem in the [−8192, 8192] boundary range, CEC02 is 16-dimensional in the [−16,384, 16,384] range, and CEC03 is 18-dimensional in the [−4, 4] range, while the remaining functions, CEC04 to CEC10, are 10-dimensional minimization problems in the [−100, 100] boundary range. For convenience, the global optima of all the CEC functions are unified to 1.0.
The test results of the ANA algorithm are compared to those of three other modern optimization algorithms, DA, WOA, and SSA, taken from Abdullah and Rashid [49]. The common parameter settings are the same as those previously used in [49]: 500 iterations and 30 agents. The algorithm is run 30 times, and the average and standard deviation on each test function are computed. Table 5 presents the test results of ANA, DA, WOA, and SSA on the CEC-C06 2019 test functions [49].
From Table 5, it can be concluded that ANA minimizes each CEC test function towards 1.0, except CEC01 and CEC06, which did not yield satisfactory results; the runtime on these two functions was too long to obtain results. ANA outperformed all the other algorithms on all the remaining test cases, which is another indicator of the ANA algorithm's strong performance and efficiency. It is worth mentioning that the WOA algorithm obtained the same result as ANA on the CEC03 function, although the WOA value in Table 5 is reported to only four decimal places. However, the standard deviation of WOA on CEC03 is 0.0, which means WOA produces the same result on every run, with no chance of further improvement.

5.3. Comparative Study

There are several measures and techniques for comparing the performance of algorithms. Considering the importance of reaching optimality in optimization, this part presents a comparative study on the average global best solutions of two groups of algorithms on the standard benchmark functions used for testing ANA: first, ANA, DA, PSO, and GA; second, ANA, PSO, GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO. The averages are taken from Table 3 and Table 4 and ranked on each function: for Table 3, from 1 (the algorithm with the minimum result) to 4 (the algorithm with the maximum result), and for Table 4, from 1 to 7 on the same basis. Table 6 presents the ranking of the ANA, DA, PSO, and GA algorithms on the 16 standard benchmark functions, and Table 7 presents the total number of first, second, third, and fourth rankings of these algorithms. Moreover, Table 8 demonstrates the ranking of ANA, PSO, GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO on the six standard benchmark functions, and Table 9 presents the total number of first through seventh rankings of these algorithms.
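The ranking scheme can be expressed compactly; the numeric averages below are illustrative placeholders, not values from Table 3:

```python
def rank_on_function(averages):
    """Rank algorithms on one benchmark: 1 = smallest average global best,
    largest average gets the worst (highest) rank."""
    ordered = sorted(averages, key=averages.get)
    return {alg: rank for rank, alg in enumerate(ordered, start=1)}

# Hypothetical averages on a single benchmark function (illustrative only)
ranks = rank_on_function({"ANA": 1.2e-4, "DA": 3.4e-3, "PSO": 5.6e-2, "GA": 7.8e-1})
```

Repeating this per function and tallying how often each algorithm takes each rank yields summary tables of the kind shown in Table 7 and Table 9.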
Table 6 and Table 7 demonstrate that the ANA algorithm has the highest number of first rankings, with a total of seven, and the lowest number of fourth rankings, with a total of only one, in comparison to the famous DA, PSO, and GA algorithms. Furthermore, ANA demonstrates its efficiency once again by achieving the highest number of first ranks on the six standard benchmark functions in comparison to the PSO, GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO algorithms, as can be seen from Table 8 and Table 9.
To provide a more detailed evaluation, ANA is ranked on the standard benchmark functions both by benchmark type and overall in comparison to the DA, PSO, and GA algorithms. Table 10 presents these rankings. As demonstrated, ANA's average rank is 2.50 on both the unimodal and the multimodal test functions and 1.33 on the composite test functions. Furthermore, if the global average rank of ANA is rounded to the nearest integer, ANA ranks second among the four algorithms over the 16 evaluated benchmark functions. This demonstrates the algorithm's strong performance compared to these famous algorithms. However, it is worth noting that no algorithm can perform best on all optimization problems: an algorithm that performs very well on some problems will be outperformed on others [63].

5.4. ANA versus FDO

To provide more detailed insight into the ANA algorithm and substantiate its performance, it has been compared to another novel algorithm with outstanding performance. The 16 standard benchmark functions used for testing ANA were selected, and the results were compared to those of the FDO algorithm. FDO was chosen for three main reasons. First, like ANA, FDO is a PSO-based algorithm. Second, FDO's outstanding performance, including outperforming the standard GA, PSO, DA, WOA, and SSA algorithms, has been demonstrated in [49]. Third, the algorithm's implementation is publicly provided by its authors.
Regarding parameter settings, both algorithms were run 30 times with 30 search agents and 500 iterations. The weight factor wf, FDO's single parameter, was set to 0 in all test cases. The global best agent of each run was recorded, and the results are shown as box-and-whisker plots in Figure 5a–p. Figure 5 is another indicator of the ANA algorithm's strong performance: the results show ANA outperforming FDO on seven test functions, namely F2, F4, F5, F11, F12, F14, and F17, with very competitive results on the others. Table A5 in Appendix A contains the results of the 30 runs of the ANA and FDO algorithms on the standard benchmark functions.

5.5. Statistical Test

To show that the results presented in the previous section are statistically significant, the p values of Student's t-test, Welch's t-test, and the Wilcoxon signed-rank test are computed for all the standard benchmark functions, and the results are shown in Table 11. The comparison in Table 11 is conducted only between the ANA and FDO algorithms because FDO was already tested against the DA, PSO, and GA algorithms in [49], where the FDO results were shown to be statistically significant compared to those algorithms.
As shown in Table 11, ANA's results are statistically significant (p < 0.05) in 10 Student's t-tests, namely F3, F4, F5, F7, F9, F11, F12, F13, F16, and F18; in 7 Welch's t-tests, namely F1, F2, F4, F5, F10, F15, and F17; and in 3 Wilcoxon signed-rank tests, namely F2, F5, and F17. Following [64], the Wilcoxon signed-rank test should be relied on to establish the statistical significance of ANA in comparison to FDO, since the data are neither normally distributed nor homoscedastic. Table A6 and Table A7 in Appendix A contain the normality test (Shapiro–Wilk) and the homoscedasticity test (Levene's test) for ANA and FDO on the standard benchmark functions.
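The three tests can be reproduced with SciPy as sketched below; the two samples are synthetic stand-ins for the 30 per-run results of Table A5, not the actual data:

```python
import random
from scipy import stats

random.seed(42)
# Stand-in samples of 30 per-run global bests (illustrative, not the paper's data)
ana = [random.gauss(0.40, 0.10) for _ in range(30)]
fdo = [random.gauss(0.55, 0.10) for _ in range(30)]

student_p = stats.ttest_ind(ana, fdo, equal_var=True).pvalue    # Student's t-test
welch_p = stats.ttest_ind(ana, fdo, equal_var=False).pvalue     # Welch's t-test
wilcoxon_p = stats.wilcoxon(ana, fdo).pvalue                    # Wilcoxon signed-rank

significant = wilcoxon_p < 0.05  # the significance threshold used in the paper
```

Student's test assumes normality and equal variances, Welch's relaxes the equal-variance assumption, and the Wilcoxon signed-rank test is non-parametric, which is why it is the appropriate choice when Shapiro–Wilk and Levene's tests reject those assumptions.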

5.6. Real-World Applications of ANA

To prove the feasibility of the algorithm and evaluate its performance, ANA has been applied for solving two different real-world engineering problems.

5.6.1. ANA on Aperiodic Antenna Array Design

The first problem investigated for measuring the feasibility of the ANA algorithm is aperiodic antenna array design. In today's technological society, developing products that are more efficient and economical than their predecessors is crucial and highly demanded. The development of radar technology is one of several reasons for the huge demand for innovation in antenna array design [65]. Over the years, various antennas have been designed to accommodate the industry's growing needs. One major requirement in antenna design has been the ability to position the elements of non-uniform arrays so as to minimize the peak sidelobe level (SLL).
Design techniques for array element placement include thinning, numerical optimization, and other methods. However, even with modern tools, the design problem remains computationally challenging. To reduce the complexity of the design, an optimization algorithm is used to optimize the element positions and minimize the peak sidelobe level [66].
In the non-uniform isotropic array considered here, there are 10 elements, and only four of them need to be optimized; thus, the application is a four-dimensional optimization problem. It is worth noting that the aperiodic antenna array design is a convex problem, since all the lines joining every two elements/points lie in the set, so minimization algorithms are quite efficient at optimizing it and reaching the optimum is quite likely [67]. For more details on this problem, interested readers may refer to [68]. Figure 6 demonstrates an array configuration for 10 elements.
The fitness function of the problem is described by the following expression:
f = max 20 log₁₀|AF(θ)|
where
AF(θ) = Σ_{i=1}^{4} cos(2π x_i (cos θ − cos θ_s)) + cos(2.25 × 2π (cos θ − cos θ_s))
and θ_s = 90° [68].
To achieve an improved peak SLL in non-uniform arrays, the element positions need to be optimized as a real-valued vector. In addition, to mitigate the grating lobe level, a minimum element-spacing limit is required. Equation (11) shows the constraints of the problem.
x_i ∈ [0, 2.25],  |x_i − x_j| > 0.25 λ_0,  min(x_i) > 0.125 λ_0,  for i, j = 1, 2, 3, 4 and i ≠ j.
The ANA algorithm is applied to this problem subject to the constraints in Equation (11), using 20 search agents and 200 iterations. The algorithm reached its optimum solution at iteration 57 with element positions {1.5959, 0.3081, 0.8747, 0.6072}.
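Under stated assumptions (positions measured in wavelengths so λ0 = 1, a uniform θ sampling grid, and a 5° exclusion around the main beam, none of which are specified above), the fitness and constraint checks might be sketched as:

```python
import math

THETA_S = math.pi / 2.0  # steering angle, θs = 90°
LAMBDA0 = 1.0            # assumption: positions expressed in wavelengths

def array_factor(theta, x):
    """AF(θ) for the symmetric 10-element array with four free positions x."""
    u = math.cos(theta) - math.cos(THETA_S)
    return (sum(math.cos(2.0 * math.pi * xi * u) for xi in x)
            + math.cos(2.25 * 2.0 * math.pi * u))

def is_feasible(x):
    """Constraint set of Equation (11)."""
    if not all(0.0 <= xi <= 2.25 for xi in x):
        return False
    if min(x) <= 0.125 * LAMBDA0:
        return False
    return all(abs(x[i] - x[j]) > 0.25 * LAMBDA0
               for i in range(len(x)) for j in range(i + 1, len(x)))

def peak_sll(x, samples=721, beam_margin_deg=5.0):
    """Fitness f = max 20·log10|AF(θ)|, sampled over θ while excluding a small
    region around the main beam (grid size and margin are illustrative choices)."""
    worst = -math.inf
    for k in range(samples):
        theta = math.pi * k / (samples - 1)
        if abs(theta - THETA_S) < math.radians(beam_margin_deg):
            continue
        af = abs(array_factor(theta, x))
        if af > 0.0:
            worst = max(worst, 20.0 * math.log10(af))
    return worst

solution = [1.5959, 0.3081, 0.8747, 0.6072]  # element positions reported above
sll = peak_sll(solution)
```

The reported solution satisfies all pairwise-spacing and boundary constraints, which can be verified with `is_feasible(solution)`.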

5.6.2. ANA on Frequency-Modulated Synthesis

Frequency-modulated (FM) synthesis is a form of sound synthesis in which the frequency of a waveform is changed by modulating it with another frequency. Its parameter estimation is a complex real-world engineering optimization problem with a fundamental role in several modern music systems. For FM synthesis to create a harmonic sound, six parameters need to be optimized; thus, the parameter optimization of an FM synthesizer is a six-dimensional problem in which the vector to be optimized is x = {a1, w1, a2, w2, a3, w3} of the sound wave given in Equation (12). The objective is to generate the sound of Equation (12) so that it is as similar as possible to the target sound of Equation (13).
y(t) = a1 · sin(w1 · t · θ + a2 · sin(w2 · t · θ + a3 · sin(w3 · t · θ)))
y0(t) = 1.0 · sin(5.0 · t · θ + 1.5 · sin(4.8 · t · θ + 2.0 · sin(4.9 · t · θ)))
respectively, where the parameters are defined in the range [−6.4, 6.35] and θ = 2π/100. The fitness function, given in Equation (14), is the sum of squared errors between the estimated wave of Equation (12) and the target wave of Equation (13), evaluated for t = 0, 1, …, 100. Interested readers can find more details on this problem in [69].
f(x) = Σ_{t=0}^{100} (y(t) − y0(t))²
ANA is applied to this problem with 30 search agents and 200 iterations. The global best value converges to a near-global-optimal value from iteration 45, with parameters x = {a1 = −0.1253, w1 = 3.0527, a2 = −5.2198, w2 = −4.4465, a3 = 2.5656, w3 = 2.8532}.
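Equations (12)–(14) translate directly into code; this is a minimal sketch assuming the standard nested-sine formulation of the FM parameter-estimation problem:

```python
import math

THETA = 2.0 * math.pi / 100.0  # θ = 2π/100

def fm_wave(t, a1, w1, a2, w2, a3, w3):
    """Nested FM sound wave of Equation (12)."""
    return a1 * math.sin(w1 * t * THETA
                         + a2 * math.sin(w2 * t * THETA
                                         + a3 * math.sin(w3 * t * THETA)))

TARGET = (1.0, 5.0, 1.5, 4.8, 2.0, 4.9)  # parameters of the target wave, Eq. (13)

def fitness(params):
    """Equation (14): sum of squared errors over t = 0, 1, ..., 100."""
    return sum((fm_wave(t, *params) - fm_wave(t, *TARGET)) ** 2
               for t in range(101))
```

By construction, the fitness of the target parameters themselves is exactly zero, so any optimizer's quality on this problem is measured by how close its best fitness gets to zero.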

6. Conclusions

A novel swarm intelligence algorithm for optimizing single-objective problems, called the ant nesting algorithm, was proposed. The proposed algorithm is inspired by the behavior of Leptothorax ants when dropping grains to construct a new nest. The algorithm mimics what a swarm of worker ants does while searching for a position to drop a grain around their queen, together with a model of their random walk during the search. ANA applies the Pythagorean theorem, taking the difference between the previous/current deposition positions and the best local deposition position as one side and the corresponding fitness values as the other, to generate weights that drive the search agents towards optimality. In addition, ANA relies on a randomization mechanism in all phases of the search, including initialization, exploration, and exploitation.
Regarding ANA's performance after testing on several standard and modern test functions and on real-world applications, it was found that the common parameter settings, such as the number of search agents and iterations, have a great impact on the algorithm's performance, as is the case with other metaheuristic algorithms. Increasing the number of search agents and iterations may provide better and more accurate results, and the reverse is also true. On the other hand, increasing the number of search agents and/or iterations means consuming more resources, so the parameters need to be set judiciously.
This paper only provides a new method for reaching optimality in single-objective problems, and several directions for further study can be suggested. Developing binary and multi-objective versions of the ANA algorithm is one recommendation, to address a greater range of optimization problems. Another is integrating additional evolutionary operators into the algorithm to improve performance and/or use resources more effectively. Hybridizing the algorithm with other algorithms to exploit their features is a further promising direction. Finally, and most importantly, applying the algorithm to real-world problems is strongly recommended.

Author Contributions

Conceptualization, D.N.H.R.; methodology, D.N.H.R.; software, D.N.H.R.; validation, D.N.H.R.; formal analysis, D.N.H.R., T.A.R. and S.M.; investigation, T.A.R. and S.M.; resources, D.N.H.R. and T.A.R.; data curation, D.N.H.R.; writing—original draft preparation, D.N.H.R.; writing—review and editing, D.N.H.R., T.A.R. and S.M.; visualization, D.N.H.R.; supervision, T.A.R.; project administration, D.N.H.R. and T.A.R.; funding acquisition, T.A.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Unimodal test functions [38].
Function | Dimension | Range | Shift Position | fmin
F1(x) = Σ_{i=1}^{n} x_i^2 | 10 | [−100, 100] | [−30, −30, …, −30] | 0
F2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i| | 10 | [−10, 10] | [−3, −3, …, −3] | 0
F3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2 | 10 | [−100, 100] | [−30, −30, …, −30] | 0
F4(x) = max_i {|x_i|, 1 ≤ i ≤ n} | 10 | [−100, 100] | [−30, −30, …, −30] | 0
F5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2] | 10 | [−30, 30] | [−15, −15, …, −15] | 0
F6(x) = Σ_{i=1}^{n} (x_i + 0.5)^2 | 10 | [−100, 100] | [−750, …, −750] | 0
F7(x) = Σ_{i=1}^{n} i·x_i^4 + random[0, 1) | 10 | [−1.28, 1.28] | [−0.25, …, −0.25] | 0
Table A2. Multimodal test functions [38].
Function | Dimension | Range | Shift Position | fmin
F8(x) = Σ_{i=1}^{n} −x_i sin(√|x_i|) | 10 | [−500, 500] | [−300, …, −300] | −418.9829
F9(x) = Σ_{i=1}^{n} [x_i^2 − 10 cos(2π x_i) + 10] | 10 | [−5.12, 5.12] | [−2, −2, …, −2] | 0
F10(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i^2)) − exp((1/n) Σ_{i=1}^{n} cos(2π x_i)) + 20 + e | 10 | [−32, 32] | [0, 0, …, 0] | 0
F11(x) = (1/4000) Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i/√i) + 1 | 10 | [−600, 600] | [−400, …, −400] | 0
F12(x) = (π/n){10 sin^2(π y_1) + Σ_{i=1}^{n−1} (y_i − 1)^2 [1 + 10 sin^2(π y_{i+1})] + (y_n − 1)^2} + Σ_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a | 10 | [−50, 50] | [−30, 30, …, 30] | 0
Table A3. Composite test functions [38].
Function | Dimension | Range | fmin
F13 (CF1): f1, f2, f3, …, f10 = Sphere function; δ1, δ2, δ3, …, δ10 = [1, 1, 1, …, 1]; λ1, λ2, λ3, …, λ10 = [5/100, 5/100, …, 5/100] | 10 | [−5, 5] | 0
F14 (CF2): f1, f2, f3, …, f10 = Griewank's function; δ1, δ2, δ3, …, δ10 = [1, 1, 1, …, 1]; λ1, λ2, λ3, …, λ10 = [5/100, 5/100, …, 5/100] | 10 | [−5, 5] | 0
F15 (CF3): f1, f2, f3, …, f10 = Griewank's function; δ1, δ2, δ3, …, δ10 = [1, 1, 1, …, 1]; λ1, λ2, λ3, …, λ10 = [1, 1, 1, …, 1] | 10 | [−5, 5] | 0
F16 (CF4): f1, f2 = Ackley's function; f3, f4 = Rastrigin's function; f5, f6 = Weierstrass function; f7, f8 = Griewank's function; f9, f10 = Sphere function; δ1, δ2, δ3, …, δ10 = [1, 1, 1, …, 1]; λ1, λ2, λ3, …, λ10 = [5/32, 5/32, 1, 1, 5/0.5, 5/0.5, 5/100, 5/100, 5/100, 5/100] | 10 | [−5, 5] | 0
F17 (CF5): f1, f2 = Rastrigin's function; f3, f4 = Weierstrass function; f5, f6 = Griewank's function; f7, f8 = Ackley's function; f9, f10 = Sphere function; δ1, δ2, δ3, …, δ10 = [1, 1, 1, …, 1]; λ1, λ2, λ3, …, λ10 = [1/5, 1/5, 5/0.5, 5/0.5, 5/100, 5/100, 5/32, 5/32, 5/100, 5/100] | 10 | [−5, 5] | 0
F18 (CF6): f1, f2 = Rastrigin's function; f3, f4 = Weierstrass function; f5, f6 = Griewank's function; f7, f8 = Ackley's function; f9, f10 = Sphere function; δ1, δ2, δ3, …, δ10 = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]; λ1, λ2, λ3, …, λ10 = [0.1·(1/5), 0.2·(1/5), 0.3·(5/0.5), 0.4·(5/0.5), 0.5·(5/100), 0.6·(5/100), 0.7·(5/32), 0.8·(5/32), 0.9·(5/100), 1·(5/100)] | 10 | [−5, 5] | 0
Table A4. CEC-C06 2019 test functions [62].
Function No. | Function Name | Dimension | Range | fmin
CEC01 | Storn's Chebyshev Polynomial Fitting Problem | 9 | [−8192, 8192] | 1
CEC02 | Inverse Hilbert Matrix Problem | 16 | [−16,384, 16,384] | 1
CEC03 | Lennard-Jones Minimum Energy Cluster | 18 | [−4, 4] | 1
CEC04 | Rastrigin's Function | 10 | [−100, 100] | 1
CEC05 | Griewangk's Function | 10 | [−100, 100] | 1
CEC06 | Weierstrass Function | 10 | [−100, 100] | 1
CEC07 | Modified Schwefel's Function | 10 | [−100, 100] | 1
CEC08 | Expanded Schaffer's F6 Function | 10 | [−100, 100] | 1
CEC09 | Happy Cat Function | 10 | [−100, 100] | 1
CEC10 | Ackley Function | 10 | [−100, 100] | 1
Table A5. ANA and FDO test results of 30 turns on standard benchmark functions.
TurnF1F2F3F4F5F7
ANAFDOANAFDOANAFDOANAFDOANAFDOANAFDO
11.11 × 10−43.79 × 10−298.86 × 10−51.95 × 10−82.3917471441.69 × 10−117.11 × 10−151.64 × 10−87.4329841625.3893718640.8668042610.251164251
27.23 × 10−88.58 × 10−283.54 × 10−50.0250490093.6768739392.43 × 10−1301.27 × 10−128.1336327585.2120613210.1652212950.154350971
30.0481925751.14 × 10−281.50 × 10−44.08 × 10−62.600793852.85 × 10−900.0633379237.689508303480.53482861.2021368180.186249902
42.33 × 10−58.84 × 10−282.28 × 10−50.0317988493.9774936585.92 × 10−1300.0040862998.6527076696.9633510411.0263155940.986077172
53.52 × 10−63.27 × 10−244.25 × 10−52.22 × 10−150.1538752964.75 × 10−906.39 × 10−76.5664815027.5477743910.6176395170.69199572
60.0667428054.92 × 10−282.41 × 10−52.34 × 10−120.0231048045.44 × 10−1100.012112127.703774666946.26362180.7905943570.108666552
70.0035508814.92 × 10−281.64 × 10−58.33 × 10−61.0650887641.55 × 10−1105.52 × 10−918.364878374.1512229881.3572098940.174620628
81.01 × 10−52.52 × 10−281.66 × 10−52.17 × 10−51.5681605426.87 × 10−1407.77 × 10−88.0184006235.3808227270.5174249690.797722139
93.12 × 10−71.72 × 10−271.45 × 10−41.48 × 10−110.1024107556.46 × 10−1301.06 × 10−95.5362878264.7786966940.8403594190.284824671
105.26 × 10−66.31 × 10−283.86 × 10−52.01 × 10−80.4833939119.41 × 10−1200.0016739288.5466105653.2150257250.8100476660.405013575
112.78 × 10−61.14 × 10−283.67 × 10−53.64 × 10−140.8648790541.42 × 10−1102.51 × 10−107.8742990550.0043197181.1171701290.935591962
120.0197928113.45 × 10−258.81 × 10−62.22 × 10−120.4319124464.05 × 10−12007.8937413397.5835760950.9535785070.104218667
130.0139218991.77 × 10−282.11 × 10−63.29 × 10−101.1960973446.57 × 10−139.24 × 10−149.50 × 10−98.2075168346.7662458370.8324467940.873904075
142.47 × 10−41.01 × 10−251.04 × 10−53.31 × 10−101.2156433836.47 × 10−1103.71 × 10−68.3704751375.8067082720.4170105690.445590624
151.14 × 10−52.26 × 10−254.63 × 10−51.5118548390.3075614652.83 × 10−1208.99 × 10−67.84872865576.726114591.113531840.779961586
160.4968825452.08 × 10−255.05 × 10−69.10 × 10−144.0314043033.80 × 10−113.55 × 10−151.41 × 10−424.035025584.9690493981.2585670910.729519408
172.26 × 10−58.84 × 10−294.43 × 10−52.68 × 10−120.2223106551.18 × 10−1009.79 × 10−4105.67399924.1184191970.6707138080.722250032
189.52 × 10−57.83 × 10−281.05 × 10−41.16 × 10−40.2355618921.67 × 10−901.32 × 10−47.950295796676.12261021.0250293770.297330263
196.08 × 10−68.64 × 10−254.88 × 10−60.3989665732.2718723962.33 × 10−1302.28 × 10−8355.42199876.0549114941.2141743230.171875021
203.59 × 10−43.79 × 10−286.07 × 10−64.98 × 10−92.1071501795.82 × 10−141.42 × 10−146.19 × 10−128.1416553269.5981177210.607642540.518382895
211.39 × 10−61.64 × 10−281.49 × 10−51.74 × 10−51.2458024344.86 × 10−1406.26 × 10−118.0857466718.0810527320.9447695360.838040939
229.09 × 10−61.07 × 10−257.92 × 10−61.54 × 10−50.1879354695.75 × 10−1407.33 × 10−105.964614538994.040030.5362935490.539834308
230.0271212439.72 × 10−283.43 × 10−55.24 × 10−103.1290015131.26 × 10−1002.13 × 10−149.5497341755.1792559280.6804434220.784514643
242.24 × 10−51.06 × 10−241.06 × 10−55.98 × 10−42.3187213485.56 × 10−91.07 × 10−147.72 × 10−58.1939499033.0729620870.4472962360.887949921
254.07 × 10−56.73 × 10−241.07 × 10−53.33 × 10−140.1217146723.85 × 10−88.88 × 10−120.03948338314.594126065.381250781.1420577440.494719509
261.02 × 10−57.32 × 10−287.53 × 10−60.0983190442.9182417611.27 × 10−111.07 × 10−145.05 × 10−88.2764834019.0634869880.3998333580.986881941
270.0881251295.05 × 10−291.75 × 10−51.16 × 10−96.0046862341.13 × 10−1103.55 × 10−159.0924365746.3445087380.5944896230.106760494
282.28 × 10−43.79 × 10−291.32 × 10−50.0130517871.3979548632.31 × 10−1000.0032789197.82060260751.200028490.5092882290.745339406
297.52 × 10−82.57 × 10−231.78 × 10−52.64 × 10−130.7397656566.68 × 10−1104.82 × 10−74.9121175497.9037196871.8445870.56002937
301.19 × 10−53.22 × 10−211.72 × 10−57.28 × 10−130.2710814387.99 × 10−804.64 × 10−58.7301172712.6998073441.0039689971.057408352
F9F10F11F12F13F14
ANAFDOANAFDOANAANAANAFDOANAFDOANAFDO
129.6499391912.799528364.00 × 10−154.00 × 10−150.5366694963.08 × 10−353.08 × 10−352.2562087024.13 × 1094.10 × 1093.08 × 10−351.05 × 10−9
223.8032488727.27008137.55 × 10−154.00 × 10−150.470160862.47 × 10−332.47 × 10−3315.828234984.12 × 1094.10 × 1092.47 × 10−339.27 × 10−8
325.5835780721.104702117.55 × 10−154.00 × 10−150.2880381933.59 × 10−263.59 × 10−2630.622299934.12 × 1094.10 × 1093.59 × 10−261.76 × 10−9
424.459215146.8689155487.55 × 10−154.00 × 10−150.4535018768.94 × 10−348.94 × 10−342.9370990884.13 × 1094.10 × 1098.94 × 10−344.98 × 10−8
527.3364091316.919177344.00 × 10−157.55 × 10−150.4776492850018.267179594.14 × 1094.10 × 10903.63 × 10−7
626.1424507513.72930117.55 × 10−154.00 × 10−150.4299306250020.727616464.13 × 1094.10 × 10904.88 × 10−7
727.1635173913.641364986.33 × 10−134.00 × 10−150.45735448003.7075494524.13 × 1094.10 × 10906.45 × 10−10
821.2419244218.459876697.55 × 10−154.00 × 10−150.5788807322.47 × 10−342.47 × 10−347.5374945554.12 × 1094.10 × 1092.47 × 10−347.47 × 10−23
930.6583527613.929591254.00 × 10−154.00 × 10−150.5143726190036.385075524.13 × 1094.10 × 10907.75 × 10−7
1024.7757886818.326834024.00 × 10−154.00 × 10−150.3303183898.49 × 10−328.49 × 10−3225.163116334.12 × 1094.10 × 1098.49 × 10−321.34 × 10−7
1118.0381579913.929416734.00 × 10−154.00 × 10−150.5199840743.08 × 10−353.08 × 10−351.0538230924.12 × 1094.10 × 1093.08 × 10−351.89 × 10−7
1217.4248755316.336018747.55 × 10−154.00 × 10−150.3403536470015.644245044.13 × 1094.10 × 10905.56 × 10−7
1327.7857714820.437008714.00 × 10−154.00 × 10−150.503012545001.0055120984.12 × 1094.10 × 10906.07 × 10−7
1430.5549721219.959262014.00 × 10−154.00 × 10−150.3623577124.13 × 10−324.13 × 10−32110.91376144.16 × 1094.10 × 1094.13 × 10−325.12 × 10−9
1533.4737084818.941970914.00 × 10−154.00 × 10−150.4055733091.23 × 10−341.23 × 10−343.3436995644.13 × 1094.10 × 1091.23 × 10−342.66 × 10−10
1627.578648127.0163692737.55 × 10−157.55 × 10−150.5505309476.97 × 10−216.97 × 10−2118.584818954.12 × 1094.10 × 1096.97 × 10−211.69 × 10−11
1722.0775632716.914288944.00 × 10−154.00 × 10−150.4523400517.70 × 10−347.70 × 10−3477.18720714.12 × 1094.10 × 1097.70 × 10−345.02 × 10−7
1828.1936785716.147770424.00 × 10−154.00 × 10−150.3556581974.01 × 10−344.01 × 10−343.790858044.13 × 1094.10 × 1094.01 × 10−342.04 × 10−8
1927.1285306851.13081074.00 × 10−154.00 × 10−150.3036653891.23 × 10−341.23 × 10−3411.279482754.13 × 1094.10 × 1091.23 × 10−342.58 × 10−7
2027.0322974313.737324761.47 × 10−144.00 × 10−150.4342208362.45 × 10−282.45 × 10−283.2016213814.12 × 1094.10 × 1092.45 × 10−286.04 × 10−33
2123.9805265212.934816397.55 × 10−154.00 × 10−150.4539901372.37 × 10−292.37 × 10−292.242352984.17 × 1094.10 × 1092.37 × 10−295.78 × 10−8
2230.4868149716.187086557.55 × 10−154.00 × 10−150.4480055011.43 × 10−301.43 × 10−3018.028946694.12 × 1094.10 × 1091.43 × 10−305.55 × 10−7
2320.6180345918.908823197.55 × 10−154.00 × 10−150.366715922007.299876674.13 × 1094.10 × 10904.66 × 10−27
2421.5566939113.931570134.00 × 10−154.00 × 10−150.4431660043.08 × 10−353.08 × 10−352.8207565794.13 × 1094.10 × 1093.08 × 10−357.95 × 10−8
2520.455548224.132722084.00 × 10−154.00 × 10−150.36216298004.1349245314.15 × 1094.10 × 10906.67 × 10−9
2616.730907629.5231229134.00 × 10−154.00 × 10−150.4773001949.00 × 10−339.00 × 10−331.8979651444.13 × 1094.10 × 1099.00 × 10−338.84 × 10−10
2721.6932192420.669680854.00 × 10−154.00 × 10−150.3855210018.35 × 10−308.35 × 10−3020.480119354.11 × 1094.10 × 1098.35 × 10−302.64 × 10−6
2827.0888605514.508150154.00 × 10−154.00 × 10−150.4414438261.14 × 10−261.14 × 10−266.7249626864.14 × 1094.10 × 1091.14 × 10−266.40 × 10−8
2932.8182449131.632983524.00 × 10−154.00 × 10−150.4749522276.16 × 10−346.16 × 10−349.998269714.12 × 1094.10 × 1096.16 × 10−342.92 × 10−10
3014.3175869813.141862444.00 × 10−154.00 × 10−150.2880492243.08 × 10−353.08 × 10−352.0667542824.13 × 1094.10 × 1093.08 × 10−356.01 × 10−14
Run | ANA F15 | FDO F15 | ANA F16 | FDO F16 | ANA F17 | FDO F17 | ANA F18 | FDO F18
1 | 0 | 2.22 × 10−16 | 4.82 × 10−6 | 9.99 × 10−16 | 23.78881433 | 23.68277601 | 223.5535953 | 223.5513726
2 | 1.15 × 10−14 | 0 | 1.43 × 10−5 | 1.11 × 10−15 | 23.80232686 | 23.93678471 | 223.5667589 | 223.5513726
3 | 9.99 × 10−16 | 9.99 × 10−16 | 4.20 × 10−6 | 9.99 × 10−16 | 23.77803764 | 23.69035899 | 223.5553026 | 223.5513726
4 | 6.28 × 10−14 | 1.11 × 10−16 | 6.74 × 10−7 | 1.33 × 10−15 | 23.74335581 | 23.73681148 | 223.5581154 | 223.5513726
5 | 1.82 × 10−14 | 8.96 × 10−13 | 1.34 × 10−6 | 1.33 × 10−15 | 23.77743535 | 23.7435614 | 223.5626506 | 223.5513726
6 | 6.22 × 10−15 | 1.11 × 10−16 | 1.61 × 10−5 | 9.99 × 10−16 | 23.79436261 | 23.71556709 | 223.5598048 | 223.5513726
7 | 8.22 × 10−15 | 1.11 × 10−16 | 1.03 × 10−6 | 9.99 × 10−16 | 23.71566546 | 23.68432366 | 223.5578919 | 223.5513726
8 | 1.44 × 10−13 | 5.55 × 10−16 | 3.48 × 10−6 | 8.88 × 10−16 | 23.87966915 | 23.70108012 | 223.5612475 | 223.5513726
9 | 1.11 × 10−16 | 1.11 × 10−16 | 1.69 × 10−6 | 7.77 × 10−16 | 23.70607016 | 23.93939471 | 223.5542233 | 223.5513726
10 | 4.44 × 10−16 | 1.33 × 10−15 | 6.06 × 10−6 | 1.22 × 10−15 | 23.71951226 | 23.83709351 | 223.5604607 | 223.5513726
11 | 1.22 × 10−15 | 9.99 × 10−16 | 3.76 × 10−6 | 7.77 × 10−16 | 23.7136252 | 23.93896182 | 223.5635372 | 223.5513726
12 | 3.75 × 10−14 | 1.11 × 10−16 | 5.76 × 10−6 | 6.66 × 10−16 | 23.89952076 | 23.69320027 | 223.5517005 | 223.5513726
13 | 1.26 × 10−13 | 0 | 9.02 × 10−6 | 5.55 × 10−16 | 23.83970207 | 23.94535677 | 223.5764026 | 223.5513726
14 | 3.00 × 10−15 | 3.33 × 10−16 | 1.58 × 10−6 | 7.77 × 10−16 | 23.79218864 | 23.76833148 | 223.5804875 | 223.5513726
15 | 3.11 × 10−15 | 4.44 × 10−16 | 1.51 × 10−6 | 1.22 × 10−15 | 23.74107639 | 23.68135525 | 223.5578101 | 223.5513726
16 | 3.30 × 10−14 | 3.33 × 10−15 | 8.91 × 10−6 | 1.33 × 10−15 | 23.75687297 | 23.6900798 | 223.5595445 | 223.5513726
17 | 1.19 × 10−13 | 0 | 1.04 × 10−5 | 1.11 × 10−15 | 23.85584942 | 23.81447635 | 223.551743 | 223.5513726
18 | 1.14 × 10−12 | 2.22 × 10−16 | 3.24 × 10−6 | 5.55 × 10−16 | 23.71853807 | 23.73052044 | 223.5573346 | 223.5513726
19 | 5.55 × 10−15 | 0 | 1.04 × 10−6 | 8.88 × 10−16 | 23.74365598 | 23.97056993 | 223.5600951 | 223.5513726
20 | 3.76 × 10−14 | 6.66 × 10−16 | 5.37 × 10−7 | 8.88 × 10−16 | 23.79912171 | 23.84108883 | 223.5606669 | 223.5513726
21 | 1.78 × 10−14 | 2.55 × 10−15 | 5.21 × 10−6 | 9.99 × 10−16 | 23.73634589 | 24.43502039 | 223.5672378 | 223.5513726
22 | 8.88 × 10−16 | 1.78 × 10−15 | 3.08 × 10−5 | 5.55 × 10−16 | 23.74978395 | 23.76997022 | 223.5547583 | 223.5513726
23 | 3.77 × 10−15 | 1.11 × 10−16 | 3.72 × 10−6 | 1.22 × 10−15 | 23.81821264 | 23.7822576 | 223.5545055 | 223.5513726
24 | 1.71 × 10−14 | 4.77 × 10−15 | 1.88 × 10−6 | 1.33 × 10−15 | 23.77440552 | 23.75868399 | 223.5644214 | 223.5513726
25 | 2.22 × 10−16 | 2.11 × 10−15 | 3.64 × 10−6 | 8.88 × 10−16 | 23.80311496 | 23.77651152 | 223.5593722 | 223.5513726
26 | 7.77 × 10−16 | 1.11 × 10−16 | 1.40 × 10−6 | 5.55 × 10−16 | 23.7715509 | 24.04397935 | 223.553155 | 223.5513726
27 | 6.66 × 10−16 | 3.33 × 10−16 | 5.15 × 10−6 | 4.44 × 10−16 | 23.72388179 | 23.89049499 | 223.5593623 | 223.5513726
28 | 9.44 × 10−15 | 4.44 × 10−16 | 1.29 × 10−6 | 7.77 × 10−16 | 23.76062509 | 23.7717154 | 223.5567037 | 223.5513726
29 | 9.21 × 10−15 | 2.22 × 10−16 | 2.75 × 10−6 | 9.99 × 10−16 | 23.79965306 | 23.76744563 | 223.5547438 | 223.5513726
30 | 5.55 × 10−16 | 2.22 × 10−16 | 9.19 × 10−6 | 1.33 × 10−15 | 23.84596893 | 23.76024475 | 223.589931 | 223.5513726
Table A6. ANA and FDO normality test (p value) using Shapiro–Wilk test.
Algorithm | F1 | F2 | F3 | F4 | F5 | F7 | F9 | F10
ANA | 8.75 × 10−11 | 1.65376 × 10−6 | 0.00229666 | 9.48 × 10−12 | 5.40 × 10−11 | 0.659017 | 0.581344 | 1.10 × 10−11
FDO | 9.34 × 10−12 | 4.45 × 10−11 | 1.09 × 10−10 | 2.47 × 10−10 | 1.58 × 10−9 | 0.0223175 | 4.79981 × 10−5 | 4.59 × 10−11
Algorithm | F11 | F12 | F13 | F14 | F15 | F16 | F17 | F18
ANA | 0.384774 | 7.40037 × 10−6 | 4.32195 × 10−5 | 8.86 × 10−12 | 7.55 × 10−11 | 2.19701 × 10−6 | 0.149867 | 6.46068 × 10−5
FDO | 0.0515488 | 1.39 × 10−7 | NaN | 9.95 × 10−9 | 9.53 × 10−12 | 0.0560245 | 0.000011896 | 4.27 × 10−13
Note: The bold values of the tests indicate the significant results.
Table A7. ANA and FDO homoscedasticity test (p value) using Levene’s test.
F1 | F2 | F3 | F4 | F5 | F7 | F9 | F10
1.33 × 10−1 | 0.18407295 | 5.73 × 10−8 | 9.35 × 10−2 | 8.25 × 10−2 | 0.89784614 | 0.36480241 | 2.91 × 10−1
F11 | F12 | F13 | F14 | F15 | F16 | F17 | F18
0.03998253 | 0.03998253 | 2.7128 × 10−5 | 8.29 × 10−3 | 5.58 × 10−1 | 0.00033967 | 0.0225408 | 4.3528 × 10−5
Note: The bold values of the tests indicate the significant results.
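The p values in Table A7 come from Levene’s test for equality of variances between the ANA and FDO run distributions. As a rough, stdlib-only illustration of the statistic behind that test (not the authors’ code; the two samples below are made up, and turning W into a p value additionally requires the F-distribution, e.g., via SciPy), the mean-centered Levene W statistic can be computed as:

```python
from statistics import mean

def levene_w(*groups):
    """Levene's W statistic (mean-centered variant) for k groups.

    W = ((N - k) / (k - 1)) * sum_i n_i * (Zbar_i - Zbar)^2
                            / sum_ij (Z_ij - Zbar_i)^2,
    where Z_ij = |x_ij - mean(group_i)|.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    # Absolute deviations of each observation from its group mean.
    z = []
    for g in groups:
        m = mean(g)
        z.append([abs(x - m) for x in g])
    zbar_i = [mean(zi) for zi in z]               # per-group mean deviation
    zbar = sum(v for zi in z for v in zi) / n     # grand mean deviation
    num = sum(len(zi) * (zb - zbar) ** 2 for zi, zb in zip(z, zbar_i))
    den = sum((v - zb) ** 2 for zi, zb in zip(z, zbar_i) for v in zi)
    return (n - k) / (k - 1) * num / den

# Two made-up samples with clearly different spreads:
w = levene_w([1, 2, 3, 4], [10, 20, 30, 40])
```

A large W (relative to the F-distribution with k − 1 and N − k degrees of freedom) indicates unequal variances, which is why a homoscedasticity check precedes the choice between Student’s and Welch’s t-tests.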

References

  1. Russell, S.J.; Norvig, P.; Davis, E. Artificial Intelligence: A Modern Approach, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2010. [Google Scholar]
  2. Cooper, D. Heuristics for Scheduling Resource-Constrained Projects: An Experimental Investigation. Manag. Sci. 1976, 22, 1186–1194. [Google Scholar] [CrossRef]
  3. Yang, X.-S. Nature-Inspired Optimization Algorithms, 1st ed.; Elsevier: Amsterdam, The Netherlands, 2014. [Google Scholar]
  4. Lin, M.-H.; Tsai, J.-F.; Yu, C.-S. A Review of Deterministic Optimization Methods in Engineering and Management. Math. Probl. Eng. 2012, 2012, 756023. [Google Scholar] [CrossRef] [Green Version]
  5. Blake, A. Comparison of the Efficiency of Deterministic and Stochastic Algorithms for Visual Reconstruction. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 2–12. [Google Scholar] [CrossRef]
  6. Pierre, C.; Rennard, J.-P. Stochastic Optimization Algorithms. In Handbook of Research on Nature Inspired Computing for Economics and Management; IGI Global: Hershey, PA, USA, 2006; pp. 28–44. [Google Scholar]
  7. Francisco, M.; Revollar, S.; Vega, P.; Lamanna, R. A comparative study of deterministic and stochastic optimization methods for integrated design of processes. IFAC Proc. Vol. 2005, 38, 335–340. [Google Scholar] [CrossRef] [Green Version]
  8. Valadi, J.; Siarry, P. Applications of Metaheuristics in Process Engineering, 1st ed.; Springer International Publishing: Cham, Switzerland, 2014. [Google Scholar]
  9. Tzanetos, A.; Dounias, G. An Application-Based Taxonomy of Nature Inspired Intelligent Algorithms. Chios. 2019. Available online: http://mde-lab.aegean.gr/images/stories/docs/reportnii2019.pdf (accessed on 22 January 2020).
  10. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  11. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, 1st ed.; University of Michigan Press: Ann Arbor, MI, USA, 1975. [Google Scholar]
  12. Younas, I. Using Genetic Algorithms for Large Scale Optimization of Assignment, Planning and Rescheduling Problems; KTH Royal Institute of Technology: Stockholm, Sweden, 2014. [Google Scholar]
  13. Dorigo, M. Optimization, Learning and Natural Algorithms; Politecnico di Milano: Milano, Italy, 1992. [Google Scholar]
  14. Gambardella, L.M.; Dorigo, M. An Ant Colony System Hybridized with a New Local Search for the Sequential Ordering Problem. INFORMS J. Comput. 2000, 12, 237–255. [Google Scholar] [CrossRef] [Green Version]
  15. Gambardella, L.M.; Taillard, É.; Agazzi, G. MACS-VRPTW: A Multiple Ant Colony System for Vehicle Routing Problems with Time Windows. In New Ideas in Optimization; McGraw-Hill: London, UK, 1999; pp. 63–76. [Google Scholar]
  16. Liang, Y.-C.; Smith, A. An Ant Colony Optimization Algorithm for the Redundancy Allocation Problem (RAP). IEEE Trans. Reliab. 2004, 53, 417–423. [Google Scholar] [CrossRef]
  17. De A Silva, R.; Ramalho, G. Ant system for the set covering problem. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics. e-Systems and e-Man for Cybernetics in Cyberspace (Cat.No.01CH37236), Tucson, AZ, USA, 7–10 October 2001. [Google Scholar]
  18. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  19. Poli, R. Analysis of the Publications on the Applications of Particle Swarm Optimisation. J. Artif. Evol. Appl. 2008, 2008, 685175. [Google Scholar] [CrossRef]
  20. Karaboğa, D. An Idea Based on Honey Bee Swarm for Numerical Optimization. 2005. Available online: https://abc.erciyes.edu.tr/pub/tr06_2005.pdf (accessed on 22 January 2020).
  21. Singh, A. An artificial bee colony algorithm for the leaf-constrained minimum spanning tree problem. Appl. Soft Comput. 2009, 9, 625–631. [Google Scholar] [CrossRef]
  22. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  23. Zhang, Y.; Wu, L. Artificial Bee Colony for Two Dimensional Protein Folding. Adv. Electr. Eng. Syst. 2012, 1, 19–23. [Google Scholar]
  24. Crawford, B.; Soto, R.; Cuesta, R.; Paredes, F. Application of the Artificial Bee Colony Algorithm for Solving the Set Covering Problem. Sci. World J. 2014, 2014, 189164. [Google Scholar] [CrossRef]
  25. Hossain, M.; El-Shafie, A. Application of artificial bee colony (ABC) algorithm in search of optimal release of Aswan High Dam. J. Phys. Conf. Ser. 2013, 423, 012001. [Google Scholar] [CrossRef] [Green Version]
  26. Yang, X.-S. Nature-Inspired Metaheuristic Algorithms; Luniver Press: London, UK, 2010. [Google Scholar]
  27. Almasi, O.N.; Rouhani, M. A new fuzzy membership assignment and model selection approach based on dynamic class centers for fuzzy SVM family using the firefly algorithm. Turk. J. Electr. Eng. Comput. Sci. 2016, 24, 1797–1814. [Google Scholar] [CrossRef]
  28. Lones, M.A. Metaheuristics in nature-inspired algorithms. In Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation, Vancouver, BC, Canada, 12–16 July 2014; pp. 1419–1422. [Google Scholar]
  29. Weyland, D. A critical analysis of the harmony search algorithm—How not to solve sudoku. Oper. Res. Perspect. 2015, 2, 97–105. [Google Scholar] [CrossRef] [Green Version]
  30. Yang, X.-S.; Deb, S. Cuckoo Search via Lévy Flights; IEEE: Piscataway, NJ, USA, 2009; pp. 210–214. [Google Scholar]
  31. Tian, S.; Cheng, H.; Zhang, L.; Hong, S.; Sun, T.; Liu, L.; Zeng, P. Application of Cuckoo Search algorithm in power network planning. In Proceedings of the 2015 5th International Conference on Electric Utility Deregulation and Restructuring and Power Technologies (DRPT), Changsha, China, 26–29 November 2015; pp. 604–608. [Google Scholar]
  32. Sopa, M.; Angkawisittpan, N. An Application of Cuckoo Search Algorithm for Series System with Cost and Multiple Choices Constraints. Procedia Comput. Sci. 2016, 86, 453–456. [Google Scholar] [CrossRef] [Green Version]
  33. Yang, X.-S.; Deb, S. Engineering optimisation by cuckoo search. Int. J. Math. Model. Numer. Optim. 2010, 1, 330–343. [Google Scholar] [CrossRef]
  34. Yang, X.-S. A New Metaheuristic Bat-Inspired Algorithm. Nat. Inspired Coop. Strateg. Optim. 2010, 284, 65–74. [Google Scholar]
  35. Alihodzic, A.; Tuba, M. Bat Algorithm (BA) for Image Thresholding. Baltimore. 2013. Available online: http://www.wseas.us/e-library/conferences/2013/Baltimore/TESIMI/TESIMI-50.pdf (accessed on 22 January 2020).
  36. Raghavan, S.; Sarwesh, P.; Marimuthu, C.; Chandrasekaran, K. Bat algorithm for scheduling workflow applications in cloud. In Proceedings of the 2015 International Conference on Electronic Design, Computer Networks & Automated Verification (EDCAV), Shillong, India, 29–30 January 2015; pp. 139–144. [Google Scholar]
  37. Mirjalili, S.; Mirjalili, S.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  38. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2015, 27, 1053–1073. [Google Scholar] [CrossRef]
  39. Mirjalili, S. The Ant Lion Optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  40. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  41. Touma, H.J. Study of The Economic Dispatch Problem on IEEE 30-Bus System using Whale Optimization Algorithm. Int. J. Eng. Technol. Sci. 2016, 3, 11–18. [Google Scholar] [CrossRef]
  42. Sayed, G.I.; Darwish, A.; Hassanien, A.E.; Pan, J.-S. Breast Cancer Diagnosis Approach Based on Meta-Heuristic Optimization Algorithm Inspired by the Bubble-Net Hunting Strategy of Whales. In International Conference on Genetic and Evolutionary Computing; Springer: Cham, Switzerland, 2016; Volume 536, pp. 306–313. [Google Scholar]
  43. Kumar, C.S.; Rao, R.S.; Cherukuri, S.K.; Rayapudi, S.R. A Novel Global MPP Tracking of Photovoltaic System based on Whale Optimization Algorithm. Int. J. Renew. Energy Dev. 2016, 5, 225. [Google Scholar] [CrossRef]
  44. Aljarah, I.; Faris, H.; Mirjalili, S. Optimizing connection weights in neural networks using the whale optimization algorithm. Soft Comput. 2016, 22, 1–15. [Google Scholar] [CrossRef]
  45. Hassanien, A.E.; Abd Elfattah, M.; Aboulenin, S.; Schaefer, G.; Zhu, S.Y.; Korovin, I. Historic handwritten manuscript binarisation using whale optimization. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016. [Google Scholar]
  46. Dao, T.-K.; Pan, T.-S.; Pan, J.-S. A multi-objective optimal mobile robot path planning based on whale optimization algorithm. In Proceedings of the 2016 IEEE 13th International Conference on Signal Processing (ICSP), Chengdu, China, 6–10 November 2016; pp. 337–342. [Google Scholar]
  47. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  48. Shamsaldin, A.S.; AlRashid, R.; Agha, R.A.A.-R.; Al-Salihi, N.K.; Mohammadi, M. Donkey and smuggler optimization algorithm: A collaborative working approach to path finding. J. Comput. Des. Eng. 2019, 6, 562–583. [Google Scholar] [CrossRef]
  49. Abdullah, J.M.; Rashid, T.A. Fitness Dependent Optimizer: Inspired by the Bee Swarming Reproductive Process. IEEE Access 2019, 6, 43473–43486. [Google Scholar] [CrossRef]
  50. Muhammed, D.A.; Saeed, S.A.M.; Rashid, T.A. Improved Fitness-Dependent Optimizer Algorithm. IEEE Access 2020, 8, 19074–19088. [Google Scholar] [CrossRef]
  51. Pant, M.; Thangaraj, R.; Abraham, A. Particle Swarm Optimization: Performance Tuning and Empirical Analysis. In Foundations of Computational Intelligence; Springer: Berlin, Germany, 2009; Volume 3, pp. 101–128. [Google Scholar]
  52. Ahmed, A.M.; Rashid, T.A.; Saeed, S.A.M. Dynamic Cat Swarm Optimization algorithm for backboard wiring problem. Neural Comput. Appl. 2021, 33, 13981–13997. [Google Scholar] [CrossRef]
  53. Mohammed, H.; Rashid, T. A novel hybrid GWO with WOA for global numerical optimization and solving pressure vessel design. Neural Comput. Appl. 2020, 32, 14701–14718. [Google Scholar] [CrossRef] [Green Version]
  54. Maeterlinck, M. The Life of the White Ant; Dodd, Mead & Company: London, UK, 1939. [Google Scholar]
  55. Werber, B. Empire of the Ants; Le Livre de Poche: Paris, France, 1991. [Google Scholar]
  56. Hölldobler, B.; Wilson, E. The Ants; Belknap Press: Berlin, Germany, 1990. [Google Scholar]
  57. Hölldobler, B.; Wilson, E. Journey to the Ants: A Story of Scientific Exploration, 1st ed.; Harvard University Press: Cambridge, MA, USA, 1994. [Google Scholar]
  58. Franks, N.R.; Deneubourg, J.-L. Self-organizing nest construction in ants: Individual worker behavior and the nest’s dynamics. Anim. Behav. 1997, 54, 779–796. [Google Scholar] [CrossRef] [Green Version]
  59. Sumpter, D.J.T. Structures. In Collective Animal Behavior; Princeton University Press: Princeton, NJ, USA, 2010; pp. 151–172. [Google Scholar]
  60. Katoch, S.; Singh Chauhan, S.; Kumar, V. A review on genetic algorithm: Past, present, and future. Multimed. Tools Appl. 2021, 80, 8091–8126. [Google Scholar] [CrossRef]
  61. Meraihi, Y.; Ramdane-Cherif, A.; Acheli, D.; Mahseur, M. Dragonfly algorithm: A comprehensive review and applications. Neural Comput. Appl. 2020, 32, 16625–16646. [Google Scholar] [CrossRef]
  62. Price, K.; Awad, N.; Suganthan, P. The 100-Digit Challenge: Problem Definitions and Evaluation Criteria for the 100-Digit Challenge Special Session and Competition on Single Objective Numerical Optimization; Nanyang Technological University: Singapore, 2018. [Google Scholar]
  63. Cortés-Toro, E.M.; Crawford, B.; Gómez-Pulido, J.A.; Soto, R.; Lanza-Gutiérrez, J.M. A New Metaheuristic Inspired by the Vapour-Liquid Equilibrium for Continuous Optimization. Appl. Sci. 2018, 8, 2080. [Google Scholar] [CrossRef] [Green Version]
  64. LaTorre, A.; Molina, D.; Osaba, E.; Poyatos, J.; Del Ser, J.; Herrera, F. A prescription of methodological guidelines for comparing bio-inspired optimization algorithms. Swarm Evol. Comput. 2021, 67, 100973. [Google Scholar] [CrossRef]
  65. Baggett, B.M. Optimization of Aperiodically Spaced Phased Arrays for Wideband Applications. 3 May 2011. Available online: https://vtechworks.lib.vt.edu/bitstream/handle/10919/32532/Baggett_BMW_T_2011_2.pdf?sequence=1&isAllowed=y (accessed on 12 May 2020).
  66. Diao, J.; Kunzler, J.W.; Warnick, K.F. Sidelobe Level and Aperture Efficiency Optimization for Tiled Aperiodic Array Antennas. IEEE Trans. Antennas Propag. 2017, 65, 7083–7090. [Google Scholar] [CrossRef]
  67. Lebret, H.; Boyd, S. Antenna array pattern synthesis via convex optimization. IEEE Trans. Signal Process. 1997, 45, 526–532. [Google Scholar] [CrossRef] [Green Version]
  68. Jin, N.; Rahmat-Samii, Y. Advances in Particle Swarm Optimization for Antenna Designs: Real-Number, Binary, Single-Objective and Multiobjective Implementations. IEEE Trans. Antennas Propag. 2007, 55, 556–567. [Google Scholar] [CrossRef]
  69. Swagatam, D.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems. Available online: https://sci2s.ugr.es/sites/default/files/files/TematicWebSites/EAMHCO/contributionsCEC11/RealProblemsTech-Rep.pdf (accessed on 30 June 2020).
Figure 1. Some of the most prominent nature-inspired metaheuristic algorithms in the literature.
Figure 2. Ant nesting.
Figure 3. T and Tprevious computation: (a) T is the hypotenuse of a right triangle whose sides are the difference between the current and best worker ants’ deposition positions and the difference between their fitness values; (b) Tprevious is the hypotenuse of a right triangle whose sides are the difference between the current worker ant’s previous deposition position and the best deposition position and the difference between their fitness values.
Figure 4. Flowchart of ANA.
Figure 5. Box and whisker plot of ANA and FDO on the standard benchmark functions: (a) ANA versus FDO on F1; (b) ANA versus FDO on F2; (c) ANA versus FDO on F3; (d) ANA versus FDO on F4; (e) ANA versus FDO on F5; (f) ANA versus FDO on F7; (g) ANA versus FDO on F9; (h) ANA versus FDO on F10; (i) ANA versus FDO on F11; (j) ANA versus FDO on F12; (k) ANA versus FDO on F13; (l) ANA versus FDO on F14; (m) ANA versus FDO on F15; (n) ANA versus FDO on F16; (o) ANA versus FDO on F17; (p) ANA versus FDO on F18.
Figure 6. Configuration of an array with 10 elements.
Table 1. Entities in the ANA algorithm.
Nature | Algorithm
Worker ant | Search agent
Deposition position | Potential solution
Deposition position specification | Fitness function
Worker ant’s decision factor | Deposition weight
Fittest deposition position | Optimum solution
Stationary stone and/or nestmate | Previous deposition position
Table 2. ANA’s mathematical notations.
Notation | Description
t | Current iteration
i | Current worker ant
m | Iteration number
n | Worker ant number
Xt,i | Worker ant’s current deposition position
Xt,ibest | Worker ant’s local best deposition position
Xt,iprevious | Worker ant’s previous deposition position
Xt,ifitness | Worker ant’s current deposition position’s fitness
Xt,ibestfitness | Worker ant’s local best deposition position fitness
Xt,ipreviousfitness | Worker ant’s previous deposition position fitness
Xt+1,i | Worker ant’s new deposition position
ΔXt+1,i | Worker ant’s deposition position’s rate of change
T | Worker ant’s current deposition tendency rate
Tprevious | Worker ant’s previous deposition tendency rate
dw | Deposition weight
r | Random number in the [−1, 1] range
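The notations in Table 2 combine into one position update per worker ant per iteration: the tendency rates T and Tprevious are hypotenuses built from position and fitness differences (cf. Figure 3), they drive the deposition weight dw, and the new position is the old one plus a rate of change. The paper’s method section defines the exact dw and ΔX formulas, which are not reproduced in this excerpt; the sketch below therefore uses an assumed placeholder (dw as the ratio Tprevious/T, ΔX = dw·r) purely to show how the pieces fit together for a one-dimensional agent:

```python
import math
import random

def ana_step(x, x_prev, x_best, f, f_prev, f_best):
    """One illustrative ANA-style update for a 1-D search agent.

    T and T_previous are hypotenuses of right triangles whose sides are a
    position difference and a fitness difference (cf. Figure 3).  The exact
    deposition-weight and rate-of-change formulas are defined in the paper;
    the forms used here are assumptions for illustration only.
    """
    t_current = math.hypot(x - x_best, f - f_best)             # T
    t_previous = math.hypot(x_prev - x_best, f_prev - f_best)  # T_previous
    # Assumed deposition weight: ratio of the two tendency rates.
    dw = t_previous / t_current if t_current != 0 else 0.0
    r = random.uniform(-1.0, 1.0)        # random factor in [-1, 1]
    delta_x = dw * r                     # assumed rate of change
    return x + delta_x                   # X(t+1) = X(t) + dX(t+1)
```

Because r spans [−1, 1], the agent can step toward or away from the best-known deposition position, which is what lets one weight serve both exploration and exploitation.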
Table 3. ANA, DA, PSO, and GA test results on the standard benchmark functions [38].
Test Function | ANA Mean | ANA SD | DA [38] Mean | DA [38] SD | PSO [38] Mean | PSO [38] SD | GA [38] Mean | GA [38] SD
F1 | 0.016299162 | 0.062457134 | 2.85 × 10−18 | 7.16 × 10−18 | 4.20 × 10−18 | 1.31 × 10−17 | 748.5972 | 324.9262
F2 | 2.36 × 10−5 | 2.15 × 10−5 | 1.49 × 10−5 | 3.76 × 10−5 | 0.003154 | 0.009811 | 5.971358 | 1.533102
F3 | 1.239172335 | 1.035406371 | 1.29 × 10−6 | 2.10 × 10−6 | 0.001891 | 0.003311 | 1949.003 | 994.2733
F4 | 2.37 × 10−6 | 8.86 × 10−16 | 0.000988 | 0.002776 | 0.001748 | 0.002515 | 21.16304 | 2.605406
F5 | 14.95306041 | 30.63521072 | 7.600558 | 6.786473 | 63.45331 | 80.12726 | 133307.1 | 85,007.62
F7 | 0.770696384 | 0.366042632 | 0.010293 | 0.004691 | 0.005973 | 0.003583 | 0.166872 | 0.072571
F9 | 25.4622187 | 34.31390098 | 716.0188 | 39.479113 | 10.44724 | 7.879807 | 25.51886 | 6.66936
F10 | 5.54 × 10−15 | 2.37 × 10−15 | 0.23103 | 0.487053 | 0.280137 | 0.601817 | 9.498785 | 1.271393
F11 | 0.411189712 | 0.073239096 | 0.193354 | 0.073495 | 0.083463 | 0.035067 | 7.719959 | 3.62607
F12 | 3.219956841 | 2.52657309 | 0.031101 | 0.098349 | 8.57 × 10−11 | 2.71 × 10−10 | 1858.502 | 5820.215
F13 | 1.76 × 10−23 | 9.47 × 10−23 | 103.742 | 91.24364 | 150 | 135.4006 | 130.0991 | 21.32037
F14 | 4.26 × 10−14 | 1.54 × 10−13 | 193.0171 | 80.6332 | 188.1951 | 157.2834 | 116.0554 | 19.19351
F15 | 4.89 × 10−6 | 3.31 × 10−6 | 458.2962 | 165.3724 | 263.0948 | 187.1352 | 383.9184 | 36.60532
F16 | 23.76092355 | 0.048390796 | 596.6629 | 171.0631 | 466.5429 | 180.9493 | 503.0485 | 35.79406
F17 | 223.5622125 | 0.008813889 | 229.9515 | 184.6095 | 136.1759 | 160.0187 | 118.4385 | 1.00183
F18 | 31.51015225 | 0.020777872 | 679.588 | 199.4014 | 741.6341 | 206.7296 | 544.1018 | 13.30161
Note: The bold values of the mean indicate the best solution has been obtained by the algorithm in comparison to the others.
Table 4. ANA, PSO, GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO test results on the standard benchmark functions [51].
Test Function | Statistic | ANA (This Work) | PSO [51] | GPSO [51] | EPSO [51] | LNPSO [51] | VC-PSO [51] | SO-PSO [51]
F1 | Mean | 0 | 1.17 × 10−45 | 1.11 × 10−45 | 1.17 × 10−45 | 1.11 × 10−45 | 1.17 × 10−108 | 1.51 × 10−108
F1 | Standard deviation | 0 | 5.22 × 10−46 | 4.76 × 10−46 | 5.22 × 10−46 | 4.76 × 10−46 | 4.36 × 10−108 | 4.46 × 10−108
F5 | Mean | 14.70678492 | 2.19173 | 9.99284 | 8.99516 | 54.40573 | 86.30326 | 6.81079
F5 | Standard deviation | 0.17792416 | 1.62 × 104 | 3.16891 | 3.95936 | 44.12124 | 43.99428 | 3.76973
F7 | Mean | 0.84120263 | 8.681602 | 0.63602 | 0.380297 | 0.537461 | 0.410042 | 0.806175
F7 | Standard deviation | 0.55592343 | 9.001534 | 0.29658 | 0.281234 | 0.285361 | 0.294763 | 0.868211
F9 | Mean | 1.12 × 10−7 | 22.33916 | 9.75054 | 12.17397 | 23.50713 | 9.99929 | 8.95459
F9 | Standard deviation | 3.32 × 10−7 | 15.93204 | 5.43379 | 9.274301 | 15.30457 | 4.08386 | 2.65114
F10 | Mean | 5.42 × 10−15 | 3.48 × 10−18 | 3.14 × 10−18 | 3.37 × 10−18 | 3.37 × 10−18 | 5.47 × 10−19 | 4.59 × 10−19
F10 | Standard deviation | 1.74 × 10−15 | 8.36 × 10−19 | 8.60 × 10−19 | 8.60 × 10−19 | 8.60 × 10−19 | 1.78 × 10−18 | 1.54 × 10−18
F11 | Mean | 0.92650251 | 0.031646 | 0.00475 | 0.011611 | 0.011009 | 0.00147 | 0.001847
F11 | Standard deviation | 0.02222257 | 0.025322 | 0.01267 | 0.019728 | 0.019186 | 0.00469 | 0.004855
Note: The bold values of the mean indicate the best solution has been obtained by the algorithm in comparison to the others.
Table 5. ANA, DA, WOA, and SSA test results on CEC-C06 2019 benchmark functions [49].
Test Function | ANA Mean | ANA SD | DA [49] Mean | DA [49] SD | WOA [49] Mean | WOA [49] SD | SSA [49] Mean | SSA [49] SD
CEC01 | - | - | 5.43 × 1010 | 6.69 × 1010 | 4.11 × 1010 | 5.42 × 1010 | 6.05 × 109 | 4.75 × 109
CEC02 | 4 | 2.87 × 10−14 | 78.0368 | 87.7888 | 17.3495 | 0.0045 | 18.3434 | 0.0005
CEC03 | 13.70240422 | 2.01 × 10−11 | 13.7026 | 0.0007 | 13.7024 | 0 | 13.7025 | 0.0003
CEC04 | 38.50887822 | 10.07245727 | 344.3564 | 14.0983 | 94.6752 | 48.5634 | 1.6936 | 22.2191
CEC05 | 1.224598709 | 0.114632394 | 2.5572 | 0.3245 | 2.7342 | 0.2917 | 2.2084 | 0.1064
CEC06 | - | - | 9.8955 | 1.6404 | 10.7085 | 1.0325 | 6.0798 | 1.4873
CEC07 | 116.5962143 | 8.825046006 | 578.953 | 329.398 | 490.684 | 194.832 | 410.396 | 290.556
CEC08 | 5.472814997 | 0.429461877 | 6.8734 | 0.5015 | 6.909 | 0.4269 | 6.3723 | 0.5862
CEC09 | 2.000963996 | 0.00341781 | 6.0467 | 2.871 | 5.9371 | 1.6566 | 3.6704 | 0.2362
CEC10 | 2.718281828 | 4.44 × 10−16 | 21.2604 | 0.1715 | 21.2761 | 0.1111 | 21.04 | 0.078
Note: The bold values of the tests indicate the significant results.
Table 6. ANA, DA, PSO, and GA ranking on the standard benchmark functions.
Test Function | ANA | DA | PSO | GA
F1 | 3 | 1 | 2 | 4
F2 | 2 | 1 | 3 | 4
F3 | 3 | 1 | 2 | 4
F4 | 1 | 2 | 3 | 4
F5 | 2 | 1 | 3 | 4
F7 | 4 | 2 | 1 | 3
F9 | 3 | 2 | 1 | 4
F10 | 1 | 2 | 3 | 4
F11 | 3 | 2 | 1 | 4
F12 | 3 | 2 | 1 | 4
F13 | 1 | 2 | 4 | 3
F14 | 1 | 4 | 3 | 2
F15 | 1 | 4 | 2 | 3
F16 | 1 | 4 | 2 | 3
F17 | 3 | 4 | 2 | 1
F18 | 1 | 3 | 4 | 2
Table 7. ANA, DA, PSO, and GA total number of ranking on the standard benchmark functions.
Rank | ANA | DA | PSO | GA
First | 7 | 4 | 4 | 1
Second | 2 | 7 | 5 | 2
Third | 6 | 1 | 5 | 4
Fourth | 1 | 4 | 2 | 9
Table 8. ANA, PSO, GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO ranking on the standard benchmark functions.
Test Function | ANA | PSO | GPSO | EPSO | LNPSO | VC-PSO | SO-PSO
F1 | 1 | 6 | 4 | 6 | 4 | 2 | 3
F5 | 6 | 7 | 5 | 4 | 1 | 2 | 3
F7 | 6 | 7 | 4 | 1 | 3 | 2 | 5
F9 | 1 | 6 | 3 | 5 | 7 | 4 | 2
F10 | 7 | 6 | 3 | 4 | 4 | 2 | 1
F11 | 7 | 6 | 3 | 5 | 4 | 1 | 2
Table 9. ANA, PSO, GPSO, EPSO, LNPSO, VC-PSO, and SO-PSO total number of ranking on the standard benchmark functions.
Rank | ANA | PSO | GPSO | EPSO | LNPSO | VC-PSO | SO-PSO
First | 2 | 0 | 0 | 1 | 1 | 1 | 1
Second | 0 | 0 | 0 | 0 | 0 | 4 | 2
Third | 0 | 0 | 3 | 0 | 1 | 0 | 2
Fourth | 0 | 0 | 2 | 2 | 3 | 1 | 0
Fifth | 0 | 0 | 1 | 2 | 0 | 0 | 1
Sixth | 2 | 4 | 0 | 1 | 0 | 0 | 0
Seventh | 2 | 2 | 0 | 0 | 1 | 0 | 0
Table 10. ANA ranking on standard benchmark functions by type and in total.
Test Function Type | Total Ranking | Total Ranking/No. of Functions | Ranking (1–4)
Unimodal | 15 | 15/6 | 2.50
Multimodal | 10 | 10/4 | 2.50
Composite | 8 | 8/6 | 1.33
Total | 33 | 33/16 | 2.06
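Table 10’s per-type figures follow directly from ANA’s column in Table 6, with F1–F5 and F7 counted as unimodal, F9–F12 as multimodal, and F13–F18 as composite. A short sanity-check sketch of that aggregation:

```python
# ANA's per-function ranks from Table 6, grouped by benchmark type.
ranks = {
    "unimodal":   [3, 2, 3, 1, 2, 4],       # F1-F5, F7
    "multimodal": [3, 1, 3, 3],             # F9-F12
    "composite":  [1, 1, 1, 1, 3, 1],       # F13-F18
}

totals = {k: sum(v) for k, v in ranks.items()}              # total ranking per type
averages = {k: sum(v) / len(v) for k, v in ranks.items()}   # average rank (1-4)
overall = sum(totals.values()) / 16                          # 33 / 16 = 2.0625
```

The composite average of 8/6 ≈ 1.33 is ANA’s strongest category, matching the paper’s claim of competitive performance on composite test functions.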
Table 11. The p values of the Student’s t-test, Welch’s t-test, and Wilcoxon signed-rank test of ANA and FDO on the standard benchmark functions.
Test Function | Student’s t-Test | Welch’s t-Test | Wilcoxon Signed-Rank Test
F1 | 0.066202 | 0.137803 | 1.86 × 10−9
F2 | 0.092068 | 0.189324 | 0.685047
F3 | 0.00001 | 3.0678 × 10−6 | 1.86 × 10−9
F4 | 0.046755 | 0.0988506 | 3.73 × 10−9
F5 | 0.04671 | 0.0976895 | 0.404495
F7 | 0.000517 | 0.00104329 | 0.00761214
F9 | 0.000021 | 5.827 × 10−5 | 1.061 × 10−5
F10 | 0.5 | 0.295841 | 0.00559672
F11 | 0.001898 | 0.00403645 | 0.00322299
F12 | 0.001298 | 0.00372609 | 5.1446 × 10−6
F13 | 0.00001 | 1.95 × 10−13 | 1.2508 × 10−6
F14 | - | 0.0120538 | 3.73 × 10−9
F15 | 0.5 | 0.536272 | 0.00021761
F16 | 0.00001 | 4.0245 × 10−5 | 1.86 × 10−9
F17 | 0.100256 | 0.203807 | 0.761065
F18 | 0.00001 | 1.3746 × 10−6 | 1.86 × 10−9
Note: The bold p values of the tests indicate the significant results.
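The Welch column in Table 11 relaxes Student’s equal-variance assumption, which matters here because Table A7 rejects homoscedasticity on several functions. A minimal stdlib sketch of the Welch statistic and the Welch–Satterthwaite degrees of freedom (on made-up samples, not the paper’s 30-run results; converting t to a p value additionally requires the t-distribution CDF, e.g., from SciPy):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with possibly unequal variances."""
    va = variance(a) / len(a)   # squared standard error of sample a
    vb = variance(b) / len(b)   # squared standard error of sample b
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Toy samples with equal means of difference 3 but unequal spreads:
t, df = welch_t([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
```

Because df is estimated per pair of samples rather than fixed at n1 + n2 − 2, Welch’s test stays valid when the ANA and FDO run distributions have different variances, while the Wilcoxon signed-rank test drops the normality assumption altogether.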
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Hama Rashid, D.N.; Rashid, T.A.; Mirjalili, S. ANA: Ant Nesting Algorithm for Optimizing Real-World Problems. Mathematics 2021, 9, 3111. https://doi.org/10.3390/math9233111