Article

Swallow Search Algorithm (SWSO): A Swarm Intelligence Optimization Approach Inspired by Swallow Bird Behavior

by Farah Sami Khoshaba 1, Shahab Wahhab Kareem 1,2,* and Roojwan Sc Hawezi 1
1 Information System Engineering Department, Technical College of Computer and Informatic Engineering, Erbil Polytechnic University, Erbil 44001, Iraq
2 Department of Computer Engineering, College of Engineering, Knowledge University, Erbil 44001, Iraq
* Author to whom correspondence should be addressed.
Computers 2025, 14(9), 345; https://doi.org/10.3390/computers14090345
Submission received: 15 July 2025 / Revised: 15 August 2025 / Accepted: 18 August 2025 / Published: 22 August 2025
(This article belongs to the Special Issue Operations Research: Trends and Applications)

Abstract

Swarm Intelligence (SI) algorithms have been widely applied to complex optimization problems because they are simple, flexible, and efficient. This paper proposes a new SI algorithm inspired by the highly synchronized foraging and migration behaviors of swallows. The Swallow Search Optimization algorithm (SWSO) exploits these behaviors to strengthen both exploration and exploitation during the optimization process. Unlike many other birds, swallows perform fast directional changes and intricate aerial acrobatics with great precision while foraging. Their flight patterns are also very efficient: they transition smoothly between flapping and gliding to save energy over the long distances of migration, and they adjust their wing shape instantaneously to optimize performance across a wide range of flying conditions. SWSO combines these biologically inspired flight dynamics into a new computational model aimed at enhancing search performance on rugged fitness landscapes. The design of the algorithm simulates the swallow's social and energy-saving behaviors, translating them into exploration, exploitation, control mechanisms, and convergence control. To verify its effectiveness, SWSO is applied to many benchmark problems, including unimodal, multimodal, and fixed-dimension functions, as well as the CEC2019 benchmark, which consists of some of the most widely used benchmark functions. Comparative tests are conducted against more than 30 state-of-the-art metaheuristic algorithms, including PSO, MFO, WOA, GWO, and GA, among others. The performance measures include best fitness, convergence rate, robustness, and statistical significance. Moreover, SWSO is applied to real-life engineering design problems to demonstrate its practicality and generality.
The results confirm that the proposed algorithm offers a competitive and reliable solution methodology, making it a valuable addition to the field of swarm-based optimization.

Graphical Abstract

1. Introduction

Optimization is a mathematical and computational technique for finding the best possible solution or outcome from a set of available alternatives while satisfying certain constraints [1]. It can be written mathematically as [2]:
$$\underset{x \in \mathbb{R}^n}{\text{minimize}} \; f_i(x) \quad \text{or} \quad \underset{x \in \mathbb{R}^n}{\text{maximize}} \; f_i(x), \qquad i = 1, 2, \ldots, M \tag{1}$$
subject to
$$h_j(x) = 0, \qquad j = 1, 2, \ldots, J \tag{2}$$
$$g_k(x) \le 0 \quad \text{or} \quad g_k(x) \ge 0, \qquad k = 1, 2, \ldots, K \tag{3}$$
where $f_i(x)$, $h_j(x)$, and $g_k(x)$ are functions of the design vector
$$x = (x_1, x_2, x_3, \ldots, x_n)^T \tag{4}$$
Here, the components $x_i$ of Equation (4) are called design or decision variables, and they can be real-valued continuous, integer-valued discrete, or a mix of these two types. The function $f_i(x)$ is called the cost function or objective function; it indicates how good or bad a solution is. $\mathbb{R}^n$, the space formed by the decision variables, is called the search space, while the space formed by the objective function values is called the solution space. The equalities on $h_j(x)$ and inequalities on $g_k(x)$ are called constraints [2].
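As a concrete illustration, the general formulation above can be written directly in code. The sphere objective and the two constraints below are hypothetical examples chosen for this sketch, not functions taken from the paper:

```python
import numpy as np

def objective(x):
    """Example objective f(x): the sphere function (minimization)."""
    return float(np.sum(x ** 2))

def equality_constraint(x):
    """Example h(x) = 0: components must sum to 1."""
    return float(np.sum(x) - 1.0)

def inequality_constraint(x):
    """Example g(x) <= 0: solution must lie inside the unit ball."""
    return float(np.sum(x ** 2) - 1.0)

# A candidate design vector x = (x1, ..., xn)^T from the search space R^n
x = np.array([0.5, 0.5])
print(objective(x))              # value in the solution space: 0.5
print(equality_constraint(x))    # 0.0 -> feasible w.r.t. h
print(inequality_constraint(x))  # -0.5 <= 0 -> feasible w.r.t. g
```

A candidate is feasible when every equality constraint evaluates to zero and every inequality constraint is satisfied; the objective value alone says nothing about feasibility.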
Optimization has been applied to many problems and fields, starting from simple problems and ending with solving complex ones in which the environment is constantly changing and a quick decision must be made. These fields include engineering, economics, logistics, and operations research.
A simple way to classify optimization algorithms is to look at the nature of the algorithm; this leads to two categories: deterministic algorithms and stochastic algorithms. Deterministic algorithms follow a strict, well-defined, and repeatable procedure without incorporating any randomness or stochastic elements. These algorithms work well for problems that are linear, continuous, differentiable, and convex. However, they often struggle with real-world optimization problems, which are usually non-linear, non-convex, high-dimensional, and NP-hard, with discrete or combinatorial search spaces [3]. To solve these problems, stochastic (or probabilistic) approaches have been developed. There are two types of stochastic algorithms: heuristic and metaheuristic. The term heuristic originates from the Greek verb heurísko, which means “find out” or “to discover” [4]. Behaving like a baby discovering new things by trial and error, heuristic optimization algorithms are used when solutions to a complex optimization problem can be found in a reasonable time, but there is no guarantee that optimal solutions are reached.
Further development of heuristic algorithms led to metaheuristic algorithms. The term metaheuristic was coined by Glover in 1989 [5] by adding the Greek prefix “meta,” which means ‘beyond’ or ‘higher level,’ to “heuristic” to refer to a set of methodologies conceptually ranked above heuristics in the sense that they guide the design of heuristics. In addition, all metaheuristic algorithms use a certain tradeoff between randomization and local search. A metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a lower-level procedure or heuristic (partial search algorithm) that may provide a sufficiently good solution for an optimization problem. Metaheuristics are strategies applied to find solutions to problems with significantly large search spaces, making them very difficult to explore using traditional methods [6].

1.1. Metaheuristic Algorithms Classifications

Metaheuristic algorithms can be classified based on their design inspiration and operational structure into four main types:
  • Based on search strategy: Metaheuristics can be classified into single-solution methods, which iteratively improve a single candidate solution, for example, Simulated Annealing (SA) [7] and Tabu Search (TS) [5], and population-based methods [8], which work with a group of solutions simultaneously, promoting diversity and exploration, for example, Genetic Algorithms (GAs) [9], Particle Swarm Optimization (PSO) [10], Ant Colony Optimization (ACO) [11], and Differential Evolution (DE) [2].
  • Based on the nature of inspiration: Metaheuristics can be classified into three types: bio-inspired algorithms, physics-based algorithms [12], and social and cultural algorithms. The first type mimics the way living organisms solve their problems in their environments, such as how birds flock, ants find food, or how evolution drives survival through natural selection. By imitating these behaviors, bio-inspired algorithms can solve complex optimization problems where traditional algorithms might struggle. There are several examples of this type of optimization, such as evolutionary-based [13] GA, swarm-based PSO [14], ACO, artificial bee colony (ABC) [15], and behavioral-based Cuckoo Search [16] and Firefly Algorithm [17]. The second type is inspired by physical laws and phenomena. For example, Simulated Annealing (thermodynamics) [18] and the gravitational search algorithm (Newtonian gravity) [19,20]. The third one is based on social behaviors or human-inspired processes, such as Harmony Search [21] (musical improvisation) and Teaching–Learning-Based Optimization (classroom dynamics) [22].
  • Based on trajectory control: Metaheuristics can be classified as deterministic, stochastic, or a hybrid of both. Deterministic algorithms follow a rigorous procedure with repeatable design variables and functions, for example, hill climbing. Stochastic algorithms, by contrast, always include some randomness to explore the search space and avoid local optima; most metaheuristics fall into this category [3].
  • Based on single or multiple solutions: Metaheuristics can be classified as trajectory-based or population-based [22]. Trajectory-based metaheuristics focus on finding a single solution and use iterative improvement to refine that solution. For example, simulated annealing [23]. Population-based metaheuristics focus on finding multiple solutions and use a population of solutions to explore the search space. For example, the genetic algorithms (GAs), Ant Colony Optimization (ACO), and particle swarm optimization (PSO) [24].
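The distinction between trajectory-based and population-based search can be sketched with two minimal templates. The code below is illustrative only; the step sizes, population size, and toy objective are assumptions, not any specific algorithm from the literature:

```python
import random

def hill_climb(f, x0, step=0.1, iters=200):
    """Trajectory-based template: refine a single candidate solution."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        if fc < fx:          # keep the move only if it improves
            x, fx = cand, fc
    return x, fx

def population_search(f, n=20, lo=-5.0, hi=5.0, iters=200):
    """Population-based template: evolve a set of candidates toward the best."""
    pop = [random.uniform(lo, hi) for _ in range(n)]
    for _ in range(iters):
        best = min(pop, key=f)
        # each agent takes a random step biased toward the current best,
        # plus a small Gaussian perturbation to preserve diversity
        pop = [x + random.uniform(0, 1) * (best - x) + random.gauss(0, 0.05)
               for x in pop]
    best = min(pop, key=f)
    return best, f(best)

f = lambda x: (x - 2.0) ** 2   # toy objective with optimum at x = 2
print(hill_climb(f, x0=0.0))
print(population_search(f))
```

Both templates should end near x = 2 on this toy objective; the population version additionally maintains several candidates at once, which is what promotes the diversity and exploration described above.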

1.2. Optimization Problems Classification

Understanding the different optimization problem types helps in choosing appropriate mathematical models and solution approaches. Each type has its own challenges and requires special solution techniques. In general, optimization problems can be divided into six main categories based on their objectives, structural properties, constraints, solution methods, computational complexity, and application domains. In practice, optimization problems usually involve a combination of these six aspects.
The classification of these problem types appears in Figure 1, according to research from a literature review and additional optimization studies.
Based on the problem objective, optimization can be single-objective [25], which occurs when M = 1 in Equation (1), in which case the goal is to minimize or maximize a single function, or multiobjective [26,27], which occurs when M > 1, in which case multiple conflicting objectives must be optimized at the same time. In the single-objective case there is a single global best for the entire swarm and a personal best for each particle; the particles update their velocity and position to minimize or maximize this one objective, for example, finding the shortest route between two cities. In the multiobjective case, instead of a single solution, the aim is to find a Pareto front [28], a set of optimal solutions in which no objective can be improved without worsening another; an external archive stores the non-dominated solutions that form the Pareto front [28]. An example is optimizing a vehicle design to minimize fuel consumption while maximizing performance and safety. There is also a classification based on the linearity of the involved functions: if all objective and constraint functions, namely $f_i(x)$, $h_j(x)$, and $g_k(x)$, are linear, the optimization problem is called linear; when at least one of these functions is non-linear, it is considered a non-linear optimization problem [29].
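The Pareto-front idea can be made concrete with a small dominance check, assuming both objectives are minimized. The design points below are invented for illustration:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated objective vectors (the Pareto front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (fuel consumption, inverse performance) pairs; lower is better
designs = [(3.0, 5.0), (2.0, 6.0), (4.0, 4.0), (3.5, 5.5)]
print(pareto_front(designs))   # (3.5, 5.5) is dominated by (3.0, 5.0)
```

An external archive, as mentioned above, is simply this non-dominated set maintained incrementally as the swarm discovers new solutions.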
Convex and non-convex are the two types of optimization problems based on problem structure. If a line segment connects any two points on the graph of a convex function, the graph lies on or below that segment. In a convex optimization problem, any local minimum is also a global minimum; this is a major advantage because an algorithm can find the best solution without getting stuck at suboptimal points. Non-convex optimization problems have complex landscapes with multiple local optima, making them computationally hard [30].
When working with optimization problems, one of the first distinctions to make is whether or not the problem includes constraints. If it does, the problem is considered constrained; otherwise, it is unconstrained. Among constrained problems, the restrictions placed on the variables can be either hard, meaning they must be strictly followed, or soft, meaning there is some flexibility and violations may be tolerated within certain limits [31]. The most common approach for solving constrained optimization problems is to use a penalty function: the constrained problem is transformed into an unconstrained one by penalizing constraint violations and building a single objective function, which is then minimized using an unconstrained optimization algorithm [2].
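The penalty-function transformation described above can be sketched as follows; the quadratic penalty form and the weight rho are common choices assumed here, not prescribed by the cited references:

```python
import numpy as np

def penalized(f, h_list, g_list, rho=1e3):
    """Turn a constrained problem into an unconstrained one:
    F(x) = f(x) + rho * (sum_j h_j(x)^2 + sum_k max(0, g_k(x))^2),
    assuming inequality constraints written in the form g_k(x) <= 0."""
    def F(x):
        p = sum(h(x) ** 2 for h in h_list)
        p += sum(max(0.0, g(x)) ** 2 for g in g_list)
        return f(x) + rho * p
    return F

f = lambda x: (x[0] - 3.0) ** 2   # unconstrained optimum at x0 = 3
g = lambda x: x[0] - 1.0          # constraint x0 <= 1
F = penalized(f, [], [g])
print(F(np.array([0.5])))   # feasible point: penalty term is zero
print(F(np.array([2.0])))   # infeasible point: penalty dominates
```

Any unconstrained optimizer applied to F is now steered toward the feasible region, because infeasible points pay a cost proportional to the squared violation.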
Optimization problems are classified into four types based on the solution method, which depends on several factors, such as problem complexity, required solution accuracy, and computational cost. The first type is the exact method, where the solution is guaranteed optimal but the computations are expensive. The second type is the heuristic method, which provides good solutions within a reasonable time, but optimality is not guaranteed. The third type is the metaheuristic method, which provides a high-level algorithm capable of finding near-optimal solutions to diverse complex problems, but not necessarily the exact solution. Finally, there is the hybrid method, which combines multiple strategies to exploit the strengths of different algorithms.
Starting with the famous question: is solving a problem as easy as checking a solution? Most experts think not, but no one has been able to prove it either way, which is why it remains one of the most important open problems in mathematics and computer science today [32]. This question highlights the importance of classifying optimization problems according to their complexity. Based on this complexity, the computational complexity of optimization problems varies significantly from easy to hard problems. The easy ones fall under what is known as the class (P), which can be solved efficiently with polynomial-time algorithms; in other words, the time it takes grows reasonably as the input gets larger, like checking if a number is even or sorting a list alphabetically [33]. Then, there are the (NP) problems, which are a bit harder. In this type of problem, we might not know how to find a solution quickly, but if someone gives us the answer, we can verify it fast, like checking whether a finished Sudoku puzzle is correct. A subset of NP, called NP-complete, contains the hardest problems in NP—if we can figure out how to solve just one of them efficiently, it would mean we can solve all NP problems efficiently, which is why they are so important [34]. NP-hard problems are even tougher in some ways; they are at least as hard as NP-complete ones but do not necessarily have solutions that are easy to check, like the infamous Halting Problem [35]. Because finding exact solutions to NP-hard problems is computationally infeasible for large instances, researchers often rely on heuristic and metaheuristic algorithms, such as genetic algorithms, PSO, ACO, DE, and other bio-inspired approaches to find approximate solutions.
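The asymmetry between solving and checking can be illustrated with subset sum, an NP-complete problem: verifying a proposed certificate takes linear time, while the naive search enumerates all 2^n subsets. The instance below is invented for illustration:

```python
from itertools import combinations

def verify(numbers, subset, target):
    """Polynomial-time check: is the proposed subset a valid certificate?"""
    return all(s in numbers for s in subset) and sum(subset) == target

def solve(numbers, target):
    """Naive exponential-time search over all 2^n subsets."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return list(combo)
    return None   # no subset sums to the target

nums = [3, 34, 4, 12, 5, 2]
sol = solve(nums, 9)
print(sol)                    # a subset summing to 9
print(verify(nums, sol, 9))   # checking it is fast
```

If `solve` could be replaced by a polynomial-time procedure for every NP-complete problem, P and NP would coincide; the practical consequence of the (suspected) gap is precisely the reliance on heuristics and metaheuristics described above.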
Finally, based on the application area, optimization is classified as function optimization, combinatorial optimization, optimization in an unknown search space, engineering design, and decision-making optimization. Function optimization is the process of optimizing a continuous mathematical function, for instance, minimizing cost or maximizing efficiency. On the contrary, combinatorial optimization is concerned with discrete variables and looks for an optimal element of a finite set. These optimization problems arise in numerous applications [36,37,38]. However, this set is too large to be enumerated; it is implicitly given by its combinatorial structure. The goal is to develop efficient algorithms by understanding and exploiting this structure. For example, it is seen in problems like the Traveling Salesman Problem (TSP) or the Knapsack Problem [18]. Optimization in an unknown search space occurs when the space is always changing or when it is too complicated to explore it directly. The metaheuristic approach is used in this type of problem, such as genetic algorithms, particle swarm optimization, ant colony optimization, differential evolution, and bio-inspired algorithms. Engineering design optimization focuses on identifying optimal design parameters for systems or components in fields such as structural design or control systems. Lastly, decision-making optimization addresses problems involving the selection of the best course of action under uncertainty, commonly in multiobjective contexts where trade-offs between conflicting objectives must be balanced [26].

1.3. Challenges

The difficulty is that most swarm intelligence algorithms, such as PSO and ACO, have high computational complexity, which can hinder their efficient use in time-critical, real-time applications. The issue also appears in scalability, where accommodating large numbers of agents is quite difficult. Consequently, the successful deployment of swarm intelligence applications depends on two essential factors: managing environmental disorder and optimizing multiagent intercommunication.

1.4. Contribution

The primary contributions of this paper are as follows:
  • A new Swallow Search Optimization algorithm (SWSO) is proposed: a novel metaheuristic inspired by swallow migration patterns.
  • The proposed algorithm has been evaluated using a range of benchmark functions, including unimodal, multimodal, and fixed-dimension functions, as well as the functions of the CEC2019 benchmark.
  • A comparison of the (SWSO) algorithm’s performance has been made with different algorithms, such as MFO, PSO, GSA, BA, FPA, SMS, FA, GA, DA, WOA, BOA, COA, FDO, FOX, and AFO.
  • The SWSO algorithm has been applied to two constrained engineering design problems, and its performance has been compared to the more recent outcomes found in the literature.
To simplify reading the paper’s abbreviations and symbols, Table 1 and Table 2 summarize all abbreviations and symbols used, respectively.
The rest of the paper is organized as follows: Section 2 presents the literature review, which includes an overview of previous studies. These studies are classified into three main groups: swarm intelligence and optimization algorithms, metaheuristic and hybrid algorithms, and finally, bio-inspired algorithms. In Section 3, the inspiration behind the proposed (SWSO) algorithm is discussed, and the biological behavior of the swallow bird is described. Section 4 describes the methodology and algorithm used in this paper, which is inspired by the remarkable social behaviors, such as foraging, migration, social interaction, flying in a V-shape flock, and power saving. Section 5 comprises an assessment of the performance of the SWSO technique, including a comparison with other approaches and an examination of its efficacy in specific scenarios. The proposed SWSO algorithm is applied in Section 6 on two of the standard engineering problems as a test case. Finally, we provide a conclusion of the presented work and some recommendations for future work in Section 7. Additional figures and the complete calculations of the relative ranks of the proposed algorithm over its competitors are provided in Appendix A.

2. Literature Review

To give a complete overview of recent developments in this area, the references are categorized into three major sections according to their content and topicality: the first category is swarm intelligence and optimization algorithms, the second is metaheuristic and hybrid algorithms, and the last is bio-inspired and hybrid optimization models. One paragraph summarizes the main points of each reference.

2.1. Swarm Intelligence and Optimization Algorithms

A novel nature-inspired metaheuristic optimization technique called the Apiary Organizational-Based Optimization Algorithm (AOOA) is introduced in Ref. [39]. The algorithm simulates the systematic behavior of honeybees within the apiary, focusing on their roles as queens, drones, and workers. The authors highlight AOOA’s ability to explore and exploit solution spaces effectively, marking an advance in the optimization field. The problem of incorporating differential privacy (DP) into swarm intelligence (SI) to improve optimization was examined in Ref. [40]; the approach improved algorithmic efficiency by preventing stagnation in local optima, making it relevant to privacy-sensitive domains such as health care. To overcome limitations of the original Aquila Optimizer (AO), such as insufficient local exploitation ability, low optimization precision, and slow convergence, the authors of Ref. [41] enhanced its position updating and combined the golden sine operator with self-learning and social-learning mechanisms from particle swarm algorithms. A non-linear equilibrium factor was introduced to balance global search and local exploitation, thereby improving convergence speed and accuracy. The resulting PSAO was tested on various benchmark functions and engineering design problems to demonstrate its strength over the original optimizer. In Ref. [42], the authors presented a new mutation-driven swarm intelligence algorithm with translation and rotation invariance for global optimization; it outperformed conventional methods on both independent and coupled multimodal problems, with stability and efficiency evident in engineering optimization problems. Ref. [43] enhanced machine learning classifiers for code-smell detection by applying swarm intelligence-based optimizers.
The approach improved prediction accuracy and computational performance and is useful for software engineering. Ref. [44] applied both heuristic-based and swarm intelligence techniques to optimize work–life balance problems, showing that the algorithms produce positive outcomes in task scheduling and resource management for personal and organizational efficiency. Ref. [14] presented a two-stage greedy auction algorithm for target assignment in large-scale UAV swarms; the method improved task coordination at the operational level, especially for surveillance and reconnaissance duties. The WOAD3QN-RP protocol, which integrates swarm intelligence and deep reinforcement learning for smart routing in wireless sensor networks, was proposed in Ref. [45]; this innovation increased energy efficiency, data rates, and network lifetime, making the solution well suited to IoT applications. The paper [46] reviewed swarm intelligence methods for improving sliding-wear behavior in nanocomposites; these methods balanced computational precision and complexity well and point toward further applications in material design and tribology. In Ref. [47], the authors introduce Swallow Swarm Optimization (SSO), a biologically inspired swarm intelligence algorithm in which explorer, aimless, and leader particles are modeled with different behaviors and variable search radii. This role-based architecture allows the swarm to escape local minima and converge through a decentralized decision-making system that mirrors simple social interactions in nature. Effective and intuitive as it is, the design of SSO remains rather static and less mathematically versatile than more recent swarm-based methods.

2.2. Metaheuristic and Hybrid Algorithms

In Ref. [48], complex optimization problems are solved using the hybrid dolphin and sparrow optimization (DSO) algorithm. Its integration of adaptive and stochastic schemes gives DSO good global search characteristics, fast convergence, and stability when solving constrained problems such as airfoil design. In Ref. [49], the authors propose a swarm intelligence and protein spatial characteristics algorithm (SIPSC-Kac) that uses swarm intelligence to predict lysine acetylation sites by analyzing protein spatial features. This bioinformatics application demonstrated the success of swarm intelligence in determining biomolecular protein modification sites, highlighting its potential in computational biology. For multimodal transportation systems, the authors of Ref. [50] proposed a logistics-distribution optimizer that exploits swarm intelligence. In Ref. [51], an electrical automation control system is optimized using an AI-based optimization method, drastically reducing response time and improving stability under power and frequency disturbances, with a turbine failure rate of 0.02 after optimization, indicating its efficiency and resilience in various engineering contexts.

2.3. Bio-Inspired and Hybrid Optimization Models

A literature review of algorithm features and control techniques in swarm robotics for autonomous aerial robots was prepared in Ref. [52]. The review underlined issues of cooperative robotics, power consumption, and real-time reactivity specific to such environments, and described how these elements are combined to improve livestock monitoring and grazing-pattern control, with a positive impact on the efficiency of cattle farming and on the environment in which the livestock is kept. Field tests support the model, demonstrating the effectiveness of automating traditional grazing patterns; the model incorporated swarm intelligence algorithms for routing and coverage, demonstrating the capacity and future of precision farming. A novel approach to pathfinding inspired by the cooperative behavior between donkeys and smugglers is presented in Ref. [53]. The Donkey and Smuggler Optimization Algorithm (DSO) has dual modes: the Smuggler mode, which explores all possible paths to find the shortest one, and the Donkeys mode, which uses various donkey behaviors to enhance the search process. The Smuggler mode is useful at the beginning of the optimization process, while the Donkeys mode is activated once the algorithm has identified a promising solution and needs to improve it. The results show DSO’s potential for solving complex optimization problems and unfamiliar search spaces. The paper [54] introduces a novel adaptive nature-inspired optimization algorithm called the Adaptive Fox Optimization (AFOX) Algorithm, inspired by the hunting behavior of foxes. AFOX performs well compared with traditional algorithms, particularly in speeding up convergence to the global solution and avoiding local optima.
The AFOX study was tested on real-world optimization problems as well as various benchmark functions, showing a significant advance in the field of optimization and making it a promising tool for a wide range of real-world applications. The nature of chicken swarms was the basis for an innovative NSCSO optimization model developed in Ref. [55]. It utilizes non-dominated sorting to integrate diverse objectives, and on multiobjective functions the algorithm outperforms related work. Through NSCSO, the authors demonstrate that bio-inspired algorithms can effectively solve high-dimensional optimization problems. Its uses extend to the planning of various systems, including engineering design and resource allocation, making it a useful tool for decision making.

3. Swallow Search Optimization Algorithm (SWSO)

The flight of birds is one of the most fascinating wonders of nature. Flight has been a dream of mankind since ancient times; people imitated how birds fly by observing them, analyzing their interactions during flight, and studying the various factors that affect flight mechanics. Engineers have drawn on this dream to create novel technologies, particularly in optimization algorithm development. In this section, the inspiration behind the proposed SWSO algorithm is discussed, and the biological behavior of the swallow is described.

3.1. Inspiration

Swallows are medium-sized birds [56]: they are 17 to 19 cm long, including 2 to 7 cm of elongated outer tail feathers; their wingspan is 32 to 34.5 cm; and they weigh 16 to 22 g [57]. They have pointed, narrow wings, short bills, and small, weak feet [58]. The tail extends well beyond the wingtips, and the long outer feathers give it a deep fork that assists in maneuvering and acrobatics during foraging [59]. Barn swallows are aerial insectivores that breed and roost in colonies.
The barn swallow, as shown in Figure 2, is a bird of open country that normally nests in man-made structures, spreading with human expansion. It builds a cup nest from mud pellets in barns or similar structures. They fly with fluid wingbeats in bursts of straight flight, rarely gliding, and can execute quick, tight turns and dives [60]. These birds feed on the wing, snagging insects from just above the ground or water to heights of 100 feet or more. When aquatic insects hatch, Barn Swallows may join other swallow species in mixed foraging flocks [59].
Migratory swallows are a prominent model system in evolutionary, ecological, and behavioral research. They have evolved less pointed, more rounded wings compared to their resident counterparts. This is unusual because, traditionally, long-distance migratory birds are expected to have pointed wings to reduce drag and improve flight efficiency. However, in swallows, wing roundedness increases with migratory distance. This deviation from the expected pattern may be due to the need for migratory swallows to forage in low food availability and inclement weather, requiring more maneuverability at low speeds, or because their roosting habits in dense, low vegetation necessitate better take-off performance, favoring rounded wings [58].

3.2. Biological Behavior of Swallows

Swallows exhibit remarkable social behaviors, including the following:
  • Foraging: Swallows forage in groups, dynamically adjusting their flight patterns to locate and capture prey efficiently.
  • Migration: Swallows undertake long migratory journeys, navigating complex environments and adapting to changing conditions.
  • Social Interaction: Swallows communicate and coordinate within flocks, sharing information about food sources and dangers.
  • Fly in a V-shape flock: This method allows swallows to cover the greatest possible distance by taking advantage of the total air flow formed by the entire flock, thus reducing the effort expended during flight. Figure 3 shows the V-shaped formation during flight. This means that when each bird flaps its wings in the air, it generates energy that helps it fly and also helps the bird following it, which enables all the birds in the flock to cooperate in the flight process.
  • Power saving: Studies have shown that the V-shaped formation enables the entire flock to cover a distance 71% greater than a bird flying alone. Researchers have also found that the birds alternate their places in the flock in a remarkable way: when a bird tires, it drops back to take advantage of the air current generated by the flock as a whole, saving power by minimizing the effort expended; after resting a while, it returns to its place, making way for another bird to rest.

4. Methodology and Mathematical Model

Understanding optimization problem types is important for selecting an effective algorithm because it determines the most suitable search strategy design. This problem type classification is based on multiple dimensions, including solution methods, objective types, problem structures, constraints, and complexity levels.
This paper presents a Swallow Search Optimization (SWSO) algorithm, a non-deterministic bio-inspired metaheuristic that tackles NP-hard single-objective optimization problems with non-convex and constrained structures and partial or total non-separability, especially in obstacle avoidance and formation-constraint situations. Unlike Ref. [47], the proposed method offers new ways of controlling agent coordination and of finding optimal solutions to constrained and complex optimization problems. The model not only captures the social behavior of swallows but also adds energy-driven leadership rotation, a PSO-style velocity update, and V-formation flight dynamics to guide swarm coordination. The algorithm starts by initializing both position and velocity parameters in continuous bounded domains. The adaptive cognitive-social parameters (c1, c2), together with energy-based leader selection, work effectively in dynamic environments that require the continuous evolution of leadership and behavior. Storing multiple solutions along with best-position memory enables SWSO to optimize multimodal functions while maintaining diversity through stochastic exploration decay. SWSO implements a repulsion-based penalty mechanism to resolve constrained problems that contain spatial constraints such as obstacles; the mechanism guides agents through feasible regions, preventing them from entering prohibited zones while sustaining diversity among the population. The V-formation control system, together with leader energy management, enables efficient agent coordination that sets this SWSO variant apart when dealing with structured optimization landscapes found in real-world applications.
Mathematical Model and Algorithm
In nature, excellent examples of engineering solutions are found. These engineering solutions have been perfected over millions of years of evolution. By studying and understanding lessons from nature, new or better designs of materials and structures can be made. For biomimetic applications, it is important to have a clear understanding of biology [61,62], which is why this paper examines and mathematically models the swallow’s biological behavior to gain insights that can inspire the SWSO optimization algorithm.
The Swallow Search Optimization Algorithm (SWSO) is a nature-inspired metaheuristic based on the social and energetic behavior of swallow birds. It simulates their foraging, migration, and social communication behaviors to balance global exploration and local exploitation in solving complex optimization problems. The algorithm models a population of N agents (swallows), each represented by a position vector x ∈ ℝ^d, where d is the problem dimension. The goal is to minimize a real-valued objective function f : ℝ^d → ℝ.

4.1. Swallow Foraging Behavior

The foraging behavior of swallows is modeled to enhance exploration. Swallows search for food in groups, adjusting their positions based on successful foraging events.
  • Initialization
Each swallow’s position is initialized uniformly within the search space:
x_{i,j}^0 ∼ U(lb_j, ub_j),  for i = 1, 2, …, N and j = 1, 2, …, d
where lb_j and ub_j are the lower and upper bounds for dimension j.
Each agent is also assigned a normalized energy value E_i^0 ∈ [0, 1], representing its readiness to lead the swarm.
  • Exploration (with probability 1 − α)
x_i^{t+1} = x_i^t + δ_t · (2r_i − 1) ⊙ (ub − lb)
where,
  • δ_t = explore_decay^{t/T} is a decaying exploration factor, encouraging convergence over time.
  • r_i ∼ U(0, 1)^d is a random vector and ⊙ denotes element-wise multiplication.
This simulates distributed search, preventing premature convergence.
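The initialization and exploration steps above can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation; the decay form δ_t = explore_decay^(t/T) and all function names are our assumptions:

```python
import numpy as np

def initialize_swarm(n_agents, dim, lb, ub, rng):
    """Uniform positions in [lb, ub] plus a normalized energy E_i^0 per agent."""
    pos = rng.uniform(lb, ub, size=(n_agents, dim))
    energy = rng.uniform(0.0, 1.0, size=n_agents)   # readiness to lead, in [0, 1]
    return pos, energy

def explore_step(pos, t, T, lb, ub, rng, explore_decay=0.9):
    """x^{t+1} = x^t + delta_t * (2r - 1) * (ub - lb), applied elementwise."""
    delta_t = explore_decay ** (t / T)              # decaying exploration factor
    r = rng.uniform(0.0, 1.0, size=pos.shape)
    return pos + delta_t * (2.0 * r - 1.0) * (ub - lb)
```

Because |2r − 1| ≤ 1, each exploratory move is bounded by δ_t · (ub − lb) per dimension, so steps shrink as t approaches T.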

4.2. Swallow Migration Behavior

Migrating birds have been monitored to fly across the globe with a minimum amount of food. For our design, migration behavior is modeled to enhance exploitation. Swallows move towards optimal regions while avoiding suboptimal areas.
  • Fitness Evaluation
At each iteration t , the fitness of each agent i is computed as follows:
f_i^t = f(x_i^t)
The globally best solution found so far is tracked:
x_best = arg min_{x_i^t} f(x_i^t),  f_best = min_i f_i^t
  • Position Update Formula
Each non-leader swallow updates its position based on either exploitation (following the leader) or exploration (random search), governed by a probability α ∈ (0, 1).
x_i^{t+1} = x_i^t + r_i ⊙ (x_L^t − x_i^t) + ε_i
where,
  • x_L^t is the position of the leader.
  • ε_i ∼ N(0, σ²) is a small Gaussian perturbation (for local search).
This strategy exploits learned successful paths while maintaining variability for refinement.
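The leader-following update can be sketched as follows (an illustrative NumPy sketch; the function name and σ default are our assumptions):

```python
import numpy as np

def exploit_step(pos, leader_pos, rng, sigma=0.1):
    """x^{t+1} = x^t + r * (x_L - x) + eps,  eps ~ N(0, sigma^2)."""
    r = rng.uniform(0.0, 1.0, size=pos.shape)       # elementwise pull toward the leader
    eps = rng.normal(0.0, sigma, size=pos.shape)    # small local-search perturbation
    return pos + r * (leader_pos - pos) + eps
```

With σ = 0 the update moves each agent strictly toward the leader; the Gaussian term then adds the variability used for local refinement.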
  • Velocity Update Formula
All metaheuristics share a common concept of achieving a balance between exploration and exploitation [63]. Exploration is the ability of members to generate a variety of solutions in the search area, which gives metaheuristic algorithms their global search character. Exploitation, on the other hand, is the ability of members to make the best use of solution information from past iterations to narrow the search, leading to local search behavior. Different metaheuristics emphasize either exploratory or exploitative strategies in their problem solving, and it is important to strike a balance between these two aspects to optimize performance. Several balancing strategies exist, such as static control using fixed parameters [64], dynamic adjustment in which parameters change over time [65], greedy selection used for pure exploitation [66], randomized search for pure exploration [67], and feedback-based or self-adaptive methods that respond to agent performance or environmental cues [68]. The ε-greedy strategy, together with Bayesian methods [69] such as the Upper Confidence Bound (UCB) and Thompson Sampling, forms the advanced category of balancing strategies [70,71,72].
The proposed Swallow Search Optimization (SWSO) algorithm uses hybrid dynamic strategies which derive their principles from Particle Swarm Optimization (PSO). The algorithm regulates exploration–exploitation transitions using time-variable cognitive and social coefficients together with velocity-based update rules. This design fosters both rapid convergence and sustained diversity within the population.
The framework incorporates velocity-based updates, which apply momentum-based searches instead of random moving through the solution space. The algorithm uses historical positional data to enhance agent steering behavior as an adaptive system within the solution space. This facilitates a balanced search behavior, enabling global exploration in early iterations and focused exploitation in later stages, thereby improving solution quality and search efficiency.
Velocity_i^{t+1} = ω · Velocity_i^t + c_1 · r_1 ⊙ (P_{best,i}^t − x_i^t) + c_2 · r_2 ⊙ (x_L^t − x_i^t)
  • The term ω · Velocity_i^t retains a portion of the previous velocity, which controls the momentum, allows for smoother motion in the search space, and helps the agent keep exploring in a similar direction. The inertia weight ω typically lies between 0 and 1, with 0.7 a commonly used value.
  • The term c_1 · r_1 ⊙ (P_{best,i}^t − x_i^t) is the cognitive component, which reflects the personal experience of the agent and pulls it toward its own historically best position. c_1 is the cognitive learning coefficient, and r_1 is a random value in [0, 1] that adds stochasticity. Here, P_{best,i}^t is the personal best position found by particle i over its search history, steering the particle toward parts of the search space that previously yielded good fitness values.
  • The term c_2 · r_2 ⊙ (x_L^t − x_i^t) is the social component, which encourages social learning by attracting the agent to the current leader's position. c_2 is the social learning coefficient, and r_2 is another random value in [0, 1].
The resulting velocity is used to update the agent’s position to make the agent move in the direction of its new velocity.
x_i^{t+1} = x_i^t + Velocity_i^{t+1}
By decreasing c_1 (reducing independent exploration) and increasing c_2 (strengthening exploitation around the leader) over time, the algorithm shifts from independent search toward swarm convergence, producing better dynamic behavior.
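The velocity and position updates can be sketched as follows (an illustrative NumPy sketch; the linear c1/c2 schedules follow the parameter values reported in Section 5.2, and the helper name is ours):

```python
import numpy as np

def velocity_update(vel, pos, p_best, leader_pos, t, T, rng, w=0.7):
    """PSO-style update with time-varying coefficients:
    c1 decreases linearly 2.5 -> 0.5 while c2 increases 0.5 -> 2.5."""
    c1 = 2.5 - 2.0 * (t / T)                        # cognitive pull fades over time
    c2 = 0.5 + 2.0 * (t / T)                        # social pull toward the leader grows
    r1 = rng.uniform(size=pos.shape)
    r2 = rng.uniform(size=pos.shape)
    return (w * vel
            + c1 * r1 * (p_best - pos)
            + c2 * r2 * (leader_pos - pos))
```

The new position is then obtained as x_i^{t+1} = x_i^t + Velocity_i^{t+1}.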

4.3. Social Interaction and Information Sharing

Swallows communicate to share information about food sources and predators, enhancing search efficiency. Figure 4 highlights the key behaviors of SWSO and demonstrates how the velocity vector, energy decay mechanism, leader switching behavior, and repulsion-based constraint handling uniquely influence the position update of agents in SWSO, distinguishing it from earlier role-based methods with static agent types.
  • Leader Selection
When a bird changes its position from follower to leader status or vice versa, the entire flock behavior pattern transforms. The leader must actively direct flock movements because other birds will adapt their positions accordingly.
In the SWSO algorithm, a leader is selected based on combining both fitness and energy, encouraging high-performance and well-energized individuals:
leader index = arg min_i ( f_i^t + (1 − E_i^t) )
Without leader switching, the algorithm would behave as if the problem were separable (with minimal interdependencies).
  • Energy Dynamics
Each agent’s energy is updated to simulate fatigue and partial recovery:
E_i^{t+1} = min( max( E_i^t − 0.01 + 0.05 · u_i, 0 ), 1 ),  u_i ∼ U(0, 1)
If the leader’s energy E_L^t falls below a threshold θ, a new leader is chosen:
new leader = arg min_i ( f_i^t + (1 − E_i^t) )
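A minimal sketch of the combined leader-selection criterion and the energy dynamics (assuming NumPy; the function names are illustrative, not from the paper):

```python
import numpy as np

def select_leader(fitness, energy):
    """leader = argmin_i ( f_i + (1 - E_i) ): prefers agents that are both fit and energetic."""
    return int(np.argmin(fitness + (1.0 - energy)))

def update_energy(energy, rng):
    """E^{t+1} = clip(E^t - 0.01 + 0.05 * u, 0, 1): fixed fatigue plus random partial recovery."""
    u = rng.uniform(size=energy.shape)
    return np.clip(energy - 0.01 + 0.05 * u, 0.0, 1.0)
```

When the current leader's energy falls below the threshold θ, simply re-evaluating select_leader on the updated energies yields the replacement.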
The biological inspiration drawn from swallows (foraging in groups, energy-efficient V-formation flight, leader rotation, and rapid maneuvering) has been rigorously formalized in the proposed Swallow Search Optimization (SWSO) algorithm through a set of mathematical models. Foraging is modeled by distributed random exploration (Equation (6)), diversifying the search in the initial iterations. Migration and exploitation are modeled by directional movement toward the best-performing individual together with local refinement based on Gaussian noise (Equation (9)). Momentum, self-learning, and social influence all enter the velocity and position updates (Equations (10) and (11)), just as swallows change direction based on memory and group dynamics. The biological process of alternating flight roles in the V-formation, which minimizes energy consumption and maximizes coordination, is simulated by leader switching and energy dynamics (Equations (12)–(14)). Constraint handling through repulsion-based penalties mimics obstacle avoidance, so the model adapts dynamically to constrained settings.
The pseudo-code of the SWSO algorithm is presented:
Algorithm 1. Pseudo code of the SWSO algorithm
1-  Initialize Swallow_pos randomly within [lb, ub]
2-  Set Velocity to zero matrix
3-  Set Best_score = ∞ and Best_pos = zero vector
4-  Initialize P_best = Swallow_pos and P_best_fitness = ∞
5-  Initialize Energy randomly in [0, 1] for all agents
6-  Generate V_formation using generate_v_formation()
7-  FOR t = 1 to Max_iter DO
8-      FOR each swallow i = 1 to N DO
9-          Clamp Swallow_pos[i] within [lb, ub]
10-         Evaluate Swallow_fitness[i] = fobj(Swallow_pos[i])
11-         IF Swallow_fitness[i] < Best_score THEN
12-             Best_score ← Swallow_fitness[i]
13-             Best_pos ← Swallow_pos[i]
14-         END IF
15-         IF Swallow_fitness[i] < P_best_fitness[i] THEN
16-             P_best[i] ← Swallow_pos[i]
17-             P_best_fitness[i] ← Swallow_fitness[i]
18-         END IF
19-     END FOR
20-     Select Leader ← agent with min(fitness + (1 − Energy))
21-     Update adaptive parameters c1 and c2 over time
22-     FOR each swallow i ≠ Leader DO
23-         Generate random values r1, r2 ∈ [0, 1]
24-         Update Velocity[i] ← w × Velocity[i] + c1 × r1 × (P_best[i] − Swallow_pos[i]) + c2 × r2 × (Leader − Swallow_pos[i])
25-         Update Swallow_pos[i] ← Swallow_pos[i] + Velocity[i]
26-         IF rand ≥ alpha THEN
27-             Apply decaying random exploration
28-         END IF
29-         FOR each obstacle DO
30-             IF distance to obstacle < 1 THEN
31-                 Apply repulsion force to avoid collision
32-             END IF
33-         END FOR
34-         Update Energy[i] ← Energy[i] − 0.01 + 0.05 × rand
35-         Clamp Energy[i] within [0, 1]
36-     END FOR
37-     IF Energy[Leader] < 0.5 THEN
38-         Find nearest neighbor and swap with Leader
39-     END IF
40-     Apply maintain_v_formation() to Swallow_pos
41-     Store Best_score in cg_curve[t]
42-     IF t == 1 or mod(t, 10) == 0 THEN
43-         Print: Iteration t | Best Score = Best_score
44-     END IF
45- END FOR
46- RETURN Best_score, Best_pos, cg_curve
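Algorithm 1 can be condensed into a short, runnable sketch. This is an illustrative implementation, not the authors' MATLAB code: obstacle repulsion and V-formation maintenance are omitted for brevity, parameter values follow Section 5.2, and the remaining choices (decay form, seed, default iteration budget) are our assumptions:

```python
import numpy as np

def swso(fobj, dim, lb, ub, n_agents=30, max_iter=200, alpha=0.8, w=0.7,
         sigma=0.1, explore_decay=0.9, seed=0):
    """Condensed SWSO loop: energy-aware leader selection, PSO-style velocity,
    decaying random exploration, and Gaussian local refinement."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lb, ub, size=(n_agents, dim))
    vel = np.zeros((n_agents, dim))
    energy = rng.uniform(size=n_agents)              # E_i^0 in [0, 1]
    p_best, p_best_fit = pos.copy(), np.full(n_agents, np.inf)
    best_pos, best_score = pos[0].copy(), np.inf
    cg_curve = np.empty(max_iter)
    for t in range(max_iter):
        pos = np.clip(pos, lb, ub)
        fit = np.array([fobj(x) for x in pos])
        better = fit < p_best_fit                    # update personal bests
        p_best[better], p_best_fit[better] = pos[better], fit[better]
        if fit.min() < best_score:                   # update global best
            best_score, best_pos = float(fit.min()), pos[fit.argmin()].copy()
        leader = int(np.argmin(fit + (1.0 - energy)))  # fitness + energy criterion
        c1 = 2.5 - 2.0 * t / max_iter                # cognitive coefficient fades
        c2 = 0.5 + 2.0 * t / max_iter                # social coefficient grows
        for i in range(n_agents):
            if i == leader:
                continue
            r1, r2 = rng.uniform(size=dim), rng.uniform(size=dim)
            vel[i] = (w * vel[i] + c1 * r1 * (p_best[i] - pos[i])
                      + c2 * r2 * (pos[leader] - pos[i]))
            pos[i] = pos[i] + vel[i] + rng.normal(0.0, sigma, size=dim)
            if rng.uniform() >= alpha:               # decaying random exploration
                delta = explore_decay ** (t / max_iter)
                pos[i] += delta * (2.0 * rng.uniform(size=dim) - 1.0) * (ub - lb)
        # fatigue plus partial random recovery, clamped to [0, 1]
        energy = np.clip(energy - 0.01 + 0.05 * rng.uniform(size=n_agents), 0.0, 1.0)
        cg_curve[t] = best_score
    return best_score, best_pos, cg_curve

# Example: minimize the 5-dimensional Sphere function
score, x_opt, curve = swso(lambda x: float(np.sum(x ** 2)),
                           dim=5, lb=-100.0, ub=100.0)
```

Because the global best is monotonically tracked, the returned convergence curve cg_curve is non-increasing by construction.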

5. Validation and Comparison

5.1. Benchmark Functions

A set of benchmark functions, which are groups of functions that can be used to test the performance of any optimization problem [73], serve to evaluate the efficiency of the proposed SWSO algorithm. These benchmark functions cover all categories of unimodal, multimodal, fixed-dimension multimodal, and composite functions.
  • Unimodal functions: The single global optimum of unimodal functions, as shown in Figure 5, provides an excellent platform for testing both convergence efficiency and exploitation capabilities of algorithms.
  • Multimodal functions: Multimodal functions serve as crucial evaluation tools to test Swallow Search Optimization Algorithm’s (SWSO) ability to prevent premature convergence while escaping local minima during evaluations. The search space complexity is measured through these functions which contain multiple local optima to test SWSO’s exploration capabilities.
  • Fixed-dimension multimodal functions: Unlike scalable multimodal functions, fixed-dimension test functions have predetermined dimensions, which limits their use for evaluating problems of different sizes. Although scalable multimodal functions allow the number of design variables to be adjusted, their landscape structures still differ from those of the fixed-dimension multimodal functions [74].
  • The composite functions: The complexity of composite test functions increases because they use transformations including random shifts of the global optimum, search space rotations, and boundary-based placement of optima. Composite functions provide excellent benchmarking capabilities for SWSO under dynamic and deceptive conditions because these features test robustness and adaptability across various optimization scenarios [30,73]. Table 3, Table 4 and Table 5 describe the unimodal, multimodal, and fixed-dimension multimodal functions, respectively. These tables give a detailed overview of the equations, problem dimensions Dim(n), upper and lower bounds, and minimum goal value F_min of each function [75].
Table 3. Unimodal benchmark functions.
  • Sphere: F1(X) = Σ_{i=1}^{n} x_i²; Dim(n) = 30; Range [−100, 100]; F_min = 0
  • Schwefel 2.22: F2(X) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|; Dim(n) = 30; Range [−10, 10]; F_min = 0
  • Schwefel 1.2: F3(X) = Σ_{i=1}^{n} ( Σ_{j=1}^{i} x_j )²; Dim(n) = 30; Range [−100, 100]; F_min = 0
  • Schwefel 2.21: F4(X) = max_{1≤i≤n} |x_i|; Dim(n) = 30; Range [−100, 100]; F_min = 0
  • Rosenbrock: F5(X) = Σ_{i=1}^{n−1} [ 100(x_{i+1} − x_i²)² + (1 − x_i)² ]; Dim(n) = 30; Range [−30, 30]; F_min = 0
  • Step: F6(X) = Σ_{i=1}^{n} (⌊x_i + 0.5⌋)²; Dim(n) = 30; Range [−100, 100]; F_min = 0
  • Quartic: F7(X) = Σ_{i=1}^{n} i·x_i⁴ + random[0, 1); Dim(n) = 30; Range [−1.28, 1.28]; F_min = 0
Table 4. Multimodal basic functions.
  • Schwefel 2.26: F8(X) = Σ_{i=1}^{n} −x_i · sin(√|x_i|); Dim(n) = 30; Range [−500, 500]; F_min = −418.9829·n
  • Shifted Rastrigin: F9(X) = Σ_{i=1}^{n} [ x_i² − 10cos(2πx_i) ] + 10n; Dim(n) = 30; Range [−5.12, 5.12]; F_min = 0
  • Ackley: F10(X) = −20 exp( −(1/5)·√( (1/n) Σ_{i=1}^{n} x_i² ) ) − exp( (1/n) Σ_{i=1}^{n} cos(2πx_i) ) + 20 + e; Dim(n) = 30; Range [−32, 32]; F_min = 0
  • Griewank: F11(X) = (1/4000) Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos( x_i/√i ) + 1; Dim(n) = 30; Range [−600, 600]; F_min = 0
  • Penalized function one: F12(X) = (π/n){ 10 sin²(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)² [ 1 + 10 sin²(πy_{i+1}) ] + (y_n − 1)² } + Σ_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a; Dim(n) = 30; Range [−50, 50]; F_min = 0
  • Penalized function two: F13(X) = 0.1{ sin²(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)² [ 1 + sin²(3πx_{i+1}) ] + (x_n − 1)² [ 1 + sin²(2πx_n) ] } + Σ_{i=1}^{n} u(x_i, 5, 100, 4); Dim(n) = 30; Range [−50, 50]; F_min = 0
Table 5. Fixed-dimension benchmark functions.
  • De Jong 5th Function: F14(X) = [ 1/500 + Σ_{j=1}^{25} 1/( j + Σ_{i=1}^{2} (x_i − a_{ij})⁶ ) ]^{−1}; Dim(n) = 2; Range [−65, 65]; F_min = 1
  • Kowalik’s Function: F15(X) = Σ_{i=1}^{11} [ a_i − x_1(b_i² + b_i x_2)/(b_i² + b_i x_3 + x_4) ]²; Dim(n) = 4; Range [−5, 5]; F_min = 0.00030
  • Six-Hump Camel: F16(X) = 4x_1² − 2.1x_1⁴ + x_1⁶/3 + x_1x_2 − 4x_2² + 4x_2⁴; Dim(n) = 2; Range [−5, 5]; F_min = −1.0316
  • Branin Function: F17(X) = ( x_2 − (5.1/4π²)x_1² + (5/π)x_1 − 6 )² + 10( 1 − 1/8π )cos(x_1) + 10; Dim(n) = 2; Range [−5, 5]; F_min = 0.397887
  • Goldstein–Price Function: F18(X) = [ 1 + (x_1 + x_2 + 1)²(19 − 14x_1 + 3x_1² − 14x_2 + 6x_1x_2 + 3x_2²) ] × [ 30 + (2x_1 − 3x_2)²(18 − 32x_1 + 12x_1² + 48x_2 − 36x_1x_2 + 27x_2²) ]; Dim(n) = 2; Range [−2, 2]; F_min = 3
  • Hartmann 3-Dimensional Function: F19(X) = −Σ_{i=1}^{4} α_i exp( −Σ_{j=1}^{3} a_{ij}(x_j − p_{ij})² ); Dim(n) = 3; Range [0, 1]; F_min = −3.86278
  • Hartmann 6-Dimensional Function: F20(X) = −Σ_{i=1}^{4} α_i exp( −Σ_{j=1}^{6} a_{ij}(x_j − p_{ij})² ); Dim(n) = 6; Range [0, 1]; F_min = −3.32237
  • Shekel Function (m = 5): F21(X) = −Σ_{i=1}^{5} [ (x − a_i)(x − a_i)ᵀ + c_i ]^{−1}; Dim(n) = 4; Range [0, 10]; F_min = −10.1532
  • Shekel Function (m = 7): F22(X) = −Σ_{i=1}^{7} [ (x − a_i)(x − a_i)ᵀ + c_i ]^{−1}; Dim(n) = 4; Range [0, 10]; F_min = −10.4028
  • Shekel Function (m = 10): F23(X) = −Σ_{i=1}^{10} [ (x − a_i)(x − a_i)ᵀ + c_i ]^{−1}; Dim(n) = 4; Range [0, 10]; F_min = −10.5364
In addition to functional classification based on landscape characteristics, these benchmark functions are arranged into four primary categories based on variable interaction and separability patterns, which range from simple to complex combinations. Separable functions are the first and simplest type, since their variables operate independently; the sphere function serves as a representative example of a separable function [76]. The second is a more complex one involving partially non-separable functions, where a subset of variables is interdependent while the remaining ones remain independent, introducing moderate difficulty. The third type is the m-non-separable, which emerges frequently in benchmark suites proposed by the CEC [76]. The fourth type displays the most challenging optimization characteristics since its fully non-separable functions cause all variables to entangle with one another throughout the search space [75]. Functions like the rotated versions of Rastrigin and Ackley belong to this class and are designed to strictly test the global search capability of algorithms [76], as summarized in Table 6.
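For concreteness, a few of these benchmark functions can be implemented directly. This is a minimal NumPy sketch following the definitions in Tables 3 and 4; function names are ours:

```python
import numpy as np

def sphere(x):
    """F1: unimodal and separable; global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """F9: highly multimodal; global minimum 0 at the origin."""
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def ackley(x):
    """F10: multimodal with a nearly flat outer region; global minimum 0 at the origin."""
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)
```

Evaluating each function at the origin recovers the listed F_min of 0, a quick sanity check before running any optimizer against them.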

5.2. Comparison with Other Algorithms

Simulation tests are implemented using MATLAB R2020a on a Lenovo laptop with the following specifications: system model 20VD, 11th Generation Intel Core i5-1135G7 processor with a base speed of 2.4 GHz (8 logical cores), 16 GB RAM, and Windows 11 Pro, manufactured by Lenovo, headquartered in Beijing, China. In the SWSO algorithm, the exploitation (following) probability is set to α = 0.8, the exploration decay rate is 0.9, the energy threshold for leader switching is 0.3, and the adaptive inertia is sampled randomly in the range [0.6, 1.0]. The random exploration term is scaled by a Gaussian perturbation ε_i ∼ N(0, 0.05²), and convergence is tracked at each iteration (see Appendix A, Figure A1). The maximum number of iterations was set to 1000 for both the conventional benchmark functions and the complex real-world engineering problems, and the number of independent runs was 30. To keep all comparative evaluations consistent, the mean values of the results were reported. The SWSO algorithm was also tested using a number of unimodal, multimodal, fixed-dimension multimodal, and CEC 2019 composite benchmark functions.
For SWSO, the following parameter values were used throughout all benchmark tests (based on extensive tuning and empirical studies): Population size (30), Initial energy (E0 = 1.0), Energy decay factor (λ = 0.005), Cognitive coefficient (c1) linearly decreasing from 2.5 to 0.5, Social coefficient (c2) linearly increasing from 0.5 to 2.5, Gaussian disturbance (σ = 0.1), and Repulsion threshold (ε = 10−6).
For all comparative algorithms, parameter values were used as originally recommended in their respective publications or widely accepted benchmarks, including PSO (c1 = c2 = 2.0, w = 0.729), GA (crossover rate = 0.8, mutation rate = 0.01), MFO (b = 1, t = iteration index), WOA (a linearly decreasing from 2 to 0), GWO (α, β, δ coefficients as per default settings), and DE (F = 0.5, CR = 0.9).
All algorithms used a population size of 50 and a maximum of 1000 iterations to ensure fair comparison.
A total of 29 benchmark functions were used in the tests. The behavior of the algorithm was compared with over 30 standard optimization strategies, grouped into six distinct categories (group-A to group-F), including PSO, MFO, GSA, FA, GA, WOA, INFO, HHO, RIME, TSA, and FOX. The evaluation measures included average fitness values, standard deviation of fitness values, convergence rate, and robustness across multiple runs. Wilcoxon rank-sum tests were conducted to confirm statistical significance. The results consistently indicate that SWSO performs competitively or better across all function types, especially in exploration (multimodal functions) and convergence accuracy (unimodal and fixed-dimension problems). On the CEC2019 benchmarks, it also performed well, confirming its suitability for realistic simulation scenarios.
Based on the mentioned different benchmark function types, the results of the SWSO are compared with other SI algorithms, and the comparison results are illustrated in four sections:

5.2.1. The Unimodal Benchmark Test Functions (F1–F7): Exploitation Capability

Based on unimodal benchmark functions, which mainly measure the ability of an algorithm to find the best global solution, the SWSO algorithm compared with two different groups of SI algorithms, group-A which are the following: MFO [77], PSO [78], GSA [79], BA [80], FPA [81], SMS [82], FA [83], and IFOX [84], and group-B which are: WOA [85], FEP [86], IACO [87], VGWO, GWO [88], INFO [89], SCA [90], and RIME [12]. The comparison with group-A and group-B is presented in Table 7, parts a and b, respectively, where the best global minimum solution for each function is shown in bold in the tables.

5.2.2. The Multimodal Benchmark Test Functions (F8–F13): Exploration Capability

To evaluate the SWSO algorithm on multimodal benchmark functions (F8–F13), which are designed to challenge an algorithm’s exploration capabilities by featuring numerous local optima, the SWSO algorithm was compared with two different groups of SI algorithms, group-C which are MFO, PSO, GSA, BA, FPA, SMS, FA, and GA [77] and group-D, which are presented in Ref. [89], MINFO, INFO, SCSO, AVOA, SCA, HHO, GWO, RIME, and ZOA. The comparison between group-C and group-D is presented in Table 8, parts a and b, respectively.

5.2.3. The Fixed-Dimension Benchmark Test Functions (F14–F19): Balanced Optimization

To analyze the SWSO algorithm on the F14–F19 benchmark functions, which concentrate on how well it balances exploration and exploitation, we used group-C, as it had been used in comparable research and made fair comparisons possible. Table 9 presents the results obtained for group-C.

5.2.4. CEC2019 Benchmark Test Functions (CEC01–CEC10): Surrogate Optimization in the Real World [25]

In order to provide a comprehensive assessment of the SWSO algorithm, experimental data is compared with three Swarm Intelligence (SI) opponent groups on the CEC2019 benchmark functions. Table 10, part a, presents the results for the group-C algorithms [89], which were also used for the previous categories. Table 10, part b, shows the results for the group-E algorithms, which consist of IWOA, NRO, BDA, CEBA, IPSO, IAGA, BBOA, FOX, and IFOX [84]. The group-F algorithms, including Hybrid FOX-TSA, FOX, TSA, PSO, GWO, MRSO, and RSO [91,92], are compared in Table 10, part c.

5.3. Result Analysis and Discussion

In this section, we analyze the SWSO’s performance by running it on different benchmark functions. These benchmark functions are designed to measure separate capabilities of the algorithm. To check its performance and consistency, the SWSO algorithm is compared with some popular SI algorithms, and the results tables are illustrated in Appendix A. Table A1a and Table A2a show the average performance ranks of group-A and B optimizers, respectively. The algorithms are ranked according to how well they reduced the objective function for every test, so a lower rank means a better algorithm.
In addition, Table A1b and Table A2b present the standard deviation ranks for the same groups, highlighting how consistently each algorithm performs across multiple runs. Here, a lower rank indicates higher stability and less variation in results. The SWSO algorithm not only maintained its leading position in terms of average performance but also demonstrated excellent consistency, frequently ranking first or very close to it in the standard deviation tables. This dual strength—accuracy and reliability—further reinforces SWSO’s robustness across the unimodal benchmark functions.
Meanwhile, Table A1c and Table A2c serve as a summarized view of both aspects—highlighting which algorithms appeared most frequently in the top five positions for both performance and stability. It condenses the ranking information into a more comparative format, reaffirming SWSO’s superiority by tallying how often it ranked first. Ultimately, across all three tables, SWSO maintains a leading position, offering both accuracy and reliability, which are critical traits in optimization problem solving.

5.3.1. Unimodal Functions (F1–F7)

First, we compared SWSO with group-A, which includes WOA, FEP, ACO, VGWO, INFO, SCA, GWO, and RIME, as seen in Table A1. Here, SWSO dominated, consistently taking the top spot for average performance on six out of seven unimodal benchmark functions (F1, F2, F3, F4, F5, and F7). The only exception was F6, where it fell to fifth place while FEP and INFO took the highest marks. Overall, SWSO achieved the best average rank of 1.57, outperforming the other algorithms in this group. For consistency, measured by standard deviation, SWSO was equally impressive, often ranking first or very close to it across most functions, ending with an excellent average rank of 1.71. INFO was its closest competitor in both average and standard deviation in this set.
Then, we brought in group-B, consisting of MFO, PSO, GSA, BA, FPA, SMS, FA, and IFOX, with their results in Table A2. The behavior was much the same. SWSO ranked best on average for six unimodal functions (F1, F2, F3, F4, F5, and F7), appearing at the top once again. Just as with group-A, F6 was the exception, where SWSO ranked fifth and GSA took the lead. For this group, SWSO's average rank was still 1.57, much better than the other methods. When checking consistency, SWSO led on five of the functions by securing the first position for standard deviation; otherwise, it consistently did well and reached a strong average rank of 1.57. It seems that F6 simply behaves differently from the rest of the unimodal functions, giving other algorithms a chance to shine, but for almost all other unimodal problems, SWSO proves to be exceptionally robust and accurate across both comparison groups.

5.3.2. Multimodal Functions (F8–F13)

To evaluate the SWSO algorithm on multimodal benchmark functions (F8–F13), the SWSO algorithm was compared against two distinct sets of optimization algorithms. The first, group-C, includes MINFO, INFO, SCSO, AVOA, SCA, HHO, GWO, RIME, and ZOA, with their ranks detailed in Table A3. Here, SWSO demonstrated a strong showing, frequently ranking first for average performance on functions like F8 through F11. While it placed a bit lower on F12 and F13, its overall average rank in group-C was an impressive 1.83. For the standard deviation, which represents the consistency of the algorithm, SWSO also performed admirably, often securing the top spot and maintaining an average rank of 2.00, placing it among the most reliable in this group.
Next, we expanded our comparison to group-D, which includes MFO, PSO, GSA, BA, FPA, SMS, FA, and GA, with their performance laid out in Table A4. Against this group, SWSO’s average performance was even more remarkable. It took the first rank on F8, F9, F10, F11, and F12, only slipping to second on F13. This resulted in an impressive overall average rank of just 1.16, making it clearly superior to all others in group-D. In terms of consistency, SWSO again excelled. While it was fourth on F8, it dominated the standard deviation ranks for F9, F10, F11, and F12 and came in second on F13. This resulted in an excellent average standard deviation rank of 1.66 for group-D, strengthening its position as a consistently high performer. Across both rigorous comparisons, SWSO proved to be an exceptionally effective and reliable algorithm for tackling complex multimodal optimization problems.

5.3.3. Fixed Dimension (F14–F19)

To evaluate the SWSO algorithm on fixed-dimension benchmark functions, it was compared against group-D again, with detailed ranks illustrated in Table A5. Starting with the average ranks in Table A5a, which reflect solution quality: on five out of the six functions (F15, F16, F17, F18, and F19), SWSO took the lead. The only function on which it did not finish first was F14, where it came in third and MFO led. Considering all functions together, SWSO's average rank of 1.33 makes it the best among the compared algorithms for solution quality.
Next, when we look at consistency, shown by the standard deviation ranks in Table A5b, the SWSO’s performance was equally remarkable. Just like with average ranks, it clinched the first position for consistency on five out of the six functions (F15, F16, F17, F18, and F19). Again, F14 gave a third-place result, meaning steady performance throughout the runs, even when it was not in the first position. The constant performance put this algorithm at the top of group-D, as shown in Table A5c, with an average standard deviation rank of 1.33. In summary, on these standardized benchmark functions, SWSO gave the highest-quality answers and did so reliably, placing it clearly above the rest in group-D.

5.3.4. The CEC2019 Benchmark Test Functions (CEC01–CEC10)

It is clear from Table A6 that the proposed SWSO approach achieved a remarkable result, with a mean average rank of 1.9, which is clearly better than group-C on all 10 functions, clearly outperforming its peers such as MINFO (2.1), INFO (3.9), and RIME (3.8). Moreover, SWSO ranked first in 8 out of 10 functions, underscoring its strong exploration–exploitation balance and robustness. While its standard deviation rank was slightly higher at 4.3, it still indicates competitive reliability, showing that the algorithm maintains consistent performance across runs. These results firmly establish SWSO as the top-performing algorithm in group-C.
For group-F (Table A8), SWSO also demonstrated strong competitiveness with an average performance rank of 3.6, slightly trailing PSO (2.5) and Hybrid (2.7). Notably, it claimed first place in Cec08 and Cec10, a tied rank in Cec03, and performed consistently well across the rest, showing solid generalization. Its standard deviation rank of 4.5 further confirms its stability, remaining within a tight performance band compared with algorithms such as RSO (5.6) and MRSO (5.6). Overall, SWSO remains among the top tier of group-F competitors.
In group-E (Table A7), although SWSO faced stiffer competition, it still delivered a commendable average rank of 3.9, matching IAGA (3.9) and exceeding IWOA (5.2) and CEBA (7.4). It ranked first on four functions (Cec04, Cec06, Cec09, and Cec10) and maintained a standard deviation rank of 4.3, indicating strong reliability across stochastic runs. Despite being challenged by variants such as IFOX (2.6), NRO (2.6), and IPSO (3.0), SWSO remained a top-five performer on most benchmarks.
In conclusion, SWSO consistently performed among the leading algorithms across all benchmark groups. Its dominance in group-C, competitive edge in group-F, and resilience in group-E demonstrate that it is a well-rounded, effective, and reliable optimization algorithm, excelling in both solution quality and consistency. These findings underscore SWSO's suitability for complex optimization tasks in dynamic and uncertain environments.
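The group placements above all come from the same bookkeeping: average each algorithm's per-function rank and order the means (lower is better). A minimal sketch, with the three rows taken from Table A5a:

```python
def average_ranks(rank_table):
    """Map each algorithm to the mean of its per-function ranks (lower is better)."""
    return {alg: sum(r) / len(r) for alg, r in rank_table.items()}

# Per-function ranks on F14-F19, taken from Table A5a for three of the algorithms.
table = {
    "SWSO": [3, 1, 1, 1, 1, 1],   # third on F14, first on F15-F19 -> 8/6 = 1.33
    "MFO":  [1, 4, 2, 3, 2, 5],
    "FPA":  [4, 2, 3, 4, 3, 3],
}
avg = average_ranks(table)
winner = min(avg, key=avg.get)    # algorithm with the lowest mean rank
```

The same routine, applied to the full rank matrices, reproduces the "Average Rank" rows of Tables A1–A8.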
Table A9 summarizes the results of the Swallow Search Optimization (SWSO) algorithm across the four categories of benchmark functions. It lists the average rank for both average performance and standard deviation, together with the groups of algorithms against which the proposed algorithm was compared. Statistical validation is a crucial part of performance comparison: in this study, the Wilcoxon signed-rank test was conducted at a 5% significance level (p < 0.05) to assess the robustness of the proposed SWSO algorithm against the other algorithms across all benchmark problems.
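For readers who want to reproduce the significance check, the sketch below implements the paired Wilcoxon signed-rank test with the usual large-sample normal approximation (SciPy's `scipy.stats.wilcoxon` provides an equivalent, more complete routine). The two fitness samples are invented for illustration, not taken from the paper's runs:

```python
import math

def wilcoxon_signed_rank(a, b):
    """Paired two-sided Wilcoxon signed-rank test (normal approximation).
    Returns (T, p). Zero differences are dropped; tied |d| get average ranks."""
    d = [x - y for x, y in zip(a, b) if x != y]
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                                   # average ranks over ties in |d|
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1.0    # mean of 1-based ranks i+1..j+1
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    t = min(w_plus, n * (n + 1) / 2 - w_plus)      # smaller signed-rank sum
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (t - mu) / sigma                           # z <= 0 by construction
    p = 2.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return t, min(p, 1.0)

# Hypothetical best-fitness values over 10 runs for two algorithms.
swso = [0.12, 0.10, 0.15, 0.11, 0.09, 0.14, 0.13, 0.10, 0.12, 0.11]
rival = [x + 1.0 for x in swso]                    # uniformly worse by construction
t_stat, p_value = wilcoxon_signed_rank(swso, rival)
significant = p_value < 0.05
```

Because one sample dominates the other in every run here, the statistic collapses to T = 0 and the test reports significance at the 5% level.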
SWSO employs adaptive, energy-based leader selection together with V-formation coordination, which avoids unnecessary evaluations and expedites convergence, considerably reducing the number of computations. Its time complexity is O(N × D × T), where N is the number of agents, D the dimensionality, and T the number of iterations; this matches PSO, but SWSO is more efficient in practice owing to its optimized exploration/exploitation trade-off. Memory overhead remains low because velocities are reused and only the position vector, velocity, best score, and remaining energy of each agent need to be held in memory.
Compared with algorithms such as GSA or GA, which perform more memory-intensive tracking of population diversity or crossover operations, SWSO applies simple social and cognitive movement rules that require fewer memory writes per iteration. All simulations were run on relatively inexpensive hardware (Intel i5-1135G7, 16 GB RAM) without GPU acceleration.
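As a concrete illustration of this cost structure, the sketch below shows the per-agent state (position, velocity, best score, remaining energy) and where the N × D work per iteration arises. It is a schematic stand-in, not the published update equations: the leader rule, inertia schedule, and energy-decay constants here are assumptions for illustration only.

```python
import random

def sphere(x):
    """Simple unimodal test objective (F1-style)."""
    return sum(v * v for v in x)

def swso_sketch(obj, dim, n_agents=20, iters=150, lo=-5.0, hi=5.0, seed=3):
    """Illustrative skeleton of an SWSO-style run.
    Per iteration: N objective evaluations (O(D) each) plus N*D vector
    updates, giving the O(N x D x T) total discussed in the text."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    vel = [[0.0] * dim for _ in range(n_agents)]      # reused across iterations
    energy = [1.0] * n_agents                         # per-agent energy reserve
    best_x, best_f = None, float("inf")
    for t in range(iters):                            # T iterations
        fit = [obj(p) for p in pos]                   # N evaluations
        i_best = min(range(n_agents), key=fit.__getitem__)
        if fit[i_best] < best_f:
            best_f, best_x = fit[i_best], list(pos[i_best])
        # Energy-weighted leader choice (stand-in for the paper's mechanism).
        leader = min(range(n_agents), key=lambda i: fit[i] - 0.1 * energy[i])
        w = 0.9 - 0.5 * t / iters                     # decaying inertia (assumed)
        for i in range(n_agents):                     # N agents ...
            step = 0.0
            for d in range(dim):                      # ... times D dimensions
                vel[i][d] = w * vel[i][d] + rng.random() * (pos[leader][d] - pos[i][d])
                pos[i][d] += vel[i][d]
                step += abs(vel[i][d])
            energy[i] = max(0.1, energy[i] - 0.001 * step)  # flying costs energy
    return best_x, best_f

best_x, best_f = swso_sketch(sphere, dim=5)
```

The only per-agent storage is the four small arrays above, which is why the memory footprint stays modest relative to diversity-tracking methods.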
Across all tested benchmarks, SWSO consistently achieved the best or near-best average and standard deviation values (Table 7, Table 8, Table 9 and Table 10 in Section 5.2). These findings substantiate its faster convergence, realized early on the unimodal (F1–F7) and multimodal (F8–F13) functions, and its ability to escape local minima more reliably than MFO, PSO, and other swarm algorithms. This behavior results from the dynamic alternation of exploration and exploitation through adaptive inertia and energy-sensitive leadership cycles.
The algorithm outperformed other common methods across the benchmark test functions. In the unimodal, multimodal, and fixed-dimension categories, SWSO ranks first, ahead of all other algorithms. In the more challenging and diverse CEC2019 benchmark, SWSO achieved first place in group-C, third place in group-E, and fourth place in group-F. Even though SWSO ranked slightly lower on CEC2019, its outstanding performance in the first three categories makes it the best among all compared algorithms. The outcomes show that SWSO is suitable for tackling many different optimization challenges.

6. Case Studies (Engineering Optimization Problems)

Any optimization algorithm should be tested on standard engineering problems to measure its robustness. Such problems are complex, multimodal, and constrained, which makes them a rigorous complement to benchmark functions [93]. In this paper, the SWSO algorithm is applied to two structural engineering design tasks, the welded beam design and the pressure vessel design problems, and its performance is evaluated against recent results reported in the literature for metaheuristic and other state-of-the-art methods.

6.1. The Welded Beam Design Problem

The welded beam design problem, shown in Figure 6, is a structural engineering benchmark focused on minimizing the total fabrication cost, which comprises the cost of the weld material ($C_1 = 0.10471$), the cost of the beam material ($C_2 = 0.04811$), and the welding labor ($C_3 = 1$) [93,94]. The design variables are the thickness of the weld ($h = x_1$), the length of the weld ($l = x_2$), the height of the beam ($t = x_3$), and the width of the beam ($b = x_4$). The design is subject to structural and geometric constraints on the shear stress in the weld ($\tau$), the bending stress in the beam ($\sigma$), the buckling load capacity of the beam ($P_c$), and the maximum deflection at the unsupported end ($\delta$). These constraints are given in Equations (15)–(21), as presented in Ref. [94].
$g_1(x) = \tau(x) - \tau_{\max} \le 0$
$g_2(x) = \sigma(x) - \sigma_{\max} \le 0$
$g_3(x) = x_1 - x_4 \le 0$
$g_4(x) = 0.10471\,x_1^2 + 0.04811\,x_3 x_4 (14 + x_2) - 5 \le 0$
$g_5(x) = 0.125 - x_1 \le 0$
$g_6(x) = \delta(x) - \delta_{\max} \le 0$
$g_7(x) = P - P_c(x) \le 0$
where,
$\tau(x) = \sqrt{(\tau')^2 + 2\,\tau'\tau''\,\frac{x_2}{2R} + (\tau'')^2}$
$\tau' = \frac{P}{\sqrt{2}\,x_1 x_2}$
$\tau'' = \frac{M R}{J}$
$M = P\left(L + \frac{x_2}{2}\right)$
$R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2}$
$J = 2\left\{\sqrt{2}\,x_1 x_2\left[\frac{x_2^2}{12} + \left(\frac{x_1 + x_3}{2}\right)^2\right]\right\}$
$\sigma(x) = \frac{6 P L}{x_4 x_3^2}$
$\delta(x) = \frac{6 P L^3}{E\,x_3^2 x_4}$
$P_c(x) = \frac{4.013\,E\,\sqrt{x_3^2 x_4^6 / 36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)$
The problem has the following constant parameters [76]: applied load ($P = 6000$ lb), beam length ($L = 14$ in), maximum deflection ($\delta_{\max} = 0.25$ in), Young's modulus ($E = 30 \times 10^6$ psi), shear modulus ($G = 12 \times 10^6$ psi), maximum shear stress ($\tau_{\max} = 13{,}600$ psi), and maximum normal stress ($\sigma_{\max} = 30{,}000$ psi).
The design variables have the ranges $0.1 \le x_1, x_4 \le 2$ and $0.1 \le x_2, x_3 \le 10$.
The goal is to find the values of $[x_1, x_2, x_3, x_4]$ that minimize the objective function given in Equation (31):
$f(x) = 1.10471\,x_1^2 x_2 + 0.04811\,x_3 x_4 (14 + x_2)$
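To make the formulation concrete, the following sketch evaluates a candidate design against Equations (15)–(21) and (31). It follows the standard textbook expressions of Ref. [94] (the helper names are ours), so treat it as an illustration of the evaluation step rather than the exact code behind Table 11. At a design frequently reported in the literature, (0.2057, 3.4705, 9.0366, 0.2057), the cost evaluates to about 1.7246.

```python
import math

# Problem constants (Section 6.1).
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam_cost(x):
    """Objective of Equation (31)."""
    x1, x2, x3, x4 = x
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def welded_beam_constraints(x):
    """Return g1..g7; a design is feasible when every entry is <= 0."""
    x1, x2, x3, x4 = x
    tau_p = P / (math.sqrt(2.0) * x1 * x2)                       # primary shear
    M = P * (L + x2 / 2.0)                                       # bending moment
    R = math.sqrt(x2**2 / 4.0 + ((x1 + x3) / 2.0) ** 2)
    J = 2.0 * math.sqrt(2.0) * x1 * x2 * (x2**2 / 12.0 + ((x1 + x3) / 2.0) ** 2)
    tau_pp = M * R / J                                           # secondary shear
    tau = math.sqrt(tau_p**2 + tau_p * tau_pp * x2 / R + tau_pp**2)
    sigma = 6.0 * P * L / (x4 * x3**2)                           # bending stress
    delta = 6.0 * P * L**3 / (E * x3**2 * x4)                    # end deflection
    p_c = (4.013 * E * math.sqrt(x3**2 * x4**6 / 36.0) / L**2
           * (1.0 - x3 / (2.0 * L) * math.sqrt(E / (4.0 * G)))) # buckling load
    return [tau - TAU_MAX, sigma - SIGMA_MAX, x1 - x4,
            0.10471 * x1**2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0,
            0.125 - x1, delta - DELTA_MAX, P - p_c]

cost = welded_beam_cost([0.2057, 3.4705, 9.0366, 0.2057])
```

In an SWSO run, infeasible candidates (any positive constraint value) are handled by the algorithm's constraint mechanism; only the evaluation step is shown here.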
In Ref. [95], a hybrid BES–GO algorithm surpassed more than ten recent methods, including PSO, GWO, and ALO, in obtaining the optimal solution. Ref. [12] proposed RIME, a physics-based optimizer that achieved better results than six commonly used algorithms such as GSA, GWO, and SSA. As reported in Ref. [96], MSSSA handled the problem's intricate constraints with high stability and accuracy. More recently, SMR was applied to this problem in Ref. [97], where it compared favorably with PSO, ABC, DE, GOA, and WOA on constrained optimization problems. These modern metaheuristic developments confirm that near-optimal solutions to the welded beam design problem can be found.
Although numerous metaheuristic methods have been applied to this problem in recent years, this paper shows that the SWSO algorithm solves this engineering design problem effectively in comparison with them.
The simulation results for the Welded Beam Design (WBD) problem are shown in Table 11. The table indicates that SWSO attains the lowest cost, 1.5878, the smallest optimized value among the compared algorithms [95,96].

6.2. Pressure Vessel Design Problem

The pressure vessel design problem is illustrated in Figure 7, and the four design variables are listed in Table 12.
The objective is to find the values of $[x_1, x_2, x_3, x_4]$ that minimize the total cost of material, forming, and welding of the pressure vessel, given by Equation (32), while satisfying the four production and safety constraints in Equations (33)–(36).
Minimize $f(X) = 0.6224\,T_s R L + 1.7781\,T_h R^2 + 3.1661\,T_s^2 L + 19.84\,T_h^2 L$
$g_1(x) = 0.0193 R - T_s \le 0$
$g_2(x) = 0.0095 R - T_h \le 0$
$g_3(x) = 1{,}296{,}000 - \pi R^2 L - \frac{4}{3}\pi R^3 \le 0$
$g_4(x) = L - 240 \le 0$
For the design variables, several studies [12,96,112] adopted the ranges $0 \le x_1, x_2 \le 99$ and $10 \le x_3, x_4 \le 200$. Other sources instead applied $0.0625 \le x_1, x_2 \le 6.1875$ and $10 \le x_3, x_4 \le 200$, obtained by scaling the discrete variables $T_s$ and $T_h$ by 0.0625 to convert them into real dimensions [113,114,115]. We applied both ranges to the proposed SWSO algorithm, referring to the first as case-A and the second as case-B. Table 13 shows the simulation results of the Pressure Vessel Design (PVD) problem for both cases. The table indicates that SWSO attains the lowest cost, USD 5754.7738 for case-A and USD 5770.3503 for case-B, the smallest optimized values among the compared algorithms [95,96]. The objective function value (cost) is given in USD, as defined originally in Refs. [116,117].
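A direct transcription of the cost and constraints above (with $T_s = x_1$, $T_h = x_2$, $R = x_3$, $L = x_4$) shows how each candidate vessel is scored. The last cost term follows Equation (32) as printed here; some references instead use $19.84\,T_s^2 R$. This is a sketch of the evaluation step only, not the full SWSO run:

```python
import math

def pressure_vessel_cost(x):
    """Equation (32): material, forming, and welding cost in USD."""
    ts, th, r, l = x   # shell thickness, head thickness, inner radius, length
    return (0.6224 * ts * r * l + 1.7781 * th * r**2
            + 3.1661 * ts**2 * l + 19.84 * th**2 * l)

def pressure_vessel_constraints(x):
    """Equations (33)-(36); feasible when every entry is <= 0."""
    ts, th, r, l = x
    return [0.0193 * r - ts,                                          # g1: shell thickness
            0.0095 * r - th,                                          # g2: head thickness
            1296000.0 - math.pi * r**2 * l - (4.0 / 3.0) * math.pi * r**3,  # g3: volume
            l - 240.0]                                                # g4: length limit

design = (1.0, 0.5, 50.0, 100.0)      # an arbitrary feasible example, not an optimum
cost = pressure_vessel_cost(design)
gs = pressure_vessel_constraints(design)
```

The example design is chosen only to exercise the functions; the optimized costs in Table 13 are far lower.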

7. Conclusions and Future Work

Although the algorithm has performed very well on many different test functions and in real-world engineering applications, some limitations should be considered when using it in practice. Specifically, the algorithm could face scalability issues in high-dimensional or real-time applications, since such computational costs are characteristic of swarm intelligence methods. Moreover, SWSO's performance depends on several control parameters (e.g., energy threshold, inertia weight, and learning coefficients), whose values must be chosen with care in different problem domains. The leader-switching mechanism, while an elegant way to increase dynamic adaptability, introduces sensitivity when improperly parameterized. The repulsion-based constraint-handling method can also be less efficient for highly constrained problems or problems that are not spatial in character. Because the algorithm assumes a continuous search space, it would need to be adapted for discrete or combinatorial problems. Finally, although the empirical results are strong, a formal convergence analysis has not yet been established and is left for future work.
Studies of swallow flight offer valuable knowledge for designing modern drone technology and air vehicles. By implementing swallow-inspired principles, such as agility, aerodynamic design, swarm functionality, wing flexibility, and energy efficiency, engineers can create drones with improved capability across various operational domains. The presented Swallow Search Optimization (SWSO) framework does not include obstacle avoidance capabilities, so future work aims to add an obstacle detection mechanism. This research will draw on the practical challenges drone pilots encounter when routing around dynamic and static obstacles in actual operations; incorporating biological avoidance strategies into the algorithm is intended to bridge simulation and real-world drone operation. Overall, the proposed SWSO algorithm demonstrated strong numerical performance across diverse benchmark categories. Specifically, it achieved an average rank of 1.57 on unimodal functions and 1.49 on multimodal functions, taking first place in both categories. On fixed-dimension multimodal benchmarks, SWSO improved further to an average rank of 1.33, again leading all compared algorithms. Even on the demanding CEC2019 composite problems, SWSO remained highly competitive, with average ranks of 3.1 (performance) and 4.36 (stability), consistently placing in the top tier of algorithms. These results, combined with the robustness and reliability demonstrated across multiple runs, underscore the effectiveness of SWSO as a versatile swarm intelligence method for global optimization.

Author Contributions

Conceptualization, S.W.K.; Methodology, S.W.K. and R.S.H.; Software, F.S.K.; Validation, F.S.K.; Resources, R.S.H.; Data curation, F.S.K.; Writing—original draft, F.S.K.; Writing—review & editing, S.W.K. and R.S.H.; Supervision, S.W.K.; Project administration, R.S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available in Kaggle at https://www.kaggle.com/code/kooaslansefat/cec-2022-benchmark, accessed on 19 August 2024.

Acknowledgments

We gratefully acknowledge the Cornell Lab of Ornithology | Macaulay Library for providing the media asset ML58330191.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Appendix A.1. Result Analysis and Discussion

Appendix A.1.1. Unimodal Functions (F1–F7)

Table A1. (a): A comparison between the average ranks of group-A optimization algorithms on unimodal benchmark functions. (b): A comparison between the standard deviation ranks of group-A optimization algorithms on unimodal benchmark functions. (c): Comparison of group-A optimization algorithms on unimodal benchmark functions (Average and Standard Deviation Ranks).
(a)
Test Function | SWSO | WOA | FEP | IACO | VGWO | INFO | SCA | GWO | RIME
F1 | 1 | 3 | 5 | 7 | 7 | 2 | 8 | 4 | 6
F2 | 1 | 3 | 6 | 5 | 7 | 2 | 8 | 4 | 9
F3 | 1 | 3 | 7 | 6 | 5 | 2 | 9 | 4 | 8
F4 | 1 | 4 | 5 | 8 | 9 | 2 | 7 | 3 | 6
F5 | 1 | 5 | 2 | 7 | 3 | 4 | 9 | 6 | 8
F6 | 5 | 7 | 1 | 3 | 4 | 2 | 9 | 6 | 8
F7 | 1 | 2 | 8 | 6 | 5 | 3 | 9 | 4 | 7
Average Rank | 1.57 | 3.85 | 4.85 | 6.00 | 5.71 | 2.42 | 8.42 | 4.42 | 6.28
(b)
Test Function | SWSO | WOA | FEP | IACO | VGWO | INFO | SCA | GWO | RIME
F1 | 1 | 3 | 5 | 7 | 6 | 2 | 9 | 4 | 8
F2 | 1 | 3 | 5 | 7 | 6 | 2 | 8 | 4 | 9
F3 | 1 | 3 | 5 | 7 | 6 | 2 | 9 | 4 | 8
F4 | 1 | 4 | 5 | 8 | 9 | 2 | 7 | 3 | 6
F5 | 4 | 2 | 5 | 7 | 6 | 1 | 9 | 3 | 8
F6 | 3 | 4 | 1 | 5 | 7 | 2 | 8 | 3 | 6
F7 | 1 | 3 | 8 | 7 | 6 | 2 | 9 | 4 | 5
Average Rank | 1.71 | 3.14 | 4.85 | 6.85 | 6.57 | 1.85 | 8.42 | 3.57 | 7.14
(c)
Average ranks (subtotal: 11)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
F1 | SWSO | INFO | WOA | GWO | FEP | 1
F2 | SWSO | INFO | WOA | GWO | IACO | 1
F3 | SWSO | INFO | WOA | GWO | VGWO | 1
F4 | SWSO | INFO | GWO | WOA | FEP | 1
F5 | SWSO | FEP | VGWO | INFO | WOA | 1
F6 | FEP | INFO | IACO | VGWO | SWSO | 5
F7 | SWSO | WOA | INFO | GWO | VGWO | 1
Standard deviation ranks (subtotal: 12)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
F1 | SWSO | INFO | WOA | GWO | FEP | 1
F2 | SWSO | INFO | WOA | GWO | FEP | 1
F3 | SWSO | INFO | WOA | GWO | FEP | 1
F4 | SWSO | INFO | GWO | WOA | FEP | 1
F5 | INFO | WOA | GWO | SWSO | FEP | 4
F6 | FEP | INFO | SWSO | WOA | IACO | 3
F7 | SWSO | INFO | WOA | GWO | RIME | 1
  • Average SWSO rank for average performance (F1–F7): 11/7 = 1.57
  • Average SWSO rank for standard deviation (F1–F7): 12/7 = 1.71
Table A2. (a): A comparison between the average ranks of group-B optimization algorithms on unimodal benchmark functions. (b): A comparison between the standard deviation ranks of group-B optimization algorithms on unimodal benchmark functions. (c): Comparison of group-B optimization algorithms on unimodal benchmark functions (Average and Standard Deviation Ranks).
(a)
Test Function | SWSO | MFO | PSO | GSA | BA | FPA | SMS | FA | IFOX
F1 | 1 | 3 | 4 | 2 | 9 | 7 | 6 | 8 | 5
F2 | 1 | 2 | 5 | 9 | 7 | 8 | 3 | 7 | 4
F3 | 1 | 5 | 3 | 6 | 9 | 4 | 8 | 7 | 2
F4 | 1 | 8 | 2 | 3 | 8 | 5 | 6 | 4 | 9
F5 | 1 | 5 | 4 | 3 | 7 | 6 | 9 | 8 | 5
F6 | 5 | 3 | 2 | 1 | 9 | 6 | 8 | 7 | 4
F7 | 1 | 5 | 6 | 4 | 9 | 7 | 3 | 8 | 2
Average Rank | 1.57 | 4.42 | 3.71 | 3.57 | 8.14 | 5.85 | 6.28 | 7.28 | 4.14
(b)
Test Function | SWSO | MFO | PSO | GSA | BA | FPA | SMS | FA | IFOX
F1 | 1 | 3 | 4 | 2 | 9 | 7 | 6 | 8 | 5
F2 | 1 | 2 | 5 | 6 | 9 | 8 | 3 | 7 | 4
F3 | 2 | 6 | 4 | 7 | 9 | 5 | 1 | 8 | 3
F4 | 1 | 7 | 2 | 3 | 8 | 5 | 6 | 4 | 9
F5 | 1 | 4 | 2 | 3 | 9 | 6 | 7 | 8 | 5
F6 | 4 | 3 | 2 | 1 | 9 | 6 | 8 | 7 | 5
F7 | 1 | 5 | 4 | 3 | 9 | 7 | 2 | 8 | 6
Average Rank | 1.57 | 4.28 | 3.28 | 3.57 | 8.85 | 6.28 | 4.71 | 7.14 | 5.28
(c)
Average ranks (subtotal: 11)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
F1 | SWSO | GSA | MFO | PSO | SMS | 1
F2 | SWSO | MFO | SMS | IFOX | PSO | 1
F3 | SWSO | IFOX | PSO | FPA | MFO | 1
F4 | SWSO | PSO | GSA | FPA | FA | 1
F5 | SWSO | IFOX | GSA | PSO | MFO | 1
F6 | GSA | PSO | MFO | IFOX | SWSO | 5
F7 | SWSO | IFOX | SMS | GSA | MFO | 1
Standard deviation ranks (subtotal: 11)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
F1 | SWSO | GSA | MFO | PSO | IFOX | 1
F2 | SWSO | MFO | SMS | IFOX | PSO | 1
F3 | SMS | SWSO | IFOX | PSO | FPA | 2
F4 | SWSO | PSO | GSA | FA | FPA | 1
F5 | SWSO | PSO | GSA | MFO | IFOX | 1
F6 | GSA | PSO | MFO | SWSO | IFOX | 4
F7 | SWSO | SMS | GSA | PSO | MFO | 1
  • Average SWSO rank for average performance (F1–F7): 11/7 = 1.57
  • Average SWSO rank for standard deviation (F1–F7): 11/7 = 1.57

Appendix A.1.2. Multimodal Functions (F8–F13)

Table A3. (a): A comparison between the average ranks of group-C optimization algorithms on multimodal benchmark functions. (b): A comparison between the standard deviation ranks of group-C optimization algorithms on multimodal benchmark functions. (c): Comparison of group-C optimization algorithms on multimodal benchmark functions (Average and Standard Deviation Ranks).
(a)
Test Function | SWSO | MINFO | INFO | SCSO | AVOA | SCA | HHO | GWO | RIME | ZOA
F8 | 1 | 9 | 6 | 4 | 10 | 2 | 8 | 3 | 7 | 5
F9 | 1 | 1 | 1 | 1 | 1 | 4 | 1 | 2 | 3 | 1
F10 | 1 | 1 | 1 | 1 | 1 | 4 | 1 | 2 | 3 | 1
F11 | 1 | 1 | 1 | 1 | 1 | 4 | 1 | 2 | 3 | 1
F12 | 4 | 2 | 5 | 7 | 1 | 10 | 3 | 6 | 9 | 8
F13 | 3 | 4 | 5 | 9 | 1 | 10 | 2 | 7 | 6 | 8
Average Rank | 1.83 | 3.00 | 3.16 | 3.83 | 2.50 | 5.66 | 2.66 | 3.66 | 5.16 | 4.00
(b)
Test Function | SWSO | MINFO | INFO | SCSO | AVOA | SCA | HHO | GWO | RIME | ZOA
F8 | 2 | 1 | 9 | 10 | 7 | 3 | 4 | 8 | 5 | 6
F9 | 1 | 1 | 1 | 1 | 1 | 4 | 1 | 2 | 3 | 1
F10 | 1 | 1 | 1 | 1 | 1 | 4 | 1 | 2 | 3 | 1
F11 | 1 | 1 | 1 | 1 | 1 | 4 | 1 | 2 | 3 | 1
F12 | 4 | 2 | 5 | 7 | 1 | 10 | 3 | 6 | 9 | 8
F13 | 3 | 5 | 4 | 8 | 1 | 10 | 2 | 7 | 6 | 9
Average Rank | 2.00 | 1.83 | 3.50 | 4.66 | 2.00 | 5.83 | 2.00 | 4.50 | 4.83 | 4.33
(c)
Average ranks (subtotal: 11)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
F8 | SWSO | SCA | GWO | SCSO | ZOA | 1
F9 | SWSO | GWO | RIME | SCA | - | 1
F10 | SWSO | GWO | RIME | SCA | - | 1
F11 | SWSO | GWO | RIME | SCA | - | 1
F12 | AVOA | MINFO | HHO | SWSO | INFO | 4
F13 | AVOA | HHO | SWSO | MINFO | INFO | 3
Standard deviation ranks (subtotal: 12)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
F8 | MINFO | SWSO | SCA | HHO | RIME | 2
F9 | SWSO | GWO | RIME | SCA | - | 1
F10 | SWSO | GWO | RIME | SCA | - | 1
F11 | SWSO | GWO | RIME | SCA | - | 1
F12 | AVOA | MINFO | HHO | SWSO | INFO | 4
F13 | AVOA | HHO | SWSO | INFO | MINFO | 3
  • Average SWSO rank for average performance (F8–F13): 11/6 = 1.83
  • Average SWSO rank for standard deviation (F8–F13): 12/6 = 2.00
Table A4. (a): A comparison between the average ranks of group-D optimization algorithms on multimodal benchmark functions. (b): A comparison between the standard deviation ranks of group-D optimization algorithms on multimodal benchmark functions. (c): Comparison of group-D optimization algorithms on multimodal benchmark functions (Average and Standard Deviation Ranks).
(a)
Test Function | SWSO | MFO | PSO | GSA | BA | FPA | SMS | FA | GA
F8 | 1 | 8 | 3 | 2 | 9 | 7 | 5 | 4 | 6
F9 | 1 | 3 | 6 | 2 | 5 | 4 | 7 | 8 | 9
F10 | 1 | 2 | 5 | 3 | 7 | 4 | 9 | 6 | 8
F11 | 1 | 2 | 5 | 3 | 8 | 4 | 9 | 6 | 7
F12 | 1 | 3 | 5 | 2 | 8 | 4 | 7 | 6 | 9
F13 | 2 | 1 | 5 | 3 | 9 | 4 | 7 | 6 | 8
Average Rank | 1.16 | 3.16 | 4.83 | 2.5 | 7.66 | 4.5 | 7.33 | 6.0 | 7.83
(b)
Test Function | SWSO | MFO | PSO | GSA | BA | FPA | SMS | FA | GA
F8 | 4 | 9 | 8 | 6 | 1 | 2 | 7 | 3 | 5
F9 | 1 | 5 | 4 | 2 | 9 | 3 | 7 | 6 | 8
F10 | 1 | 6 | 9 | 2 | 7 | 8 | 3 | 4 | 5
F11 | 1 | 2 | 5 | 3 | 9 | 4 | 7 | 6 | 8
F12 | 1 | 3 | 5 | 2 | 9 | 4 | 7 | 6 | 8
F13 | 2 | 3 | 6 | 4 | 9 | 5 | 1 | 7 | 8
Average Rank | 1.66 | 4.66 | 6.16 | 3.16 | 7.33 | 4.33 | 5.33 | 5.33 | 7.00
(c)
Average ranks (subtotal: 7)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
F8 | SWSO | GSA | PSO | FA | SMS | 1
F9 | SWSO | GSA | MFO | FPA | BA | 1
F10 | SWSO | MFO | GSA | FPA | PSO | 1
F11 | SWSO | MFO | GSA | FPA | PSO | 1
F12 | SWSO | GSA | MFO | FPA | PSO | 1
F13 | MFO | SWSO | GSA | FPA | PSO | 2
Standard deviation ranks (subtotal: 10)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
F8 | BA | FPA | FA | SWSO | GA | 4
F9 | SWSO | GSA | FPA | PSO | MFO | 1
F10 | SWSO | GSA | SMS | FA | GA | 1
F11 | SWSO | MFO | GSA | FPA | PSO | 1
F12 | SWSO | GSA | MFO | FPA | PSO | 1
F13 | SMS | SWSO | MFO | GSA | FPA | 2
  • Average SWSO rank for average performance (F8–F13): 7/6 = 1.16
  • Average SWSO rank for standard deviation (F8–F13): 10/6 = 1.66

Appendix A.1.3. Fixed Dimension (F14–F19)

Table A5. (a): A comparison between the average ranks of group-D optimization algorithms on fixed dimension benchmark functions. (b): A comparison between the standard deviation ranks of group-D optimization algorithms on fixed dimension benchmark functions. (c): Comparison of group-D optimization algorithms on fixed dimension benchmark functions (Average and Standard Deviation Ranks).
(a)
Test Function | SWSO | MFO | PSO | GSA | BA | FPA | SMS | FA | GA
F14 | 3 | 1 | 8 | 2 | 7 | 4 | 6 | 9 | 5
F15 | 1 | 4 | 7 | 3 | 9 | 2 | 6 | 8 | 5
F16 | 1 | 2 | 7 | 4 | 9 | 3 | 8 | 5 | 6
F17 | 1 | 3 | 6 | 2 | 9 | 4 | 7 | 8 | 5
F18 | 1 | 2 | 8 | 4 | 9 | 3 | 6 | 7 | 5
F19 | 1 | 5 | 6 | 8 | 9 | 3 | 4 | 7 | 2
Average Rank | 1.33 | 2.83 | 7.00 | 3.83 | 8.66 | 3.16 | 6.16 | 7.33 | 4.66
(b)
Test Function | SWSO | MFO | PSO | GSA | BA | FPA | SMS | FA | GA
F14 | 3 | 1 | 8 | 2 | 9 | 6 | 4 | 7 | 5
F15 | 1 | 4 | 7 | 3 | 9 | 2 | 6 | 8 | 5
F16 | 1 | 2 | 8 | 6 | 9 | 4 | 7 | 3 | 5
F17 | 1 | 5 | 6 | 7 | 8 | 2 | 4 | 9 | 3
F18 | 1 | 2 | 9 | 4 | 8 | 3 | 7 | 6 | 5
F19 | 1 | 9 | 8 | 3 | 4 | 5 | 6 | 7 | 2
Average Rank | 1.33 | 3.83 | 7.66 | 4.16 | 7.83 | 3.66 | 5.66 | 6.66 | 4.16
(c)
Average ranks (subtotal: 8)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
F14 | MFO | GSA | SWSO | FPA | GA | 3
F15 | SWSO | FPA | GSA | MFO | GA | 1
F16 | SWSO | MFO | FPA | GSA | FA | 1
F17 | SWSO | GSA | MFO | FPA | GA | 1
F18 | SWSO | MFO | FPA | GSA | GA | 1
F19 | SWSO | GA | FPA | SMS | MFO | 1
Standard deviation ranks (subtotal: 8)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
F14 | MFO | GSA | SWSO | SMS | GA | 3
F15 | SWSO | FPA | GSA | MFO | GA | 1
F16 | SWSO | MFO | FA | FPA | GA | 1
F17 | SWSO | FPA | GA | SMS | MFO | 1
F18 | SWSO | MFO | FPA | GSA | GA | 1
F19 | SWSO | GA | GSA | BA | FPA | 1
  • Average SWSO rank for average performance (F14–F19): 8/6 = 1.33
  • Average SWSO rank for standard deviation (F14–F19): 8/6 = 1.33

Appendix A.1.4. The CEC2019 Benchmark Test Functions (CEC01–CEC10)

Table A6. (a): A comparison between the average ranks of group-C optimization algorithms on CEC2019 benchmark functions. (b): A comparison between the standard deviation ranks of group-C optimization algorithms on CEC2019 benchmark functions. (c): Comparison of group-C optimization algorithms on CEC2019 benchmark functions (Average and Standard Deviation Ranks).
(a)
Test Function | SWSO | MINFO | INFO | SCSO | AVOA | SCA | HHO | GWO | RIME | ZOA
Cec01 | 8 | 1 | 5 | 3 | 4 | 10 | 2 | 7 | 9 | 6
Cec02 | 2 | 1 | 1 | 3 | 1 | 4 | 2 | 3 | 5 | 4
Cec03 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Cec04 | 1 | 4 | 5 | 6 | 7 | 9 | 10 | 3 | 2 | 8
Cec05 | 1 | 2 | 3 | 7 | 6 | 8 | 10 | 5 | 4 | 9
Cec06 | 2 | 1 | 6 | 6 | 3 | 8 | 7 | 9 | 5 | 4
Cec07 | 1 | 3 | 5 | 9 | 8 | 10 | 6 | 7 | 4 | 2
Cec08 | 1 | 3 | 5 | 6 | 8 | 7 | 10 | 9 | 4 | 2
Cec09 | 1 | 3 | 4 | 8 | 5 | 10 | 6 | 7 | 2 | 9
Cec10 | 1 | 2 | 4 | 4 | 2 | 5 | 5 | 5 | 2 | 3
Average Rank | 1.9 | 2.1 | 3.9 | 5.3 | 4.5 | 7.2 | 5.9 | 5.6 | 3.8 | 4.8
(b)
Test Function | SWSO | MINFO | INFO | SCSO | AVOA | SCA | HHO | GWO | RIME | ZOA
Cec01 | 8 | 1 | 6 | 3 | 2 | 10 | 5 | 7 | 9 | 4
Cec02 | 4 | 1 | 2 | 6 | 3 | 8 | 5 | 7 | 10 | 9
Cec03 | 4 | 1 | 2 | 6 | 3 | 10 | 7 | 8 | 5 | 9
Cec04 | 1 | 4 | 5 | 10 | 8 | 9 | 7 | 3 | 2 | 6
Cec05 | 1 | 9 | 10 | 5 | 6 | 3 | 8 | 4 | 2 | 7
Cec06 | 1 | 8 | 10 | 7 | 9 | 3 | 5 | 2 | 6 | 4
Cec07 | 6 | 2 | 8 | 1 | 10 | 9 | 4 | 7 | 5 | 3
Cec08 | 7 | 9 | 8 | 4 | 6 | 2 | 3 | 10 | 5 | 1
Cec09 | 1 | 3 | 4 | 8 | 6 | 9 | 5 | 7 | 2 | 10
Cec10 | 10 | 9 | 2 | 1 | 4 | 6 | 3 | 8 | 5 | 7
Average Rank | 4.3 | 4.7 | 5.7 | 5.1 | 5.9 | 6.9 | 5.2 | 6.3 | 5.1 | 6.0
(c)
Average ranks (subtotal: 19)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
Cec01 | MINFO | HHO | SCSO | AVOA | INFO | 8
Cec02 | MINFO, INFO, AVOA | SWSO, HHO | SCSO, GWO | SCA, ZOA | RIME | 2
Cec03 | SWSO, MINFO, INFO, SCSO, AVOA, SCA, HHO, GWO, RIME, ZOA | - | - | - | - | 1
Cec04 | SWSO | RIME | GWO | MINFO | INFO | 1
Cec05 | SWSO | MINFO | INFO | RIME | GWO | 1
Cec06 | MINFO | SWSO | AVOA | ZOA | RIME | 2
Cec07 | SWSO | ZOA | MINFO | RIME | INFO | 1
Cec08 | SWSO | ZOA | MINFO | RIME | INFO | 1
Cec09 | SWSO | RIME | MINFO | INFO | AVOA | 1
Cec10 | SWSO | MINFO, AVOA, RIME | ZOA | INFO, SCSO | SCA, HHO, GWO | 1
Standard deviation ranks (subtotal: 43)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
Cec01 | MINFO | AVOA | SCSO | ZOA | HHO | 8
Cec02 | MINFO | INFO | AVOA | SWSO | HHO | 4
Cec03 | MINFO | INFO | AVOA | SWSO | RIME | 4
Cec04 | SWSO | RIME | GWO | MINFO | INFO | 1
Cec05 | SWSO | RIME | SCA | GWO | SCSO | 1
Cec06 | SWSO | GWO | SCA | ZOA | HHO | 1
Cec07 | SCSO | MINFO | ZOA | HHO | RIME | 6
Cec08 | ZOA | SCA | HHO | SCSO | RIME | 7
Cec09 | SWSO | RIME | MINFO | INFO | HHO | 1
Cec10 | SCSO | INFO | HHO | AVOA | RIME | 10
  • Average SWSO rank for average performance (Cec01–Cec10): 19/10 = 1.9
  • Average SWSO rank for standard deviation (Cec01–Cec10): 43/10 = 4.3
Table A7. (a): A comparison between the average ranks of group-E optimization algorithms on CEC2019 benchmark functions. (b): A comparison between the standard deviation ranks of group-E optimization algorithms on CEC2019 benchmark functions. (c): Comparison of group-E optimization algorithms on CEC2019 benchmark functions (Average and Standard Deviation Ranks).
(a)
Test Function | SWSO | IWOA | NRO | BDA | CEBA | IPSO | IAGA | BBOA | FOX | IFOX
Cec01 | 9 | 8 | 3 | 7 | 10 | 5 | 6 | 4 | 1 | 2
Cec02 | 3 | 10 | 5 | 7 | 9 | 6 | 8 | 4 | 1 | 2
Cec03 | 8 | 3 | 4 | 6 | 7 | 2 | 3 | 1 | 7 | 5
Cec04 | 1 | 7 | 4 | 6 | 9 | 3 | 5 | 8 | 10 | 2
Cec05 | 4 | 6 | 1 | 5 | 9 | 2 | 3 | 7 | 8 | 3
Cec06 | 1 | 2 | 2 | 3 | 4 | 2 | 5 | 2 | 3 | 2
Cec07 | 5 | 7 | 1 | 6 | 9 | 3 | 2 | 8 | 9 | 4
Cec08 | 6 | 2 | 1 | 2 | 5 | 1 | 1 | 3 | 4 | 1
Cec09 | 1 | 5 | 3 | 6 | 9 | 3 | 4 | 7 | 8 | 2
Cec10 | 1 | 2 | 2 | 3 | 3 | 3 | 2 | 2 | 3 | 3
Average Rank | 3.9 | 5.2 | 2.6 | 5.2 | 7.4 | 3.0 | 3.9 | 4.6 | 5.4 | 2.6
(b)
Test Function | SWSO | IWOA | NRO | BDA | CEBA | IPSO | IAGA | BBOA | FOX | IFOX
Cec01 | 9 | 7 | 3 | 8 | 4 | 5 | 6 | 6 | 1 | 2
Cec02 | 1 | 6 | 5 | 9 | 4 | 8 | 7 | 6 | 2 | 3
Cec03 | 1 | 6 | 8 | 7 | 3 | 8 | 6 | 5 | 2 | 4
Cec04 | 1 | 8 | 6 | 9 | 7 | 4 | 5 | 5 | 2 | 3
Cec05 | 1 | 6 | 8 | 9 | 3 | 6 | 7 | 4 | 2 | 5
Cec06 | 4 | 9 | 8 | 2 | 1 | 7 | 10 | 6 | 3 | 5
Cec07 | 7 | 6 | 5 | 8 | 2 | 5 | 4 | 4 | 1 | 3
Cec08 | 8 | 6 | 3 | 7 | 5 | 2 | 4 | 1 | 1 | 1
Cec09 | 1 | 9 | 5 | 10 | 3 | 6 | 7 | 8 | 2 | 4
Cec10 | 10 | 7 | 9 | 4 | 1 | 5 | 6 | 8 | 2 | 3
Average Rank | 4.3 | 7.0 | 6.0 | 7.3 | 3.3 | 5.6 | 6.2 | 5.3 | 1.8 | 3.3
(c)
Average ranks (subtotal: 39)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
Cec01 | FOX | IFOX | NRO | BBOA | IPSO | 9
Cec02 | FOX | IFOX | SWSO | BBOA | NRO | 3
Cec03 | BBOA | IPSO | IWOA, IAGA | NRO | IFOX | 8
Cec04 | SWSO | IFOX | IPSO | NRO | IAGA | 1
Cec05 | NRO | IPSO | IAGA, IFOX | SWSO | BDA | 4
Cec06 | SWSO | IWOA, NRO, IPSO, BBOA, IFOX | BDA, FOX | CEBA | IAGA | 1
Cec07 | NRO | IAGA | IPSO | IFOX | SWSO | 5
Cec08 | NRO, IPSO, IAGA, IFOX | IWOA, BDA | BBOA | FOX | CEBA | 6
Cec09 | SWSO | IFOX | NRO, IPSO | IAGA | IWOA | 1
Cec10 | SWSO | IWOA, NRO, IAGA, BBOA | BDA, CEBA, IPSO, FOX, IFOX | - | - | 1
Standard deviation ranks (subtotal: 43)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
Cec01 | FOX | IFOX | NRO | CEBA | IPSO | 9
Cec02 | SWSO | FOX | IFOX | CEBA | NRO | 1
Cec03 | SWSO | FOX | CEBA | IFOX | BBOA | 1
Cec04 | SWSO | FOX | IFOX | IPSO | IAGA, BBOA | 1
Cec05 | SWSO | FOX | CEBA | BBOA | IFOX | 1
Cec06 | CEBA | BDA | FOX | SWSO | IFOX | 4
Cec07 | FOX | CEBA | IFOX | IAGA, BBOA | NRO, IPSO | 7
Cec08 | BBOA, FOX, IFOX | IPSO | NRO | IAGA | CEBA | 8
Cec09 | SWSO | FOX | CEBA | IFOX | NRO | 1
Cec10 | CEBA | FOX | IFOX | BDA | IPSO | 10
  • Average SWSO rank for average performance (Cec01–Cec10): 39/10 = 3.9
  • Average SWSO rank for standard deviation (Cec01–Cec10): 43/10 = 4.3
Table A8. (a): A comparison between the average ranks of group-F optimization algorithms on CEC2019 benchmark functions. (b): A comparison between the standard deviation ranks of group-F optimization algorithms on CEC2019 benchmark functions. (c): Comparison of group-F optimization algorithms on CEC2019 benchmark functions (Average and Standard Deviation Ranks).
(a)
Test Function | SWSO | Hybrid | FOX | TSA | PSO | GWO | MRSO | RSO
Cec01 | 8 | 1 | 5 | 3 | 2 | 4 | 7 | 6
Cec02 | 5 | 1 | 4 | 3 | 2 | 2 | 4 | 5
Cec03 | 2 | 1 | 2 | 1 | 1 | 1 | 2 | 2
Cec04 | 4 | 6 | 1 | 3 | 5 | 2 | 9 | 8
Cec05 | 5 | 4 | 8 | 3 | 1 | 2 | 6 | 7
Cec06 | 4 | 3 | 1 | 5 | 2 | 6 | 7 | 8
Cec07 | 2 | 4 | 6 | 3 | 1 | 5 | 7 | 8
Cec08 | 1 | 4 | 5 | 6 | 3 | 2 | 7 | 8
Cec09 | 4 | 1 | 5 | 2 | 3 | 6 | 7 | 8
Cec10 | 1 | 2 | 6 | 4 | 5 | 3 | 7 | 8
Average Rank | 3.6 | 2.7 | 4.3 | 3.3 | 2.5 | 3.3 | 6.3 | 6.8
(b)
Test Function | SWSO | Hybrid | FOX | TSA | PSO | GWO | MRSO | RSO
Cec01 | 8 | 1 | 6 | 2 | 3 | 4 | 7 | 5
Cec02 | 6 | 7 | 3 | 5 | 1 | 2 | 4 | 8
Cec03 | 2 | 1 | 6 | 8 | 5 | 7 | 3 | 4
Cec04 | 3 | 6 | 1 | 2 | 5 | 4 | 8 | 7
Cec05 | 1 | 3 | 8 | 5 | 2 | 4 | 6 | 7
Cec06 | 3 | 4 | 8 | 1 | 7 | 2 | 6 | 5
Cec07 | 4 | 2 | 5 | 3 | 1 | 8 | 7 | 6
Cec08 | 7 | 1 | 2 | 6 | 5 | 8 | 3 | 4
Cec09 | 3 | 8 | 1 | 2 | 4 | 5 | 7 | 6
Cec10 | 8 | 3 | 1 | 2 | 7 | 6 | 5 | 4
Average Rank | 4.5 | 3.6 | 4.1 | 3.6 | 4.0 | 5.0 | 5.6 | 5.6
(c)
Average ranks (subtotal: 36)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
Cec01 | Hybrid | PSO | TSA | GWO | FOX | 8
Cec02 | Hybrid | PSO, GWO | TSA | FOX, MRSO | SWSO, RSO | 5
Cec03 | Hybrid, TSA, PSO, GWO | SWSO, FOX, MRSO, RSO | - | - | - | 2
Cec04 | FOX | GWO | TSA | SWSO | PSO | 4
Cec05 | PSO | GWO | TSA | Hybrid | SWSO | 5
Cec06 | FOX | PSO | Hybrid | SWSO | TSA | 4
Cec07 | PSO | SWSO | TSA | Hybrid | GWO | 2
Cec08 | SWSO | GWO | PSO | Hybrid | FOX | 1
Cec09 | Hybrid | TSA | PSO | SWSO | FOX | 4
Cec10 | SWSO | Hybrid | GWO | TSA | PSO | 1
Standard deviation ranks (subtotal: 45)
Test Function | 1st | 2nd | 3rd | 4th | 5th | SWSO Rank
Cec01 | Hybrid | TSA | PSO | GWO | RSO | 8
Cec02 | PSO | GWO | FOX | MRSO | TSA | 6
Cec03 | Hybrid | SWSO | MRSO | RSO | PSO | 2
Cec04 | FOX | TSA | SWSO | GWO | PSO | 3
Cec05 | SWSO | PSO | Hybrid | GWO | TSA | 1
Cec06 | TSA | GWO | SWSO | Hybrid | RSO | 3
Cec07 | PSO | Hybrid | TSA | SWSO | FOX | 4
Cec08 | Hybrid | FOX | MRSO | RSO | PSO | 7
Cec09 | FOX | TSA | SWSO | PSO | GWO | 3
Cec10 | FOX | TSA | Hybrid | RSO | MRSO | 8
  • Average SWSO rank for average performance (Cec01–Cec10): 36/10 = 3.6
  • Average SWSO rank for standard deviation (Cec01–Cec10): 45/10 = 4.5
Table A9. Final results of SWSO performance on benchmark categories.
Category | Compared Group(s) | Average Rank (Avg) | Average Rank (SD) | SWSO Rank
Unimodal | Group A and B | 1.57 | 1.64 | 1st
Multimodal | Group C and D | 1.49 | 1.83 | 1st
Fixed-Dimension Modal | Group D | 1.33 | 1.33 | 1st
CEC2019 | Group C, Group E, and Group F | 3.1 | 4.36 | 1st, 3rd, 4th
Figure A1. Convergence curves of the SWSO algorithm on selected functions.

References

  1. Arora, J.S. Introduction to Optimum Design, 3rd ed.; Academic Press: Cambridge, MA, USA, 2012. [Google Scholar] [CrossRef]
  2. Yang, X.-S. Nature-Inspired Optimization Algorithms; Elsevier: Amsterdam, The Netherlands, 2014; Available online: https://www.researchgate.net/publication/263171713 (accessed on 19 August 2024).
  3. Arcos-García, Á.; Álvarez-García, J.A.; Soria-Morillo, L.M. Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods. Neural Netw. 2018, 99, 158–165. [Google Scholar] [CrossRef]
  4. Hjeij, M.; Vilks, A. A brief history of heuristics: How did research on heuristics evolve? Humanit. Soc. Sci. Commun. 2023, 10, 64. [Google Scholar] [CrossRef]
  5. Glover, F. Tabu search—Part I. ORSA J. Comput. 1989, 1, 190–206. [Google Scholar] [CrossRef]
  6. Blum, C.; Roli, A. Metaheuristics in combinatorial optimization: Overview and conceptual comparison. ACM Comput. Surv. 2003, 35, 268–308. [Google Scholar] [CrossRef]
  7. Correia, A.; Worrall, D.E.; Bondesan, R. Neural Simulated Annealing. In Proceedings of the International Conference on Learning Representations (ICLR), Vienna, Austria, 25–29 April 2022. [Google Scholar]
  8. Samsuddin, S.; Othman, M.S.; Yusuf, L.M. A review of single and population-based metaheuristic algorithms solving multi depot vehicle routing problem. Int. J. Comput. Syst. Softw. Eng. 2018, 4, 80–93. [Google Scholar] [CrossRef]
  9. Du, X.; Zhang, Y.; Wang, J.; Li, C. GA-OMTL: Genetic algorithm optimization for multi-task learning. Expert Syst. Appl. 2025, 232, 122973. [Google Scholar] [CrossRef]
  10. Refaat, A.; Elbaz, A.; Khalifa, A.-E.; Elsakka, M.M.; Kalas, A.; Elfar, M.H. Performance evaluation of a novel self-tuning particle swarm optimization algorithm-based maximum power point tracker for porton exchange membrane fuel cells under different operating conditions. Energy Convers. Manag. 2024, 301, 118014. [Google Scholar] [CrossRef]
  11. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B Cybern. 1996, 26, 29–41. [Google Scholar] [CrossRef]
  12. Su, H.; Zhao, D.; Heidari, A.A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A physics-based optimization. Neurocomputing 2023, 532, 183–214. [Google Scholar] [CrossRef]
  13. Thammachantuek, I.; Ketcham, M.; Mirjalili, S. Path planning for autonomous mobile robots using multi-objective evolutionary particle swarm optimization. PLoS ONE 2022, 17, e0271924. [Google Scholar] [CrossRef]
  14. Wang, G.; Wang, F.; Wang, J.; Li, M.; Gai, L.; Xu, D. Collaborative target assignment problem for large-scale UAV swarm based on two-stage greedy auction algorithm. Aerosp. Sci. Technol. 2024, 149, 109146. [Google Scholar] [CrossRef]
  15. Akay, B.; Karaboga, D. Artificial bee colony algorithm for large-scale problems and engineering design optimization. J. Intell. Manuf. 2012, 23, 1001–1014. [Google Scholar] [CrossRef]
  16. Pathak, V.K.; Srivastava, A.K. A novel upgraded bat algorithm based on cuckoo search and Sugeno inertia weight for large scale and constrained engineering design optimization problems. Eng. Comput. 2022, 38, 1731–1758. [Google Scholar] [CrossRef]
  17. Wang, C.; Liu, K. A Randomly Guided Firefly Algorithm Based on Elitist Strategy and Its Applications. IEEE Access 2019, 7, 141343–141355. [Google Scholar] [CrossRef]
  18. Černý, V. Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm. J. Optim. Theory Appl. 1985, 45, 41–51. [Google Scholar] [CrossRef]
  19. Yang, Z.; Cai, Y.; Li, G. Improved Gravitational Search Algorithm Based on Adaptive Strategies. Entropy 2022, 24, 1826. [Google Scholar] [CrossRef] [PubMed]
  20. Bernardo, R.M.C.; Torres, D.F.M.; Herdeiro, C.A.R.; Santos, M.P. Universe-inspired algorithms for Control Engineering: A review. arXiv 2024, 10, e31771. [Google Scholar] [CrossRef] [PubMed]
  21. Dubey, S.R.; Kumar, V.; Kaur, M.; Dao, T.-P. A Systematic Review on Harmony Search Algorithm: Theory, Literature, and Applications. Math. Probl. Eng. 2021, 2021, 5594267. [Google Scholar] [CrossRef]
  22. Kumar, N.; Kumar, V. A review of Teaching–Learning-Based Optimization and its applications. Swarm Evol. Comput. 2020, 55, 100–135. [Google Scholar]
  23. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  24. He, Q.; Wang, L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 2007, 20, 89–99. [Google Scholar] [CrossRef]
25. Bacanin, N.; Budimirovic, N.; Venkatachalam, K.; Strumberger, I.; Alrasheedi, A.F.; Abouhawwash, M. Novel Chaotic Oppositional Fruit Fly Optimization Algorithm for Feature Selection Applied on COVID-19 Patients’ Health Prediction. PLoS ONE 2022, 17, e0275727. [Google Scholar] [CrossRef] [PubMed]
  26. Li, R.; Wang, H.; Wu, Y.; Shi, X.; Song, M.; Zhang, W.; Wang, Z. A review on multi-objective optimization of building performance—Insights from bibliometric analysis. Heliyon 2025, 11, e42480. [Google Scholar] [CrossRef] [PubMed]
  27. Khodadadi, N.; Khodadadi, E.; Abdollahzadeh, B.; Ei-Kenawy, E.-S.M.; Mardanpour, P.; Zhao, W.; Gharehchopogh, F.S.; Mirjalili, S. Multi-objective generalized normal distribution optimization: A novel algorithm for multi-objective problems. Clust. Comput. 2024, 27, 10589–10631. [Google Scholar] [CrossRef]
  28. Kang, S.; Li, K.; Wang, R. A survey on pareto front learning for multi-objective optimization. J. Membr. Comput. 2024, 7, 128–134. [Google Scholar] [CrossRef]
  29. Nocedal, J.; Wright, S.J. Numerical Optimization, 2nd ed.; Springer Science and Business Media: New York, NY, USA, 2006. [Google Scholar] [CrossRef]
30. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004; Available online: https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf (accessed on 19 August 2024).
  31. Samuel, A.L. Some Studies in Machine Learning Using the Game of Checkers. IBM J. Res. Dev. 1959, 3, 210–229. [Google Scholar] [CrossRef]
  32. Clay Mathematics Institute. P vs. NP Problem. 2000. Available online: https://www.claymath.org/wp-content/uploads/2022/06/pvsnp.pdf (accessed on 19 August 2024).
33. Datta, S.; Roy, S.; Davim, J.P. Optimization Techniques: An Overview. In Optimization in Industry; Datta, S., Davim, J., Eds.; Management and Industrial Engineering; Springer: Cham, Switzerland, 2019. [Google Scholar] [CrossRef]
  34. Almufti, S. Using Swarm Intelligence for solving NP Hard Problems. Acad. J. Nawroz Univ. 2017, 6, 46–50. [Google Scholar] [CrossRef]
35. Wolchover, N. The Questions That Computers Can Never Answer. Wired, 6 February 2014. Available online: https://www.wired.com/2014/02/halting-problem (accessed on 19 August 2024).
  36. Sivakumar, R.; Angayarkanni, S.A.; Ramana, R.Y.V.; Sadiq, A.S. Traffic flow forecasting using natural selection based hybrid bald eagle search-grey wolf optimization algorithm. PLoS ONE 2022, 17, e0275104. [Google Scholar] [CrossRef]
  37. Peres, F.; Castelli, M. Combinatorial Optimization Problems and Metaheuristics: Review, Challenges, Design, and Development. Appl. Sci. 2021, 11, 6449. [Google Scholar] [CrossRef]
  38. Fu, S.; Li, K.; Huang, H.; Ma, C.; Fan, Q.; Zhu, Y. Red-billed blue magpie optimizer: A novel metaheuristic algorithm for 2D/3D UAV path planning and engineering design problems. Artif. Intell. Rev. 2024, 57, 134. [Google Scholar] [CrossRef]
  39. Al-Sharqi, M.A.; Al-Obaidi, A.T.S.; Al-Mamory, S.O. Apiary Organizational-Based Optimization Algorithm: A new nature-inspired metaheuristic algorithm. Int. J. Intell. Eng. Syst. 2024, 17, 482–494. [Google Scholar]
  40. Zhang, Z.; Zhu, H.; Xie, M. Differential privacy may have a potential optimization effect on some swarm intelligence algorithms besides privacy-preserving. Inf. Sci. 2024, 654, 119870. [Google Scholar] [CrossRef]
  41. Wu, S.; He, B.; Zhang, J.; Chen, C.; Yang, J. PSAO: An enhanced Aquila Optimizer with particle swarm mechanism for engineering design and UAV path planning problems. Alex. Eng. J. 2024, 106, 474–504. [Google Scholar] [CrossRef]
  42. Wang, H.; Shi, L. A multi-direction guided mutation-driven stable swarm intelligence algorithm with translation and rotation invariance for global optimization. Appl. Soft Comput. 2024, 159, 111614. [Google Scholar] [CrossRef]
  43. Jain, S.; Saha, A. Improving and comparing performance of machine learning classifiers optimized by swarm intelligent algorithms for code smell detection. Sci. Comput. Program. 2024, 237, 103140. [Google Scholar] [CrossRef]
  44. Gülmez, E.; Koruca, H.I.; Aydin, M.E.; Urganci, K.B. Heuristic and swarm intelligence algorithms for work-life balance problem. Comput. Ind. Eng. 2024, 187, 109857. [Google Scholar] [CrossRef]
  45. Yang, X.; Yan, J.; Wang, D.; Xu, Y.; Hua, G. WOAD3QN-RP: An intelligent routing protocol in wireless sensor networks—A swarm intelligence and deep reinforcement learning based approach. Expert Syst. Appl. 2024, 246, 123089. [Google Scholar] [CrossRef]
  46. Fountas, N.A.; Kechagias, J.D.; Vaxevanidis, N.M. Swarm intelligence algorithms for optimising sliding wear of nanocomposites. Tribol. Mater. 2024, 3, 44–50. [Google Scholar] [CrossRef]
  47. Neshat, M.; Sepidnam, G.; Sargolzaei, M. Swallow swarm optimization algorithm: A new method to optimization. Neural Comput. Appl. 2013, 23, 429–454. [Google Scholar] [CrossRef]
  48. Kareem, S.W.; Mohammed, A.S.; Khoshaba, F.S. Novel nature-inspired meta-heuristic optimization algorithm based on hybrid dolphin and sparrow optimization. Int. J. Nonlinear Anal. Appl. 2023, 14, 355–373. [Google Scholar] [CrossRef]
  49. Yao, Z.; Shangguan, H.; Xie, W.; Liu, J.; He, S.; Huang, H.; Li, F.; Chen, J.; Zhan, Y.; Wu, X.; et al. SIPSC-Kac: Integrating swarm intelligence and protein spatial characteristics for enhanced lysine acetylation site identification. Int. J. Biol. Macromol. 2024, 282, 137237. [Google Scholar] [CrossRef] [PubMed]
  50. Xu, J.; Di Nardo, M.; Yin, S. Improved Swarm Intelligence-Based Logistics Distribution Optimizer: Decision Support for Multimodal Transportation of Cross-Border E-Commerce. Mathematics 2024, 12, 763. [Google Scholar] [CrossRef]
  51. Ma, R.; Kareem, S.W.; Kumar, P.; Kalra, A.; Miah, S.; Doewes, R.I. Optimization of electric automation control model based on artificial intelligence algorithm. Wirel. Commun. Mob. Comput. 2022, 2022, 7762493. [Google Scholar] [CrossRef]
  52. Chen, T.; Zheng, H.; Chen, J.; Zhang, Z.; Huang, X. Novel intelligent grazing strategy based on remote sensing, herd perception and UAVs monitoring. Comput. Electron. Agric. 2024, 219, 108807. [Google Scholar] [CrossRef]
  53. Shamsaldin, A.S.; Rashid, T.A.; Al-Rashid Agha, R.A.; Al-Salihi, N.K.; Mohammadi, M. Donkey and smuggler optimization algorithm: A collaborative working approach to path finding. J. Comput. Des. Eng. 2019, 6, 562–583. [Google Scholar] [CrossRef]
  54. Alrahhal, H.; Jamous, R. AFOX: A new adaptive nature-inspired optimization algorithm. Artif. Intell. Rev. 2023, 56, 15523–15566. [Google Scholar] [CrossRef]
  55. Huang, H.; Zheng, B.; Wei, X.; Zhou, Y.; Zhang, Y. NSCSO: A novel multi-objective non-dominated sorting chicken swarm optimization algorithm. Sci. Rep. 2024, 14, 4310. [Google Scholar] [CrossRef]
  56. Mardiastuti, A.; Hartono, T.T.; Firmansyah, F.S.; Manurung, R.; Refiandy, M. Barn swallow roosting at an oil gathering station. IOP Conf. Ser. Earth Environ. Sci. 2023, 1271, 012021. [Google Scholar] [CrossRef]
  57. Hobson, K.A.; Kardynal, K.J.; Van Wilgenburg, S.L.; Albrecht, G.; Salvadori, A.; Fox, J.W.; Brigham, R.M. A Continent-Wide Migratory Divide in North American Breeding Barn Swallows (Hirundo rustica). PLoS ONE 2015, 10, e0129340. [Google Scholar] [CrossRef]
  58. Encyclopædia Britannica. Swallow. In Britannica.com. 17 June 2025. Available online: https://www.britannica.com/animal/swallow-bird (accessed on 19 August 2024).
59. Brown, C.R.; Brown, M.B. Barn Swallow (Hirundo rustica) (Version 2.0). In The Birds of North America; Rodewald, P.G., Ed.; Cornell Lab of Ornithology: Ithaca, NY, USA, 1999; Available online: https://www.allaboutbirds.org/guide/Barn_Swallow/id (accessed on 19 August 2024).
  60. Barn Swallow|Audubon Field Guide. Available online: https://www.audubon.org (accessed on 13 March 2024).
61. Vogel, S. Life in Moving Fluids: The Physical Biology of Flow, 2nd ed.; Princeton University Press: Princeton, NJ, USA, 1994. [Google Scholar]
  62. Pennycuick, C.J. Power Requirements for Horizontal Flight in the Pigeon Columba Livia. J. Exp. Biol. 1968, 49, 527–555. [Google Scholar] [CrossRef]
  63. Kareem, S.W. A nature-inspired metaheuristic optimization algorithm based on crocodiles hunting search (CHS). Int. J. Swarm Intell. Res. 2022, 13, 1–23. [Google Scholar] [CrossRef]
  64. Eiben, A.E.; Smith, J.E. Introduction to Evolutionary Computing, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar] [CrossRef]
  65. López-Ibáñez, M.; Stützle, T. Parameter control in metaheuristics. J. Heuristics 2012, 18, 769–793. [Google Scholar] [CrossRef]
  66. Tokic, M. Adaptive ε-greedy exploration in reinforcement learning based on value differences. In KI 2010: Advances in Artificial Intelligence; Dillmann, R., Beyerer, J., Hanebeck, U.D., Schultz, T., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6359, pp. 203–210. [Google Scholar] [CrossRef]
  67. Barros-Everett, T.; Montero, E.; Rojas-Morales, N. Parameter Prediction for Metaheuristic Algorithms Solving Routing Problem Instances Using Machine Learning. Appl. Sci. 2025, 15, 2946. [Google Scholar] [CrossRef]
  68. Hsieh, F.-S. A self-adaptive meta-heuristic algorithm based on success rate and differential evolution for improving the performance of ridesharing systems with a discount guarantee. Algorithms 2024, 17, 9. [Google Scholar] [CrossRef]
  69. Kareem, S.W.; Okur, M.C. Bayesian network structure learning using hybrid Bee optimization and greedy search. In Proceedings of the 3rd International Mediterranean Science and Engineering Congress (IMSEC 2018), Çukurova University, Adana, Turkey, 24–26 October 2018; pp. 1–7. Available online: https://www.researchgate.net/publication/333320417 (accessed on 13 March 2024).
  70. Auer, P.; Cesa-Bianchi, N.; Fischer, P. Finite-time analysis of the multiarmed bandit problem. Mach. Learn. 2002, 47, 235–256. [Google Scholar] [CrossRef]
  71. Russo, D.; Van Roy, B.; Kazerouni, A.; Osband, I.; Wen, Z. A tutorial on Thompson sampling. Found. Trends Mach. Learn. 2018, 11, 1–96. [Google Scholar] [CrossRef]
  72. Awla, H.Q.; Kareem, S.W.; Mohammed, A.S. A comparative evaluation of Bayesian networks structure learning using Falcon Optimization Algorithm. Int. J. Interact. Multimed. Artif. Intell. 2023; in press. [Google Scholar] [CrossRef]
  73. Martínez-Ríos, F.; Murillo-Suárez, A. A new swarm algorithm for global optimization of multimodal functions over multi-threading architecture hybridized with simulating annealing. Procedia Comput. Sci. 2018, 135, 449–456. [Google Scholar] [CrossRef]
  74. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Naseem, R. Common benchmark functions for metaheuristic evaluation: A review. JOIV Int. J. Inform. Vis. 2017, 1, 218–223. [Google Scholar] [CrossRef]
  75. Tang, K.; Chen, Y.; Suganthan, P.N.; Liang, J.J. Benchmark Functions for the CEC’2010 Special Session and Competition on Large-Scale Global Optimization; Technical Report; Nature Inspired Computation and Applications Laboratory, USTC: Hefei, China, 2009. [Google Scholar]
76. Liang, J.J.; Qu, B.Y.; Suganthan, P.N.; Hernández-Díaz, A.G. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session on Real-Parameter Optimization; Technical Report 201212; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013. [Google Scholar]
  77. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  78. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  79. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  80. Yang, X.S.; He, X. Bat algorithm: Literature review and applications. Int. J. Bio-Inspired Comput. 2013, 5, 141–149. [Google Scholar] [CrossRef]
  81. Jia, Y.; Wang, S.; Liang, L.; Wei, Y.; Wu, Y. A flower pollination optimization algorithm based on cosine cross-generation differential evolution. Sensors 2023, 23, 606. [Google Scholar] [CrossRef]
  82. Cuevas, E.; Echavarría, A.; Ramírez-Ortegón, M.A. An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation. Appl. Intell. 2014, 40, 256–272. [Google Scholar] [CrossRef]
  83. Khan, W.A.; Hamadneh, N.N.; Tilahun, S.L.; Ngnotchouye, J.M.T. A Review and Comparative Study of Firefly Algorithm and Its Modified Version; InTech: Toulon, France, 2016. [Google Scholar] [CrossRef]
  84. Jumaah, M.A.; Ali, Y.H.; Rashid, T.A. Improved FOX Optimization Algorithm. arXiv 2025, arXiv:2504.09574. [Google Scholar] [CrossRef]
  85. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  86. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar] [CrossRef]
  87. Calis, G.; Yuksel, O. An improved ant colony optimization algorithm for construction site layout problems. J. Build. Constr. Plan. Res. 2015, 3, 221–232. [Google Scholar] [CrossRef]
  88. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  89. Ebeed, M.; Hassan, S.; Kamel, S.; Nasrat, L.; Mohamed, A.W.; Youssef, A.-R. Smart building energy management with renewables and storage systems using a modified weighted mean of vectors algorithm. Sci. Rep. 2025, 15, 4733. [Google Scholar] [CrossRef]
  90. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  91. Aula, S.A.; Rashid, T.A. FOX-TSA: Navigating complex search spaces and superior performance in benchmark and real-world optimization problems. Ain Shams Eng. J. 2024, 16, 103185. [Google Scholar] [CrossRef]
  92. Abdulla, H.S.; Ameen, A.A.; Saeed, S.I.; Mohammed, I.A.; Rashid, T.A. MRSO: Balancing exploration and exploitation through modified rat swarm optimization for global optimization. Algorithms 2024, 17, 423. [Google Scholar] [CrossRef]
  93. Gim, G.H.H. Optimal Design of a Class of Welded Beam Structures Based on Design for Latitude. Master’s Thesis, Missouri University of Science and Technology, Rolla, MO, USA, 1984. [Google Scholar]
  94. Ragsdell, K.M.; Phillips, D.T. Optimal design of a class of welded structures using geometric programming. J. Eng. Ind. 1976, 98, 1021–1025. [Google Scholar] [CrossRef]
  95. Houssein, E.H.; Gafar, M.H.A.; Fawzy, N.; Sayed, A.Y. Recent metaheuristic algorithms for solving some civil engineering optimization problems. Sci. Rep. 2025, 15, 7929. [Google Scholar] [CrossRef]
  96. Ren, J.; Wei, H.; Yuan, Y.; Li, X.; Luo, F.; Wu, Z. Boosting sparrow search algorithm for multi-strategy-assist engineering optimization problems. AIP Adv. 2022, 12, 095201. [Google Scholar] [CrossRef]
  97. Sakthivel, R.; Selvadurai, K. Slime Mould Reproduction: A new optimization algorithm for constrained engineering problems. J. Comput. Sci. 2024, 20, 96–105. [Google Scholar] [CrossRef]
  98. Karami, H.; Anaraki, M.V.; Farzin, S.; Mirjalili, S. Flow Direction Algorithm (FDA): A novel optimization approach for solving optimization problems. Comput. Ind. Eng. 2021, 156, 107224. [Google Scholar] [CrossRef]
  99. Yu, H.; Zhao, N.; Wang, P.; Chen, H.; Li, C. Chaos-enhanced synchronized bat optimizer. Appl. Math. Model. 2020, 77, 1201–1215. [Google Scholar] [CrossRef]
  100. Mohapatra, S.; Mohapatra, P. American zebra optimization algorithm for global optimization problems. Sci. Rep. 2023, 13, 31876. [Google Scholar] [CrossRef]
  101. Khalilpourazari, S.; Khalilpourazary, S. An efficient hybrid algorithm based on water cycle and moth-flame optimization algorithms for solving numerical and constrained engineering optimization problems. Soft. Comput. 2019, 23, 1699–1722. [Google Scholar] [CrossRef]
  102. Kaveh, A.; Mahdavi, V. Colliding bodies optimization: A novel meta-heuristic method. Comput. Struct. 2014, 139, 18–27. [Google Scholar] [CrossRef]
  103. Cuevas, E.; Cienfuegos, M. A new algorithm inspired in the behavior of the social-spider for constrained optimization. Expert Syst. Appl. 2014, 41, 412–425. [Google Scholar] [CrossRef]
104. Hashim, F.A.; Mostafa, R.R.; Abu Khurma, R.; Qaddoura, R.; Castillo, P.A. A new approach for solving global optimization and engineering problems based on modified sea horse optimizer. J. Comput. Des. Eng. 2024, 11, 73–98. [Google Scholar] [CrossRef]
  105. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  106. Ong, P.; Ho, C.S.; Chin, D.D.V.S. An improved cuckoo search algorithm for design optimization of structural engineering problems. Commun. Comput. Appl. Math. 2020, 2, 31–44. [Google Scholar]
  107. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  108. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; The MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  109. Sun, P.; Liu, H.; Zhang, Y.; Tu, L.; Meng, Q. An intensify atom search optimization for engineering design problems. Appl. Math. Model. 2021, 89, 837–859. [Google Scholar] [CrossRef]
  110. Kaveh, A.; Bakhshpoori, T. Water evaporation optimization: A novel physically inspired optimization algorithm. Comput. Struct. 2016, 167, 69–85. [Google Scholar] [CrossRef]
  111. Shang, C.; Zhou, T.-T.; Liu, S. Optimization of complex engineering problems using modified sine cosine algorithm. Sci. Rep. 2022, 12, 20528. [Google Scholar] [CrossRef]
  112. Chang, C.; Zhang, T.; Chen, S. Solving the problem of pressure vessel with constraint conditions through marine predators algorithm. Mod. Econ. Manag. Forum 2025, 6, 127–139. [Google Scholar] [CrossRef]
  113. Gandomi, A.H.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 245–265. [Google Scholar] [CrossRef]
  114. Beausoleil, R.P. Solving engineering optimization problems with Tabu/Scatter Search (Resolviendo problemas de optimización en ingeniería con búsqueda Tabú/Dispersa). Rev. Matemática: Teoría Apl. 2017, 24, 157–188. [Google Scholar]
  115. Frimpong, S.A.; Darkwah, K.F. Implementation of the Sine–Cosine Algorithm to the Pressure Vessel Design Problem. Int. J. Innov. Sci. Res. Technol. IJISRT 2022, 7, 1069–1073. Available online: https://www.ijisrt.com/assets/upload/files/IJISRT22DEC721.pdf (accessed on 19 August 2024).
  116. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338. [Google Scholar] [CrossRef]
  117. Coello, C.A.C. Use of a self-adaptive penalty approach for engineering optimization problems. Comput. Ind. 2000, 41, 113–127. [Google Scholar] [CrossRef]
  118. Hu, G.; Wang, J.; Li, M.; Hussien, A.G.; Abbas, M. Multi-strategy enhanced jellyfish search algorithm for engineering applications. Eng. Comput. 2023, 11, 851. [Google Scholar] [CrossRef]
  119. Li, L.D.; Li, X.; Yu, X. A multi-objective constraint-handling method with PSO algorithm for constrained engineering optimization problems. In Proceedings of the IEEE World Congress on Computational Intelligence (CEC 2008), Hong Kong, China, 1–6 June 2008; pp. 1–4. [Google Scholar] [CrossRef]
  120. Kaveh, A.; Talatahari, S. A novel heuristic optimization method: Charged system search. Acta Mech. 2010, 213, 267–289. [Google Scholar] [CrossRef]
  121. Mezura-Montes, E.; Coello, C.A.C.; Velázquez-Reyes, J.; Muñoz-Dávila, L. Multiple trial vectors in differential evolution for engineering design. Eng. Optim. 2007, 39, 567–589. [Google Scholar] [CrossRef]
  122. Belkourchia, Y.; Azrar, L.; Zeriab, E.M. A hybrid optimization algorithm for solving constrained engineering design problems. In Proceedings of the 2019 5th International Conference on Optimization and Applications (ICOA), Kenitra, Morocco, 25–26 April 2019; pp. 1–6. [Google Scholar] [CrossRef]
  123. Sandgren, E. Nonlinear integer and discrete programming in mechanical design. In Proceedings of the ASME Design Technology Conference, Kissimmee, FL, USA, 25–28 September 1988; pp. 95–105. [Google Scholar]
Figure 1. Classification of optimization problems.
Figure 2. The barn swallow (Hirundo rustica) reprinted with permission from Ref. [59]. Christopher Clark, © 2017, Cornell Lab of Ornithology | Macaulay Library (ML58330191).
Figure 3. (a) Swallow migration; (b) Swallow drinking water.
Figure 4. Particle dynamics in Swallow Search Optimization (SWSO).
Figure 5. Unimodal benchmark functions.
Figure 6. The welded beam design.
Figure 7. Pressure vessel design.
Table 1. Abbreviations used in this paper according to their appearance.
Abbreviation | Term
SI | Swarm Intelligence
SWSO | Swallow Search Optimization
P | Polynomial Time
NP | Nondeterministic Polynomial Time
NP-Complete | Nondeterministic Polynomial-time Complete
NP-Hard | Nondeterministic Polynomial-time Hard
TSP | Traveling Salesman Problem
MOPSO | Multiobjective Particle Swarm Optimization
MOCH-PSO | Multiobjective Constraint-Handling PSO
AOOA | Apiary Organizational-Based Optimization Algorithm
DP | Differential Privacy
AO | Aquila Optimizer
PSAO | Enhanced Aquila Optimizer with Particle Swarm mechanism
WSNs | Wireless Sensor Networks
D3QN | Dueling Double Deep Q-Network
RP | Routing Protocol
WOAD3QN-RP | Whale Optimization Algorithm using D3QN
SSO | Swallow Swarm Optimization
Kac | Lysine acetylation
SIPSC-Kac | Swarm Intelligence and Protein Spatial Characteristics-Kac
B2C e-commerce | Business-to-Consumer electronic commerce
PSO | Particle Swarm Optimization
IPSO | Improved Particle Swarm Optimization
GA-PSO-SQP | Genetic Algorithm–Particle Swarm Optimization–Sequential Quadratic Programming
CPSO | Co-Evolutionary Particle Swarm Optimization
MPPT | Maximum Power Point Tracking
PEM | Proton Exchange Membrane
DSO | Donkey and Smuggler Optimization
AFOX | Adaptive Fox Optimization
FOX | Fox Optimization
IFOX | Improved FOX
NSCSO | Non-Dominated Sorting Chicken Swarm Optimization
GWO | Grey Wolf Optimizer
VGWO | Variant Grey Wolf Optimizer
GA | Genetic Algorithm
IAGA | Improved Adaptive Genetic Algorithm
DA | Dragonfly Algorithm
BDA | Binary Dragonfly Algorithm
WOA | Whale Optimization Algorithm
IWOA | Improved Whale Optimization Algorithm
BOA | Butterfly Optimization Algorithm
COA | Chimp Optimization Algorithm
FDO | Fitness Dependent Optimizer
MFO | Moth–Flame Optimization
GSA | Gravitational Search Algorithm
BA | Bat Algorithm
CEBA | Chaos-Enhanced Bat Algorithm
FPA | Flower Pollination Algorithm
SMS | States of Matter Search
FA | Firefly Algorithm
DE | Dolphin Echolocation
FEP | Fast Evolutionary Programming
ACO | Ant Colony Optimization
IACO | Improved Ant Colony Optimization
INFO | Weighted Mean of Vectors Algorithm
MINFO | Modified Weighted Mean of Vectors Algorithm
SCSO | Sand Cat Search Optimizer
AVOA | African Vultures Optimization Algorithm
SCA | Sine Cosine Algorithm
MSCA | Modified Sine Cosine Algorithm
HHO | Harris Hawks Optimization
RIME | RIME-ice physical phenomenon-based optimization
ZOA | Zebra Optimization Algorithm
NRO | Nuclear Reaction Optimization
BBOA | Brown-Bear Optimization Algorithm
TSA | Tree-Seed Algorithm
FOX-TSA | Hybrid FOX–Tree-Seed Algorithm
RSO | Rat Swarm Optimization
MRSO | Modified Rat Swarm Optimization
SMR | Slime Mould Reproduction
MSSSA | Multistrategy Sparrow Search Algorithm
ASO | Atom Search Optimization
SO | Snake Optimizer
MVO | Multiverse Optimizer
ABC | Artificial Bee Colony
MSHO | Modified Sea Horse Optimizer
AZOA | American Zebra Optimization Algorithm
FDA | Flow Direction Algorithm
CBO | Colliding Bodies Optimization
CS | Cuckoo Search
SSO-C | Social Spider Optimization
ACSA | Improved Cuckoo Search Algorithm
WEOA | Water Evaporation Optimization Algorithm
WCMFO | Water Cycle and Moth–Flame Optimization
EJS | Enhanced Jellyfish Search
CSS | Charged System Search
MDE | Multiple Differential Evolution
Table 2. List of symbols used in this paper.
Symbol | Definition
α | Probability of following the leader
θ | Energy threshold for leader switching
ω_t | Adaptive inertia weight
δ_t | Exploration decay factor
ε_i | Local random Gaussian noise
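The control parameters listed in Table 2 can be combined into a swallow-style position update. The sketch below is illustrative only: the coefficient values (`alpha = 0.8`, the linear inertia schedule from `omega_max` to `omega_min`, the noise scale `sigma`) and the update form itself are assumptions for demonstration, not the exact SWSO equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def swso_step(positions, velocities, leader, t, t_max,
              alpha=0.8, omega_max=0.9, omega_min=0.4, sigma=0.1):
    """One illustrative swarm update using the Table 2 symbols.

    alpha   - probability of following the leader
    omega_t - adaptive inertia weight, decayed linearly over iterations
    delta_t - exploration decay factor
    eps_i   - per-agent local Gaussian noise
    All coefficient values here are assumptions for demonstration only.
    """
    omega_t = omega_max - (omega_max - omega_min) * t / t_max  # adaptive inertia
    delta_t = 1.0 - t / t_max                                  # exploration decay
    follow = rng.random(len(positions)) < alpha                # who follows the leader
    eps = rng.normal(0.0, sigma, positions.shape)              # local Gaussian noise
    velocities = omega_t * velocities \
        + np.where(follow[:, None], leader - positions, delta_t * eps)
    return positions + velocities, velocities

# Toy usage on a 5-agent, 2-D swarm pulled toward a leader at the origin.
pos = rng.uniform(-1, 1, (5, 2))
vel = np.zeros_like(pos)
pos, vel = swso_step(pos, vel, leader=np.zeros(2), t=1, t_max=100)
```

With probability α an agent moves toward the leader; otherwise it perturbs its velocity with δ_t-damped noise, so exploration fades as t approaches t_max.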
Table 6. CEC2019 benchmark functions.
No. | Function | Dim (n) | Range | F_min
Cec01 | Storn’s Chebyshev Polynomial Fitting Problem | 9 | [−8192, 8192] | 1
Cec02 | Inverse Hilbert Matrix Problem | 16 | [−16,384, 16,384] | 1
Cec03 | Lennard–Jones Minimum Energy Cluster | 18 | [−4, 4] | 1
Cec04 | Rastrigin’s Function | 10 | [−100, 100] | 1
Cec05 | Griewank’s Function | 10 | [−100, 100] | 1
Cec06 | Weierstrass Function | 10 | [−100, 100] | 1
Cec07 | Modified Schwefel’s Function | 10 | [−100, 100] | 1
Cec08 | Expanded Schaffer’s F6 Function | 10 | [−100, 100] | 1
Cec09 | Happy Cat Function | 10 | [−100, 100] | 1
Cec10 | Ackley Function | 10 | [−100, 100] | 1
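Among the CEC2019 base functions, Cec10 builds on the classical Ackley function. The sketch below implements the base (unshifted, unrotated) form with its standard constants; the CEC2019 suite applies shifts, rotations, and its own boundary handling on top of this, so this is not the competition implementation.

```python
import math

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    """Classical Ackley function; the global minimum lies at the origin."""
    n = len(x)
    s1 = sum(xi * xi for xi in x) / n              # mean squared coordinate
    s2 = sum(math.cos(c * xi) for xi in x) / n     # mean cosine term
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.e

print(ackley([0.0] * 10))  # ≈ 0 at the global optimum
```

Its many shallow local minima around a single deep basin make it a common stress test for the exploration phase of swarm optimizers.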
Table 7. (a): Comparison result of Average (Ave) and Standard deviation (SD) for the proposed model using classical unimodal benchmark functions, with results of group-A. (b): Comparison result of Average (Ave) and Standard deviation (SD) for the proposed model using classical unimodal benchmark functions, with results of group-B.
(a)
F | SWSO | MFO | PSO | GSA | BA | FPA | SMS | FA | IFOX
Ave
F1 | 1.2466 × 10−84 | 1.1700 × 10−4 | 1.36 × 10−4 | 2.53 × 10−16 | 2.0792 × 10+4 | 2.0364 × 10+2 | 1.2000 × 10+2 | 7.4807 × 10+3 | −2.0 × 10+2
F2 | 1.5366 × 10−48 | 6.3900 × 10−4 | 4.21 × 10−2 | 5.57 × 10−2 | 8.9786 × 10+1 | 1.1169 × 10+1 | 2.0531 × 10−2 | 3.9325 × 10+1 | 3.6 × 10−2
F3 | 3.5466 × 10−86 | 6.9673 × 10+2 | 7.01 × 10+1 | 8.96 × 10+2 | 6.2481 × 10+4 | 2.3757 × 10+2 | 3.7820 × 10+4 | 1.7357 × 10+4 | 3.5 × 10−3
F4 | 9.2386 × 10−53 | 7.0686 × 10+1 | 1.09 × 10+0 | 7.35 × 10+0 | 4.9743 × 10+1 | 1.2573 × 10+1 | 6.9170 × 10+1 | 3.3954 × 10+1 | −9.6 × 10+2
F5 | 4.214 × 10+0 | 1.3915 × 10+2 | 9.67 × 10+1 | 6.75 × 10+1 | 1.9951 × 10+6 | 1.0975 × 10+4 | 6.3822 × 10+6 | 3.7950 × 10+6 | 7.4 × 10+0
F6 | 1.039 × 10+0 | 1.1300 × 10−4 | 1.02 × 10−4 | 2.5 × 10−16 | 1.7053 × 10+4 | 1.7538 × 10+2 | 4.1439 × 10+4 | 7.8287 × 10+3 | 2.2 × 10−2
F7 | 3.027 × 10−4 | 9.1155 × 10−2 | 1.23 × 10−1 | 8.94 × 10−2 | 6.0451 × 10+0 | 1.3594 × 10−1 | 4.9520 × 10−2 | 1.9063 × 10+0 | 3.7 × 10−3
SD
F1 | 6.326 × 10−84 | 1.5000 × 10−4 | 2.02 × 10−4 | 9.67 × 10−17 | 5.8924 × 10+3 | 7.8398 × 10+1 | 0.0000 × 10+0 | 8.9485 × 10+2 | 2.2 × 10−1
F2 | 6.4714 × 10−48 | 8.7700 × 10−4 | 4.54 × 10−2 | 1.94 × 10−1 | 4.1958 × 10+1 | 2.9196 × 10+0 | 4.7180 × 10−3 | 2.4659 × 10+0 | 3.3 × 10−2
F3 | 2.4891 × 10−85 | 1.8853 × 10+2 | 2.21 × 10+1 | 3.18 × 10+2 | 2.9769 × 10+4 | 1.3665 × 10+2 | 0.0000 × 10+0 | 1.7401 × 10+3 | 4.8 × 10−2
F4 | 6.5246 × 10−52 | 5.2751 × 10+0 | 3.17 × 10−1 | 1.74 × 10+0 | 1.0144 × 10+1 | 2.2900 × 10+0 | 3.8767 × 10+0 | 1.8697 × 10+0 | 1.1 × 10+2
F5 | 1.360 × 10+0 | 1.2026 × 10+2 | 6.01 × 10+1 | 6.22 × 10+1 | 1.2524 × 10+6 | 1.2057 × 10+4 | 7.2997 × 10+5 | 7.5903 × 10+5 | 1.6 × 10+2
F6 | 2.990 × 10−1 | 9.8700 × 10−5 | 8.28 × 10−5 | 1.74 × 10−16 | 4.9176 × 10+3 | 6.3453 × 10+1 | 3.2952 × 10+3 | 9.7521 × 10+2 | 3.4 × 10−1
F7 | 1.993 × 10−4 | 4.6420 × 10−2 | 4.50 × 10−2 | 4.34 × 10−2 | 3.0453 × 10+0 | 6.1212 × 10−2 | 2.4015 × 10−2 | 4.6006 × 10−1 | 5.3 × 10−2
(b)
F | SWSO | WOA | FEP | IACO | VGWO | INFO | SCA | GWO | RIME
Ave
F1 | 1.2466 × 10−84 | 1.41 × 10−30 | 5.70 × 10−4 | −2.0 × 10+2 | −2.0 × 10+2 | 5.453 × 10−53 | 2.125 × 10+2 | 7.667 × 10−22 | 6.491 × 10+0
F2 | 1.5366 × 10−48 | 1.06 × 10−21 | 8.10 × 10−3 | 3.3 × 10−3 | 5.4 × 10−2 | 3.880 × 10−26 | 4.671 × 10−1 | 1.344 × 10−13 | 4.601 × 10+0
F3 | 3.5466 × 10−86 | 5.39 × 10−7 | 1.60 × 10−2 | 1.4 × 10−2 | 8.0 × 10−3 | 6.698 × 10−50 | 2.290 × 10+4 | 2.249 × 10−3 | 3.317 × 10+3
F4 | 9.2386 × 10−53 | 7.26 × 10−2 | 3.00 × 10−1 | −1.6 × 10+2 | −3.3 × 10+2 | 6.986 × 10−27 | 5.866 × 10+1 | 5.559 × 10−5 | 1.952 × 10+1
F5 | 4.214 × 10+0 | 2.79 × 10+1 | 5.06 × 10+0 | 3.2 × 10+1 | 1.3 × 10+1 | 2.542 × 10+1 | 2.570 × 10+6 | 2.877 × 10+1 | 2.489 × 10+3
F6 | 1.039 × 10+0 | 3.12 × 10+0 | 0 | 7.4 × 10−2 | 1.5 × 10−1 | 6.157 × 10−6 | 3.62 × 10+2 | 1.764 × 10+0 | 6.476 × 10+0
F7 | 3.027 × 10−4 | 1.43 × 10−3 | 1.42 × 10−1 | 1.5 × 10−2 | 8.1 × 10−3 | 4.792 × 10−3 | 1.944 × 10+0 | 5.162 × 10−3 | 9.443 × 10−2
SD
F1 | 6.326 × 10−84 | 4.91 × 10−30 | 1.30 × 10−4 | 5.8 × 10−1 | 4.1 × 10−1 | 1.48 × 10−53 | 5.474 × 10+1 | 1.84 × 10−22 | 1.285 × 10+0
F2 | 6.4714 × 10−48 | 2.39 × 10−21 | 7.70 × 10−4 | 3.8 × 10−2 | 3.7 × 10−2 | 7.744 × 10−27 | 1.78 × 10−1 | 3.639 × 10−14 | 9.219 × 10−1
F3 | 2.4891 × 10−85 | 2.93 × 10−6 | 1.40 × 10−2 | 1.8 × 10−1 | 1.1 × 10−1 | 1.956 × 10−50 | 6.724 × 10+3 | 6.62 × 10−4 | 6.295 × 10+2
F4 | 6.5246 × 10−52 | 3.97 × 10−1 | 5.00 × 10−1 | 7.8 × 10+1 | 2.2 × 10+2 | 1.587 × 10−27 | 1.170 × 10+1 | 1.372 × 10−5 | 4.246 × 10+0
F5 | 1.360 × 10+0 | 7.64 × 10−1 | 5.87 × 10+0 | 5.5 × 10+2 | 2.8 × 10+2 | 5.659 × 10−1 | 5.99 × 10+5 | 9.021 × 10−1 | 5.231 × 10+2
F6 | 2.990 × 10−1 | 5.32 × 10−1 | 0 | 9.2 × 10−1 | 2.7 × 10+0 | 1.370 × 10−6 | 8.40 × 10+1 | 4.020 × 10−1 | 1.130 × 10+0
F7 | 1.993 × 10−4 | 1.15 × 10−3 | 3.52 × 10−1 | 2.5 × 10−1 | 1.4 × 10−1 | 1.133 × 10−3 | 4.345 × 10−1 | 1.355 × 10−3 | 1.997 × 10−2
Table 8. (a): Comparison result of Average (Ave) and Standard deviation (SD) for the proposed model using classical multimodal benchmark functions, with results of group-C. (b): Comparison result of Average (Ave) and Standard deviation (SD) for the proposed model using classical multimodal benchmark functions, with results of group-D.
(a)
F | SWSO | MFO | PSO | GSA | BA | FPA | SMS | FA | GA
Ave
F8 | −2.1935 × 10+3 | −8.505 × 10+3 | −3.575 × 10+3 | −2.355 × 10+3 | 6.555 × 10+4 | −8.095 × 10+3 | −3.945 × 10+3 | −3.665 × 10+3 | −6.335 × 10+3
F9 | 0 | 8.465 × 10+1 | 1.245 × 10+2 | 3.105 × 10+1 | 9.625 × 10+1 | 9.275 × 10+1 | 1.535 × 10+2 | 2.155 × 10+2 | 2.375 × 10+2
F10 | 8.8825 × 10−16 | 1.265 × 10+0 | 9.175 × 10+0 | 3.745 × 10+0 | 1.595 × 10+1 | 6.845 × 10+0 | 1.915 × 10+1 | 1.465 × 10+1 | 1.785 × 10+1
F11 | 0 | 1.915 × 10−2 | 1.245 × 10+1 | 4.875 × 10−1 | 2.205 × 10+2 | 2.725 × 10+0 | 4.215 × 10+2 | 6.975 × 10+1 | 1.805 × 10+2
F12 | 5.0135 × 10−2 | 8.945 × 10−1 | 1.395 × 10+1 | 4.635 × 10−1 | 2.895 × 10+7 | 4.115 × 10+0 | 8.745 × 10+6 | 3.685 × 10+5 | 3.415 × 10+7
F13 | 1.715 × 10−1 | 1.165 × 10−1 | 1.185 × 10+4 | 7.625 × 10+0 | 1.095 × 10+8 | 6.245 × 10+1 | 1.005 × 10+8 | 5.565 × 10+6 | 1.085 × 10+8
SD
F8 | 2.3185 × 10+2 | 7.265 × 10+2 | 4.315 × 10+2 | 3.825 × 10+2 | 0 | 1.555 × 10+2 | 4.045 × 10+2 | 2.145 × 10+2 | 3.335 × 10+2
F9 | 0 | 1.625 × 10+1 | 1.435 × 10+1 | 1.375 × 10+1 | 1.965 × 10+1 | 1.425 × 10+1 | 1.865 × 10+1 | 1.725 × 10+1 | 1.905 × 10+1
F10 | 0 | 7.305 × 10−1 | 1.575 × 10+0 | 1.715 × 10−1 | 7.755 × 10−1 | 1.255 × 10+0 | 2.395 × 10−1 | 4.685 × 10−1 | 5.315 × 10−1
F11 | 0 | 2.175 × 10−2 | 4.175 × 10+0 | 4.985 × 10−2 | 5.475 × 10+1 | 7.285 × 10−1 | 2.535 × 10+1 | 1.215 × 10+1 | 3.245 × 10+1
F12 | 1.2485 × 10−2 | 8.815 × 10−1 | 5.855 × 10+0 | 1.385 × 10−1 | 2.185 × 10+6 | 1.045 × 10+0 | 1.415 × 10+6 | 1.725 × 10+5 | 1.895 × 10+6
F13 | 3.035 × 10−2 | 1.935 × 10−1 | 3.075 × 10+4 | 1.235 × 10+0 | 1.055 × 10+8 | 9.485 × 10+1 | 0 | 1.695 × 10+6 | 3.855 × 10+6
(b)
F | SWSO | MINFO | INFO | SCSO | AVOA | SCA | HHO | GWO | RIME | ZOA
Ave
F8 | −2.1935 × 10+3 | −1.2575 × 10+4 | −6.9225 × 10+3 | −4.8135 × 10+3 | −1.525 × 10+4 | −3.3845 × 10+3 | −1.2565 × 10+4 | −4.5665 × 10+3 | −8.7595 × 10+3 | −5.425 × 10+3
F9 | 0 | 0 | 0 | 0 | 0 | 1.2665 × 10+2 | 0 | 1.4675 × 10+1 | 9.3545 × 10+1 | 0
F10 | 8.8825 × 10−16 | 8.8825 × 10−16 | 8.8825 × 10−16 | 8.8825 × 10−16 | 8.8825 × 10−16 | 2.345 × 10+1 | 8.8825 × 10−16 | 4.3005 × 10−12 | 3.925 × 10+0 | 8.8825 × 10−16
F11 | 0 | 0 | 0 | 0 | 0 | 5.6205 × 10+0 | 0 | 1.9015 × 10−2 | 1.4605 × 10+0 | 0
F12 | 5.0135 × 10−2 | 2.4215 × 10−5 | 1.375 × 10−1 | 2.3375 × 10−1 | 9.175 × 10−7 | 1.785 × 10+7 | 4.9735 × 10−5 | 1.465 × 10−1 | 1.3565 × 10+1 | 4.2465 × 10−1
F13 | 1.715 × 10−1 | 4.175 × 10−1 | 4.655 × 10−1 | 2.895 × 10+0 | 1.145 × 10−7 | 4.245 × 10+6 | 2.085 × 10−4 | 1.195 × 10+0 | 8.905 × 10−1 | 2.885 × 10+0
SD
F8 | 2.3185 × 10+2 | 5.2525 × 10−4 | 9.2505 × 10+2 | 9.7165 × 10+2 | 7.295 × 10+2 | 2.6695 × 10+2 | 2.8885 × 10+0 | 8.1675 × 10+2 | 5.5385 × 10+2 | 6.7835 × 10+2
F9 | 0 | 0 | 0 | 0 | 0 | 3.8275 × 10+1 | 0 | 3.6605 × 10+0 | 1.4315 × 10+1 | 0
F10 | 0 | 0 | 0 | 0 | 0 | 7.895 × 10+0 | 0 | 8.9845 × 10−13 | 5.365 × 10−1 | 0
F11 | 0 | 0 | 0 | 0 | 0 | 9.2435 × 10−1 | 0 | 5.5835 × 10−3 | 3.2275 × 10−2 | 0
F12 | 1.2485 × 10−2 | 5.4135 × 10−6 | 2.3185 × 10−2 | 5.9155 × 10−2 | 1.9025 × 10−7 | 2.5155 × 10+6 | 1.6985 × 10−5 | 4.0165 × 10−2 | 3.4205 × 10+0 | 9.1455 × 10−2
F13 | 3.035 × 10−2 | 1.165 × 10−1 | 1.075 × 10−1 | 2.755 × 10−1 | 3.945 × 10−8 | 1.3405 × 10+6 | 5.535 × 10−5 | 2.465 × 10−1 | 1.645 × 10−1 | 3.475 × 10−1
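Several repeated entries in the tables above have a simple numerical explanation: the exact zeros for F9 and F11 indicate the global optimum was reached, and the recurring 8.8825 × 10−16 values for F10 are the double-precision floor of the Ackley function near its optimum (4 × 2−52 ≈ 8.88 × 10−16). As a hedged illustration, a sketch using the commonly used forms of these benchmarks (which may differ slightly from this paper's exact parameterization):

```python
import math

def rastrigin(x):
    # F9-style function: global minimum 0 at x = 0
    return 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def ackley(x):
    # F10-style function: global minimum 0 at x = 0
    n = len(x)
    s1 = sum(xi**2 for xi in x)
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x)
    return (-20 * math.exp(-0.2 * math.sqrt(s1 / n))
            - math.exp(s2 / n) + 20 + math.e)

print(rastrigin([0.0] * 30))  # → 0.0 exactly
print(ackley([0.0] * 30))     # tiny residual on the order of machine epsilon
```

This is why an algorithm that converges to the Ackley optimum reports ~8.88 × 10−16 rather than 0: the value is a rounding artifact of IEEE-754 arithmetic, not a failure to converge.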
Table 9. Comparison of average (Ave) and standard deviation (SD) results of the proposed model on classical fixed-dimension benchmark functions against the group-C algorithms.
F | SWSO | MFO | PSO | GSA | BA | FPA | SMS | FA | GA
Ave
F14 | 9.980 × 10−1 | 8.25 × 10−31 | 1.38 × 10+2 | 5.43 × 10−19 | 1.30 × 10+2 | 1.01 × 10+1 | 1.06 × 10+2 | 1.76 × 10+2 | 9.21 × 10+1
F15 | 4.661 × 10−4 | 6.67 × 10+1 | 1.67 × 10+2 | 2.04 × 10+1 | 5.44 × 10+2 | 1.14 × 10+1 | 1.56 × 10+2 | 3.54 × 10+2 | 9.67 × 10+1
F16 | −1.03 × 10+0 | 1.19 × 10+2 | 3.95 × 10+2 | 2.45 × 10+2 | 6.97 × 10+2 | 2.35 × 10+2 | 4.07 × 10+2 | 3.08 × 10+2 | 3.69 × 10+2
F17 | 4.286 × 10−1 | 3.45 × 10+2 | 4.86 × 10+2 | 3.15 × 10+2 | 7.45 × 10+2 | 3.55 × 10+2 | 5.19 × 10+2 | 5.49 × 10+2 | 4.51 × 10+2
F18 | 3.0908 × 10+0 | 1.04 × 10+1 | 2.57 × 10+2 | 7.00 × 10+1 | 5.44 × 10+2 | 5.48 × 10+1 | 1.54 × 10+2 | 1.75 × 10+2 | 9.59 × 10+1
F19 | −3.8617 × 10+0 | 7.07 × 10+2 | 7.90 × 10+2 | 8.82 × 10+2 | 8.96 × 10+2 | 5.73 × 10+2 | 6.12 × 10+2 | 8.30 × 10+2 | 5.24 × 10+2
SD
F14 | 1.235 × 10−4 | 1.08 × 10−30 | 1.16 × 10+2 | 1.35 × 10−19 | 1.19 × 10+2 | 3.16 × 10+1 | 2.69 × 10+1 | 8.69 × 10+1 | 2.79 × 10+1
F15 | 7.774 × 10−5 | 5.32 × 10+1 | 1.64 × 10+2 | 6.31 × 10+1 | 1.49 × 10+2 | 3.38 × 10+0 | 6.82 × 10+1 | 1.03 × 10+2 | 9.70 × 10+0
F16 | 8.761 × 10−4 | 2.83 × 10+1 | 1.22 × 10+2 | 4.91 × 10+1 | 1.91 × 10+2 | 3.96 × 10+1 | 6.54 × 10+1 | 3.74 × 10+1 | 4.28 × 10+1
F17 | 2.721 × 10−2 | 4.31 × 10+1 | 6.73 × 10+1 | 1.01 × 10+2 | 1.43 × 10+2 | 2.06 × 10+1 | 4.27 × 10+1 | 1.63 × 10+2 | 3.15 × 10+1
F18 | 9.449 × 10−2 | 3.75 × 10+0 | 2.00 × 10+2 | 4.83 × 10+1 | 1.99 × 10+2 | 4.21 × 10+1 | 9.69 × 10+1 | 8.32 × 10+1 | 5.38 × 10+1
F19 | 3.57 × 10−4 | 1.95 × 10+2 | 1.89 × 10+2 | 4.52 × 10+1 | 8.63 × 10+1 | 1.49 × 10+2 | 1.55 × 10+2 | 1.57 × 10+2 | 2.29 × 10+1
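Each Ave/SD cell in the tables above summarizes a batch of independent runs of one algorithm on one benchmark function. A minimal sketch of that protocol, using a plain random search as a stand-in optimizer (not the SWSO update rules) on the Sphere function; the run count, iteration budget, and bounds are illustrative assumptions:

```python
import random
import statistics

def sphere(x):
    # Simple unimodal test function: global minimum 0 at x = 0
    return sum(xi * xi for xi in x)

def random_search(f, dim, bounds, iters, rng):
    # Stand-in optimizer: best-of-N uniform samples in the search box
    best = float("inf")
    for _ in range(iters):
        x = [rng.uniform(*bounds) for _ in range(dim)]
        best = min(best, f(x))
    return best

rng = random.Random(1)  # fixed seed for reproducibility
results = [random_search(sphere, 30, (-100.0, 100.0), 1000, rng)
           for _ in range(30)]  # 30 independent runs
ave = statistics.mean(results)
sd = statistics.stdev(results)
print(f"Ave = {ave:.3e}, SD = {sd:.3e}")
```

One (Ave, SD) pair produced this way corresponds to one cell pair in a comparison table; a smaller Ave indicates better solution quality and a smaller SD indicates more robust behavior across runs.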
Table 10. Comparison of the proposed algorithm on the CEC2019 test functions against (a) group-C, (b) group-E, and (c) group-F algorithms.
(a)
F | SWSO | MINFO | INFO | SCSO | AVOA | SCA | HHO | GWO | RIME | ZOA
Ave
Cec01 | 9.55 × 10+8 | 3.84 × 10+3 | 5.50 × 10+4 | 5.37 × 10+4 | 5.46 × 10+4 | 5.45 × 10+10 | 8.73 × 10+3 | 3.47 × 10+8 | 1.45 × 10+10 | 5.53 × 10+4
Cec02 | 1.84 × 10+1 | 1.83 × 10+1 | 1.83 × 10+1 | 1.87 × 10+1 | 1.83 × 10+1 | 1.92 × 10+1 | 1.84 × 10+1 | 1.87 × 10+1 | 2.03 × 10+1 | 1.92 × 10+1
Cec03 | 1.370 × 10+1 | 1.37 × 10+1 | 1.37 × 10+1 | 1.37 × 10+1 | 1.37 × 10+1 | 1.37 × 10+1 | 1.37 × 10+1 | 1.37 × 10+1 | 1.37 × 10+1 | 1.37 × 10+1
Cec04 | 4.84 × 10+1 | 1.18 × 10+2 | 2.47 × 10+2 | 2.63 × 10+3 | 3.39 × 10+2 | 3.89 × 10+2 | 9.17 × 10+2 | 1.14 × 10+2 | 6.33 × 10+1 | 3.59 × 10+2
Cec05 | 2.09 × 10+0 | 2.29 × 10+0 | 2.35 × 10+0 | 3.28 × 10+0 | 3.24 × 10+0 | 3.84 × 10+0 | 4.65 × 10+0 | 2.85 × 10+0 | 2.49 × 10+0 | 4.24 × 10+0
Cec06 | 1.07 × 10+1 | 1.02 × 10+1 | 1.16 × 10+1 | 1.16 × 10+1 | 1.08 × 10+1 | 1.30 × 10+1 | 1.26 × 10+1 | 1.32 × 10+1 | 1.11 × 10+1 | 1.09 × 10+1
Cec07 | 2.82 × 10+2 | 3.21 × 10+2 | 5.97 × 10+2 | 9.64 × 10+2 | 9.37 × 10+2 | 1.21 × 10+3 | 7.51 × 10+2 | 8.76 × 10+2 | 4.10 × 10+2 | 3.02 × 10+2
Cec08 | 2.88 × 10+0 | 5.65 × 10+0 | 6.33 × 10+0 | 6.57 × 10+0 | 7.16 × 10+0 | 6.73 × 10+0 | 7.27 × 10+0 | 7.25 × 10+0 | 5.82 × 10+0 | 5.60 × 10+0
Cec09 | 3.47 × 10+0 | 3.99 × 10+0 | 4.54 × 10+0 | 3.58 × 10+2 | 5.58 × 10+0 | 4.03 × 10+2 | 6.62 × 10+0 | 7.98 × 10+0 | 3.81 × 10+0 | 3.88 × 10+2
Cec10 | 1.97 × 10+1 | 2.12 × 10+1 | 2.14 × 10+1 | 2.14 × 10+1 | 2.12 × 10+1 | 2.16 × 10+1 | 2.16 × 10+1 | 2.16 × 10+1 | 2.12 × 10+1 | 2.13 × 10+1
SD
Cec01 | 8.48 × 10+8 | 1.19 × 10+3 | 1.33 × 10+4 | 3.52 × 10+3 | 3.49 × 10+3 | 1.38 × 10+10 | 8.79 × 10+3 | 1.11 × 10+8 | 3.49 × 10+9 | 4.27 × 10+3
Cec02 | 5.75 × 10−2 | 3.65 × 10−15 | 3.46 × 10−15 | 1.27 × 10−1 | 1.31 × 10−4 | 1.75 × 10−1 | 9.78 × 10−2 | 1.39 × 10−1 | 4.37 × 10−1 | 3.00 × 10−1
Cec03 | 1.11 × 10−9 | 1.82 × 10−15 | 2.80 × 10−15 | 2.82 × 10−6 | 1.50 × 10−10 | 8.37 × 10−5 | 8.70 × 10−6 | 9.36 × 10−6 | 3.80 × 10−9 | 2.40 × 10−5
Cec04 | 1.06 × 10+1 | 3.27 × 10+1 | 5.28 × 10+1 | 7.55 × 10+2 | 6.74 × 10+1 | 6.88 × 10+2 | 1.98 × 10+2 | 1.72 × 10+1 | 1.15 × 10+1 | 1.79 × 10+2
Cec05 | 5.83 × 10−2 | 6.23 × 10−1 | 7.29 × 10−1 | 2.83 × 10−1 | 3.76 × 10−1 | 1.86 × 10−1 | 5.93 × 10−1 | 2.43 × 10−1 | 1.18 × 10−1 | 5.63 × 10−1
Cec06 | 6.00 × 10−1 | 1.86 × 10+0 | 2.26 × 10+0 | 1.60 × 10+0 | 2.20 × 10+0 | 7.68 × 10−1 | 1.25 × 10+0 | 6.62 × 10−1 | 1.37 × 10+0 | 8.72 × 10−1
Cec07 | 1.72 × 10+2 | 1.19 × 10+2 | 2.39 × 10+2 | 2.88 × 10+1 | 2.58 × 10+2 | 2.52 × 10+2 | 1.52 × 10+2 | 2.38 × 10+2 | 1.67 × 10+2 | 1.22 × 10+2
Cec08 | 7.45 × 10−1 | 8.95 × 10−1 | 7.54 × 10−1 | 6.25 × 10−1 | 6.94 × 10−1 | 3.44 × 10−1 | 5.76 × 10−1 | 9.34 × 10−1 | 6.48 × 10−1 | 3.40 × 10−1
Cec09 | 5.39 × 10−2 | 1.38 × 10−1 | 2.78 × 10−1 | 9.89 × 10+1 | 6.42 × 10−1 | 1.01 × 10+2 | 6.14 × 10−1 | 1.06 × 10+0 | 1.00 × 10−1 | 1.20 × 10+2
Cec10 | 5.10 × 10+0 | 3.58 × 10+0 | 1.46 × 10−1 | 1.15 × 10−1 | 5.66 × 10−1 | 9.52 × 10−1 | 1.60 × 10−1 | 2.96 × 10+0 | 5.73 × 10−1 | 1.44 × 10+0
(b)
F | SWSO | IWOA | NRO | BDA | CEBA | IPSO | IAGA | BBOA | FOX | IFOX
Ave
Cec01 | 9.55 × 10+8 | 1.1 × 10+8 | 4.5 × 10+6 | 6.6 × 10+7 | 9.8 × 10+8 | 1.1 × 10+7 | 5.5 × 10+7 | 7.2 × 10+6 | 6.2 × 10+2 | 7.0 × 10+5
Cec02 | 1.84 × 10+1 | 5.0 × 10+3 | 1.3 × 10+2 | 8.2 × 10+2 | 4.7 × 10+3 | 5.9 × 10+2 | 1.4 × 10+3 | 8.9 × 10+1 | 5.3 × 10+0 | 1.5 × 10+1
Cec03 | 1.370 × 10+1 | 5.8 × 10+0 | 6.1 × 10+0 | 8.0 × 10+0 | 1.2 × 10+1 | 5.4 × 10+0 | 5.8 × 10+0 | 4.8 × 10+0 | 1.2 × 10+1 | 7.5 × 10+0
Cec04 | 4.84 × 10+1 | 3.7 × 10+3 | 2.9 × 10+2 | 3.4 × 10+3 | 1.8 × 10+4 | 2.7 × 10+2 | 3.7 × 10+2 | 9.9 × 10+3 | 2.4 × 10+4 | 2.3 × 10+2
Cec05 | 2.09 × 10+0 | 2.6 × 10+0 | 1.3 × 10+0 | 2.5 × 10+0 | 6.4 × 10+0 | 1.4 × 10+0 | 1.7 × 10+0 | 3.4 × 10+0 | 5.2 × 10+0 | 1.7 × 10+0
Cec06 | 1.07 × 10+1 | 1.1 × 10+1 | 1.1 × 10+1 | 1.2 × 10+1 | 1.4 × 10+1 | 1.1 × 10+1 | 8.1 × 10+0 | 1.1 × 10+1 | 1.2 × 10+1 | 1.1 × 10+1
Cec07 | 2.82 × 10+2 | 4.4 × 10+2 | 2.8 × 10+1 | 3.2 × 10+2 | 1.3 × 10+3 | 9.9 × 10+1 | 5.3 × 10+1 | 6.0 × 10+2 | 1.3 × 10+3 | 1.6 × 10+2
Cec08 | 2.88 × 10+0 | 1.1 × 10+0 | 1.0 × 10+0 | 1.1 × 10+0 | 1.8 × 10+0 | 1.0 × 10+0 | 1.0 × 10+0 | 1.3 × 10+0 | 1.6 × 10+0 | 1.0 × 10+0
Cec09 | 3.47 × 10+0 | 5.6 × 10+1 | 9.3 × 10+0 | 1.1 × 10+2 | 8.8 × 10+2 | 9.3 × 10+0 | 1.4 × 10+1 | 2.2 × 10+2 | 8.7 × 10+2 | 6.3 × 10+0
Cec10 | 1.97 × 10+1 | 2.1 × 10+1 | 2.1 × 10+1 | 2.2 × 10+1 | 2.2 × 10+1 | 2.2 × 10+1 | 2.1 × 10+1 | 2.1 × 10+1 | 2.2 × 10+1 | 2.2 × 10+1
SD
Cec01 | 8.48 × 10+8 | 1.5 × 10+8 | 3.5 × 10+7 | 2.5 × 10+8 | 4.2 × 10+7 | 5.8 × 10+7 | 5.9 × 10+7 | 5.9 × 10+7 | 1.4 × 10+4 | 1.3 × 10+7
Cec02 | 5.75 × 10−2 | 4.2 × 10+2 | 3.9 × 10+2 | 2.0 × 10+3 | 2.0 × 10+2 | 4.5 × 10+2 | 4.3 × 10+2 | 4.2 × 10+2 | 5.6 × 10+0 | 1.6 × 10+2
Cec03 | 1.11 × 10−9 | 1.5 × 10+0 | 2.2 × 10+0 | 1.6 × 10+0 | 9.8 × 10−2 | 2.2 × 10+0 | 1.5 × 10+0 | 1.1 × 10+0 | 5.0 × 10−2 | 9.2 × 10−1
Cec04 | 1.06 × 10+1 | 2.1 × 10+3 | 1.5 × 10+3 | 6.0 × 10+3 | 2.0 × 10+3 | 1.3 × 10+3 | 1.4 × 10+3 | 1.4 × 10+3 | 3.4 × 10+2 | 1.2 × 10+3
Cec05 | 5.83 × 10−2 | 3.8 × 10−1 | 4.8 × 10−1 | 1.1 × 10+0 | 2.8 × 10−1 | 3.8 × 10−1 | 4.1 × 10−1 | 2.9 × 10−1 | 1.4 × 10−1 | 3.7 × 10−1
Cec06 | 6.00 × 10−1 | 8.4 × 10−1 | 8.0 × 10−1 | 3.9 × 10−1 | 7.3 × 10−2 | 7.6 × 10−1 | 1.2 × 10+0 | 7.0 × 10−1 | 5.8 × 10−1 | 6.2 × 10−1
Cec07 | 1.72 × 10+2 | 1.6 × 10+2 | 1.2 × 10+2 | 4.1 × 10+2 | 5.3 × 10+1 | 1.2 × 10+2 | 1.1 × 10+2 | 1.1 × 10+2 | 3.2 × 10+1 | 9.5 × 10+1
Cec08 | 7.45 × 10−1 | 1.0 × 10−1 | 5.4 × 10−2 | 2.6 × 10−1 | 6.1 × 10−2 | 5.1 × 10−2 | 6.0 × 10−2 | 4.9 × 10−2 | 4.9 × 10−2 | 4.9 × 10−2
Cec09 | 5.39 × 10−2 | 8.9 × 10+1 | 4.7 × 10+1 | 2.2 × 10+2 | 1.7 × 10+1 | 5.0 × 10+1 | 5.4 × 10+1 | 5.6 × 10+1 | 4.9 × 10+0 | 3.7 × 10+1
Cec10 | 5.10 × 10+0 | 1.7 × 10−1 | 6.2 × 10−1 | 8.6 × 10−2 | 1.0 × 10−2 | 1.1 × 10−1 | 1.3 × 10−1 | 3.3 × 10−1 | 7.4 × 10−2 | 8.0 × 10−2
(c)
F | SWSO | Hybrid FOXTSA | FOX | TSA | PSO | GWO | MRSO | RSO
Ave
Cec01 | 9.55 × 10+8 | 2.33 × 10+6 | 6.51 × 10+3 | 1.60 × 10+3 | 1.33 × 10+3 | 3.05 × 10+3 | 1.588 × 10+5 | 6.263 × 10+4
Cec02 | 1.84 × 10+1 | 1.71 × 10+1 | 1.83 × 10+1 | 1.74 × 10+1 | 1.73 × 10+1 | 1.73 × 10+1 | 1.83 × 10+1 | 1.84 × 10+1
Cec03 | 1.37 × 10+1 | 1.27 × 10+1 | 1.370 × 10+1 | 1.27 × 10+1 | 1.27 × 10+1 | 1.27 × 10+1 | 1.37 × 10+1 | 1.37 × 10+1
Cec04 | 4.84 × 10+1 | 1.37 × 10+3 | 1.80 × 10+1 | 4.76 × 10+1 | 6.37 × 10+1 | 4.13 × 10+1 | 9.20 × 10+3 | 8.86 × 10+3
Cec05 | 2.09 × 10+0 | 1.92 × 10+0 | 6.30 × 10+0 | 1.55 × 10+0 | 1.26 × 10+0 | 1.35 × 10+0 | 4.57 × 10+0 | 4.631 × 10+0
Cec06 | 1.07 × 10+1 | 8.39 × 10+0 | 5.68 × 10+0 | 1.03 × 10+1 | 6.51 × 10+0 | 1.06 × 10+1 | 1.09 × 10+1 | 1.16 × 10+1
Cec07 | 2.82 × 10+2 | 3.12 × 10+2 | 4.56 × 10+2 | 2.96 × 10+2 | 1.65 × 10+2 | 3.85 × 10+2 | 6.11 × 10+2 | 7.89 × 10+2
Cec08 | 2.88 × 10+0 | 5.28 × 10+0 | 5.68 × 10+0 | 5.71 × 10+0 | 5.19 × 10+0 | 4.60 × 10+0 | 6.31 × 10+0 | 6.32 × 10+0
Cec09 | 3.47 × 10+0 | 1.35 × 10+0 | 3.80 × 10+0 | 2.43 × 10+0 | 2.79 × 10+0 | 3.99 × 10+0 | 4.96 × 10+2 | 5.86 × 10+2
Cec10 | 1.97 × 10+1 | 2.01 × 10+1 | 2.10 × 10+1 | 2.04 × 10+1 | 2.08 × 10+1 | 2.03 × 10+1 | 2.13 × 10+1 | 2.14 × 10+1
SD
Cec01 | 8.48 × 10+8 | 2.65 × 10+3 | 2.73 × 10+4 | 2.68 × 10+2 | 8.21 × 10+2 | 1.93 × 10+3 | 3.199 × 10+5 | 1.392 × 10+4
Cec02 | 5.75 × 10−2 | 7.52 × 10−2 | 4.60 × 10−4 | 1.35 × 10−2 | 6.63 × 10−15 | 1.13 × 10−4 | 7.231 × 10−3 | 1.981 × 10−1
Cec03 | 1.11 × 10−9 | 1.02 × 10−9 | 8.42 × 10−4 | 1.13 × 10−3 | 6.78 × 10−4 | 1.04 × 10−3 | 1.337 × 10−6 | 1.828 × 10−4
Cec04 | 1.06 × 10+0 | 8.77 × 10+2 | 6.98 × 10+0 | 8.78 × 10+0 | 3.38 × 10+1 | 1.53 × 10+1 | 3.203 × 10+3 | 2.152 × 10+3
Cec05 | 5.83 × 10−2 | 1.15 × 10−1 | 7.49 × 10−1 | 2.39 × 10−1 | 9.23 × 10−2 | 2.00 × 10−1 | 4.133 × 10−1 | 4.290 × 10−1
Cec06 | 6.00 × 10−1 | 6.39 × 10−1 | 1.59 × 10+0 | 4.19 × 10−1 | 1.44 × 10+0 | 4.90 × 10−1 | 1.011 × 10+0 | 8.597 × 10−1
Cec07 | 1.72 × 10+2 | 1.45 × 10+2 | 1.97 × 10+2 | 1.62 × 10+2 | 1.05 × 10+2 | 3.33 × 10+2 | 2.291 × 10+2 | 2.154 × 10+2
Cec08 | 7.45 × 10−1 | 3.46 × 10−1 | 3.62 × 10−1 | 6.05 × 10−1 | 4.87 × 10−1 | 1.00 × 10+0 | 4.138 × 10−1 | 4.334 × 10−1
Cec09 | 5.39 × 10−2 | 9.61 × 10+0 | 6.09 × 10−3 | 2.97 × 10−2 | 3.61 × 10−1 | 9.07 × 10−1 | 1.493 × 10+2 | 1.362 × 10+2
Cec10 | 5.10 × 10+0 | 8.16 × 10−2 | 8.72 × 10−3 | 8.11 × 10−2 | 4.76 × 10+0 | 4.89 × 10−1 | 1.490 × 10−1 | 1.116 × 10−1
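When many algorithms are compared across the ten CEC2019 functions, per-function averages are often condensed into mean ranks (rank 1 = lowest Ave on that function). A minimal sketch of that aggregation; the two illustrative rows reuse the Cec08 and Cec10 averages for SWSO, PSO, and GWO from the group-F table, and tie handling is omitted:

```python
def mean_ranks(results, algs):
    """results: one dict {algorithm: Ave} per benchmark function.
    Returns each algorithm's rank averaged over all functions."""
    totals = {a: 0.0 for a in algs}
    for row in results:
        # Sort ascending: smaller Ave is better for minimization
        ordered = sorted(algs, key=lambda a: row[a])
        for rank, a in enumerate(ordered, start=1):
            totals[a] += rank
    return {a: totals[a] / len(results) for a in algs}

algs = ["SWSO", "PSO", "GWO"]
results = [
    {"SWSO": 2.88e0, "PSO": 5.19e0, "GWO": 4.60e0},  # Cec08 Ave values
    {"SWSO": 1.97e1, "PSO": 2.08e1, "GWO": 2.03e1},  # Cec10 Ave values
]
print(mean_ranks(results, algs))  # SWSO ranks first on both functions
```

A lower mean rank indicates more consistent performance across the suite; a full comparison would use all ten functions and a tie-aware ranking.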
Table 11. Algorithmic comparison on the welded beam design problem.
References | Algorithm | Minimum cost | h | l | t | b
Proposed model | SWSO | 1.5878 | 0.1677 | 4.0646 | 9.9985 | 0.1682
[97] | SMR | 1.6570 | 0.2016 | 3.2324 | 9.0461 | 0.2057
[96] | MSSSA | 1.6952 | 0.2041 | 3.2830 | 9.0366 | 0.2057
[98] | FDA | 1.6954 | 0.2055 | 3.2578 | 9.0366 | 0.2057
[99] | CEBA | 1.6977 | 0.2047 | 3.2734 | 9.0390 | 0.2058
[100] | AZOA | 1.7200 | 0.4690 | 1.9400 | 5.7200 | 0.5140
[12] | RIME | 1.722821 | 0.2080 | 3.2500 | 9.0537 | 0.2086
[101] | WCMFO | 1.7235 | 0.2067 | 3.4495 | 9.0367 | 0.2057
[77] | MFO | 1.72452 | 0.2057 | 3.4703 | 9.0364 | 0.2057
[102] | CBO | 1.7246 | 0.2057 | 3.4704 | 9.0372 | 0.2057
[103] | SSO-C | 1.7248 | 0.2057 | 3.4704 | 9.0366 | 0.2057
[15] | ABC | 1.7248 | 0.2057 | 3.4704 | 9.0366 | 0.2057
[104] | MSHO | 1.7248 | 0.2057 | 3.4704 | 9.0366 | 0.2057
[105] | SO | 1.7248 | 0.2057 | 3.4705 | 9.0366 | 0.2057
[106] | ACSA | 1.7249 | 0.2057 | 3.4705 | 9.0366 | 0.2057
[107] | MVO | 1.7254 | 0.2056 | 3.4721 | 9.0409 | 0.2057
[90] | SCA | 1.7591 | 0.2046 | 3.5362 | 9.0042 | 0.2100
[80] | BA | 1.7851 | 0.2026 | 3.5271 | 9.0075 | 0.2105
[108] | GA | 1.8739 | 0.1641 | 4.0325 | 10.000 | 0.2236
[109] | ASO | 2.0868 | 0.1753 | 7.9569 | 9.8477 | 0.1732
[79] | GSA | 2.1728 | 0.1470 | 5.4907 | 10.000 | 0.2177
[110] | WEOA | 2.2182 | 0.2057 | 7.0923 | 9.0366 | 0.2057
[111] | MSCA | 2.3832 | 0.2442 | 6.2063 | 8.3121 | 0.2443
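The welded beam problem minimizes fabrication cost over the weld thickness h, weld length l, bar height t, and bar thickness b. A sketch of the cost objective conventionally used for this benchmark in the literature (constraint handling for shear stress, bending stress, buckling, and deflection is omitted here):

```python
def welded_beam_cost(h, l, t, b):
    # Conventional welded-beam objective:
    # weld material cost + bar stock cost (coefficients from the literature)
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

# SWSO design vector from Table 11; evaluates to ~1.588,
# matching the reported 1.5878 up to rounding of the printed variables
print(welded_beam_cost(0.1677, 4.0646, 9.9985, 0.1682))

# Classical near-optimal design reported by many algorithms (e.g., ABC, SO)
print(welded_beam_cost(0.2057, 3.4705, 9.0366, 0.2057))  # ~1.7246
```

Evaluating the table's design vectors through this objective is a quick consistency check on the reported minimum costs; small discrepancies in the last digit come from the four-decimal rounding of the printed variables.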
Table 12. PVD design parameters.
Parameters | Parameter's title | Type of variable
x1 = Ts | shell thickness | Discrete
x2 = Th | head thickness | Discrete
x3 = R | inner radius | Continuous
x4 = L | length of the cylindrical section of the vessel | Continuous
Table 13. Algorithmic comparison on the pressure vessel design problem.
References | Algorithm | Minimum cost | Ts | Th | R | L
Case-A
This paper | SWSO | 5754.7738 | 0.7448 | 0.3792 | 40.6413 | 195.6608
[118] | EJS | 5870.1250 | 0.7770 | 0.3848 | 40.4253 | 198.5706
[88] | GWO | 5870.3903 | 0.7741 | 0.3833 | 40.3196 | 200.0000
[119] | MOCH-PSO | 5971.4003 | 0.7964 | 0.3994 | 41.0039 | 190.8011
[120] | CSS | 6059.0888 | 0.8125 | 0.4375 | 42.1036 | 176.5726
[121] | MDE | 6059.7016 | 0.8125 | 0.4375 | 42.0984 | 176.6360
[77] | MFO | 6059.7143 | 0.8125 | 0.4375 | 42.0984 | 176.6365
[85] | WOA | 6059.7410 | 0.8125 | 0.4375 | 42.0983 | 176.6390
[12] | RIME | 6055.5868 | 0.8750 | 0.4375 | 45.9482 | 135.3594
Case-B
This paper | SWSO | 5770.3503 | 0.7650 | 0.3761 | 41.1353 | 188.9529
[122] | GA-PSO-SQP | 5798.7989 | 0.8809 | 0.4337 | 45.6342 | 137.2499
[115] | CSA | 5888.5213 | 0.8259 | 0.3814 | 42.7444 | 168.7212
[120] | ACO | 6059.0888 | 0.8125 | 0.4375 | 42.1036 | 176.5727
[113] | CS | 6059.7143 | 0.8125 | 0.4375 | 42.0984 | 176.6365
[80] | BA | 6059.7143 | 0.8125 | 0.4375 | 42.0984 | 176.6365
[28] | IPSO | 6059.7143 | 0.8125 | 0.4375 | 42.0984 | 176.6366
[24] | CPSO | 6061.0777 | 0.8125 | 0.4375 | 42.0912 | 176.7465
[123] | Branch-bound | 8129.1036 | 1.1250 | 0.6250 | 47.7000 | 117.7010
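The PVD objective combines the cost of material, forming, and welding as a function of Ts, Th, R, and L. A sketch of the cost function conventionally used for this benchmark (the four side constraints, e.g. Ts ≥ 0.0193 R and L ≤ 240, are omitted here):

```python
def pressure_vessel_cost(ts, th, r, l):
    # Conventional PVD objective (coefficients from the literature):
    # shell material + head material + shell forming + head forming
    return (0.6224 * ts * r * l
            + 1.7781 * th * r**2
            + 3.1661 * ts**2 * l
            + 19.84 * ts**2 * r)

# Classical design shared by MFO, CS, and BA in Table 13; evaluates to ~6059.71
print(pressure_vessel_cost(0.8125, 0.4375, 42.0984, 176.6365))
```

As with the welded beam case, re-evaluating the tabulated design vectors through this objective reproduces the reported minimum costs up to rounding, which is a useful sanity check when transcribing results from multiple papers.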
Share and Cite

Khoshaba, F.S.; Kareem, S.W.; Hawezi, R.S. Swallow Search Algorithm (SWSO): A Swarm Intelligence Optimization Approach Inspired by Swallow Bird Behavior. Computers 2025, 14, 345. https://doi.org/10.3390/computers14090345