Article

A Tuning-Free Constrained Team-Oriented Swarm Optimizer (CTOSO) for Engineering Problems

by Adel BenAbdennour and Abdulmajeed M. Alenezi *
College of Engineering, Islamic University of Madinah, Madinah 42351, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2026, 14(1), 176; https://doi.org/10.3390/math14010176
Submission received: 1 December 2025 / Revised: 24 December 2025 / Accepted: 28 December 2025 / Published: 2 January 2026

Abstract

Constrained optimization problems (COPs) are frequent in engineering design yet remain challenging due to complex search spaces and strict feasibility requirements. Existing swarm-based optimizers often rely on penalty functions or algorithm-specific control parameters, whose performance is sensitive to problem-dependent tuning and may lead to premature convergence or infeasible solutions when feasible regions are narrow. This paper introduces the Constrained Team-Oriented Swarm Optimizer (CTOSO), a tuning-free metaheuristic that adapts the ETOSO framework by replacing linear exploiter movement with spiral search and integrating Deb’s feasibility rule. The population divides into Explorers, promoting diversity through neighbor-guided navigation, and Exploiters, performing intensified local search around the global best solution. Extensive evaluation on twelve constrained engineering benchmark problems shows that CTOSO achieves a 100% feasibility rate and attains the highest overall composite performance score among the compared algorithms under limited function-evaluation budgets. On the CEC 2017 constrained benchmark suite, CTOSO attains an average feasibility rate of 79.78%, generating feasible solutions on 14 out of 15 problems. Statistical analysis using Wilcoxon signed-rank tests and Friedman ranking with Nemenyi post hoc comparison indicates that CTOSO performs significantly better than several baseline optimizers, while exhibiting no statistically significant differences with leading evolutionary methods under the same experimental conditions. The algorithm’s design, requiring no tuning of algorithm-specific control parameters, makes it suitable for real-world engineering applications where tuning effort must be minimized.

1. Introduction

Constrained Optimization Problems (COPs) pose a fundamental challenge in engineering design, where optimal performance must be achieved without violating strict physical and functional limitations. These problems can be formally defined as the search for a vector of decision variables x in a D-dimensional bounded decision space that minimizes an objective function f(x) subject to a set of inequality and equality constraints.
In practice, the primary challenge of constrained optimization problems arises from the inherent tradeoff between optimality and feasibility. The feasible region is often disjoint, non-convex, and occupies only a small fraction of the decision space, making it difficult for traditional optimization methods to navigate effectively. Metaheuristic algorithms have been widely applied to this domain due to their flexibility and gradient-free nature. However, their effectiveness depends on two critical components: the constraint-handling technique (CHT) and the search algorithm’s ability to balance exploration and exploitation.
Among CHTs, penalty function methods are widely used but suffer from a critical flaw. Their performance is highly sensitive to the chosen penalty coefficients, which require problem-specific tuning. As a parameter-free alternative, Deb’s feasibility rule has emerged as a popular choice, prioritizing feasibility over objective value. While this rule is effective, it can lead to premature convergence if the search algorithm itself lacks a robust mechanism for maintaining population diversity and navigating complex landscapes.
The Team-Oriented Swarm Optimizer (TOSO) [1] introduced a novel population structure based on a fixed division of labor. This was followed by the Enhanced Team-Oriented Swarm Optimizer (ETOSO) [2], which refined the internal mechanics of each team by introducing a linear weight-based exploitation strategy for exploiters and a neighbor-guided strategy for explorers, significantly improving performance on unconstrained mathematical benchmark problems.
This paper presents the Constrained Team-Oriented Swarm Optimizer (CTOSO), a tuning-free constrained optimization algorithm obtained through a deliberate re-engineering of ETOSO. Unlike ETOSO, which relies on linear exploiter movement and auxiliary recovery mechanisms, CTOSO introduces fundamental algorithmic modifications that improve robustness and usability in constrained search spaces. Within the team-oriented swarm paradigm, previously represented by the original TOSO algorithm and its enhanced variant ETOSO, the present work positions CTOSO as a further development of this framework.
The main contributions of this work are summarized as follows:
  • Structurally simplified constrained optimization framework:
    CTOSO eliminates recovery-based operators and algorithm-specific control parameters, reducing algorithmic complexity and minimizing tuning effort in constrained optimization problems.
  • Feasibility-driven spiral exploitation mechanism:
    The linear exploiter movement of ETOSO is replaced with a single-center, adaptive spiral contraction around the global best solution, enabling controlled intensification without introducing additional control parameters.
  • Consistent feasibility integration via Deb’s rule:
    Deb’s feasibility rule is applied uniformly across all solution comparisons, selections, and updates, ensuring stable convergence and reliable feasibility preservation when feasible regions are narrow or highly constrained.
  • Design tailored for low evaluation budgets:
    CTOSO is explicitly designed and evaluated under fixed and limited function-evaluation budgets, making it suitable for black-box engineering problems where evaluation cost is high and rapid convergence is required.
The remainder of this paper is organized as follows: Section 2 provides a review of related work in constrained optimization and metaheuristics. Section 3 details the formal mathematical model of the Constrained Team-Oriented Swarm Optimizer (CTOSO). Section 4 presents the initial validation of the proposed algorithm on the CEC 2017 constrained benchmark suite. Section 5 provides a focused comparative evaluation against the predecessor variants, TOSO and ETOSO, to isolate the impact of the structural modifications. Section 6 details the twelve constrained engineering benchmark problems, the comparative algorithms, and the experimental setup. Section 7 presents the comprehensive analysis of the empirical results. Section 8 discusses the findings in depth. Finally, Section 9 concludes the paper and suggests directions for future work.

2. Related Work

Constrained optimization problems (COPs) are fundamental across numerous engineering domains, where design variables must satisfy nonlinear inequalities, equalities, and bound constraints. Their feasible regions often form nonconvex, disjoint, or extremely narrow manifolds, making classical gradient-based methods inadequate for black-box or simulation-driven engineering models [3]. Recent reviews emphasize that evolutionary and swarm-based metaheuristics have become dominant for COPs due to their population diversity, robustness to noise, and ability to explore irregular landscapes [4].
Within this broad class of population-based optimizers, evolutionary and swarm intelligence algorithms have been widely adopted in structural, civil, mechanical, and energy engineering. Methods such as Genetic Algorithms (GA) [5], Differential Evolution (DE) [6], Particle Swarm Optimization (PSO) [7], and a wide range of nature-inspired approaches, including Grey Wolf Optimizer (GWO) [8], Moth Flame Optimizer (MFO) [9], Whale Optimization Algorithm (WOA) [10], and Teaching-Learning-Based Optimization (TLBO) [11], have been successfully applied to practical design problems involving discrete constraints, nonlinear material laws, and simulation-based objective functions [12,13,14]. It has been reported that these population-based optimizers have largely replaced classical mathematical programming techniques in real-world structural optimization, particularly in cases involving discontinuities or high-dimensional constraints. In parallel, significant progress has been documented in hybrid, machine-learning-assisted, and physics-informed metaheuristics for engineering optimization under constraints.
Despite these advances, constraint handling remains a central challenge across both evolutionary and swarm-based methods. Penalty functions continue to be widely used due to their conceptual simplicity; however, their performance is highly sensitive to penalty coefficients, which must be carefully tuned to balance feasibility and objective improvement [15]. To reduce this sensitivity, adaptive, dynamic, and co-evolutionary penalty mechanisms have been proposed, adjusting penalty parameters based on population statistics or online learning strategies [16,17]. Although such approaches alleviate manual tuning to some extent, several studies report that they remain problem-dependent and exhibit limited robustness on highly nonlinear COPs [18].
To overcome the limitations of penalty-based methods, feasibility-driven constraint-handling techniques have gained considerable attention. Among these, Deb’s feasibility rule [19] remains the most influential parameter-free approach and is extensively used in modern constrained evolutionary optimization. By deterministically prioritizing feasible solutions and ranking infeasible ones according to constraint violation magnitude, Deb’s rule eliminates the need for penalty coefficients entirely. Feasibility-based techniques have demonstrated strong robustness and ease of integration across a wide range of applications, including structural engineering, energy systems, biomedical optimization, and logistics design. Nevertheless, when feasible regions are extremely small or severely distorted, feasibility-driven selection may lead to premature convergence, motivating hybridizations with repair operators or multi-stage search schemes [20].
In parallel, relaxation-based constraint-handling strategies have evolved as an alternative means of balancing feasibility and exploration. ε-constrained methods, which progressively tighten feasibility tolerance during the search, have shown strong performance on COPs with complex feasible boundaries. An improved ε-constrained differential evolution method incorporating adaptive gradient-based repair has demonstrated superior performance on real-world constrained mechanical and industrial design problems [21]. Related approaches, including stochastic feasibility control and multi-objective reformulations of COPs, have also been explored to enhance exploration while maintaining constraint awareness [22].
Repair and decoder-based techniques remain particularly relevant in engineering contexts where feasibility is governed by domain-specific physical laws or geometric relationships. Recent studies have integrated local surrogate models, physics-based correctors, and boundary-informed mapping operators to enforce feasibility efficiently in simulation-driven design environments [23]. However, the reliance of these approaches on problem-specific knowledge and their tendency to distort search trajectories limit their general applicability.
More recently, adaptive and learning-driven constraint-handling frameworks have emerged as a major research trend. Reinforcement learning has been employed to guide penalty adjustment, operator selection, and strategy switching, leading to self-adaptive COP solvers that outperform fixed constraint-handling techniques on many benchmark problems [24,25]. Adaptive constraint-handling selection mechanisms have also been proposed for constrained multi-objective optimization, dynamically choosing the most effective strategy based on population-level metrics [26]. Ensemble-based approaches further extend this idea by maintaining and adaptively weighting multiple constraint-handling techniques throughout the optimization process to exploit complementary strengths [27,28].
Alongside developments in constraint-handling strategies, the swarm intelligence literature continues to expand with the introduction of new population interaction mechanisms and search dynamics. Recent swarm optimizers, including the Random Average Marine Predators Algorithm (RAMPA) [29], improved variants of the Tunicate Swarm Algorithm such as IMATSA [30], and competitive swarm-based frameworks, represent a growing trend toward adaptive and competition-driven search strategies. These approaches typically enhance exploration–exploitation balance through dynamic control of search operators, inter-agent competition, or adaptive parameter adjustment based on population feedback. While such mechanisms have shown promising performance on complex and high-dimensional problems, they introduce additional algorithmic layers and control logic that influence convergence behavior. In contrast, the present work deliberately avoids adaptive learning components and competitive selection schemes, focusing instead on a tuning-free, team-oriented swarm structure with feasibility-driven selection. This design choice prioritizes simplicity, robustness, and reproducibility under constrained optimization with limited evaluation budgets, which is the primary scope of this study.
In the same domain, modern swarm algorithms such as the Grasshopper Optimization Algorithm [31], Harris Hawks Optimization [32], Prairie Dog Optimization [33], Raven Roost Optimization [34], and the Slime Mould Algorithm [35] illustrate the breadth and continued evolution of swarm-based optimization research. Despite their diversity, many of these approaches still rely on algorithm-specific control parameters or require careful tuning, which limits their robustness in constrained engineering applications and motivates further investigation into tuning-free swarm optimizers.
The survey of the related work reveals three main gaps. First, many constrained optimization methods still rely on penalty functions or adaptive penalty rules, which require problem-dependent tuning and often behave inconsistently across different engineering problems. Second, several studies report that feasibility-rule approaches can suffer from premature convergence, because the population may be pushed too quickly toward the first feasible region and lose the diversity needed to reach better feasible areas. Third, most swarm and evolutionary algorithms were originally designed for unconstrained problems, so their update rules do not naturally fit constrained spaces where feasible regions are narrow or irregular. Together, these points highlight the need for an approach that avoids problem-dependent penalties, prevents premature convergence, and uses search operators that take feasibility into account during the search. These trends motivate the development of a tuning-free feasibility-driven framework designed specifically to address these observed limitations.

3. The CTOSO Algorithm

The Constrained Team-Oriented Swarm Optimizer (CTOSO) is a metaheuristic designed to efficiently solve complex COPs. CTOSO represents a substantial reworking of ETOSO’s core mechanics, replacing the original linear exploitation strategy with spiral-based local search and introducing constraint handling, while preserving the team-oriented population structure. It builds upon this foundation by integrating Deb’s feasibility rule for constraint handling throughout all selection processes. This structure balances the exploration and exploitation phases while requiring no tuning of algorithm-specific control parameters, making it particularly suitable for real-world engineering applications. In this section, x represents a D-dimensional vector, and f(x) and v(x) represent its objective and violation values, respectively.

3.1. Population Definition and Team Architecture

Let the swarm at iteration t be a population P(t) of ps candidate solutions (individuals):
P(t) = {x_1(t), x_2(t), …, x_ps(t)}. Each individual x_i(t) is a D-dimensional vector x_i(t) = (x_{i,1}(t), …, x_{i,D}(t)) within the bounds [lb, ub]. Associated with each individual is its objective value f_i(t) and its total constraint violation v_i(t). The population is deterministically partitioned into two equal-sized teams:
  • Exploiters: XE(t) = {x_1(t), …, x_{ps/2}(t)}
  • Explorers: XO(t) = {x_{ps/2+1}(t), …, x_ps(t)}
The update equation for an individual is determined strictly by its team membership.

3.2. Constraint Handling via Deb’s Rule

CTOSO employs Deb’s feasibility rule for all solution comparisons. The total constraint violation v(x) for a solution x is calculated as the L2-norm (Euclidean distance) of constraint breaches:
v(x) = √( Σ_k [max(0, g_k(x))]² + Σ_l [h_l(x)]² )
A solution xi is considered superior to xj if any of the following conditions holds:
  • xi is feasible and xj is not.
  • Both are feasible and f(xi) < f(xj).
  • Both are infeasible and v(xi) < v(xj).
This dominance logic is applied consistently for updating the global best solution x_gbest as well as for all internal selection decisions inside the explorer and exploiter teams, ensuring that feasible solutions are always prioritized during the entire search process.
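As a minimal illustration, the violation measure and the three-way comparison can be written in Python (the helper names `violation` and `deb_better` and the explicit feasibility tolerance are assumptions of this sketch, not identifiers from the paper):

```python
import numpy as np

def violation(g, h):
    """Total constraint violation v(x): L2-norm of the inequality breaches
    max(0, g_k(x)) and the equality residuals h_l(x)."""
    g = np.asarray(g, dtype=float)
    h = np.asarray(h, dtype=float)
    return float(np.sqrt(np.sum(np.maximum(0.0, g) ** 2) + np.sum(h ** 2)))

def deb_better(f_i, v_i, f_j, v_j, tol=1e-6):
    """True if solution i dominates solution j under Deb's feasibility rule."""
    feas_i, feas_j = v_i <= tol, v_j <= tol
    if feas_i and not feas_j:          # feasible beats infeasible
        return True
    if feas_i and feas_j:              # both feasible: compare objectives
        return f_i < f_j
    if not feas_i and not feas_j:      # both infeasible: compare violations
        return v_i < v_j
    return False
```

Note that the comparison never mixes objective and violation values, which is why no penalty coefficient is needed.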

3.3. Exploiter Update Model: Spiral-Based Search

Individuals in the exploiter team XE(t) perform an intensified local search by executing a spiral movement around the current global best solution xgbest(t). This spiral-based approach fundamentally differs from ETOSO’s linear movement strategy, providing more sophisticated local search capabilities around promising regions. This behavior is inspired by the Moth-Flame Optimizer (MFO). The position of each exploiter xi in XE(t) is updated in every dimension d in {1, …, D} as follows:
x_{i,d}(t + 1) = D_{i,d} · e^{b·r_{e,d}} · cos(2π·r_{e,d}) + x_{gbest,d}(t)
where
  • D_{i,d} = |x_{gbest,d}(t) − x_{i,d}(t)| is the distance from the individual to the global best in dimension d.
  • r_{e,d} is a random scalar uniformly distributed in [−1, 1].
  • b = (t/FE_max) − 1 is an internal adaptive state (not a user-tuned parameter) that deterministically transitions the search from global to local as the evaluation budget is consumed. At the beginning of a run, b ≈ −1, so the exponent b·r_{e,d} lies in [−1, 1] and the radial factor e^{b·r_{e,d}} ranges from approximately 0.37 to 2.72, allowing moderate contraction and expansion around x_gbest. As the number of function evaluations approaches FE_max, b converges to 0 and thus e^{b·r_{e,d}} approaches 1, so the spiral reduces to small oscillations whose amplitude is governed mainly by the shrinking distance D_{i,d}. This schedule produces a smooth transition from broader spiral search in early iterations to fine-grained exploitation near convergence, without introducing additional control parameters. All candidate solutions are clamped to the bounds lb and ub after the position update to maintain variable feasibility.
CTOSO differs from spiral-based optimizers such as MFO and the Spiral Dynamic Algorithm (SDA). In CTOSO, the spiral operator is employed strictly as a local intensification mechanism restricted to the exploiter team and guided solely by the current global best solution, whereas MFO applies spiral motion uniformly across the population within a flame-sorting and flame-reduction hierarchy.
CTOSO also differs from the Spiral Dynamic Algorithm (SDA) in several structural aspects. While the exploiter update in (2) follows a spiral-shaped trajectory, its update mechanism is not the SDA. The two approaches differ in their core mathematical structure and update logic, in the following aspects:
  • Independent dimension-wise stochastic update vs. matrix-based operator: In the canonical SDA, spiral motion is defined through a rotation–contraction operator, where a rotation matrix is applied to the solution vector. This matrix operation couples the decision variables through a global transformation of the state vector [36]. In contrast, CTOSO does not construct or apply any rotation matrix. The exploiter update is computed using a scalar expression inside a loop over dimensions, where each coordinate x_{i,d} is updated directly based on its own value and the corresponding coordinate of the global best. This results in an independent dimension-wise update rather than a coupled matrix-based transformation.
  • Distance-modulated spiral amplitude: In CTOSO, the spiral step size in each dimension is explicitly proportional to the distance |x_{gbest,d} − x_{i,d}|. As a result, the spiral amplitude in each coordinate naturally decreases as that coordinate approaches the global best. In SDA, the spiral radius is controlled by predefined contraction parameters that are part of the rotation–contraction operator, rather than being derived from coordinate-wise distances to the current best solution [36,37].
  • Decoupled stochastic spiral phase vs. deterministic spiral mapping: In CTOSO, stochasticity is introduced by independently sampling the spiral phase r_{e,d} for each dimension during the exploiter update. Consequently, different dimensions of the position vector may follow distinct spiral trajectories around the global best solution. In canonical SDA formulations, once the rotation and contraction parameters of the spiral operator are fixed, the resulting spiral mapping is deterministic and applied as a single structured transformation of the position vector [36].
  • Budget-driven spiral scheduling: CTOSO controls the spiral contraction strength through a budget-dependent schedule, where the spiral constant is updated based on the remaining number of function evaluations. This removes the need for user-defined spiral parameters. In SDA, spiral behavior is governed by explicitly defined rotation angles and contraction factors that must be selected in advance as part of the spiral operator [36,37].
For these reasons, CTOSO employs a distance-scaled, dimension-wise stochastic spiral exploitation operator, whereas SDA is defined by a matrix-based rotation–contraction spiral operator with explicitly parameterized dynamics.
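A compact sketch of the dimension-wise spiral update in (2), together with the budget-driven schedule for b, is given below; the function name and the `rng` argument are assumptions of this sketch, and bound clamping is omitted for brevity:

```python
import numpy as np

def exploiter_spiral_step(x_i, x_gbest, total_evals, max_fe, rng):
    """One dimension-wise spiral update of an exploiter around the global best.

    b = (total_evals / max_fe) - 1 rises from -1 toward 0 as the evaluation
    budget is consumed, shrinking the radial factor exp(b * r) toward 1."""
    b = total_evals / max_fe - 1.0              # internal state, not a tuned parameter
    r = rng.uniform(-1.0, 1.0, size=x_i.shape)  # independent spiral phase per dimension
    dist = np.abs(x_gbest - x_i)                # D_{i,d}: coordinate-wise distance
    # In the full algorithm, the result is then clamped to [lb, ub].
    return dist * np.exp(b * r) * np.cos(2.0 * np.pi * r) + x_gbest
```

Because the update is a scalar expression applied per coordinate, no rotation matrix is ever formed, which is the structural point of the comparison with SDA above.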

3.4. Explorer Update Model: Adaptive Neighbor-Guided Movement

Individuals in the Explorer team XO(t) are responsible for promoting population diversity and exploring new regions of the search space. Each explorer identifies its neighborhood best using a ring topology with wrap-around indexing, and this neighborhood best is selected through the same feasibility-based comparison (Deb’s rule) used elsewhere in the algorithm. The explorer then moves toward this local leader using an adaptive scaling factor. This is a vector operation:
x_i(t + 1) = x_i(t) + w · r_o ⊙ (x_{nbest,i}(t) − x_i(t))
where
  • r_o is a D-dimensional vector of uniform random numbers in [0, 1].
  • ⊙ denotes elementwise multiplication.
  • w is an adaptive scaling factor that normalizes the movement based on the objective landscape of the explorer team: w = f(x_gbest) / (1 + max_{j∈XO} f(x_j)). This prevents excessively large moves that could destabilize the search, especially when objective values are large. Following the update, all new positions are clamped to the problem’s lower and upper bounds lb and ub.
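The neighbor-guided explorer move in (3) can be sketched as follows; the function name, the argument layout, and the immediate-neighbor ring of size three are illustrative assumptions of this sketch:

```python
import numpy as np

def explorer_update(X, f, v, i, f_gbest, lb, ub, rng):
    """Move explorer i toward its ring-neighborhood leader.

    X: (n, D) explorer positions; f, v: objective and violation arrays for
    the explorer team. The leader among {i-1, i, i+1} (wrap-around) is chosen
    by Deb's rule: feasible before infeasible, then lower objective among
    feasible, lower violation among infeasible."""
    n = X.shape[0]
    nbrs = [(i - 1) % n, i, (i + 1) % n]
    # Lexicographic (infeasible-flag, violation-or-objective) key implements
    # Deb's rule when feasibility is defined as zero violation.
    leader = min(nbrs, key=lambda j: (v[j] > 0, v[j] if v[j] > 0 else f[j]))
    w = f_gbest / (1.0 + np.max(f))             # adaptive scaling factor
    r = rng.uniform(0.0, 1.0, size=X.shape[1])  # elementwise random vector r_o
    new = X[i] + w * r * (X[leader] - X[i])
    return np.clip(new, lb, ub)                 # clamp to bounds
```

The scaling factor w is recomputed from the current explorer objectives, so no user-set step size is involved.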

3.5. Algorithmic Flow and Tuning-Free Design

The design of CTOSO is motivated by the points raised in the related work. The algorithm avoids the use of penalty functions and relies solely on Deb’s rule for selection, making it fully free of problem-dependent parameters. The movement rules of both the exploiter and explorer teams help reduce premature convergence by maintaining diversity and by using feasibility information without forcing the search to collapse toward the first feasible point. Feasibility is incorporated into the update logic, the spiral exploiter follows feasibility-aware guidance, and the explorer team selects neighborhood leaders using simple feasibility-based comparisons. Together, these elements shape CTOSO into a tuning-free method with feasibility naturally integrated into its search behavior.
In addition to these design choices, CTOSO maintains a simple and lightweight structure. It adapts its search behavior through the internal dynamics of the spiral constant b, the adaptive weight w, and the deterministic rules used for team selection and constraint handling. Because it does not rely on control coefficients such as crossover rates or mutation factors, the algorithm remains easy to configure and operate. This simplicity makes CTOSO well suited for black-box engineering scenarios where little prior information about the problem is available.
The core logic and flow of the CTOSO algorithm are formally detailed in the pseudocode in Algorithm 1. The main inputs are the problem definition (D, lb, ub), the computational budget (FEmax), and the population size (ps). The algorithm outputs are the final best feasible solution and its objective value. The global best solution (gbest) is updated immediately after each individual solution evaluation within both the exploiter and explorer loops. This real-time update mechanism allows newly discovered superior solutions to immediately influence the search direction of subsequent individuals within the same iteration, creating a dynamic and responsive optimization process.
We next highlight the key structural differences between CTOSO and ETOSO. While CTOSO retains ETOSO’s division into exploiter and explorer teams, it departs from ETOSO in two important ways. First, the linear, rank-weighted exploiter motion used in ETOSO is replaced with a full-dimensional spiral operator applied only to the exploiter team. This modification provides a smoother, evaluation-driven transition from broad to focused search without relying on weight schedules or repeated reinitialization at the global best. Second, the explorer team incorporates feasibility-based guidance during neighborhood selection, improving reliability on constrained landscapes without requiring personal-best archives or penalty-parameter tuning.
Algorithm 1. Pseudocode of the Constrained Team-Oriented Swarm Optimizer (CTOSO)
Input: I, D, lb, ub, max_FE, ps
Output: best_obj_found, best_sol_found, convergence_trace, total_evals_used, final_violation
Step 1: Initialization
  • Initialize population positions:
        pos = lb + rand(ps, D) ⊙ (ub − lb)
  • Initialize objective values and constraint violations to ∞
  • Initialize convergence trace to the empty list
  • Set total function evaluations total_evals = 0
  • Set best feasible objective so far to ∞
Step 2: Initial Evaluation
  For each individual i = 1 to ps
  • Evaluate objective and constraints:
        (f_i, vio_i) ← EngineeringBenchmarks(pos_i, I)
  • Increment evaluation counter
  • Store objective and violation values
  • Update best feasible solution using Deb’s feasibility rule
  • Record convergence information
  End For
Step 3: Global Best Selection
  • Select the global best solution using Deb’s feasibility rule
  • Initialize gbest_pos, gbest_obj, gbest_vio
Step 4: Population Division
  • Assign first half of population as Exploiters
  • Assign second half of population as Explorers
Step 5: Main Optimization Loop
While total_evals < max_FE do
  Exploiter Update (Spiral Search)
  For each exploiter do
  • Update position using spiral movement toward gbest_pos
  • Evaluate objective and constraint violation
  • Increment evaluation counter
  • Update global best using Deb’s feasibility rule
  • Record convergence information
  End For
  Explorer Update (Neighborhood Search)
  For each explorer do
  • Select neighborhood leader in ring topology using Deb’s feasibility rule
  • Update position using adaptive weighted movement toward neighborhood leader
  • Evaluate objective and constraint violation
  • Increment evaluation counter
  • Update global best using Deb’s feasibility rule
  • Record convergence information
  End For
End While
Step 6: Output
  • Return best objective value and solution
  • Return final constraint violation
  • Return total evaluations used
  • Return convergence trace
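Putting the pieces together, a minimal end-to-end sketch of this flow on a toy constrained sphere problem is shown below; all identifiers (`ctoso`, `evaluate`, `better`), the single-inequality constraint interface, and the bookkeeping details are assumptions of this sketch rather than the authors’ implementation:

```python
import numpy as np

def ctoso(fun, cons, D, lb, ub, max_fe=5000, ps=40, seed=0):
    """Minimal sketch of the CTOSO main loop: spiral exploiters, ring-topology
    explorers, and Deb's feasibility rule for every comparison."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((ps, D)) * (ub - lb)

    def evaluate(x):
        g = np.asarray(cons(x), dtype=float)         # inequality constraints g(x) <= 0
        return fun(x), float(np.sqrt(np.sum(np.maximum(0.0, g) ** 2)))

    def better(fi, vi, fj, vj):                      # Deb's feasibility rule
        if (vi == 0) != (vj == 0):
            return vi == 0
        return fi < fj if vi == 0 else vi < vj

    f, v = np.empty(ps), np.empty(ps)
    for i in range(ps):
        f[i], v[i] = evaluate(X[i])
    fe = ps
    gb = 0
    for i in range(1, ps):
        if better(f[i], v[i], f[gb], v[gb]):
            gb = i
    gx, gf, gv = X[gb].copy(), f[gb], v[gb]

    half = ps // 2                                   # first half: exploiters
    while fe < max_fe:
        b = fe / max_fe - 1.0                        # budget-driven spiral state
        for i in range(half):                        # exploiters: spiral search
            r = rng.uniform(-1.0, 1.0, D)
            X[i] = np.clip(np.abs(gx - X[i]) * np.exp(b * r)
                           * np.cos(2.0 * np.pi * r) + gx, lb, ub)
            f[i], v[i] = evaluate(X[i]); fe += 1
            if better(f[i], v[i], gf, gv):           # real-time gbest update
                gx, gf, gv = X[i].copy(), f[i], v[i]
        w = gf / (1.0 + np.max(f[half:]))            # adaptive explorer weight
        for i in range(half, ps):                    # explorers: ring neighbors
            k = i - half
            nb = [half + (k - 1) % half, i, half + (k + 1) % half]
            L = min(nb, key=lambda j: (v[j] > 0, v[j] if v[j] > 0 else f[j]))
            X[i] = np.clip(X[i] + w * rng.random(D) * (X[L] - X[i]), lb, ub)
            f[i], v[i] = evaluate(X[i]); fe += 1
            if better(f[i], v[i], gf, gv):
                gx, gf, gv = X[i].copy(), f[i], v[i]
    return gx, gf, gv
```

For example, minimizing the sphere function subject to x_1 ≥ 1 (optimum f = 1) can be run as `ctoso(lambda x: float(np.sum(x**2)), lambda x: [1.0 - x[0]], 3, -5.0, 5.0)`.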

3.6. Rationale and Design Motivation of CTOSO

CTOSO is derived through a deliberate restructuring of the Team-Oriented Swarm Optimization framework to achieve stable constrained optimization under fixed and limited evaluation budgets, while eliminating the need for parameter tuning or recovery-based operators. In the original TOSO implementation, search behavior is governed by several auxiliary components, including personal-best memory, mutation of poorly performing explorers, random reinitialization, and rank-based weighting schemes. Although these mechanisms enhance exploration, they introduce additional control parameters and can repeatedly disrupt feasible solutions once discovered, particularly in constrained problems where feasibility regions are narrow. ETOSO reduces this complexity by removing personal-best memory and correcting global-best updates; however, it retains linear exploiter movements whose step magnitudes scale with the full decision-variable range and are independent of proximity to feasible regions.
CTOSO replaces this linear exploitation mechanism with a full-dimensional spiral contraction centered on the current global best. Unlike the spiral motion used in Moth–Flame Optimizer (MFO), which relies on multiple flames and spiral shape parameters to balance exploration and exploitation, the CTOSO spiral is single-center, feasibility-driven, and deterministically coupled to the remaining evaluation budget. The step size is proportional to the distance from the global best and decreases monotonically as the evaluation budget is consumed, without introducing additional spiral control parameters. This design enables controlled intensification near promising feasible regions while preserving the original team-oriented search structure. In parallel, all recovery-based operators present in TOSO and ETOSO (including mutation, random restarts, feasibility archives, and personal-best tracking) are removed. As a result, feasibility preservation is enforced exclusively through Deb’s feasibility rule during all solution comparisons and updates, ensuring that feasible solutions consistently dominate the search trajectory once identified. These changes yield a structurally simpler, tuning-free algorithm in which constrained search behavior is governed solely by population interactions, global-best geometry, and feasibility-based selection.

4. Evaluation on CEC 2017 Constrained Benchmark Problems

Before evaluating the proposed algorithm on real-world engineering design problems, an initial validation was conducted using the CEC 2017 constrained optimization benchmark suite (f01–f15) [38]. These benchmarks are widely regarded as a standardized reference for assessing constrained evolutionary and swarm-based optimizers, providing a controlled environment to evaluate feasibility handling, convergence behavior, and robustness under well-defined conditions. This evaluation establishes a baseline comparison against canonical reference optimizers prior to application-oriented engineering studies.
These problems involve nonlinear, multimodal, shifted, and rotated landscapes with both inequality and equality constraints, and are specifically designed to challenge constraint-handling mechanisms and search dynamics. All problems were treated as constrained minimization problems and were evaluated using the official CEC 2017 Pure MATLAB implementation, without re-deriving or modifying any benchmark equations. Internal CEC state handling, including rotation initialization, was correctly reset for each problem to ensure reproducibility. The dimensionality was fixed to D = 10 for all benchmark functions, consistent with standard CEC experimental protocols.
All algorithms were evaluated under a fixed computational budget of 10,000 function evaluations (FE) per run. Each algorithm was executed for 30 independent runs per problem, and a population size of 100 was used for all population-based methods to ensure comparability. The same initialization bounds, stopping criteria, and random-seed control were applied uniformly across all algorithms. Constraint satisfaction was assessed using a feasibility tolerance of $10^{-6}$, and performance statistics were computed using feasible solutions only, in accordance with standard constrained optimization practice.
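The feasibility bookkeeping of this protocol can be sketched as follows; the helper names are hypothetical, but the $10^{-6}$ tolerance and the feasible-only statistics follow the stated setup.

```python
def feasible(gs, hs=(), tol=1e-6):
    """A run counts as feasible when every inequality constraint value is
    at most tol and every equality constraint is within tol in magnitude."""
    return all(g <= tol for g in gs) and all(abs(h) <= tol for h in hs)

def feasible_stats(results, tol=1e-6):
    """results: list of (objective, gs, hs) per run. Returns the feasibility
    rate (%) and best/mean objective over feasible runs only (None if none)."""
    feas = [f for f, gs, hs in results if feasible(gs, hs, tol)]
    rate = 100.0 * len(feas) / len(results)
    if not feas:
        return rate, None, None
    return rate, min(feas), sum(feas) / len(feas)
```

For example, three runs of which two satisfy all constraints yield a 66.7% feasibility rate, with best and mean objectives computed over those two runs only.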
For the standard CEC benchmark evaluation, the proposed CTOSO was compared against a set of widely adopted reference optimizers. Classical baselines (GA and PSO) and evolutionary standards (DE and L-SHADE [39]) were selected due to their extensive use in CEC benchmarking and competition studies. In addition, representative population-based optimizers including GWO, TLBO, and MFO were included to reflect commonly used modern swarm, tuning-light, and spiral-based search paradigms. This selection ensures fair comparison against canonical baselines while avoiding novelty-driven algorithm proliferation. CTOSO is the only newly proposed algorithm evaluated in this section.
For each benchmark problem, the best feasible objective value, average feasible objective value, feasibility rate, and average computational time were recorded. Per-problem ranks were computed based on best feasible objective values, and average ranks across all benchmark problems were calculated. Non-parametric statistical tests were employed, including Friedman ranking with Nemenyi post hoc analysis and pairwise Wilcoxon signed-rank tests, using identical data sets to ensure consistency and fairness.
Table 1 reports the best feasible objective values obtained by all compared algorithms on the CEC2017 constrained benchmark suite, together with feasibility rates, number of feasible problems, average ranks, and win counts. As shown in the table, CTOSO achieves the lowest average rank (AvgRank = 2.4) among the compared methods and attains the highest feasibility rate (79.78%), producing feasible solutions on 14 out of 15 benchmark functions.
To further assess statistical significance across all benchmark problems, rank-based analysis was conducted using the Friedman test followed by Nemenyi post hoc comparison. The resulting critical difference (CD) diagram is shown in Figure 1, where CTOSO and L-SHADE fall within the same statistical performance group, indicating no statistically significant difference between their average ranks at the selected significance level.
In addition, pairwise Wilcoxon signed-rank test results between CTOSO and each reference algorithm are summarized in Table 2. These results indicate that CTOSO performs significantly better than PSO and TLBO, while no statistically significant differences are observed in comparisons with GA, DE, MFO, GWO, or L-SHADE. Taken together, the empirical results and the rank-based visualization confirm that CTOSO exhibits competitive and reliable constrained optimization performance under standardized CEC benchmark conditions.

5. Comparative Evaluation of TOSO, ETOSO, and CTOSO

To clarify the practical impact of the structural modifications introduced in CTOSO relative to its predecessor variants, a focused comparative evaluation was conducted between TOSO, ETOSO, and CTOSO using the same CEC 2017 constrained benchmark suite (f01–f15). This comparison isolates variant-level behavior within the TOSO family under a standardized constrained testbed, complementing the broader comparison against reference optimizers presented in Section 4.
All three algorithms were evaluated using an identical experimental protocol: dimensionality $D = 10$, a budget of 10,000 function evaluations, and 30 independent runs per benchmark function. Constraint satisfaction was assessed using a feasibility tolerance of $10^{-6}$, and reported values correspond to best feasible objective values.
Table 3 reports the per-function best feasible objective values and aggregate feasibility indicators for the three variants. CTOSO produces feasible solutions on 14 out of 15 benchmark functions (AvgFR = 79.78%), whereas ETOSO and TOSO each produce feasible solutions on only 3 functions (AvgFR = 20%). The feasible outcomes of ETOSO and TOSO are limited to f01, f02, and f04, while CTOSO achieves feasible solutions across nearly the entire benchmark suite. For the subset of functions where all three algorithms report feasible solutions, CTOSO consistently attains smaller best feasible objective values than both predecessor variants.
To summarize performance across functions, per-problem rankings were computed based on best feasible objective values. CTOSO achieves the best overall average rank (AvgRank = 1.00), while ETOSO and TOSO obtain larger average ranks (2.54 and 2.46, respectively). The corresponding Friedman–Nemenyi critical difference diagram is illustrated in Figure 2, where CTOSO is separated from both ETOSO and TOSO by a margin exceeding the critical difference threshold, indicating statistically meaningful rank separation under this evaluation setup.
Overall, this comparative evaluation demonstrates that CTOSO exhibits substantially stronger constrained optimization behavior than its predecessor variants, characterized by markedly broader feasibility coverage and improved objective quality on shared feasible functions. These results clarify the effect of the structural changes introduced in CTOSO under standardized benchmark conditions, prior to assessing performance on real-world engineering design problems.

6. Engineering Design Optimization Studies

The performance and robustness of CTOSO were rigorously evaluated against a comprehensive set of competitive metaheuristics. This section details the 12 constrained engineering design benchmarks used for testing, the performance metrics applied for quantitative assessment (including the composite score), and the statistical validation methods employed to confirm the significance of the results. The setup also identifies the 20 state-of-the-art algorithms utilized for comparative analysis. All parameters for the comparative algorithms are taken from [2], and the same constraint-handling strategy was applied to every competing algorithm to ensure a fair comparison.

6.1. Benchmark Problems

The performance of CTOSO was assessed across twelve widely used constrained engineering design problems derived from the literature. These benchmarks are selected for their complexity, diverse search spaces, and relevance to real-world applications, offering a versatile and robust platform for validating the effectiveness of constrained optimization algorithms. The benchmarks are well-established and frequently employed in studies on constrained engineering optimization. Using this benchmark set ensures that all algorithms are tested under standardized, well-defined formulations with known reference solutions, providing a fair and reproducible basis for comparative evaluation under identical computational budgets. Each benchmark is detailed below, including its objective function, constraints, and design variables. All engineering design optimization problems considered in this section are formulated and solved as constrained minimization problems in the form:
Minimize: $f(x)$
Subject to:
$g_k(x) \le 0, \quad k = 1, \dots, m$
$h_l(x) = 0, \quad l = 1, \dots, p$
$x \in [lb, ub]^D$
The summary of the twelve constrained engineering benchmark problems is given in Table 4.

6.1.1. Speed Reducer Design [40]

Objective Function:
$f(x) = 0.7854\,x_1 x_2^2\left(3.3333\,x_3^2 + 14.9334\,x_3 - 43.0934\right) + 0.7854\left(x_4 x_6^2 + x_5 x_7^2\right) - 1.508\,x_1\left(x_6^2 + x_7^2\right) + 7.477\left(x_6^3 + x_7^3\right)$
Constraints $(g_i(x) \le 0)$:
$g_1(x) = 27 - x_1 x_2^2 x_3$
$g_2(x) = 397.5 - x_1 x_2^2 x_3^2$
$g_3(x) = 1.93 - x_2 x_3 x_6^4 / x_4^3$
$g_4(x) = 1.93 - x_2 x_3 x_7^4 / x_5^3$
$g_5(x) = \dfrac{10}{x_6^3}\sqrt{\left(\dfrac{745\,x_4}{x_2 x_3}\right)^2 + 16.91\times 10^6} - 1100$
$g_6(x) = \dfrac{10}{x_7^3}\sqrt{\left(\dfrac{745\,x_5}{x_2 x_3}\right)^2 + 157.5\times 10^6} - 850$
$g_7(x) = x_2 x_3 - 40$
$g_8(x) = 5 - x_1/x_2$
$g_9(x) = x_1/x_2 - 12$
$g_{10}(x) = 1.5\,x_6 - x_4 + 1.9$
$g_{11}(x) = 1.1\,x_7 - x_5 + 1.9$
Variable Bounds:
$x_1 \in [2.6, 3.6]$, $x_2 \in [0.7, 0.8]$, $x_3 \in [17, 28]$, $x_4, x_5 \in [7.3, 8.3]$, $x_6 \in [2.9, 3.9]$, $x_7 \in [5.0, 5.5]$
Optimal Value: $f^* = 2994.422564$

6.1.2. Tension/Compression Spring Design [41]

Objective Function:
$f(x) = (x_3 + 2)\,x_2 x_1^2$
Constraints $(g_i(x) \le 0)$:
$g_1(x) = 1 - \dfrac{x_2^3 x_3}{71785\,x_1^4}$
$g_2(x) = \dfrac{4x_2^2 - x_1 x_2}{12566\,(x_2 x_1^3 - x_1^4)} + \dfrac{1}{5108\,x_1^2} - 1$
$g_3(x) = 1 - \dfrac{140.45\,x_1}{x_2^2 x_3}$
$g_4(x) = \dfrac{x_1 + x_2}{1.5} - 1$
Variable Bounds: $x_1 \in [0.05, 2.0]$, $x_2 \in [0.25, 1.3]$, $x_3 \in [2, 15]$
Optimal Value: $f^* = 0.012665$
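The formulation above can be evaluated directly. The sketch below (function name illustrative) checks it at a best-known design widely reported for this problem, $x \approx (0.051689, 0.356718, 11.288965)$, which reproduces $f^* \approx 0.012665$ within rounding.

```python
def spring(x):
    """Tension/compression spring design: objective and constraint values
    per the formulation above (g_i(x) <= 0 indicates feasibility)."""
    x1, x2, x3 = x
    f = (x3 + 2.0) * x2 * x1**2
    g = [
        1.0 - x2**3 * x3 / (71785.0 * x1**4),
        (4.0 * x2**2 - x1 * x2) / (12566.0 * (x2 * x1**3 - x1**4))
            + 1.0 / (5108.0 * x1**2) - 1.0,
        1.0 - 140.45 * x1 / (x2**2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ]
    return f, g
```

At the quoted design, $g_1$ and $g_2$ are active (near zero), which is why small rounding in the design variables leaves violations on the order of $10^{-5}$.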

6.1.3. Pressure Vessel Design [40]

Objective Function:
$f(x) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^2 + 3.1661\,x_1^2 x_4 + 19.84\,x_1^2 x_3$
Constraints $(g_i(x) \le 0)$:
$g_1(x) = -x_1 + 0.0193\,x_3$
$g_2(x) = -x_2 + 0.00954\,x_3$
$g_3(x) = -\pi x_3^2 x_4 - \dfrac{4}{3}\pi x_3^3 + 1296000$
$g_4(x) = x_4 - 240$
Variable Bounds: $x_1, x_2 \in [0.0625, 99]$, $x_3, x_4 \in [10, 200]$
Optimal Value: $f^* = 5885.3328$
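The pressure vessel formulation is equally direct to implement. The sketch below (function name illustrative) checks it at the continuous optimum commonly reported for this problem, $x \approx (0.7781687, 0.3846492, 40.3196187, 200.0)$, which reproduces $f^* \approx 5885.33$.

```python
import math

def pressure_vessel(x):
    """Pressure vessel design: objective and constraint values per the
    formulation above (g_i(x) <= 0 indicates feasibility)."""
    x1, x2, x3, x4 = x
    f = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
         + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [
        -x1 + 0.0193 * x3,                                   # shell thickness
        -x2 + 0.00954 * x3,                                  # head thickness
        -math.pi * x3**2 * x4                                # volume requirement
            - (4.0 / 3.0) * math.pi * x3**3 + 1296000.0,
        x4 - 240.0,                                          # length limit
    ]
    return f, g
```

At this design, $g_1$, $g_2$, and $g_3$ are essentially active, consistent with the known structure of the optimum.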

6.1.4. Welded Beam Design [40]

Objective Function:
$f(x) = 1.10471\,x_1^2 x_2 + 0.04811\,x_3 x_4 (14 + x_2)$
Auxiliary Expressions (with the standard parameter values $P = 6000$ lb, $L = 14$ in, $E = 30\times 10^6$ psi, $G = 12\times 10^6$ psi):
$M = P\left(L + x_2/2\right)$
$R = \sqrt{x_2^2/4 + \left((x_1 + x_3)/2\right)^2}$
$J = 2\left[\sqrt{2}\,x_1 x_2\left(\dfrac{x_2^2}{12} + \left(\dfrac{x_1 + x_3}{2}\right)^2\right)\right]$
$\tau = \sqrt{\tau'^2 + 2\tau'\tau''\dfrac{x_2}{2R} + \tau''^2}$, where $\tau' = \dfrac{P}{\sqrt{2}\,x_1 x_2}$, $\tau'' = \dfrac{MR}{J}$
$\sigma = \dfrac{6PL}{x_4 x_3^2}$
$\delta = \dfrac{6PL^3}{E x_3^3 x_4}$
$P_c = \dfrac{4.013\,E\sqrt{x_3^2 x_4^6/36}}{L^2}\left(1 - \dfrac{x_3}{2L}\sqrt{\dfrac{E}{4G}}\right)$
Constraints $(g_i(x) \le 0)$:
$g_1(x) = \tau - 13600$
$g_2(x) = \sigma - 30000$
$g_3(x) = x_1 - x_4$
$g_4(x) = 0.10471\,x_1^2 + 0.04811\,x_3 x_4(14 + x_2) - 5$
$g_5(x) = 0.125 - x_1$
$g_6(x) = \delta - 0.25$
$g_7(x) = P - P_c$
Optimal Value: $f^* = 1.724852$
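Using the standard parameter values for this benchmark ($P = 6000$, $L = 14$, $E = 30\times 10^6$, $G = 12\times 10^6$), the formulation can be sketched and checked at the widely reported design $x \approx (0.205730, 3.470489, 9.036624, 0.205730)$. The function name is illustrative, and the deflection term uses the $6PL^3$ coefficient as given in the text (some references use $4PL^3$).

```python
import math

# Standard parameter values for the welded beam benchmark
P, L, E, G = 6000.0, 14.0, 30e6, 12e6

def welded_beam(x):
    """Welded beam design: objective and constraint values per the
    formulation above (g_i(x) <= 0 indicates feasibility)."""
    x1, x2, x3, x4 = x
    f = 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)
    M = P * (L + x2 / 2.0)
    R = math.sqrt(x2**2 / 4.0 + ((x1 + x3) / 2.0)**2)
    J = 2.0 * (math.sqrt(2.0) * x1 * x2 * (x2**2 / 12.0 + ((x1 + x3) / 2.0)**2))
    tau_p = P / (math.sqrt(2.0) * x1 * x2)          # primary shear stress
    tau_pp = M * R / J                               # torsional shear stress
    tau = math.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * x2 / (2.0 * R) + tau_pp**2)
    sigma = 6.0 * P * L / (x4 * x3**2)               # bending stress
    delta = 6.0 * P * L**3 / (E * x3**3 * x4)        # deflection, as in the text
    Pc = (4.013 * E * math.sqrt(x3**2 * x4**6 / 36.0) / L**2
          * (1.0 - x3 / (2.0 * L) * math.sqrt(E / (4.0 * G))))  # buckling load
    g = [tau - 13600.0, sigma - 30000.0, x1 - x4,
         0.10471 * x1**2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0,
         0.125 - x1, delta - 0.25, P - Pc]
    return f, g
```

The shear, bending, and buckling constraints are all near-active at this design, so residual violations from six-digit rounding remain well below one stress unit.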

6.1.5. Himmelblau’s Beam Design [40]

Objective Function:
$f(x) = 5.3578547\,x_3^2 + 0.8356891\,x_1 x_5 + 37.293239\,x_1 - 40792.141$
Auxiliary Expressions:
$y_1 = 85.334407 + 0.0056858\,x_2 x_5 + 0.0006262\,x_1 x_4 - 0.0022053\,x_3 x_5$
$y_2 = 80.51249 + 0.0071317\,x_2 x_5 + 0.0029955\,x_1 x_2 + 0.0021813\,x_3^2$
$y_3 = 9.300961 + 0.0047026\,x_3 x_5 + 0.0012547\,x_1 x_3 + 0.0019085\,x_3 x_4$
Constraints $(g_i(x) \le 0)$:
$g_1(x) = -y_1$
$g_2(x) = y_1 - 92$
$g_3(x) = -y_2 + 90$
$g_4(x) = y_2 - 110$
$g_5(x) = -y_3 + 20$
$g_6(x) = y_3 - 25$
Optimal Value: $f^* = -30665.539$

6.1.6. Cantilever Beam Design [41]

Objective Function:
f ( x ) = 0.0624 ( x 1 + x 2 + x 3 + x 4 + x 5 )
Constraint: $g_1(x) = \dfrac{61}{x_1^3} + \dfrac{37}{x_2^3} + \dfrac{19}{x_3^3} + \dfrac{7}{x_4^3} + \dfrac{1}{x_5^3} - 1 \le 0$
Optimal Value: $f^* = 1.339956$
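With a single constraint, this problem is a convenient sanity check for any constraint-handling implementation. The sketch below (function name illustrative) evaluates it at a near-optimal design commonly reported in the literature, $x \approx (6.016, 5.309, 4.494, 3.502, 2.153)$, where the constraint is active.

```python
def cantilever(x):
    """Cantilever beam design: weight objective and the single stiffness
    constraint per the formulation above (g1 <= 0 indicates feasibility)."""
    x1, x2, x3, x4, x5 = x
    f = 0.0624 * (x1 + x2 + x3 + x4 + x5)
    g1 = (61.0 / x1**3 + 37.0 / x2**3 + 19.0 / x3**3
          + 7.0 / x4**3 + 1.0 / x5**3 - 1.0)
    return f, g1
```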

6.1.7. Tubular Column Design [40]

Objective Function:
$f(d, t) = 9.8\,dt + 2d$
Constraints $(g_i(x) \le 0)$:
$g_1(d, t) = \dfrac{2500}{\pi d t \cdot 500} - 1$
$g_2(d, t) = \dfrac{8 \cdot 2500 \cdot 250^2}{\pi^3 \cdot 0.85\times 10^6\,d t\,(d^2 + t^2)} - 1$
$g_3(d) = \dfrac{2}{d} - 1$
$g_4(d) = \dfrac{d}{14} - 1$
$g_5(t) = \dfrac{0.2}{t} - 1$
$g_6(t) = \dfrac{t}{0.8} - 1$
Optimal Value: $f^* = 26.4863$

6.1.8. Piston Lever Design [42]

Objective Function:
$f(x) = \dfrac{\pi}{4} D^2 (l_2 - l_1)$
where
$l_1 = \sqrt{(X - B)^2 + H^2}$
$l_2 = \sqrt{(X\sin\theta + H)^2 + (B - X\cos\theta)^2}$
$R = \dfrac{\left|-X(X\sin\theta + H) + H(B - X\cos\theta)\right|}{\sqrt{(X - B)^2 + H^2}}$
$F = \dfrac{\pi}{4} P D^2$
Constraints $(g_i(x) \le 0)$:
$g_1(x) = QL\cos\theta - RF$
$g_2(x) = Q(L - X) - M_{max}$
$g_3(x) = 1.2(l_2 - l_1) - l_1$
$g_4(x) = 0.5D - B$
with $\theta = \pi/4$, $P = 1500$, $Q = 10000$, $L = 240$, $M_{max} = 1.8\times 10^6$, and bounds $0.05 \le H \le 500$, $0.05 \le B \le 500$, $0.05 \le D \le 500$, $0.05 \le X \le 120$.
Optimal Value: $f^* = 8.412698$

6.1.9. Car Side Impact Design [43]

Objective Function:
$f(x) = 1.98 + 4.9\,x_1 + 6.67\,x_2 + 6.98\,x_3 + 4.01\,x_4 + 1.78\,x_5 + 2.73\,x_7$
Constraints $(g_i(x) \le 0)$:
$g_1(x) = 1.16 - 0.3717\,x_2 x_4 - 0.0092928\,x_5 x_9 - 0.484\,x_3 x_8 + 0.01343\,x_6 x_{10}$
$g_2(x) = 0.261 - 0.0159\,x_1 x_2 - 0.188\,x_1 x_8 - 0.019\,x_2 x_7 + 0.0144\,x_3 x_5 + 0.0008757\,x_5 x_{10} + 0.08045\,x_6 x_9 + 0.00139\,x_8 x_{11} + 0.00001575\,x_{10} x_{11}$
$g_3(x) = 0.214 + 0.00817\,x_5 - 0.131\,x_1 x_8 - 0.0704\,x_1 x_9 + 0.03099\,x_2 x_6 - 0.018\,x_2 x_7 + 0.0208\,x_3 x_8 + 0.121\,x_3 x_9 - 0.00364\,x_5 x_6 + 0.0007715\,x_5 x_{10} - 0.0005354\,x_6 x_{10} + 0.00121\,x_8 x_{11} + 0.00184\,x_9 x_{10}$
$g_4(x) = 0.74 - 0.61\,x_2 - 0.163\,x_3 x_8 + 0.001232\,x_3 x_{10} - 0.166\,x_7 x_9 + 0.227\,x_2^2$
Optimal Value: $f^* = 22.84296954$

6.1.10. Corrugated Bulkhead Design [41]

Objective Function:
$f(x) = \dfrac{5.885\,t\,(b + l)}{b + \sqrt{l^2 - h^2}}$
Constraints $(g_i(x) \le 0)$:
$g_1(x) = -t h\,(0.4b + l/6) + 8.94\left(b + \sqrt{l^2 - h^2}\right)$
$g_2(x) = -t h^2 (0.2b + l/12) + 2.2\left(8.94\left(b + \sqrt{l^2 - h^2}\right)\right)^{4/3}$
$g_3(x) = -t + 0.0156\,b + 0.15$
$g_4(x) = -t + 0.0156\,l + 0.15$
$g_5(x) = -t + 1.05$
$g_6(x) = -l + h$
Optimal Value: $f^* = 6.842958$

6.1.11. Three-Bar Truss Design [44]

Objective Function:
$f(x) = \left(2\sqrt{2}\,A_1 + A_2\right)\times 100$
Constraints $(g_i(x) \le 0)$:
$g_1(x) = \dfrac{2\left(\sqrt{2}\,A_1 + A_2\right)}{\sqrt{2}\,A_1^2 + 2A_1 A_2} - 2$
$g_2(x) = \dfrac{2A_2}{\sqrt{2}\,A_1^2 + 2A_1 A_2} - 2$
$g_3(x) = \dfrac{2}{\sqrt{2}\,A_2 + A_1} - 2$
Optimal Value: $f^* = 263.8958$
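The truss formulation above can be checked at the widely reported optimum $A \approx (0.788675, 0.408248)$, where the first stress constraint is active. The sketch below uses an illustrative function name.

```python
import math

def three_bar_truss(a):
    """Three-bar truss design: volume objective and stress constraints per
    the formulation above (g_i <= 0 indicates feasibility)."""
    a1, a2 = a
    s2 = math.sqrt(2.0)
    f = (2.0 * s2 * a1 + a2) * 100.0
    g = [
        2.0 * (s2 * a1 + a2) / (s2 * a1**2 + 2.0 * a1 * a2) - 2.0,
        2.0 * a2 / (s2 * a1**2 + 2.0 * a1 * a2) - 2.0,
        2.0 / (s2 * a2 + a1) - 2.0,
    ]
    return f, g
```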

6.1.12. Reinforced Concrete Beam Design [45]

Objective Function:
$f(x) = 29.4\,A_s + 0.6\,b h$
Lookup Tables and Indexing Logic:
Steel area set $A_s$: [6, 6.16, 6.32, 6.6, 7, 7.11, 7.2, 7.8, 7.9, 8, 8.4]
Beam width set $b$: [28, 29, …, 40]
The normalized variables $x_1$ and $x_2$ are mapped to these discrete sets using:
$i_{x_1} = \min\left(\max\left(1, \mathrm{round}(x_1 \cdot \mathrm{length}(A_s))\right), \mathrm{length}(A_s)\right)$
$i_{x_2} = \min\left(\max\left(1, \mathrm{round}(x_2 \cdot \mathrm{length}(b))\right), \mathrm{length}(b)\right)$
Then: $A_s = A_s[i_{x_1}]$, $b = b[i_{x_2}]$
Constraints $(g_i(x) \le 0)$:
$g_1(x) = b/h - 4$
$g_2(x) = 180 + \dfrac{7.375\,A_s^2}{h} - A_s b$
Optimal Value: $f^* = 359.20$
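The indexing logic above can be sketched directly. Here $h$ is treated as a continuous depth variable (its bounds are not restated in the text), and the inputs below are chosen so the mapping selects $A_s = 6.32$ and $b = 34$, reproducing $f^* \approx 359.21$; the function name is illustrative.

```python
AS_SET = [6, 6.16, 6.32, 6.6, 7, 7.11, 7.2, 7.8, 7.9, 8, 8.4]
B_SET = list(range(28, 41))  # beam widths 28..40

def rc_beam(x1, x2, h):
    """Reinforced concrete beam: maps normalized x1, x2 in (0, 1] onto the
    discrete steel-area and width sets (1-based indices, as in the text),
    then evaluates the objective and constraints. Note: Python's round()
    uses banker's rounding for exact halves, unlike MATLAB's round()."""
    ix1 = min(max(1, round(x1 * len(AS_SET))), len(AS_SET))
    ix2 = min(max(1, round(x2 * len(B_SET))), len(B_SET))
    As, b = AS_SET[ix1 - 1], B_SET[ix2 - 1]
    f = 29.4 * As + 0.6 * b * h
    g = [b / h - 4.0, 180.0 + 7.375 * As**2 / h - As * b]
    return f, g, As, b
```

The clamping with min/max guarantees a valid index for any normalized input, so infeasible indexing can never occur during the search.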

6.2. Comparative Algorithms and Evaluation Metrics

CTOSO was compared against twenty state-of-the-art metaheuristic algorithms, together with its predecessor TOSO [1]: the Bat Algorithm (BA) [46], Bees Algorithm (BEE) [47], Butterfly Optimization Algorithm (BOA) [48], Crow Search Algorithm (CSA) [49], Cuckoo Search (CS) [50], Differential Evolution (DE) [51], Elephant Herding Optimization (EHO) [52], Firefly Algorithm (FA) [53], Flower Pollination Algorithm (FPA) [54], Grasshopper Optimization Algorithm (GOA) [31], Gravitational Search Algorithm (GSA) [55], Grey Wolf Optimizer (GWO) [8], Harris Hawks Optimization (HHO) [32], Moth-Flame Optimization (MFO) [9], Monkey King Algorithm (MKA) [56], Prairie Dog Optimization (PDO) [33], Raven Roost Optimization (RRO) [34], Sine Cosine Algorithm (SCA) [57], Slime Mould Algorithm (SMA) [35], Teaching-Learning-Based Optimization (TLBO) [58], and the Team-Oriented Swarm Optimizer (TOSO) [1]. The parameters for all algorithms are taken from [2]. The evaluation was conducted over 30 independent replications per problem (ensuring statistical reliability), with a population size of 100. A strict limit of 3000 function evaluations was imposed to emulate scenarios where objective or constraint evaluation is computationally expensive, as is typical in simulation-driven engineering design. Using a fixed and deliberately tight FE budget ensures that all algorithms face the same restricted-resource conditions and emphasizes their ability to produce feasible and competitive solutions when evaluation cost (not iteration count) is the dominant constraint in practice.
The 20 comparative metaheuristics were originally proposed for unconstrained optimization and do not define a unified constraint-handling method. To ensure fair and unbiased comparison, all algorithms were equipped with the same constraint-handling strategy, namely Deb’s feasibility rule, which is widely recognized as a standard, parameter-free CHT in constrained optimization research. Recent surveys on constraint handling for population-based and metaheuristic algorithms emphasize that such parameter-free feasibility rules are among the most widely used baselines, precisely because they avoid penalty tuning and provide a consistent feasibility criterion across different optimizers [4,59]. In addition, several recent algorithmic studies adopt Deb-type feasibility rules when extending unconstrained metaheuristics to constrained engineering design problems, treating them as a generic and reliable CHT [7]. Using a single, consistent CHT in this work therefore eliminates the confounding effect of penalty coefficients and ensures that performance differences arise primarily from the search dynamics of each algorithm rather than from differences in constraint treatment, in line with current recommendations for fair benchmarking in constrained engineering optimization.
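Deb's feasibility rule, applied uniformly across all algorithms, can be sketched as a parameter-free pairwise comparator; the violation aggregation shown here (sum of excesses, with an equality tolerance) is one common choice rather than a mandated one.

```python
def violation(g_values, h_values=(), eps=1e-6):
    """Total constraint violation: inequality excesses above zero plus
    equality deviations beyond the tolerance eps (illustrative helper)."""
    v = sum(max(0.0, g) for g in g_values)
    v += sum(max(0.0, abs(h) - eps) for h in h_values)
    return v

def deb_better(f_a, v_a, f_b, v_b):
    """Deb's feasibility rule: (1) a feasible solution beats an infeasible
    one; (2) between feasible solutions, the lower objective wins;
    (3) between infeasible solutions, the lower violation wins."""
    if v_a == 0.0 and v_b == 0.0:
        return f_a < f_b
    if v_a == 0.0 or v_b == 0.0:
        return v_a == 0.0
    return v_a < v_b
```

Because the rule compares only objective values and aggregate violations, it requires no penalty coefficients, which is precisely why it suits the tuning-free design goal stated above.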

7. Results and Comparative Analysis

This section presents a comprehensive evaluation of CTOSO against the twenty algorithms across the twelve constrained engineering design problems. The analysis is structured to provide a multi-faceted performance assessment. First, a composite ranking system is employed to objectively identify the top-performing algorithms for further detailed comparison. Subsequently, the performance of these top algorithms is examined on each individual benchmark problem. To statistically validate the observed performance differences, Wilcoxon signed-rank and Friedman–Nemenyi tests are conducted. Finally, the convergence behavior and computational complexity of the algorithms are analyzed to provide insights into their operational characteristics and efficiency.

7.1. Top Algorithm Selection via Composite Ranking

Table 5 presents the composite ranking results, showing for each algorithm its average rank (Avg. Rank), normalized Rank Score, feasibility rate (Feasibility %), normalized Feasibility Score, success rate (Success %), normalized Success Score, final Composite Score, and Top 10 selection status. In real-world constrained optimization, solution effectiveness must be judged not only by the quality of objective values but also by an algorithm’s capacity to consistently satisfy constraints and to do so robustly across multiple trials. As such, a multi-criteria selection framework is essential for fair and meaningful comparison. To accomplish this, we implemented a composite ranking mechanism synthesizing three essential facets of algorithm performance (across all problems):
  • Solution Quality: measured via the average rank of best feasible solutions across problems (60% weight for the normalized rank score)
  • Feasibility Rate: quantifying reliability in constraint satisfaction (30% weight for feasibility score)
  • Robustness: assessing how often an algorithm reaches near-optimal solutions within 0.1% of known optima (10% weight for Success Score).
The weighting scheme is aligned with the priority structure established in the constraint-handling studies [4,12,28,59], which consistently treat objective comparison among feasible solutions as the strongest basis for discrimination, constraint-satisfaction reliability as the next most informative factor and run-to-run stability as a useful but lower-impact indicator. For this reason, the largest weight is assigned to the metric that differentiates feasible solutions, a moderate weight is given to the metric that reflects constraint-handling reliability across runs, and a smaller complementary weight captures robustness effects. All metrics are normalized to the [0, 1] range, ensuring scale comparability, with higher values denoting superior performance. The weighting reflects the practical priorities of constrained optimization: achieving feasible, high-quality solutions consistently. Algorithms were then ranked based on their overall composite score, and the top 10 were retained for subsequent fine-grained analysis, including statistical testing and convergence behavior. This filtering step is not merely procedural; it is crucial for eliminating underperforming or unstable algorithms, which may excel in one metric but fail in others. CTOSO emerged as the standout performer, achieving a perfect composite score of 1.00, ranking first in all three normalized dimensions.
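One plausible reading of this composite scheme is the following sketch. The min-max normalization is an assumption (the paper's exact normalization may differ in detail), but the 0.6/0.3/0.1 weights and the lower-is-better treatment of average rank follow the text.

```python
def composite_scores(avg_rank, feas_pct, succ_pct):
    """Each argument maps algorithm -> metric. Returns algorithm -> composite
    score in [0, 1], combining normalized rank (0.6), feasibility (0.3),
    and success (0.1) scores. Illustrative sketch of the weighting scheme."""
    algs = list(avg_rank)

    def norm(d, invert=False):
        lo, hi = min(d.values()), max(d.values())
        span = (hi - lo) or 1.0  # guard against identical metric values
        return {a: ((hi - d[a]) if invert else (d[a] - lo)) / span for a in algs}

    r = norm(avg_rank, invert=True)  # lower average rank is better
    f = norm(feas_pct)               # higher feasibility rate is better
    s = norm(succ_pct)               # higher success rate is better
    return {a: 0.6 * r[a] + 0.3 * f[a] + 0.1 * s[a] for a in algs}
```

An algorithm that is best on all three metrics, as CTOSO is reported to be, attains the maximal composite score of 1.00 under this normalization.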

7.2. Detailed Performance on Benchmark Engineering Problems

The per-problem performance of the top 10 algorithms provides detailed insights into their strengths and weaknesses across different types of constrained engineering challenges. The following tables (Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16 and Table 17) present the detailed results for each benchmark. The columns in these tables are defined as follows:
  • Algorithm: The name of the metaheuristic algorithm.
  • Known Best: The known global optimum value for the problem.
  • Best Found: The best feasible objective value found by the algorithm across all runs.
  • Average: The mean of the best feasible objective values across 30 independent runs.
  • Std Dev: The standard deviation of the best feasible objective values.
  • Feasibility %: The percentage of the 30 runs that produced a feasible solution.
  • Time (s): The average computation time in seconds. The experiments were conducted using MATLAB 2024 on a system equipped with an Intel(R) Core(TM) Ultra 9 185H processor (2.30 GHz), 32.0 GB RAM, and a 64-bit operating system.

7.3. Statistical Significance Analysis

To statistically validate CTOSO’s performance, Wilcoxon signed-rank tests and a Friedman test with Nemenyi post hoc analysis were conducted. The results provide robust evidence of its superiority. The Wilcoxon signed-rank tests (Table 18) confirm that CTOSO achieves a statistically significant performance advantage (p < 0.01) over every other leading algorithm in direct, head-to-head comparisons. This establishes that the observed improvements are consistent and reliable.
Furthermore, the Friedman–Nemenyi post hoc test (Table 19) confirms CTOSO’s overall dominance by assigning it the best average rank of 1.58. While the Nemenyi test identifies a cluster of competitors (BOA, EHO, GWO, MFO) whose ranks are not statistically different from CTOSO’s based on the critical difference of 4.08, CTOSO still holds the absolute top rank among all algorithms. Interestingly, the test also confirms that five major algorithms (BEE, CSA, DE, FPA, and TLBO) are statistically placed in a significantly inferior performance cluster. This comprehensive analysis confirms CTOSO’s position as a top-performing algorithm in this comparative study.
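The rank-based part of this analysis can be reproduced with a short sketch in pure Python. The Nemenyi constant $q_\alpha$ depends on the number of algorithms $k$ and must be looked up in a Studentized-range table, so it is passed in rather than hard-coded; the helper names are illustrative.

```python
import math

def average_ranks(table):
    """table[problem][alg] = best feasible objective (lower is better).
    Returns each algorithm's average rank across problems, with ties
    sharing the mean rank, as used for the Friedman/Nemenyi analysis."""
    algs = list(next(iter(table.values())))
    totals = {a: 0.0 for a in algs}
    for scores in table.values():
        ordered = sorted(algs, key=lambda a: scores[a])
        i = 0
        while i < len(ordered):
            j = i
            while j + 1 < len(ordered) and scores[ordered[j + 1]] == scores[ordered[i]]:
                j += 1                       # extend the tie group
            mean_rank = (i + j) / 2 + 1      # shared rank for tied entries
            for a in ordered[i:j + 1]:
                totals[a] += mean_rank
            i = j + 1
    n = len(table)
    return {a: totals[a] / n for a in algs}

def nemenyi_cd(k, n, q_alpha):
    """Critical difference CD = q_alpha * sqrt(k(k+1) / (6N)) for k
    algorithms compared over N problems."""
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n))
```

Two algorithms whose average ranks differ by more than the CD are declared significantly different; pairs within the CD fall into the same statistical group, as in the clusters reported above.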

7.4. Convergence Behavior

Across the twelve engineering design benchmarks, CTOSO shows a consistent and favorable convergence pattern as shown in Figure 3 and Figure 4. In problems with relatively easier constraint structures, such as the Cantilever Beam, Reinforced Concrete Beam, and Corrugated Bulkhead, the CTOSO curve appears from the earliest evaluations, indicating immediate feasibility. In more restrictive problems, including the Speed Reducer, Tension/Compression Spring, Welded Beam, and Car Side Impact, the CTOSO curve appears shortly after the start, reflecting a brief initial infeasible phase; however, this delay is noticeably shorter than that of several competing algorithms, which often remain infeasible for much longer.
Once feasibility is reached, CTOSO often shows a clear initial decrease in the objective value, indicating effective early improvement on many of the benchmark problems, showing that it quickly moves toward high-quality regions of the search space. After this fast descent, the convergence profile becomes smooth and stable, with steady improvement and no oscillatory or divergent behavior. Across all twelve benchmarks, CTOSO reliably reaches a near-optimal region early and then refines the solution effectively throughout the remaining budget. Overall, CTOSO’s convergence behavior is characterized by early feasibility, quick improvement, and stable long-term refinement, making it a dependable and efficient optimizer across a variety of constrained engineering design problems.

7.5. Computational Complexity

As detailed in Table 20, the computational complexity of CTOSO is theoretically derived as $O(\mathrm{FE}\cdot(D + ps))$, placing it in the moderate complexity tier alongside established algorithms like Differential Evolution. This efficient structure is driven by two key mechanisms: the spiral search strategy, which scales linearly with problem dimensionality ($D$), and the adaptive explorer weighting, which depends linearly on population size ($ps$). Unlike computationally expensive hierarchy-based algorithms such as GWO, which incur a higher overhead of $O(\mathrm{FE}\cdot(D + ps \log ps))$ due to repeated population sorting, CTOSO eliminates the need for sorting entirely. This design choice allows the algorithm to allocate computational resources toward finding better solutions rather than maintaining an internal hierarchy.
Furthermore, CTOSO demonstrates robust scalability when applied to high-dimensional real-world problems. While simple algorithms like MFO or BEE operate with a baseline complexity of $O(\mathrm{FE}\cdot D)$, the additional overhead introduced by CTOSO's population-dependent term is negligible in complex engineering scenarios. As the problem dimensionality grows significantly larger than the population size ($D \gg ps$), the computational cost becomes dominated by the dimension term, effectively converging to the efficiency of the simplest competitors. Consequently, CTOSO offers a favorable trade-off, providing the advanced search capabilities necessary to achieve 100% feasibility on constrained problems without incurring the prohibitive runtime penalties associated with high-complexity methods.
In addition to the population-based metaheuristics summarized in Table 20, CTOSO can be contrasted with single-agent based optimizers that operate on a single solution trajectory. In such approaches, the optimization process updates a single agent by manipulating either several elements or the full set of design parameters at each iteration, without maintaining a population of candidate solutions. Representative examples include improved smoothed functional algorithm approaches and related single-trajectory stochastic approximation methods, as well as sequential learning formulations often described in the context of game-theoretic approaches. Owing to the absence of population maintenance, sorting, or inter-agent coordination, these single-agent methods typically incur lower internal computational burden per iteration.
Recent studies in zeroth-order and smoothed functional optimization formalize this single-agent structure and emphasize its computational efficiency under fixed function-evaluation budgets through accelerated updates and estimator-efficient mechanisms [60,61,62]. When viewed against this class of single-agent methods, the additional population-dependent cost introduced by CTOSO remains within the moderate complexity tier identified in Table 20 and is dominated by the dimension-dependent term as the problem size increases. As demonstrated by the empirical results in Section 5, this moderate increase in computational effort enables cooperative exploration and systematic feasibility handling in constrained search spaces, resulting in consistently feasible solutions without approaching the higher overhead associated with hierarchy-based swarm algorithms.

7.6. Performance Visualization and Win Analysis

The performance trends observed in the numerical results are further confirmed through a series of visualizations. Figure 5a quantifies the number of benchmark problems where each algorithm achieved the best mean performance, with CTOSO securing the most wins. Figure 5b provides a rank-based heatmap of the best-found objectives, offering an immediate visual confirmation of CTOSO's consistent top-tier performance across the problem suite, which aligns with the data in Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16 and Table 17. The distribution of solution quality is detailed in Figure 6a, where the log-scaled boxplot of average objectives reveals that CTOSO not only achieves the lowest median but also the smallest interquartile range, highlighting its precision and reliability. Finally, Figure 6b presents the Critical Difference (CD) diagram from the Friedman–Nemenyi test conducted in Section 7.3, graphically validating that CTOSO's average rank is statistically superior to a distinct group of its closest competitors.

8. Discussion

The standardized CEC2017 constrained benchmark evaluation established the feasibility-handling robustness and competitive ranking behavior of CTOSO under a fixed evaluation budget. This section concentrates on the comparative analysis conducted on the constrained engineering design problems. It provides a comprehensive, data-driven evaluation of the top 10 algorithms selected via composite ranking. The assessment draws on convergence behavior, numerical performance across 12 benchmark functions, feasibility robustness, statistical hypothesis testing, and computational complexity analysis. Across all benchmark functions, CTOSO exhibits the most favorable convergence dynamics, consistently achieving rapid objective reduction while maintaining feasibility throughout the search process. Its curves consistently demonstrate rapid descent in the initial search phase (first 500–1000 FEs), suggesting effective early exploitation. This is followed by smooth and consistent flattening, which indicates strong convergence stability and minimal noise or oscillation, reflecting stable constraint management. By contrast, MFO shows similarly steady convergence but at a slower rate. GWO demonstrates delayed takeoff but achieves competitive final values, while CSA and DE show late convergence with wider result spread. Algorithms such as FPA, BOA, and EHO display intermittent flat regions and jumps, and TLBO and BEE show erratic convergence with inconsistent improvement rates.
A critical observation regarding the low performing algorithms in highly constrained problems is not their constraint-handling method, as all use Deb’s Feasibility Rule, but their limited exploitation efficiency under this low FE budget. Competitors often rely on less directed, generalized search mechanisms for local refinement. In contrast, CTOSO’s dedicated exploiter team, guided by the spiral strategy, provides a highly intensified and focused search. This superior, structured exploitation plays a key role in rapidly converging to the precise feasible optimum when the search budget is strictly limited to 3000 FEs, providing a plausible explanation for CTOSO’s consistent statistical advantage.
Examining the average and standard deviation of feasible objectives provides crucial cross-problem consistency insights. CTOSO achieves near best or best average feasible objectives in the majority of the problems, often with low standard deviations. While MFO matches or approaches CTOSO in multiple functions, it suffers from higher variability. GWO stands out in precision and variance control.
The statistical validation confirms these performance findings. The Wilcoxon signed-rank tests between CTOSO and each of the nine other top-performing algorithms yielded statistically significant results (p < 0.01) across all comparisons, confirming that CTOSO’s superiority is not incidental. Furthermore, the Friedman–Nemenyi test stratified the algorithms, showing that CTOSO’s average rank of 1.58 is statistically better than algorithms like TLBO, DE, CSA, and FPA, although differences with GWO and MFO were not statistically significant per the critical difference (CD).
Although CTOSO incorporates a spiral exploitation mechanism inspired by the Moth-Flame Optimization (MFO) algorithm, a structural distinction is necessary. In canonical MFO, every agent moves toward a sorted elite solution (a flame). In contrast, CTOSO applies the spiral function only to its exploiters (half the population) and does not employ a global sorting mechanism to determine flame positions. This team-based role separation and selective application makes CTOSO's exploitation more modular and less greedy, while benefiting from a cooperative balance with the explorer team.
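The role separation can be sketched schematically as follows. The spiral uses the common MFO-style update dist · e^{bt} · cos(2πt) + best; the shape parameter b, the explorer's neighbor-guided move, and the ring-neighbor choice shown here are illustrative simplifications rather than CTOSO's exact formulation.

```python
import math
import random

def exploiter_spiral_step(x, g_best, b=1.0, rng=random):
    """One logarithmic-spiral move of an exploiter around the global best
    (MFO-style spiral applied selectively, without a population-wide sort)."""
    new = []
    for xi, gi in zip(x, g_best):
        t = rng.uniform(-1.0, 1.0)              # spiral phase, one per dimension
        dist = abs(gi - xi)                      # distance to the global best
        new.append(dist * math.exp(b * t) * math.cos(2.0 * math.pi * t) + gi)
    return new

def team_update(pop, g_best, rng=random):
    """One generation: the first half explores (neighbor-guided perturbation,
    illustrative), the second half exploits via the spiral around g_best."""
    half = len(pop) // 2
    new_pop = []
    for i, x in enumerate(pop):
        if i < half:                             # Explorers: diversity-preserving move
            neighbor = pop[(i + 1) % half]       # ring neighbor (illustrative choice)
            new_pop.append([xi + rng.uniform(0.0, 1.0) * (ni - xi)
                            for xi, ni in zip(x, neighbor)])
        else:                                    # Exploiters: intensified spiral search
            new_pop.append(exploiter_spiral_step(x, g_best, rng=rng))
    return new_pop
```

Note that an exploiter already sitting at the global best stays there (the distance term vanishes), while explorers keep moving, which is the cooperative balance described above.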
CTOSO’s primary strengths include achieving the best overall composite score, maintaining 100% feasibility across all problems, and providing statistically significant superiority over competitors. Its simplicity, requiring no algorithm-specific hyperparameters, and its structurally clean architecture (based on just two roles and no memory archive) make it easy to implement and computationally efficient.

9. Conclusions and Future Work

This paper has introduced the Constrained Team-Oriented Swarm Optimizer (CTOSO), a metaheuristic designed specifically for constrained engineering problems, following an initial validation on the standardized CEC2017 constrained benchmark suite. CTOSO’s architecture is defined by three core components: a dual-role population that separates exploration and exploitation duties, a spiral-based search strategy for intensive local refinement, and the systematic application of Deb’s feasibility rule for constraint handling. A principal advantage of this design is that it operates without algorithm-specific control parameters, reducing the tuning effort required for practical application.
The algorithm’s performance was rigorously evaluated against twenty state-of-the-art metaheuristics across twelve established engineering benchmarks. The results demonstrate that CTOSO is highly effective and robust. It achieved the highest composite ranking, maintained a 100% feasibility rate across all problems and independent runs, and demonstrated competitive computational efficiency. This consistent feasibility is not accidental; once a feasible solution appears, Deb’s rule ensures that feasible candidates are always preferred over infeasible ones, which naturally biases the search toward maintaining feasibility across runs. Statistical analysis confirmed these findings: pairwise Wilcoxon signed-rank tests showed CTOSO to be significantly better than each of the other nine top-performing algorithms, while the Friedman–Nemenyi post hoc procedure placed it first in average rank, significantly ahead of algorithms such as TLBO, DE, CSA, FPA, and BEE, though not statistically separated from the strongest competitors, GWO and MFO. Beyond these aggregate metrics, CTOSO exhibited consistent behavioral strengths, including rapid initial convergence, stable performance with low variance across runs, and reliable constraint satisfaction throughout the search process.
In summary, CTOSO represents a reliable solution for constrained optimization that is free of algorithm-specific tuning parameters. Its consistent top-tier performance across multiple dimensions—solution quality, feasibility, and robustness—makes it a compelling choice for engineering applications where tuning effort must be minimized and reliable results are critical. Future work will focus on extending CTOSO’s capabilities to more complex scenarios. Promising directions include rigorous evaluation on large-scale problems with hundreds of variables to test its scalability, and extension to handle mixed-integer design variables, which combine continuous and discrete parameters and remain difficult for most metaheuristics. In addition, future evaluations may consider broader constrained test environments such as recent benchmark suites that include equality-heavy, simulation-driven, or mixed-variable scenarios, which would further clarify CTOSO’s behavior across diverse constraint structures.

Author Contributions

Conceptualization, A.B. and A.M.A.; Methodology, A.B. and A.M.A.; Software, A.B.; Validation, A.B. and A.M.A.; Formal analysis, A.B. and A.M.A.; Investigation, A.B. and A.M.A.; Resources, A.B.; Data curation, A.B. and A.M.A.; Writing—original draft, A.B. and A.M.A.; Writing—review & editing, A.M.A.; Visualization, A.B. and A.M.A.; Project administration, A.B.; Funding acquisition, A.B. and A.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data are contained within the article. To facilitate the reproduction of the results, the CTOSO algorithm code is available at https://github.com/adel468/CTOSO (accessed on 30 November 2025).

Acknowledgments

The authors acknowledge the Deanship of Scientific Research at the Islamic University of Madinah for its support with publication-related fees.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hafiz, F.M.F.; Abdennour, A. A team-oriented approach to particle swarms. Appl. Soft Comput. 2013, 13, 3776–3791. [Google Scholar] [CrossRef]
  2. BenAbdennour, A. An Enhanced Team-Oriented Swarm Optimization Algorithm (ETOSO) for Robust and Efficient High-Dimensional Search. Biomimetics 2025, 10, 222. [Google Scholar] [CrossRef]
  3. Liang, J.; Ban, X.; Yu, K.; Qu, B.; Qiao, K.; Yue, C.; Chen, K.; Tan, K.C. A Survey on Evolutionary Constrained Multiobjective Optimization. IEEE Trans. Evol. Comput. 2023, 27, 201–221. [Google Scholar] [CrossRef]
  4. Rahimi, I.; Gandomi, A.H.; Chen, F.; Mezura-Montes, E. A Review on Constraint Handling Techniques for Population-based Algorithms: From single-objective to multi-objective optimization. Arch. Comput. Methods Eng. 2023, 30, 2181–2209. [Google Scholar] [CrossRef]
  5. Yun, Y.; Gen, M.; Erdene, T.N. Applying GA-PSO-TLBO approach to engineering optimization problems. Math. Biosci. Eng. 2022, 20, 552–571. [Google Scholar] [CrossRef]
  6. Yi, W.; Lin, Z.; Chen, Y.; Pei, Z.; Lu, J. An Enhanced Adaptive Differential Evolution Approach for Constrained Optimization Problems. Comput. Model. Eng. Sci. 2023, 136, 2841–2860. [Google Scholar] [CrossRef]
  7. Guo, E.; Gao, Y.; Hu, C.; Zhang, J. A Hybrid PSO-DE Intelligent Algorithm for Solving Constrained Optimization Problems Based on Feasibility Rules. Mathematics 2023, 11, 522. [Google Scholar] [CrossRef]
  8. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  9. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  10. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  11. Rao, R.V.; Waghmare, G.G. Complex constrained design optimisation using an elitist teaching-learning-based optimisation algorithm. Int. J. Metaheuristics 2014, 3, 81. [Google Scholar] [CrossRef]
  12. Hao, Y.; Zhao, C.; Zhang, Y.; Cao, Y.; Li, Z. Constrained multi-objective optimization problems: Methodologies, algorithms and applications. Knowl.-Based Syst. 2024, 299, 111998. [Google Scholar] [CrossRef]
  13. Rezk, H.; Ghani Olabi, A.; Wilberforce, T.; Taha Sayed, E. Metaheuristic optimization algorithms for real-world electrical and civil engineering application: A review. Results Eng. 2024, 23, 102437. [Google Scholar] [CrossRef]
  14. Dalmaz, H.; Erdal, E.; Murat Unver, H. A New Hybrid Approach Using GWO and MFO Algorithms to Detect Network Attack. Comput. Model. Eng. Sci. 2023, 136, 1277–1314. [Google Scholar] [CrossRef]
  15. Veloso de Melo, V.; Moreira Nascimento, A.; Iacca, G. A co-evolutionary algorithm with adaptive penalty function for constrained optimization. Soft Comput. 2024, 28, 11343–11376. [Google Scholar] [CrossRef]
  16. Wang, B.-C.; Guo, J.-J.; Huang, P.-Q.; Meng, X.-B. A two-stage adaptive penalty method based on co-evolution for constrained evolutionary optimization. Complex Intell. Syst. 2023, 9, 4615–4627. [Google Scholar] [CrossRef]
  17. Innocente, M.S.; Sienz, J. Constraint-Handling Techniques for Particle Swarm Optimization Algorithms. In Proceedings of the 7th ASMO UK Conference on Engineering Design Optimization, Bath, UK, 23–24 July 2021. [Google Scholar] [CrossRef]
  18. Coelho, C.; Costa, M.F.P.; Ferrás, L.L. A Self-Adaptive Penalty Method for Integrating Prior Knowledge Constraints into Neural ODEs. arXiv 2024, arXiv:2307.14940. [Google Scholar] [CrossRef]
  19. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338. [Google Scholar] [CrossRef]
  20. Rahimi, I.; Gandomi, A.H.; Nikoo, M.R.; Mousavi, M.; Chen, F. Efficient implicit constraint handling approaches for constrained optimization problems. Sci. Rep. 2024, 14, 4816. [Google Scholar] [CrossRef]
  21. Liu, B.-J.; Bi, X.-J. Adaptive ε-Constraint Multi-Objective Evolutionary Algorithm Based on Decomposition and Differential Evolution. IEEE Access 2021, 9, 17596–17609. [Google Scholar] [CrossRef]
  22. Liu, Z.; Han, F.; Ling, Q.; Han, H.; Jiang, J. Constraint-Pareto Dominance and Diversity Enhancement Strategy based Evolutionary Algorithm for Solving Constrained Multiobjective Optimization Problems. IEEE Trans. Evol. Comput. 2025, 29, 2771–2784. [Google Scholar] [CrossRef]
  23. Liu, Z.; Shan, G.; Chen, Z.; Yang, Y. Physics-Guided Neural Surrogate Model with Particle Swarm-Based Multi-Objective Optimization for Quasi-Coaxial TSV Interconnect Design. Micromachines 2025, 16, 1134. [Google Scholar] [CrossRef] [PubMed]
  24. Chen, X.; Fang, S.; Li, K. Reinforcement-Learning-Based Multi-Objective Differential Evolution Algorithm for Large-Scale Combined Heat and Power Economic Emission Dispatch. Energies 2023, 16, 3753. [Google Scholar] [CrossRef]
  25. Ming, F.; Gong, W.; Wang, L.; Jin, Y. Constrained Multi-objective Optimization with Deep Reinforcement Learning Assisted Operator Selection. arXiv 2024, arXiv:2402.12381. [Google Scholar] [CrossRef]
  26. Wang, C.; Liu, Z.; Qiu, J.; Zhang, L. Adaptive constraint handling technique selection for constrained multi-objective optimization. Swarm Evol. Comput. 2024, 86, 101488. [Google Scholar] [CrossRef]
  27. Li, Y.; Gong, W.; Hu, Z.; Li, S. A Competitive and Cooperative Evolutionary Framework for Ensemble of Constraint Handling Techniques. IEEE Trans. Syst. Man Cybern Syst. 2024, 54, 2440–2451. [Google Scholar] [CrossRef]
  28. Wu, G.; Wen, X.; Wang, L.; Pedrycz, W.; Suganthan, P.N. A Voting-Mechanism-Based Ensemble Framework for Constraint Handling Techniques. IEEE Trans. Evol. Comput. 2022, 26, 646–660. [Google Scholar] [CrossRef]
  29. Mohd Tumari, M.Z.; Ahmad, M.A.; Suid, M.H.; Ghazali, M.R.; Tokhi, M.O. An improved marine predators algorithm tuned data-driven multiple-node hormone regulation neuroendocrine-PID controller for multi-input–multi-output gantry crane system. J. Low Freq. Noise Vib. Act. Control 2023, 42, 1666–1698. [Google Scholar] [CrossRef]
  30. Chen, Y.; Dong, W.; Hu, X. IMATSA—An improved and adaptive intelligent optimization algorithm based on tunicate swarm algorithm. AI Commun. 2024, 37, 1–22. [Google Scholar] [CrossRef]
  31. Meraihi, Y.; Gabis, A.B.; Mirjalili, S.; Ramdane-Cherif, A. Grasshopper Optimization Algorithm: Theory, Variants, and Applications. IEEE Access 2021, 9, 50001–50024. [Google Scholar] [CrossRef]
  32. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  33. Yu, H.; Wang, Y.; Jia, H.; Abualigah, L. Modified prairie dog optimization algorithm for global optimization and constrained engineering problems. Math. Biosci. Eng. 2023, 20, 19086–19132. [Google Scholar] [CrossRef] [PubMed]
  34. Brabazon, A.; Cui, W.; O’Neill, M. The raven roosting optimisation algorithm. Soft Comput 2016, 20, 525–545. [Google Scholar] [CrossRef]
  35. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  36. Tamura, K.; Yasuda, K. Spiral Dynamics Inspired Optimization. J. Adv. Comput. Intell. Intell. Inform. 2011, 15, 1116–1122. [Google Scholar] [CrossRef]
  37. Omar, M.B.; Bingi, K.; Prusty, B.R.; Ibrahim, R. Recent Advances and Applications of Spiral Dynamics Optimization Algorithm: A Review. Fractal Fract 2022, 6, 27. [Google Scholar] [CrossRef]
  38. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; National University of Defense Technology: Changsha, China, 2017. [Google Scholar]
  39. Yang, H.; Xie, X.; Bi, Y.; Qu, B.; Liang, J.; Huang, K.; Yan, L. LCO-LSHADE-GSRL: An enhanced differential evolution algorithm with chaotic orthogonal initialization and GAN-driven specular reflection learning for engineering optimization. Expert Syst. Appl. 2025, 302, 130561. [Google Scholar] [CrossRef]
  40. Altay, E.V.; Altay, O.; Özçevik, Y. A Comparative Study of Metaheuristic Optimization Algorithms for Solving Real-World Engineering Design Problems. Comput. Model. Eng. Sci. 2024, 139, 1039–1094. [Google Scholar] [CrossRef]
  41. Maiti, B.; Biswas, S.; Ezugwu, A.E.-S.; Bera, U.K.; Alzahrani, A.I.; Alblehai, F.; Abualigah, L. Enhanced crayfish optimization algorithm with differential evolution’s mutation and crossover strategies for global optimization and engineering applications. Artif. Intell. Rev. 2025, 58, 69. [Google Scholar] [CrossRef]
  42. Xu, D.; Yin, J. An Improved Black Widow Optimization Algorithm for Engineering Constrained Optimization Problems. IEEE Access 2023, 11, 32476–32495. [Google Scholar] [CrossRef]
  43. Guo, W.; Hou, Z.; Dai, F.; Wang, X.; Qiang, Y. Bald Eagle Search Optimization Algorithm Combined with Spherical Random Shrinkage Mechanism and Its Application. J. Bionic. Eng. 2024, 21, 572–605. [Google Scholar] [CrossRef]
  44. Wang, S.; Hussien, A.G.; Jia, H.; Abualigah, L.; Zheng, R. Enhanced Remora Optimization Algorithm for Solving Constrained Engineering Optimization Problems. Mathematics 2022, 10, 1696. [Google Scholar] [CrossRef]
  45. Kim, T.-H.; Cho, M.; Shin, S. Constrained Mixed-Variable Design Optimization Based on Particle Swarm Optimizer with a Diversity Classifier for Cyclically Neighboring Subpopulations. Mathematics 2020, 8, 2016. [Google Scholar] [CrossRef]
  46. Yang, X.-S. A New Metaheuristic Bat-Inspired Algorithm. arXiv 2010, arXiv:1004.4170. [Google Scholar] [CrossRef]
  47. Pham, D.T.; Ghanbarzadeh, A.; Koç, E.; Otri, S.; Rahim, S.; Zaidi, M. The Bees Algorithm—A Novel Tool for Complex Optimisation Problems. In Intelligent Production Machines and Systems; Elsevier: Amsterdam, The Netherlands, 2006; pp. 454–459. ISBN 978-0-08-045157-2. [Google Scholar]
  48. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  49. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  50. Yang, X.-S.; Deb, S. Cuckoo Search via Levy Flights. arXiv 2010, arXiv:1003.1594. [Google Scholar] [CrossRef]
  51. Storn, R.; Price, K. Differential Evolution—A simple and efficient adaptive scheme for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  52. Hakli, H. A novel approach based on elephant herding optimization for constrained optimization problems. Selcuk Univ. J. Eng. Sci. Technol. 2019, 7, 405–419. [Google Scholar] [CrossRef]
  53. Yu, S.; Zhu, S.; Ma, Y.; Mao, D. A variable step size firefly algorithm for numerical optimization. Appl. Math. Comput. 2015, 263, 214–220. [Google Scholar] [CrossRef]
  54. Yang, X.-S.; Karamanoglu, M.; He, X. Flower pollination algorithm: A novel approach for multiobjective optimization. Eng. Optim. 2014, 46, 1222–1237. [Google Scholar] [CrossRef]
  55. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  56. Zhao, R.; Tang, W. Monkey Algorithm for Global Numerical Optimization. J. Uncertain Syst. 2008, 2, 165–176. [Google Scholar]
  57. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  58. Rao, R.V.; Patel, V. An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems. Sci. Iran. 2012, 20, 710–720. [Google Scholar] [CrossRef]
  59. Lagaros, N.D.; Kournoutos, M.; Kallioras, N.A.; Nordas, A.N. Constraint handling techniques for metaheuristics: A state-of-the-art review and new variants. Optim. Eng. 2023, 24, 2251–2298. [Google Scholar] [CrossRef]
  60. Gorbunov, E.; Dvurechensky, P.; Gasnikov, A. An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization. arXiv 2020, arXiv:1802.09022. [Google Scholar] [CrossRef]
  61. Zhu, J.; Wang, L.; Spall, J.C. Efficient Implementation of Second-Order Stochastic Approximation Algorithms in High-Dimensional Problems. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 3087–3099. [Google Scholar] [CrossRef]
  62. Pachal, S.; Bhatnagar, S.; Prashanth, L.A. Generalized Simultaneous Perturbation-Based Gradient Search With Reduced Estimator Bias. IEEE Trans. Automat. Control 2025, 70, 4687–4702. [Google Scholar] [CrossRef]
Figure 1. Critical difference (CD) plot for average ranks on the CEC2017 benchmark suite.
Figure 2. Critical difference (CD) diagram for average ranks of CTOSO, ETOSO, and TOSO across CEC2017 constrained benchmark functions (Friedman test with Nemenyi post hoc; CD = 0.856; lower rank is better).
Figure 3. Convergence Plots for Top 10 Algorithms (Problems 1–6), 30 replications.
Figure 4. Convergence Plots for Top 10 Algorithms (Problems 7–12), 30 replications.
Figure 5. Visualization of algorithms performance: (a) Algorithm Win Count Across the Benchmark Suite; (b) Performance Heatmap: Rankings of Best Feasible Objectives.
Figure 6. Statistical Algorithms Performance: (a) Log-Scaled Boxplots; (b) Critical Difference Diagram from the Friedman–Nemenyi Test.
Table 1. Empirical performance on the CEC2017 constrained benchmark suite (f01–f15): empty entries indicate no feasible solution was found.
CTOSOGAPSOLSHADEDEMFOTLBOGWO
f012.07 × 10−521.157360.2726251.33 × 10−91006.1230.00033.4452770.081434
f021.49 × 10−510.694420.3253512.91 × 10−10754.34560.0010721.803410.063298
f033956.3993178.55529,388.24
f0440.7931134.7091347.4217216.284812014.51720.23924103.889819.35096
f050.03922279.471382.0905260.3440040.00041834.188921.944388
f06903.1204
f07−332.212−157.042−27.5699
f080.000111−9.2 × 10−5
f090.15696597.0689−0.00050.320292
f10−4.4 × 10−5−4.8 × 10−5−3.1 × 10−5
f11−2.38898
f123.98795520.236224.2197914.0393893.98794813.6040110.61497
f130.00437142.76981.5450340.0001920.00288496.9826810.5451
f143.3047763.397743.30349
f1511.780978.63937911.7809711.7809711.7809718.06416
AvgFR (%)79.7777852.8888941.5555663.777782066.2222238.2222238.66667
No. FeasibleProblems14981231276
AvgRank2.45.1333335.12.66.7666672.95.9333335.166667
Wins22080300
Table 2. Pairwise Wilcoxon signed-rank test results between CTOSO and reference algorithms.

| Versus | p | Verdict |
| --- | --- | --- |
| CTOSO vs. GA | 0.25 | No significant difference |
| CTOSO vs. PSO | 0.0078 | CTOSO better |
| CTOSO vs. LSHADE | 0.5771 | No significant difference |
| CTOSO vs. DE | 0.25 | No significant difference |
| CTOSO vs. MFO | 0.7002 | No significant difference |
| CTOSO vs. TLBO | 0.0157 | CTOSO better |
| CTOSO vs. GWO | 0.4375 | No significant difference |
Table 3. Best feasible objective values of CTOSO, ETOSO, and TOSO on the CEC2017: empty entries indicate no feasible solution was found.

| Metric/Function | CTOSO | ETOSO | TOSO |
| --- | --- | --- | --- |
| f01 | 2.07106 × 10^−5 | 5725.951149 | 888.2916486 |
| f02 | 1.48756 × 10^−5 | 6781.844391 | 598.8760425 |
| f03 | 3956.399228 |  |  |
| f04 | 40.79310902 | 608.4204647 | 772.7285459 |
| f05 | 0.039222331 |  |  |
| f06 | 903.1203672 |  |  |
| f07 | −332.2117729 |  |  |
| f08 | 0.000110564 |  |  |
| f09 | 0.156964729 |  |  |
| f10 | −4.35889 × 10^−5 |  |  |
| f11 |  |  |  |
| f12 | 3.987954704 |  |  |
| f13 | 0.004370377 |  |  |
| f14 | 3.304775761 |  |  |
| f15 | 11.78097174 |  |  |
| AvgFR (%) | 79.7777778 | 20 | 20 |
| No. Feasible Problems | 14 | 3 | 3 |
| AvgRank | 1.000000 | 2.535714286 | 2.464285714 |
| Wins | 14 | 0 | 0 |
Table 4. Summary of Constrained Engineering Benchmark Problems (inequality constraints).

| ID | Problem Name | Dimension | Complexity Type |
| --- | --- | --- | --- |
| 1 | Speed Reducer | 7 | Nonlinear, Mixed Terms |
| 2 | Tension/Compression Spring | 3 | Nonlinear |
| 3 | Pressure Vessel | 4 | Nonlinear |
| 4 | Welded Beam | 4 | Highly Coupled |
| 5 | Himmelblau’s Beam | 5 | Polynomial Mixed |
| 6 | Cantilever Beam | 5 | Rational Function |
| 7 | Tubular Column | 2 | Analytical, Simple |
| 8 | Piston Lever | 4 | Trigonometric, Nonlinear |
| 9 | Car Side Impact | 11 | Mixed Linear & Nonlinear |
| 10 | Corrugated Bulkhead | 4 | Nonconvex, Engineering |
| 11 | Three-Bar Truss | 2 | Rational Fractions |
| 12 | Reinforced Concrete Beam | 3 | Discrete, Quadratic |
Table 5. Algorithm Filtering (Top 10 Selection).

| Algorithm | Avg. Rank | Rank Score | Feasibility % | Feasibility Score | Success % | Success Score | Composite Score | In Top 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CTOSO | 1.92 | 1.00 | 100.00 | 1.00 | 91.67 | 1.00 | 1.00 | Yes |
| BEE | 10.00 | 0.57 | 100.00 | 1.00 | 25.00 | 0.27 | 0.67 | Yes |
| BOA | 10.08 | 0.57 | 100.00 | 1.00 | 25.00 | 0.27 | 0.67 | Yes |
| CSA | 10.50 | 0.55 | 100.00 | 1.00 | 25.00 | 0.27 | 0.66 | Yes |
| CS | 15.00 | 0.31 | 100.00 | 1.00 | 8.33 | 0.09 | 0.49 | No |
| DE | 10.67 | 0.54 | 100.00 | 1.00 | 33.33 | 0.36 | 0.66 | Yes |
| EHO | 4.08 | 0.89 | 98.33 | 0.94 | 50.00 | 0.55 | 0.87 | Yes |
| FA | 18.83 | 0.11 | 91.11 | 0.67 | 0.00 | 0.00 | 0.26 | No |
| FPA | 7.25 | 0.72 | 100.00 | 1.00 | 33.33 | 0.36 | 0.77 | Yes |
| GOA | 10.58 | 0.54 | 97.78 | 0.92 | 41.67 | 0.45 | 0.65 | No |
| GSA | 13.83 | 0.37 | 91.11 | 0.67 | 16.67 | 0.18 | 0.44 | No |
| GWO | 5.67 | 0.80 | 100.00 | 1.00 | 41.67 | 0.45 | 0.83 | Yes |
| HHO | 10.00 | 0.57 | 93.33 | 0.75 | 33.33 | 0.36 | 0.61 | No |
| MFO | 2.58 | 0.96 | 100.00 | 1.00 | 58.33 | 0.64 | 0.94 | Yes |
| MKA | 16.33 | 0.24 | 100.00 | 1.00 | 0.00 | 0.00 | 0.44 | No |
| PDO | 20.83 | 0.00 | 73.06 | 0.00 | 0.00 | 0.00 | 0.00 | No |
| RRO | 14.25 | 0.35 | 100.00 | 1.00 | 8.33 | 0.09 | 0.52 | No |
| SCA | 11.83 | 0.48 | 91.67 | 0.69 | 25.00 | 0.27 | 0.52 | No |
| SMA | 13.83 | 0.37 | 100.00 | 1.00 | 25.00 | 0.27 | 0.55 | No |
| TLBO | 10.67 | 0.54 | 100.00 | 1.00 | 25.00 | 0.27 | 0.65 | Yes |
| TOSO | 12.42 | 0.44 | 99.72 | 0.99 | 25.00 | 0.27 | 0.59 | No |
Table 6. Speed Reducer Design Results.

| Algorithm | Known Best | Best Found | Average | Std Dev | Feasibility % | Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| CTOSO | 2994.42256 | 2994.43022 | 3009.98229 | 18.28088 | 100.00000 | 0.00799 |
| BEE |  | 3402.16526 | 4109.66421 | 828.96543 | 13.33333 | 0.00511 |
| BOA |  | 2998.60450 | 3013.54627 | 16.83084 | 100.00000 | 0.00642 |
| CSA |  | 3014.54906 | 3043.39636 | 12.24176 | 100.00000 | 0.00902 |
| DE |  | 3010.43154 | 3026.90970 | 9.62804 | 100.00000 | 0.00939 |
| EHO |  | 3009.45844 | 3084.07376 | 117.39808 | 100.00000 | 0.00606 |
| FPA |  | 3002.76888 | 3015.16978 | 9.85771 | 100.00000 | 0.00985 |
| GWO |  | 3025.16695 | 3044.75405 | 10.86458 | 100.00000 | 0.02451 |
| MFO |  | 2994.55111 | 2998.92267 | 9.89997 | 100.00000 | 0.00698 |
| TLBO |  | 3044.16490 | 3101.99308 | 45.33784 | 100.00000 | 0.00668 |
Table 7. Tension/Compression Spring Design Results.

| Algorithm | Known Best | Best Found | Average | Std Dev | Feasibility % | Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| CTOSO | 0.01266 | 0.01268 | 0.01650 | 0.00508 | 100.00000 | 0.01179 |
| BEE |  | 0.03261 | 0.07289 | 0.03827 | 43.33333 | 0.00820 |
| BOA |  | 0.01272 | 0.04881 | 0.03045 | 100.00000 | 0.00971 |
| CSA |  | 0.01275 | 0.01471 | 0.00170 | 100.00000 | 0.01456 |
| DE |  | 0.01283 | 0.01377 | 0.00082 | 100.00000 | 0.01210 |
| EHO |  | 0.01267 | 0.01326 | 0.00140 | 100.00000 | 0.00899 |
| FPA |  | 0.01277 | 0.01326 | 0.00083 | 100.00000 | 0.01200 |
| GWO |  | 0.01274 | 0.01337 | 0.00100 | 100.00000 | 0.03478 |
| MFO |  | 0.01268 | 0.01378 | 0.00157 | 100.00000 | 0.03881 |
| TLBO |  | 0.01313 | 0.01636 | 0.00259 | 100.00000 | 0.03923 |
Table 8. Pressure Vessel Design Results.

| Algorithm | Known Best | Best Found | Average | Std Dev | Feasibility % | Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| CTOSO | 5885.33280 | 5885.36543 | 6665.59614 | 561.98063 | 100.00000 | 0.04927 |
| BEE |  | 61,261.59527 | 286,910.42601 | 153,258.51664 | 96.66667 | 0.03783 |
| BOA |  | 6403.00266 | 27,150.86275 | 45,676.80253 | 100.00000 | 0.04274 |
| CSA |  | 10,438.84287 | 20,235.74230 | 6044.29053 | 100.00000 | 0.06959 |
| DE |  | 13,099.04062 | 22,977.19519 | 5439.74015 | 100.00000 | 0.05339 |
| EHO |  | 5922.95265 | 26,126.86180 | 51,467.62641 | 100.00000 | 0.04030 |
| FPA |  | 7779.07475 | 12,510.58836 | 2677.35331 | 100.00000 | 0.03897 |
| GWO |  | 5973.92289 | 6526.91214 | 411.40888 | 100.00000 | 0.07358 |
| MFO |  | 5924.55315 | 6693.85946 | 575.36999 | 100.00000 | 0.03811 |
| TLBO |  | 7187.54770 | 12,546.10337 | 4363.66476 | 100.00000 | 0.04009 |
Table 9. Welded Beam Design Results.

| Algorithm | Known Best | Best Found | Average | Std Dev | Feasibility % | Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| CTOSO | 1.72485 | 1.72596 | 2.21985 | 0.52940 | 100.00000 | 0.05618 |
| BEE |  | 3.11375 | 5.37952 | 1.69375 | 40.00000 | 0.04848 |
| BOA |  | 1.84662 | 2.75130 | 0.50826 | 100.00000 | 0.05559 |
| CSA |  | 2.04664 | 2.73409 | 0.34587 | 100.00000 | 0.07470 |
| DE |  | 1.94475 | 2.18405 | 0.12798 | 100.00000 | 0.05872 |
| EHO |  | 1.75153 | 2.49167 | 0.62229 | 96.66667 | 0.04540 |
| FPA |  | 1.86269 | 2.16848 | 0.19531 | 100.00000 | 0.05769 |
| GWO |  | 1.73362 | 1.74921 | 0.00883 | 100.00000 | 0.11098 |
| MFO |  | 1.74370 | 2.06702 | 0.39453 | 100.00000 | 0.03879 |
| TLBO |  | 2.06438 | 2.18894 | 0.10541 | 100.00000 | 0.03249 |
Table 10. Himmelblau’s Beam Results.

| Algorithm | Known Best | Best Found | Average | Std Dev | Feasibility % | Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| CTOSO | −30,665.53900 | −30,665.53932 | −30,635.43189 | 163.30138 | 100.00000 | 0.04606 |
| BEE |  | −29,962.25656 | −29,273.66802 | 303.73524 | 96.66667 | 0.03641 |
| BOA |  | −30,662.58116 | −30,493.40983 | 151.10401 | 100.00000 | 0.04828 |
| CSA |  | −30,650.80310 | −30,479.80552 | 92.05694 | 100.00000 | 0.07075 |
| DE |  | −30,618.59428 | −30,536.50208 | 56.47780 | 100.00000 | 0.05395 |
| EHO |  | −30,665.53832 | −30,439.96609 | 244.06583 | 100.00000 | 0.04613 |
| FPA |  | −30,660.44229 | −30,598.05113 | 30.68531 | 100.00000 | 0.04565 |
| GWO |  | −30,659.87004 | −30,625.75881 | 24.38995 | 100.00000 | 0.07824 |
| MFO |  | −30,665.51035 | −30,645.17326 | 87.53560 | 100.00000 | 0.04049 |
| TLBO |  | −30,645.18615 | −30,473.82089 | 121.25306 | 100.00000 | 0.03946 |
Table 11. Cantilever Beam Results.

| Algorithm | Known Best | Best Found | Average | Std Dev | Feasibility % | Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| CTOSO | 1.33996 | 1.34016 | 1.34191 | 0.00175 | 100.00000 | 0.02628 |
| BEE |  | 3.41957 | 6.94688 | 1.47072 | 100.00000 | 0.01611 |
| BOA |  | 1.34067 | 1.36801 | 0.12802 | 100.00000 | 0.02046 |
| CSA |  | 1.83228 | 2.47104 | 0.29989 | 100.00000 | 0.02310 |
| DE |  | 2.03405 | 3.42562 | 0.61963 | 100.00000 | 0.03132 |
| EHO |  | 1.38933 | 3.19498 | 1.42424 | 100.00000 | 0.02205 |
| FPA |  | 1.58736 | 2.15174 | 0.31028 | 100.00000 | 0.02847 |
| GWO |  | 1.34004 | 1.34102 | 0.00062 | 100.00000 | 0.06784 |
| MFO |  | 1.34444 | 1.36886 | 0.01649 | 100.00000 | 0.01776 |
| TLBO |  | 1.62927 | 2.16094 | 0.32001 | 100.00000 | 0.01735 |
Table 12. Tubular Column Results.

| Algorithm | Known Best | Best Found | Average | Std Dev | Feasibility % | Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| CTOSO | 26.48630 | 26.49949 | 26.50071 | 0.00480 | 100.00000 | 0.04424 |
| BEE |  | 26.85545 | 29.02514 | 1.56285 | 100.00000 | 0.03574 |
| BOA |  | 26.50111 | 26.50722 | 0.00375 | 100.00000 | 0.04733 |
| CSA |  | 26.49421 | 26.57613 | 0.06240 | 100.00000 | 0.05395 |
| DE |  | 26.50653 | 26.56346 | 0.03414 | 100.00000 | 0.04790 |
| EHO |  | 26.49950 | 26.49969 | 0.00047 | 100.00000 | 0.03854 |
| FPA |  | 26.50440 | 26.53293 | 0.03144 | 100.00000 | 0.04634 |
| GWO |  | 26.51061 | 26.53719 | 0.02009 | 100.00000 | 0.07977 |
| MFO |  | 26.49949 | 26.49977 | 0.00054 | 100.00000 | 0.03923 |
| TLBO |  | 26.51657 | 26.63795 | 0.08282 | 100.00000 | 0.04903 |
Table 13. Piston Lever Design Results.

| Algorithm | Known Best | Best Found | Average | Std Dev | Feasibility % | Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| CTOSO | 8.41270 | 8.41270 | 86.62171 | 87.61831 | 100.00000 | 0.05120 |
| BEE |  | 555.62173 | 14,022.58125 | 19,172.81065 | 100.00000 | 0.03951 |
| BOA |  | 226.67109 | 552.76981 | 223.59413 | 100.00000 | 0.03993 |
| CSA |  | 74.78814 | 274.43840 | 71.45653 | 100.00000 | 0.04790 |
| DE |  | 24.79565 | 141.35404 | 67.24652 | 100.00000 | 0.05920 |
| EHO |  | 9.37145 | 330.96262 | 181.72972 | 100.00000 | 0.03993 |
| FPA |  | 14.77209 | 189.06532 | 83.15320 | 100.00000 | 0.05122 |
| GWO |  | 8.43224 | 110.63247 | 79.01639 | 100.00000 | 0.09884 |
| MFO |  | 8.41833 | 120.78746 | 74.73346 | 100.00000 | 0.02912 |
| TLBO |  | 14.73820 | 189.60583 | 81.56745 | 100.00000 | 0.03640 |
Table 14. Car Side Impact Results.

| Algorithm | Known Best | Best Found | Average | Std Dev | Feasibility % | Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| CTOSO | 22.84297 | 27.92469 | 30.02881 | 1.92685 | 100.00000 | 0.05107 |
| BEE |  | Inf | NaN | NaN | 0.00000 | 0.03887 |
| BOA |  | 29.01693 | 31.22205 | 1.30467 | 100.00000 | 0.05152 |
| CSA |  | 29.15435 | 30.56327 | 0.68335 | 100.00000 | 0.07011 |
| DE |  | 28.62434 | 30.12540 | 0.57927 | 100.00000 | 0.06029 |
| EHO |  | 29.58585 | 32.42696 | 3.12805 | 66.66667 | 0.04442 |
| FPA |  | 28.69990 | 29.60509 | 0.62486 | 100.00000 | 0.05643 |
| GWO |  | 28.24233 | 29.10644 | 0.66774 | 100.00000 | 0.11014 |
| MFO |  | 28.11378 | 28.94354 | 0.74435 | 100.00000 | 0.04111 |
| TLBO |  | 28.99566 | 30.32304 | 0.78456 | 100.00000 | 0.03429 |
Table 15. Corrugated Bulkhead Results.

| Algorithm | Known Best | Best Found | Average | Std Dev | Feasibility % | Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| CTOSO | 6.84296 | 6.84297 | 6.89546 | 0.27283 | 100.00000 | 0.04922 |
| BEE |  | 7.90937 | 11.12998 | 1.82991 | 100.00000 | 0.04833 |
| BOA |  | 6.85104 | 7.22045 | 0.39607 | 100.00000 | 0.04814 |
| CSA |  | 6.97449 | 7.38823 | 0.25567 | 100.00000 | 0.07832 |
| DE |  | 7.10367 | 7.60323 | 0.31234 | 100.00000 | 0.06253 |
| EHO |  | 6.86392 | 7.09830 | 0.38665 | 100.00000 | 0.05045 |
| FPA |  | 6.99358 | 7.25039 | 0.19522 | 100.00000 | 0.05481 |
| GWO |  | 6.86545 | 6.90424 | 0.04063 | 100.00000 | 0.09829 |
| MFO |  | 6.84477 | 6.85111 | 0.00631 | 100.00000 | 0.03887 |
| TLBO |  | 7.03625 | 7.41446 | 0.16978 | 100.00000 | 0.04257 |
Table 16. Three-Bar Truss Design Results.

| Algorithm | Known Best | Best Found | Average | Std Dev | Feasibility % | Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| CTOSO | 263.89580 | 263.89874 | 271.60509 | 4.48839 | 100.00000 | 0.04946 |
| BEE |  | 329.23104 | 872.63628 | 334.28303 | 100.00000 | 0.03225 |
| BOA |  | 263.91885 | 264.26650 | 0.35916 | 100.00000 | 0.05173 |
| CSA |  | 264.06106 | 265.89242 | 1.88533 | 100.00000 | 0.07390 |
| DE |  | 263.89648 | 264.31322 | 0.29849 | 100.00000 | 0.04331 |
| EHO |  | 263.89589 | 264.24102 | 1.38562 | 100.00000 | 0.04396 |
| FPA |  | 263.91726 | 265.55814 | 2.21588 | 100.00000 | 0.04065 |
| GWO |  | 263.89730 | 263.94505 | 0.03789 | 100.00000 | 0.07332 |
| MFO |  | 263.89976 | 269.81182 | 5.16554 | 100.00000 | 0.03946 |
| TLBO |  | 263.98009 | 264.59121 | 0.53620 | 100.00000 | 0.04116 |
Table 17. Reinforced Concrete Beam Design Results.

| Algorithm | Known Best | Best Found | Average | Std Dev | Feasibility % | Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| CTOSO | 359.20800 | 359.20796 | 360.05451 | 1.40914 | 100.00000 | 0.04526 |
| BEE |  | 363.11596 | 373.93852 | 6.58367 | 100.00000 | 0.02741 |
| BOA |  | 359.20834 | 364.49939 | 4.84366 | 100.00000 | 0.04557 |
| CSA |  | 359.16516 | 360.05412 | 0.96460 | 100.00000 | 0.06112 |
| DE |  | 359.21403 | 359.53435 | 0.35829 | 100.00000 | 0.05969 |
| EHO |  | 359.20800 | 360.39324 | 1.69635 | 100.00000 | 0.03257 |
| FPA |  | 359.21081 | 359.79091 | 1.07844 | 100.00000 | 0.04153 |
| GWO |  | 359.21043 | 360.04425 | 1.37822 | 100.00000 | 0.06801 |
| MFO |  | 359.20796 | 360.98368 | 1.58284 | 100.00000 | 0.02405 |
| TLBO |  | 359.22485 | 360.04466 | 0.91562 | 100.00000 | 0.03279 |
Table 18. Wilcoxon Signed-Rank Test: CTOSO vs. Top Algorithms.

| Versus | p-Value | Verdict |
| --- | --- | --- |
| CTOSO vs. BEE | 0.000488281 | CTOSO better |
| CTOSO vs. BOA | 0.000488281 | CTOSO better |
| CTOSO vs. CSA | 0.004882813 | CTOSO better |
| CTOSO vs. DE | 0.001464844 | CTOSO better |
| CTOSO vs. EHO | 0.009277344 | CTOSO better |
| CTOSO vs. FPA | 0.000488281 | CTOSO better |
| CTOSO vs. GWO | 0.004882813 | CTOSO better |
| CTOSO vs. MFO | 0.000488281 | CTOSO better |
| CTOSO vs. TLBO | 0.000488281 | CTOSO better |
Table 19. Friedman–Nemenyi Test Results.

| Versus | Avg Rank 1 | Avg Rank 2 | Abs Rank Diff | Critical Difference | Significant |
| --- | --- | --- | --- | --- | --- |
| CTOSO vs. BEE | 1.58 | 10.00 | 8.42 | 4.08 | TRUE |
| CTOSO vs. BOA | 1.58 | 5.00 | 3.42 | 4.08 | FALSE |
| CTOSO vs. CSA | 1.58 | 6.42 | 4.83 | 4.08 | TRUE |
| CTOSO vs. DE | 1.58 | 7.08 | 5.50 | 4.08 | TRUE |
| CTOSO vs. EHO | 1.58 | 3.75 | 2.17 | 4.08 | FALSE |
| CTOSO vs. FPA | 1.58 | 6.00 | 4.42 | 4.08 | TRUE |
| CTOSO vs. GWO | 1.58 | 4.50 | 2.92 | 4.08 | FALSE |
| CTOSO vs. MFO | 1.58 | 2.92 | 1.33 | 4.08 | FALSE |
| CTOSO vs. TLBO | 1.58 | 7.75 | 6.17 | 4.08 | TRUE |
Table 20. Computational Complexity Analysis.

| Algorithm | Big-O Complexity (ps = 100) | Key Overhead Drivers | Overall Complexity Tier |
| --- | --- | --- | --- |
| CTOSO | O(FE · (D + ps)) | ps/2 spiral (exp, cos) per D; O(ps) max operation for weight w | Moderate |
| BEE | O(FE · D) | Sort ps bees each gen (ps log ps); Gaussian site search | Moderate |
| BOA | O(FE · D) | Lightweight fragrance calculation; no population sorting | Low |
| CSA | O(2FE · D) | Awareness move + memory copy per agent | High |
| DE | O(FE · (D + ps)) | randperm on ps agents; mutation & crossover vectors | Moderate |
| EHO | O(FE · D) | Two rand calls + clan mean per update | Low |
| FPA | O(FE · D) | Lévy flight (three variables) per D | Low |
| GWO | O(FE · (D + ps log ps)) | Sort ps wolves every eval; updates three variables | High |
| MFO | O(FE · D) | exp + cos per D; flame re-rank once/generation | Low |
| TLBO | O(FE · D) | Teacher/learner vector operations; one start-up sort | Low |