Article

Hybrid Sine–Cosine with Hummingbird Foraging Algorithm for Engineering Design Optimisation

1 Computer Science Department, University of Petra, Amman 11196, Jordan
2 Department of Data Science and Artificial Intelligence, Faculty of Information Technology, Al-Ahliyya Amman University, Amman 19328, Jordan
3 Design & Visual Communication Department, School of Architecture and Built Environment (SABE), German Jordanian University (GJU), Amman 11180, Jordan
4 Artificial Intelligence Faculty, Al-Balqa Applied University, Amman 11196, Jordan
5 Information Studies Department, Sultan Qaboos University, Muscat P.O. Box 50, Oman
6 Information Sciences and Educational Technology Department, School of Educational Sciences, The University of Jordan, Amman 11196, Jordan
7 Department of Journalism, Media, and Digital Communication, School of Arts, The University of Jordan, Amman 11196, Jordan
* Author to whom correspondence should be addressed.
Computers 2026, 15(1), 35; https://doi.org/10.3390/computers15010035
Submission received: 13 November 2025 / Revised: 25 December 2025 / Accepted: 26 December 2025 / Published: 7 January 2026
(This article belongs to the Special Issue AI in Complex Engineering Systems)

Abstract

We introduce AHA–SCA, a compact hybrid optimiser that alternates the wave-based exploration of the Sine–Cosine Algorithm (SCA) with the exploitation skills of the Artificial Hummingbird Algorithm (AHA) within a single population. Even iterations perform SCA moves with a linearly decaying sinusoidal amplitude to explore widely around the current best solution, while odd iterations invoke guided and territorial hummingbird flights using axial, diagonal, and omnidirectional patterns to intensify the search in promising regions. This simple interleaving yields an explicit and tunable balance between exploration and exploitation and incurs negligible overhead beyond evaluating candidate solutions. The proposed approach is evaluated on the CEC2014, CEC2017, and CEC2022 benchmark suites and on several constrained engineering design problems, including welded beam, pressure vessel, tension/compression spring, speed reducer, and cantilever beam designs. Across these diverse tasks, AHA–SCA demonstrates competitive or superior performance relative to stand-alone SCA, AHA, and a broad panel of recent metaheuristics, delivering faster early-phase convergence and robust final solutions. Statistical analyses using non-parametric tests confirm that improvements are significant on many functions, and the method respects problem constraints without parameter tuning. The results suggest that alternating wave-driven exploration with hummingbird-inspired refinement is a promising general strategy for continuous engineering optimisation.

1. Introduction

Optimisation is the process of selecting the best decision vector from a feasible set defined by constraints, and it underpins many tasks in engineering design, economics, and machine learning. In a typical single-objective formulation, the goal is to minimise (or maximise) an objective function $f(x)$ subject to bound constraints and general inequality/equality constraints, e.g., $g_i(x) \le 0$ and $h_j(x) = 0$, where x may be continuous, discrete, or mixed. Practical optimisation problems often involve conflicting requirements (accuracy, cost, safety), expensive black-box simulations, measurement noise, and non-smooth or discontinuous responses, all of which strongly influence the choice of optimisation methodology [1,2].
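For derivative-free solvers such as the metaheuristics studied here, general constraints are often folded into the objective through a penalty term. The sketch below illustrates one common static-penalty formulation; the penalty weight `rho` and the toy problem are our own illustrative choices, not part of the paper.

```python
import numpy as np

def penalised_objective(f, g_list, h_list, x, rho=1e6):
    """Evaluate f(x) plus a static penalty for violated constraints.

    g_list holds inequality constraints g_i(x) <= 0 and h_list holds
    equality constraints h_j(x) = 0.  rho is an assumed penalty weight.
    """
    penalty = sum(max(0.0, g(x)) ** 2 for g in g_list)
    penalty += sum(h(x) ** 2 for h in h_list)
    return f(x) + rho * penalty

# Toy problem: minimise x0^2 + x1^2 subject to x0 + x1 >= 1,
# written as the inequality 1 - x0 - x1 <= 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = [lambda x: 1.0 - x[0] - x[1]]
feasible = np.array([0.5, 0.5])
infeasible = np.array([0.0, 0.0])
print(penalised_objective(f, g, [], feasible))    # 0.5 (no penalty applied)
print(penalised_objective(f, g, [], infeasible))  # 1000000.0 (violation penalised)
```

A quadratic penalty keeps the surrogate objective continuous, which matters for population-based search; other schemes (adaptive or death penalties) are equally common in the metaheuristics literature.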
Deterministic optimisation methods follow a fully prescribed sequence of computations: given the same initial conditions and parameters, they generate the same search trajectory and solution. This family includes classical mathematical programming approaches such as linear and quadratic programming, convex optimisation, and nonlinear programming (NLP). When the objective and constraints are smooth and (approximately) convex, gradient-based algorithms—including steepest descent, Newton/trust-region methods, sequential quadratic programming (SQP), and interior-point methods—can provide strong theoretical properties (e.g., convergence to stationary points and, in convex settings, global optimality) and fast local convergence rates [1,2]. Deterministic approaches are therefore widely used in parameter estimation, optimal control, structural sizing with well-behaved models, and resource allocation where derivatives (or accurate approximations) are available and constraints can be handled systematically via Karush–Kuhn–Tucker conditions and Lagrangian-based frameworks [1]. Their main limitations arise when problems are highly non-convex, discontinuous, mixed-integer, or defined by black-box simulators: gradients may be unavailable or misleading, feasible regions may be fragmented, and the algorithms may become sensitive to initialisation or converge to poor local optima. Moreover, exact deterministic global methods for discrete or mixed-integer formulations can incur combinatorial complexity as dimensionality grows [2].
Stochastic optimisation methods incorporate randomness into the search process and typically return solutions whose quality is described statistically over repeated runs. Randomness can enter through noisy objective evaluations, random sampling, or probabilistic update rules, enabling the search to explore multiple basins of attraction and reducing dependence on a single initial point [2,3]. A prominent example in machine learning is stochastic gradient-based learning (e.g., SGD and its variants), where random mini-batches provide scalable approximate gradients for large datasets [3]. In broader engineering optimisation, stochastic methods are attractive for rugged landscapes, non-differentiable objectives, and simulation-based models where only function evaluations are available. The trade-offs are equally important: stochastic methods may require many evaluations, can exhibit run-to-run variability, and generally offer weaker theoretical guarantees for global optimality in finite time [2].
Within stochastic optimisation, metaheuristic techniques are widely recognised for their flexibility and robustness; surveys and comparative studies consistently report their ability to adapt to diverse problem structures and maintain performance under uncertainty [4,5]. Metaheuristics provide general-purpose search frameworks rather than problem-specific solvers, relying on population diversity and stochastic operators to balance exploration (global search) and exploitation (local refinement). This balance is crucial: excessive exploitation can cause premature convergence to local optima, while excessive exploration can slow convergence and waste limited evaluation budgets [5]. Metaheuristics are therefore commonly adopted for black-box engineering design, feature selection, hyperparameter tuning, and constrained optimisation where modelling assumptions required by deterministic solvers are difficult to satisfy [6].
Recent years have witnessed a proliferation of nature-inspired metaheuristics. Examples include the Slime Mould Algorithm (SMA), Tunicate Swarm Algorithm (TSA), Runge–Kutta Optimiser (RKO), Equilibrium Optimiser (EO), and Manta Ray Foraging Optimisation (MRFO) [7,8,9,10,11]. Although these algorithms introduce diverse search operators, they share a recurring difficulty: maintaining an effective exploration–exploitation balance across different landscapes and constraint regimes. As a result, performance can be problem-dependent, motivating strategies that enhance robustness without sacrificing computational simplicity.
Combining complementary metaheuristics has become a widely adopted strategy to mitigate individual weaknesses and improve convergence. Metaheuristic hybrids integrate two or more algorithms, drawing on mechanisms such as evolutionary operators, swarm behaviours, or chaotic mappings to achieve improved search dynamics [7,12]. Comprehensive reviews document numerous combinations—for example, Particle Swarm Optimisation (PSO) with Genetic Algorithms (GAs), Differential Evolution (DE) with PSO, and Grey Wolf Optimiser (GWO) with Sine–Cosine Algorithm (SCA)—each aiming to capitalise on complementary strengths [7,12]. Despite this rich body of hybrid designs, we are not aware of prior work that tightly integrates the Sine–Cosine Algorithm with the Artificial Hummingbird Algorithm within a unified population for continuous optimisation.
This paper addresses this gap by proposing AHA–SCA, a hybrid metaheuristic that interleaves the Sine–Cosine Algorithm (SCA) and the Artificial Hummingbird Algorithm (AHA) within a single population. In the proposed design, SCA-style sinusoidal updates encourage wide-ranging exploration (particularly in early iterations), while AHA-style guided and territorial foraging intensifies the search near promising regions. Alternating these phases is intended to reduce premature convergence and improve final solution quality compared with using either component alone.
The main contributions of this work are as follows:
  • We develop a novel AHA–SCA hybrid optimisation framework that couples SCA and AHA operators within a unified population to promote more effective exploration and exploitation behaviour.
  • We conduct extensive experiments on three benchmark suites (CEC2014, CEC2017, and CEC2022) and on constrained engineering design problems (welded beam, pressure vessel, tension/compression spring, speed reducer, and cantilever beam) using consistent settings and multiple independent runs.
  • We compare AHA–SCA against state-of-the-art optimisers using statistical significance tests and analyse convergence behaviour to clarify how the hybridisation affects search dynamics.
The remainder of this paper is structured as follows. Section 2 reviews related work on metaheuristic optimisation and highlights recent hybrid algorithms. Section 3 provides a detailed literature review of recent metaheuristic algorithms, discussing their strengths and weaknesses. Section 4 describes the proposed AHA–SCA algorithm, including its motivation, mathematical model, pseudocode, and complexity analysis. Section 5 details the experimental setup, including implementation details, statistical analysis, and benchmark functions. Section 6 presents numerical results on benchmarks and engineering problems, followed by discussion. Section 7 concludes this paper and outlines directions for future research.

2. Literature Review of Recent Metaheuristic Optimisers

Metaheuristic optimisation has expanded rapidly in recent years, with many new algorithms proposed for black-box, non-convex, and constrained problems. Recent surveys report that this growth is driven by the practical need for derivative-free solvers that are easy to implement and can be adapted across domains [4,12]. At the same time, historical perspectives caution that the field remains heterogeneous, with many variants differing mainly in operators rather than offering fundamentally new search principles [13]. Consequently, a useful literature review should not reiterate general optimisation theory but instead analyse representative recent algorithms in terms of the following: (i) the mechanisms they introduce, (ii) the strengths demonstrated in their empirical studies, and (iii) the recurring limitations that motivate hybridisation and enhancement strategies.

2.1. Representative Recent Optimisers: Mechanisms, Strengths, and Limitations

A large share of recent methods are nature-inspired, where search operators are designed from observed behaviours. The Slime Mould Algorithm (SMA) introduces an adaptive feedback mechanism that re-weights agents to emulate oscillatory foraging. In its original study, SMA demonstrates strong global exploration on multimodal landscapes, but the same adaptive weights may reduce step sizes too aggressively near promising regions, leading to slow final convergence and occasional stagnation [7]. In contrast, the Tunicate Swarm Algorithm (TSA) relies on relatively direct position updates inspired by jet propulsion and swarm cohesion. TSA is often reported to exploit locally effective regions well, yet its exploration capacity can be limited on complex multimodal functions, making it susceptible to premature convergence when diversity collapses early [8].
A second line of work uses mathematics- or physics-inspired operators to guide trajectories more deterministically while retaining stochasticity at the population level. The Runge–Kutta Optimiser (RKO) uses ideas from numerical integration to adapt step updates and has been shown to behave competitively on a range of continuous functions; however, when objective landscapes are rugged or highly irregular, the algorithm may require additional diversification mechanisms to avoid being trapped in suboptimal basins [9]. The Equilibrium Optimiser (EO) updates solutions using dynamic equilibrium concepts, which can provide wide-ranging exploration and strong early progress. Nonetheless, the reported performance of EO can be sensitive to parameter settings and control schedules, and careful tuning is sometimes needed to maintain stability across different benchmark classes [10].
Foraging- and hunting-based optimisers have also remained active research topics because they naturally yield multi-phase search behaviours. Manta Ray Foraging Optimisation (MRFO) combines chain foraging, cyclone foraging, and somersault foraging to alternate between diversification and intensification [11]. Although MRFO performs well on several test suites, empirical studies indicate that convergence behaviour may be inconsistent across problem types, particularly when the balance between its phases does not match the structure of the landscape [11]. Harris Hawks Optimisation (HHO) mimics cooperative pursuit strategies and is frequently reported to converge quickly during exploitation; however, overly aggressive transition to exploitation can cause unstable steps or overshooting in smoother functions and may reduce robustness when the exploration period is insufficient [14]. The Butterfly Optimisation Algorithm (BOA) relies on fragrance-based attraction to control exploration, and its global search ability is often adequate at moderate dimensions; yet it can still become trapped in local optima in higher-dimensional or highly multimodal settings without additional diversification operators [15].
Table 1 summarises these representative recent optimisers and highlights a consistent conclusion: despite diverse metaphors, many algorithms exhibit similar trade-offs, most notably the following: (i) slow exploitation near optima, (ii) premature convergence due to diversity loss, and (iii) sensitivity to control parameters and phase schedules. These recurring weaknesses are a key reason why hybridisation has become a dominant research direction [4,12].

2.2. Hybridisation and Enhancement Trends in Recent Studies

Because no single optimiser consistently dominates across all landscapes and constraint regimes, recent publications increasingly focus on hybridisation and enhancement operators rather than proposing entirely new metaphors [4,12]. Hybrid methods commonly combine the following: (i) an exploration-oriented operator that can traverse the space quickly and maintain diversity, and (ii) an exploitation-oriented operator that refines solutions near promising regions. Reviews document many successful pairings, such as PSO with GA, DE with PSO, and GWO combined with SCA, where the hybrid is designed to capitalise on complementary strengths and offset known weaknesses [7,12].
Beyond simply concatenating two algorithms, recent hybrids often incorporate mechanisms such as adaptive parameter schedules, diversity-preserving mutation, opposition-based learning, chaotic maps, or embedded local search to stabilise performance across problem classes [12,21]. These enhancements aim to address the most frequently observed failure modes in modern metaheuristics: early loss of diversity, stagnation in later iterations, and sensitivity to the chosen control parameters. However, increased algorithmic complexity can also raise implementation overhead and make fair comparison more difficult if parameter tuning is not standardised [4].

2.3. Positioning and Motivation for the Proposed Hybrid

The above studies indicate that modern optimisers often excel in either exploration or exploitation but rarely maintain an effective balance across all stages of the search. This motivates hybrids that explicitly interleave complementary operators in a unified population to retain diversity early while strengthening refinement later [12]. In this work, we follow this established trend but target a specific, under-explored combination: the Sine–Cosine Algorithm (SCA), which is well-suited to broad exploratory moves through its oscillatory update rule, and the Artificial Hummingbird Algorithm (AHA), which provides structured foraging-style intensification around promising regions. Our proposed AHA–SCA hybrid is designed to reduce premature convergence and improve final accuracy by alternating these complementary behaviours within a single population, and it is evaluated on recent CEC benchmark suites and constrained engineering design problems under consistent experimental settings.
Metaheuristic algorithms are employed to find optimal solutions for various problems. These algorithms typically involve a search process using multiple agents that follows a set of rules or mathematical equations through multiple iterations. The process continues until a solution meeting a predefined criterion is found. This approach contrasts with exact methods that provide optimal solutions but often require high computational time. Metaheuristics, being a level above heuristic methods, offer a balance between computational cost and solution quality, making them particularly effective for real-world problems [22].
Two fundamental aspects of metaheuristic algorithms are exploration (diversification) and exploitation (intensification). Exploration involves broadening the search to unvisited areas, while exploitation focuses on areas with high-quality solutions. Efficient algorithms balance these two features to effectively navigate the search space. Additionally, metaheuristic algorithms can be categorised based on their search approach: local search methods, which are more exploitative, and global search methods, which are more explorative. Hybrid methods combine these approaches to enhance performance [21].
In the field of single-objective optimisation, the challenges and methodologies are intricate and multifaceted. Single-objective optimisation problems, characterised by their focus on optimising a single, clearly defined metric, are ubiquitous across various domains such as engineering, finance, logistics, and more. They necessitate the application of diverse optimisation techniques to efficiently navigate the complexity of these problems. Metaheuristics have gained significant traction in this context due to their simplicity and robustness, offering solutions to a wide array of optimisation problems across different fields [23].
The essence of optimisation lies in maximising outcomes with limited resources by selecting the best solution from a set of available options. This process inherently involves decision variables, objective functions, and constraints, all of which are crucial in formulating an optimisation problem. Single-objective optimisation problems are distinct in that they focus on one specific objective, either maximising or minimising it. This contrasts with multi-objective optimisation problems, where multiple conflicting objectives must be satisfied simultaneously, adding layers of complexity to the optimisation task [24].
Metaheuristics play a pivotal role in addressing these challenges. They are higher-level heuristic methodologies that guide the search for optimal solutions through adaptive and intelligent computational strategies. These techniques, which include local and global search metaheuristics, vary in their approach to problem-solving [12]. Local search methods are more exploitative, focusing on refining current solutions, whereas global search methods are explorative, searching broadly across the solution space. The choice between single-solution (trajectory) and population-based metaheuristics further influences the optimisation strategy, where the former focuses on evolving a single solution and the latter manipulates a set of solutions to explore and exploit the search space [25].

2.4. Overview of the Artificial Hummingbird Algorithm (AHA)

The Artificial Hummingbird Algorithm (AHA) is a bio-inspired optimisation technique modelled after the foraging behaviour of hummingbirds. These birds are known for their extraordinary flight capabilities and efficiency in locating and extracting nectar from flowers. The AHA simulates this behaviour to solve optimisation problems, focusing on efficient local search strategies that adapt based on past experiences and environmental cues. This algorithm is particularly noted for its ability to perform intensive local exploitation and its adaptability, which enables it to fine-tune solutions in complex landscapes.
The Artificial Hummingbird Algorithm (AHA) is designed to emulate the foraging behaviour of hummingbirds through a structured series of computational steps. Each step corresponds to a fundamental aspect of how these birds search for and exploit nectar sources in their natural habitat. The key components of the AHA include initialisation, nectar quality assessment, exploration and exploitation, and decision making. These components collectively ensure that the algorithm can effectively navigate and optimise complex search spaces. By understanding each of these components in detail, we can appreciate how the AHA achieves a balance between exploration and exploitation, leading to robust and efficient optimisation performance [26].
The Artificial Hummingbird Algorithm (AHA) operates by mimicking three primary aspects of hummingbird foraging: exploitative foraging, which involves direct flight to known food sources; explorative foraging, which entails searching for new resources over a wider area; and traplining, a routine path that includes visits to flowers at intervals matching their nectar regeneration times, thereby optimising the foraging path. These behaviours are translated into algorithmic steps that adjust the positions of potential solutions based on their relative quality and the distribution of other solutions in the search space.

Mathematical Foundation of Artificial Hummingbird Algorithm (AHA)

The Artificial Hummingbird Algorithm (AHA) emulates the foraging behaviour of hummingbirds, capturing their dynamic movement patterns [26]. This behaviour is mathematically represented by the update rule provided in Equation (1):
$$X_i^{t+1} = X_i^{t} + \alpha \cdot \left(X_{\mathrm{best}}^{t} - X_i^{t}\right) + \beta \cdot \operatorname{randn}() \cdot \left(X_j^{t} - X_i^{t}\right) \quad (1)$$
where
  • $X_i^t$ and $X_j^t$ are the current positions of the i-th and j-th solutions,
  • $X_{\mathrm{best}}^t$ is the best solution found so far,
  • $\alpha$ and $\beta$ are coefficients that control the influence of the best known solution and a randomly chosen solution j, respectively,
  • $\operatorname{randn}()$ generates a random number from a normal distribution, adding a stochastic element to the exploration.
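Equation (1) can be sketched directly in NumPy. The coefficient values `alpha` and `beta` and the seeded generator below are illustrative choices on our part, since the text treats them as tunable.

```python
import numpy as np

def aha_update(x_i, x_j, x_best, alpha=0.5, beta=0.5, rng=None):
    """One AHA-style position update following Equation (1).

    alpha weights attraction to the best solution; beta scales a normally
    distributed step toward a randomly chosen peer x_j.  Both values here
    are assumed, not taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    return x_i + alpha * (x_best - x_i) + beta * rng.standard_normal() * (x_j - x_i)

rng = np.random.default_rng(0)
x_new = aha_update(np.zeros(3), np.ones(3), np.full(3, 2.0), rng=rng)
print(x_new.shape)  # (3,): one updated 3-dimensional candidate
```

With `beta = 0` the update reduces to a deterministic step of length `alpha` toward the best solution, which makes the role of each term easy to verify.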

2.5. Overview of the Sine–Cosine Algorithm (SCA)

The Sine–Cosine Algorithm (SCA) is a population-based optimisation technique introduced by Seyedali Mirjalili in 2016, designed to solve complex optimisation problems by leveraging the properties of sine and cosine functions for exploring and exploiting the search space. The algorithm initialises a diverse population of candidate solutions and iteratively updates their positions using sine and cosine mathematical functions, guided by random and systematic parameters to ensure thorough exploration and refinement of the search space. Each candidate’s fitness is evaluated using a problem-specific function, guiding the search towards optimal solutions [27]. SCA’s simplicity, adaptability, and effective balance between exploration and exploitation make it suitable for various applications, including engineering design, machine learning, resource allocation, and function optimisation. Despite its advantages, such as ease of implementation and versatility, SCA’s performance can be sensitive to parameter settings and may incur high computational costs for large-scale problems.

Mathematical Foundation of Sine–Cosine Algorithm (SCA)

The Sine–Cosine Algorithm (SCA) employs trigonometric functions to navigate the search space towards optimal solutions [27]. The fundamental equation that guides the SCA in adjusting the positions of potential solutions is represented in Equation (2), where the sine and cosine terms drive exploration and exploitation phases effectively.
$$X_{i,d}^{t+1} = X_{i,d}^{t} + r_1 \cdot a_t \cdot \left\{\sin(r_2)\ \text{or}\ \cos(r_2)\right\} \cdot \left| r_3 \cdot X_{\mathrm{best},d}^{t} - X_{i,d}^{t} \right| \quad (2)$$
where
  • $X_{i,d}^t$ is the position of the i-th solution in the d-th dimension at iteration t,
  • $X_{\mathrm{best},d}^t$ is the best solution found so far in the d-th dimension,
  • $r_1$, $r_2$, and $r_3$ are random control variables; in the original SCA, $r_2$ is drawn from $[0, 2\pi]$, $r_3$ from $[0, 2]$, and the sine/cosine switch variable from $[0, 1]$,
  • $a_t$ is a control parameter that decreases over time, influencing the balance between exploration and exploitation; it typically decreases from a positive value towards zero as iterations proceed.
The choice between sine and cosine functions is made randomly, allowing the algorithm to either intensify the search around the global best (using cosine) or diversify the search into new regions (using sine).
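A minimal per-dimension sketch of Equation (2) follows. The `step` argument stands in for the decaying product $r_1 \cdot a_t$, and the sampling ranges follow the original SCA; both are assumptions on our part rather than the paper's reference code.

```python
import numpy as np

def sca_step(x_i, x_best, step, rng):
    """One SCA position update in the spirit of Equation (2).

    `step` plays the role of the decaying amplitude r1 * a_t; r2 sets the
    angle of the wave, r3 weights attraction to the best solution, and a
    uniform draw switches between sine and cosine per dimension.
    """
    r2 = rng.uniform(0.0, 2.0 * np.pi, size=x_i.shape)
    r3 = rng.uniform(0.0, 2.0, size=x_i.shape)
    wave = np.where(rng.random(x_i.shape) < 0.5, np.sin(r2), np.cos(r2))
    return x_i + step * wave * np.abs(r3 * x_best - x_i)

rng = np.random.default_rng(0)
x = sca_step(np.zeros(5), np.ones(5), step=1.0, rng=rng)
print(x.shape)  # (5,): one perturbed candidate
```

Setting `step = 0` freezes the candidate, which mirrors how the decaying amplitude suppresses movement as iterations end.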

3. Proposed Hybrid AHA–SCA Optimiser Algorithm

Metaheuristic algorithms have seen widespread adoption for tackling complex optimisation problems due to their ability to escape local minima and handle non-convex, high-dimensional search spaces. Classical approaches such as Genetic Algorithms, Simulated Annealing, and Particle Swarm Optimisation laid the foundations for population-based search. In the past decade, numerous nature-inspired algorithms have been proposed, including the Slime Mould Algorithm, Tunicate Swarm Algorithm, and Equilibrium Optimiser, each drawing on distinct biological or physical metaphors to design update rules. Comparative studies report that no single algorithm consistently outperforms others across all problem types, motivating the design of hybrids that combine complementary behaviours [4,28].
A number of recent works propose hybrid metaheuristics that alternate between exploration and exploitation phases. For instance, Ahmadianfar et al. coupled the Runge–Kutta Optimiser with Particle Swarm Optimisation to accelerate convergence, while Faramarzi et al. integrated Equilibrium Optimisation with Lévy flights for improved exploration. Sadiq et al. developed Manta Ray Foraging with local refinements to solve engineering designs, and Hashim et al. combined Archimedes Optimisation with chaos-based searches. These hybrids generally report faster convergence and better solution quality than their constituent algorithms but often at the expense of additional parameters or increased complexity. Very few studies have explored tight integration between Sine–Cosine and Hummingbird algorithms. The former excels at global exploration using oscillatory motions, whereas the latter offers effective local exploitation via guided foraging. This gap motivates the current study [9,10,11,29].

3.1. Inspiration

Hybrid SCA–AHA integrates two distinct sources of inspiration. The Sine–Cosine Algorithm uses the oscillatory behaviour of sine and cosine functions to update candidate solutions. As a solution approaches an optimum, the amplitude of the trigonometric perturbations decreases, enabling a smooth transition from exploration to exploitation. Figure 1 depicts a sine and cosine wave guiding a particle toward the global best. The particle’s position is adjusted using random angles and decaying amplitudes, producing a spiral-like trajectory. The SCA update mechanism was proposed by Mirjalili in 2016, and it employs random parameters $r_1$, $r_2$, $r_3$, $r_4$ to modulate movement and switch between sine and cosine functions.
Zhao et al. [26] introduced AHA as a bio-inspired optimiser: hummingbirds perform three types of flight—axial, diagonal, and omnidirectional—and employ guided foraging, territorial foraging, and migrating foraging. A visit table records the length of time a hummingbird has not visited each food source, influencing its movement decisions. Figure 1b shows a stylised hummingbird approaching a flower using these flight patterns. In the hybrid algorithm, SCA operations alternate with AHA operations, leveraging both wave-based exploration and hummingbird-inspired exploitation.

3.2. Mathematical Model

Consider a population of N individuals (solutions) in an n-dimensional search space. Each individual’s position is denoted by $x_i = (x_{i,1}, \ldots, x_{i,n})$ and has an associated fitness value $f(x_i) = \mathrm{fobj}(x_i)$. The best solution found so far is $x_{\mathrm{best}}$. The algorithm runs for T iterations, alternating between SCA and AHA operations depending on whether the iteration index t is even or odd.

3.2.1. Sine–Cosine Update (Even Iterations)

The SCA phase updates each individual using trigonometric functions. A scaling parameter $r_1$ linearly decreases from a constant a to zero to control the step size:
$$r_1(t) = a\left(1 - \frac{t}{T}\right), \quad (3)$$
where a is typically set to 2. For each individual i and each dimension j, three additional random variables are drawn: $r_2 \sim U(0, 2\pi)$, $r_3 \sim U(0, 2)$, and $r_4 \sim U(0, 1)$. The updated position is computed as
$$x_{i,j} \leftarrow x_{i,j} + r_1(t) \times \begin{cases} \sin(r_2)\left|r_3\, x_{\mathrm{best},j} - x_{i,j}\right|, & \text{if } r_4 < 0.5,\\ \cos(r_2)\left|r_3\, x_{\mathrm{best},j} - x_{i,j}\right|, & \text{otherwise}, \end{cases} \quad (4)$$
where $|\cdot|$ denotes the absolute value. The sine function encourages movement toward the best solution, while the cosine function may move away from it, promoting exploration. The probability of selecting sine or cosine is governed by $r_4$. After updating all dimensions, the new position is clipped to the domain bounds $[lb, ub]$.
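The even-iteration phase can be sketched in NumPy as follows; this is our reading of Equations (3) and (4) with vectorised per-dimension draws, not the authors' reference implementation.

```python
import numpy as np

def sca_phase(pop, x_best, t, T, lb, ub, a=2.0, rng=None):
    """Even-iteration SCA phase: Equation (3) schedule, Equation (4) update,
    then clipping to [lb, ub].  `pop` is an (N, n) array of candidates."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = a * (1.0 - t / T)                               # Equation (3)
    r2 = rng.uniform(0.0, 2.0 * np.pi, pop.shape)
    r3 = rng.uniform(0.0, 2.0, pop.shape)
    r4 = rng.random(pop.shape)
    wave = np.where(r4 < 0.5, np.sin(r2), np.cos(r2))    # sine or cosine branch
    new_pop = pop + r1 * wave * np.abs(r3 * x_best - pop)  # Equation (4)
    return np.clip(new_pop, lb, ub)                      # respect domain bounds

rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(6, 3))
moved = sca_phase(pop, np.zeros(3), t=10, T=100, lb=-5.0, ub=5.0, rng=rng)
print(moved.shape)  # (6, 3): the whole population moves at once
```

At `t = T` the amplitude $r_1(t)$ reaches zero and the phase leaves the population unchanged, matching the intended transition from exploration to exploitation.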

3.2.2. Hummingbird Flights and Foraging (Odd Iterations)

In odd iterations, the algorithm simulates hummingbird behaviours. Each hummingbird selects a flight pattern (axial, diagonal, or omnidirectional) by constructing a direction vector $d_i \in \{0, 1\}^n$, where ones indicate active dimensions. As described by Zhao et al., these flight skills allow the hummingbird to move along a single axis, along a diagonal involving multiple axes, or in all dimensions.
Guided foraging.
With probability 0.5, a hummingbird performs guided foraging. It identifies a target food source, indexed by k, having the maximum unvisited time in the visit table. The new position is generated by
$$x_i' = x_k + \epsilon\, d_i \odot \left(x_i - x_k\right), \quad (5)$$
where $\epsilon \sim N(0, \sigma^2)$ is normally distributed noise and $\odot$ denotes elementwise multiplication. If the new fitness $f(x_i')$ improves upon $f(x_i)$, the hummingbird moves to the new position; otherwise, its visit table entry is increased.
Territorial foraging.
Otherwise, the hummingbird performs territorial foraging around its current position:
$$x_i' = x_i + \epsilon\, d_i \odot x_i, \quad (6)$$
with $\epsilon \sim N(0, \sigma^2)$ as before. Only those components selected by $d_i$ are perturbed. Again, the move is accepted if it yields a better fitness. After each update, the visit table is modified: each entry increases by one (ageing), and the entry corresponding to the chosen food source is reset to zero.
The AHA also supports a migrating foraging strategy in which the population moves collectively to new regions. In our hybrid implementation, migrating foraging is omitted because SCA already performs global exploration.

3.3. Pseudocode

Algorithm 1 presents the pseudocode for the hybrid SCA–AHA. At each iteration, the algorithm decides which phase to execute based on the parity of $t$. During the SCA phase, it computes $r_1$ from Equation (3) and updates each dimension using Equation (4). During the AHA phase, it selects a direction vector, performs guided or territorial foraging according to Equation (5) or (6), updates the visit table, and tracks the best solution.
Algorithm 1 Hybrid Sine–Cosine and Artificial Hummingbird Optimiser
Computers 15 00035 i001
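Because Algorithm 1 is typeset as an image, a minimal runnable Python sketch of the alternating loop may be helpful. It deliberately simplifies the full method (the visit table is reduced to a per-individual staleness counter, $\sigma = 1$ is assumed, and all parameter choices are illustrative), so it should be read as a structural illustration rather than the authors' MATLAB code:

```python
import numpy as np

def hybrid_sca_aha(f, lb, ub, dim, pop=30, iters=200, a=2.0, seed=0):
    """Structural sketch: SCA moves on even iterations, simplified
    hummingbird foraging on odd iterations."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop, dim))
    fit = np.apply_along_axis(f, 1, X)
    visit = np.zeros(pop)                    # simplified unvisited-time table
    best = X[np.argmin(fit)].copy()
    fbest = float(fit.min())
    for t in range(iters):
        if t % 2 == 0:                       # even iteration: SCA exploration
            r1 = a - t * a / iters           # linearly decaying amplitude
            r2 = rng.uniform(0.0, 2.0 * np.pi, (pop, dim))
            r3 = rng.uniform(0.0, 2.0, (pop, dim))
            r4 = rng.uniform(size=(pop, dim))
            dist = np.abs(r3 * best - X)
            X = np.clip(X + np.where(r4 < 0.5,
                                     r1 * np.sin(r2) * dist,
                                     r1 * np.cos(r2) * dist), lb, ub)
            fit = np.apply_along_axis(f, 1, X)
            visit += 1
        else:                                # odd iteration: AHA exploitation
            for i in range(pop):
                d = (rng.random(dim) < rng.random()).astype(float)
                eps = rng.normal()
                if rng.random() < 0.5:       # guided: toward stalest source
                    k = int(np.argmax(visit))
                    cand = np.clip(X[k] + eps * d * (X[i] - X[k]), lb, ub)
                else:                        # territorial: around itself
                    cand = np.clip(X[i] + eps * d * X[i], lb, ub)
                fc = f(cand)
                visit[i] += 1                # ageing
                if fc < fit[i]:              # greedy acceptance
                    X[i], fit[i] = cand, fc
                    visit[i] = 0.0           # reset the refreshed source
        if fit.min() < fbest:
            fbest = float(fit.min())
            best = X[np.argmin(fit)].copy()
    return best, fbest
```

Run on a sphere function, the loop exhibits the intended behaviour: large early SCA oscillations followed by monotone greedy refinement in the AHA phases.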

3.4. Movement Strategy

The hybrid movement strategy alternates between wave-driven steps and hummingbird flights. Figure 2 shows how these strategies complement each other. During even iterations, the SCA step (panel a) moves a particle around the best solution in a decaying spiral. The amplitude of the oscillations is controlled by $r_1$ (Equation (3)), and the choice between sine and cosine introduces directional diversity (Equation (4)).

In odd iterations, the AHA step (panel b) selects a flight pattern at random: axial flight perturbs a single dimension, diagonal flight perturbs several dimensions, and omnidirectional flight perturbs all dimensions. The hummingbird chooses between guided foraging (flying toward a food source identified by the visit table) and territorial foraging (exploring locally). The noise term $\epsilon$ in Equations (5) and (6) ensures stochasticity.

3.5. Exploration and Exploitation Behaviour

Hybrid SCA–AHA alternates between exploration and exploitation. The SCA phase emphasises exploration early on: the parameter $r_1$ starts large and decreases linearly (Equation (3)), causing large oscillatory movements that sample far from the current best solution. The random variables $r_2$, $r_3$, and $r_4$ further diversify the search, and the use of both sine and cosine functions prevents stagnation. As iterations progress, $r_1$ shrinks and the movement concentrates near $x_{\mathrm{best}}$, increasing exploitation.
The AHA phase predominantly exploits local regions by flying toward promising food sources (guided foraging) or performing territorial searches around the current position. However, the random selection of flight patterns and the inclusion of diagonal and omnidirectional flights enable exploration of multiple dimensions simultaneously. The visit table encourages diversification by increasing the unvisited time of food sources, making them more attractive targets later.

3.6. Complexity Analysis

Let $c_f$ denote the cost of evaluating the objective function. During the SCA phase, each of the $N$ individuals updates $n$ dimensions and performs one fitness evaluation, so the phase requires $O(Nn + Nc_f)$ operations per iteration. During the AHA phase, each individual constructs a direction vector (cost $O(n)$), performs one or two position updates and evaluations, and adjusts the visit table; its complexity is therefore also $O(Nn + Nc_f)$. Since the algorithm alternates between the two phases, the worst-case per-iteration complexity is
$$O(Nn + Nc_f).$$
Over $T$ iterations, the total complexity is $O(TNn + TNc_f)$. Memory consumption is dominated by the population ($O(Nn)$) and the visit table ($O(N^2)$). The quadratic visit table is affordable for the moderate population sizes used here, but for very large populations it may become prohibitive; in such cases, limiting $N$ or adopting more memory-efficient data structures (for example, sparse or hashed visit tables) would be necessary to maintain scalability.

4. Experimental Setup and Benchmark Functions

4.1. Implementation Details and Experimental Setup

The AHA–SCA algorithm and all comparator methods were implemented in MATLAB (R2024a, The MathWorks Inc., Natick, MA, USA) and executed on a workstation equipped with an Intel Core i7-10700F processor (Intel Corporation, Santa Clara, CA, USA) at 2.90 GHz and 32 GB of RAM, running Windows 10 (Microsoft Corporation, Redmond, WA, USA). Each algorithm employed a population of 50 individuals. The termination criterion was a maximum of 1000 function evaluations per run, reflecting a modest computational budget typical of engineering optimisation studies. To account for stochastic variability, we conducted 30 independent runs for each benchmark function and engineering design problem. For each run, the best objective value was recorded, and across runs the results were summarised using the mean and standard deviation. Statistical significance of performance differences between AHA–SCA and competing algorithms was assessed using the Wilcoxon rank-sum test at a 5% significance level. All source code will be made available to facilitate reproducibility.

4.2. Benchmark Functions

The CEC benchmark suites—namely, CEC2014, CEC2017, and CEC2022—serve as standardised and comprehensive tools for evaluating the performance of optimisation algorithms across a wide spectrum of challenges. These suites include unimodal functions for testing convergence speed and precision, multimodal functions for assessing exploration capabilities and avoidance of local optima, and both separable and non-separable functions to examine variable interaction handling. Each version builds upon its predecessor by introducing increasingly complex problem landscapes, including hybrid and composite functions that simulate real-world optimisation environments. Specifically, the CEC2022 suite emphasises diversity in function types and dimensionality, offering a broad testbed for scalability and robustness assessment. Meanwhile, the CEC2017 and CEC2014 suites bring valuable insights into adaptability, convergence behaviour, and resilience under noisy or high-dimensional conditions. Together, these benchmarks foster the development of more capable and efficient algorithms for both academic research and practical industrial applications.

5. Parameterisation Framework for Benchmarking Optimisation Algorithms

For the effective benchmarking of optimisation algorithms in the CEC competitions of 2014, 2017, and 2022, the establishment of uniform parameters is indispensable. These parameters ensure a standardised environment that is crucial for the equitable evaluation of various evolutionary algorithms’ performances. Detailed numerical results (formerly Table 1, Table 2, Table 3, Table 4 and Table 5) are provided in the Appendix; in the main text we focus on key statistics and visual comparisons.
The selection of a 30-individual population size and a ten-dimensional search space strikes a practical balance between computational efficiency and the complexity required for meaningful testing. A maximum of 1000 function evaluations is adopted to reflect scenarios with limited evaluation budgets; this relatively small number gives the algorithms an opportunity to exhibit early convergence without imposing excessive computational demands. Notably, many CEC benchmarks allocate at least $10{,}000 \times D$ evaluations for ten-dimensional problems, so our limit is well below that convention and may restrict long-run convergence. The choice underscores the applicability of the proposed method to resource-constrained engineering tasks; nevertheless, evaluating AHA–SCA with larger budgets remains an interesting direction for future work. A consistent search range of $[-100, 100]^D$ allows for broad and equal testing conditions across different algorithms.

5.1. Evaluating Optimisation Algorithms Through Statistical Methods

In this thorough investigation into the performance of various optimisation algorithms, we employed fundamental statistical measures: the mean, standard deviation, and Wilcoxon rank-sum test. The mean, indicative of central tendency, averages the performance outcomes across several trials, providing a comprehensive overview of expected performance levels. To complement this, the standard deviation measures the extent of variability from the average, shedding light on the consistency and robustness of the algorithms under different conditions. These statistics are indispensable for understanding the algorithms’ stability and their predictable performance.
Further, to explore the statistical relevance of differences between the performances of diverse algorithm groups, we conducted the Wilcoxon rank-sum test. This test helps to statistically validate whether observed performance discrepancies are significant, enhancing the reliability of our comparative analysis.
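For readers who wish to reproduce this analysis, the following self-contained Python sketch implements the two-sided rank-sum test via the normal approximation (without a tie-variance correction); statistical packages such as SciPy or MATLAB's `ranksum` provide more complete implementations:

```python
import math

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum (Mann-Whitney U) test via the normal
    approximation. Returns (U statistic for sample a, approximate
    two-sided p-value)."""
    n1, n2 = len(a), len(b)
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):                 # assign average ranks to ties
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2.0 + 1.0            # ranks are 1-based
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    r1 = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)
    u1 = r1 - n1 * (n1 + 1) / 2.0            # U statistic for sample a
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u1 - mu) / sigma
    # two-sided p-value from the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u1, p
```

On 30-run samples such as those used here, the normal approximation is accurate; a "+" would be recorded when $p < 0.05$ and the hybrid's ranks are favourable.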

5.2. Review of Comparative Algorithms

In this analysis, we conducted a rigorous evaluation of a variety of optimisation algorithms to determine their effectiveness. The assessment encompassed a broad range of algorithms including the following: Sea-Horse Optimiser (SHO) [30], Sine–Cosine Algorithm (SCA) [31], Horned Lizard Optimisation Algorithm (HLOA) [32], and Butterfly Optimisation Algorithm (BOA) [15]; Moth-Flame Optimisation (MFO) [33], Whale Optimisation Algorithm (WOA) [34], Multi-Verse Optimiser (MVO) [35], and Reptile Search Algorithm (RSA) [36]; and Geometric Mean Optimiser (GMO) [37], Grey Wolf Optimiser (GWO) [38], Arithmetic Optimisation Algorithm (AOA) [39], Golden Jackal Optimisation (GJO) [40], Mountain Gazelle Optimiser (MGO) [41], Flow Direction Algorithm (FDA) [42], and Giant Trevally Optimiser (GTO) [43].

6. Results Analysis

6.1. Results over CEC2017

The Hybrid Artificial Hummingbird Algorithm with Sine–Cosine Algorithm (SCA–AHA) exhibits strong performance across all 30 functions of the IEEE CEC2017 benchmark. To summarise these results concisely, we provide a performance overview in Table 2. In addition to each algorithm’s average rank and the number of functions it wins (best mean value), the table also lists how many times the method finishes within the top three and its average standard deviation, offering a clearer view of both competitiveness and robustness. Detailed per-function statistics have been moved to the Appendix A.
As shown in the summary, SCA–AHA consistently demonstrates strong performance, particularly excelling on functions such as F2, F3, and F5, where it not only matches but often surpasses other leading algorithms such as Harris Hawks Optimisation (HHO) and the Grey Wolf Optimiser (GWO). This performance is indicative of SCA–AHA's balanced mechanism of exploration and exploitation, which appears particularly effective in these scenarios. The low variability in the ranks reflects the reliability and consistency of the hybrid across multiple runs.
The SCA–AHA continues to excel, particularly in Functions F16 and F21, where it achieves significantly lower mean values compared to other algorithms. Although the detailed statistics are provided in the Appendix A, this observation underscores the hybrid algorithm’s strategic integration of the exploratory aspects of the Sine–Cosine Algorithm and the exploitative efficiency of the Artificial Hummingbird Algorithm. Such synergy enhances its suitability for specific optimisation tasks, where this combined approach proves advantageous.

6.2. Results Analysis over CEC2014

The performance of SCA–AHA on the CEC2014 benchmark is summarised in Table 3. Across the 30 functions of this suite, SCA–AHA achieves the best average rank and wins the majority of functions. To provide further insight, we also report how often each method finishes in the top three and its average standard deviation, underscoring the hybrid's balance between exploration, exploitation, and robustness. The full per-function results tables (Tables 8 and 9) have been relocated to Appendix A.
In the second half of the CEC2014 suite (functions F16–F30), the strengths of SCA–AHA are further highlighted by its ability to handle complex, high-dimensional search spaces effectively. Its outstanding performance on select functions underscores the sophisticated integration of exploratory and exploitative strategies, enabling it to consistently reach competitive solutions. The precision and reliability observed here are products of the algorithm’s ability to leverage diverse strategies from both constituent algorithms, ensuring robust performance across varying problem settings.

6.3. Results Analysis over CEC2022

The CEC2022 benchmark suite presents a diverse set of unimodal, multimodal, hybrid, and composition functions. SCA–AHA attains one of the best overall ranks, winning a substantial number of functions, appearing frequently among the top three, and demonstrating a favourable balance between search diversity and convergence. Detailed per-function statistics are available in Appendix A. While SCA–AHA excels on challenging functions such as F3 and maintains competitive stability across F5 and F6, some algorithms outperform it on a few functions; nevertheless, its average rank remains among the lowest of the tested methods. Table 4 summarises performance across all 12 CEC2022 functions: the average rank is computed over the suite using mean objective values (lower is better), and wins counts the number of functions for which the algorithm achieves the best mean value. Additional columns report how often each method finishes among the top three (based on mean values) and its average standard deviation (the mean of per-function STD values; lower indicates greater robustness).

6.4. The Wilcoxon Rank-Sum (Mann–Whitney U) Test Results

The Wilcoxon rank-sum (Mann–Whitney U) test results across the CEC2014, CEC2017, and CEC2022 datasets (see Table A6, Table A7 and Table A8) consistently support the superiority of the proposed SCA–AHA optimiser over most competitors at the 0.05 significance level (with "+", "−", and "=" denoting statistically better, worse, and indistinguishable performance, respectively). On CEC2017, SCA–AHA achieves complete dominance over several methods, winning all 30/30 functions against SHIO, WOA, HHO, and SHO and showing near-complete superiority over BOA (29+, 1=), OHO (27+, 3=), SCA (27+, 1−, 2=), GJO (22+, 2−, 6=), and RSA (24+, 2−, 4=); however, it is outperformed by GWO on this dataset (3+, 23−, 4=) and exhibits a mixed outcome against AOA (13+, 11−, 6=). On CEC2014, SCA–AHA demonstrates overwhelming robustness, recording 30/30 significant wins against all optimisers except GWO, where it still secures 28+ with only 2 ties, indicating no statistically significant losses. On CEC2022, SCA–AHA again yields strong evidence of improvement, winning all 12/12 functions against WOA, BOA, HHO, OHO, GJO, and SHO while maintaining clear advantages over SCA (11+, 1=), RSA (11+, 1=), SHIO (10+, 2=), and GWO (10+, 2=); the only notable exception is AOA, against which SCA–AHA records two significant losses (10+, 2−) yet remains superior overall.

6.5. Visual Results over CEC 2022

As shown in Figure 3, the convergence curves for Functions F1 through F6 display a variety of behaviours that collectively demonstrate the optimisation algorithm’s capabilities in handling different landscape complexities. In Function F1, the rapid initial drop followed by a plateau suggests a quick finding of a near-optimal solution within a simple landscape. Function F2 shows a sharp decline and early stabilisation, indicating efficient exploitation of a less complex landscape. Function F3’s steady, step-wise improvements suggest effective navigation through a landscape with multiple local optima, highlighting the algorithm’s balanced exploration and exploitation capabilities. In Function F4, the significant improvements followed by periods of stability and then further improvements suggest the algorithm’s ability to overcome barriers or plateaus in complex search landscapes. Function F5’s curve, with its rapid initial improvement and subsequent fine-tuning, points to the algorithm’s proficiency in refining solutions within complex environments. Finally, Function F6 features an extremely sharp initial descent, illustrating the algorithm’s effectiveness in quickly homing in on promising regions, although it indicates some struggle in achieving incremental improvements later, possibly due to the deceptive nature of the landscape.
The convergence curves in Figure 4 for Functions F7–F12 showcase SCA–AHA's dynamic optimisation capabilities across different scenarios within the CEC2022 benchmarks. Function F7 features a steep initial drop in the best value obtained, quickly finding an optimal region and making incremental improvements thereafter, which demonstrates efficient exploitation after a robust initial search. For Function F8, the curve shows an early significant decrease followed by a prolonged plateau, suggesting that while the algorithm locates near-optimal solutions early, further optimisation is limited by the landscape's complexity. Function F9's moderate and steady decline indicates a landscape conducive to consistent incremental improvements, highlighting the algorithm's ability to exploit the search space effectively. The curve for Function F10 reveals an abrupt drop to a low plateau, indicating swift identification and maintenance of proximity to the global optimum. Function F11 presents a sharp initial fall followed by slow, steady optimisation, suggesting adeptness in fine-tuning within intricate landscapes. Collectively, the curves for F7–F12 illustrate SCA–AHA's adaptability and effectiveness across a variety of problem settings, from rapid descent to steady incremental improvement, underscoring its utility in tackling complex optimisation challenges efficiently.

7. AHASCA Applications in Engineering Design Problems

This section evaluates the proficiency and effectiveness of the SCA–AHA algorithm in addressing engineering-related challenges, particularly constrained optimisation problems. The assessment focuses on six specific engineering design cases: welded beam design, pressure vessel configuration, rolling-element bearing formulation, cantilever beam structure, the tension/compression spring mechanism, and gear train assembly. These examples are chosen to showcase the algorithm's applicability, versatility, and robustness in solving practical engineering optimisation problems.

7.1. Pressure Vessel Design Problem

A pressure vessel is a closed container designed to hold gases or liquids at a pressure substantially above atmospheric levels, as illustrated in Figure 5. These vessels are typically cylindrical with hemispherical ends and are fundamental to numerous engineering applications. The challenge of optimising pressure vessel designs to enhance efficiency and safety while minimising costs was first articulated by Sandgren [44] and continues to be a significant area of research. The construction of the pressure vessel adheres to the American Society of Mechanical Engineers (ASME) boiler and pressure vessel code, focusing on a vessel designed to operate at 3000 psi and hold a minimum volume of 750 ft³.
The optimisation variables include the shell thickness ($T_s$), the head thickness ($T_h$), the inner radius ($R$), and the length of the cylindrical section excluding the heads ($L$), with the thicknesses constrained to multiples of 0.0625 inches. This optimisation problem is mathematically formulated as follows [45]:
$$z = [T_s, T_h, R, L]$$
The objective function to be minimised is given by
$$f(z) = 0.6224\, T_s R L + 1.7781\, T_h R^2 + 3.1661\, T_s^2 L + 19.84\, T_s^2 R$$
The constraints ensuring structural integrity and compliance with ASME standards are
$$g_1(z) = -T_s + 0.0193\,R \le 0,$$
$$g_2(z) = -T_h + 0.00954\,R \le 0,$$
$$g_3(z) = -\pi R^2 L - \tfrac{4}{3}\pi R^3 + 750 \times 1728 \le 0,$$
$$g_4(z) = L - 240 \le 0.$$
The variable bounds are set as
$$0.0625 \le T_s, T_h \le 99 \times 0.0625, \qquad 10 \le R, L \le 200.$$
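A penalised evaluation of this model can be sketched as follows. The quadratic penalty weight `w` is an illustrative assumption rather than a value from the paper, the last objective term follows the standard Sandgren formulation ($19.84\,T_s^2 R$), and the volume constant is $750 \times 1728$ cubic inches:

```python
import math

def pressure_vessel_cost(z, w=1e6):
    """Penalised pressure-vessel objective (sketch); w is an assumption."""
    Ts, Th, R, L = z
    cost = (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)
    g = [
        -Ts + 0.0193 * R,                    # minimum shell thickness
        -Th + 0.00954 * R,                   # minimum head thickness
        -math.pi * R ** 2 * L - (4.0 / 3.0) * math.pi * R ** 3
            + 750.0 * 1728.0,                # minimum enclosed volume
        L - 240.0,                           # maximum cylinder length
    ]
    # quadratic hinge penalty on violated constraints
    return cost + w * sum(max(0.0, gi) ** 2 for gi in g)
```

For a feasible design all hinge terms vanish and the function returns the raw cost; any violation is penalised heavily, steering the search back to the feasible region.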
AHASCA exhibits exemplary performance in optimising the pressure vessel design problem, as demonstrated by the statistical results presented in Table 5. AHASCA achieves minimum, mean, and maximum values of 5885.333, with an extraordinarily low standard deviation of $2.31 \times 10^{-5}$, indicating exceptional consistency and reliability in its optimisation capability.
This performance is significantly superior to that of other optimisers such as the Salp Swarm Algorithm (SSA) and Moth Flame Optimiser (MFO). SSA attains a minimum value of 5927.544 but a much higher mean of 6087.173 and a maximum of 6252.051, coupled with a standard deviation of 136.4673, which highlights greater variability in its results. Similarly, MFO starts at the same minimum as AHASCA but diverges considerably in its mean (6449.973) and maximum (6994.548) values, with a very high standard deviation of 453.9395, suggesting less predictability and stability.
The Sine–Cosine Algorithm (SCA) and Arithmetic Optimisation Algorithm (AOA) also show less favourable outcomes, with SCA recording a minimum of 6300.01, a mean of 6720.458, and a maximum of 7562.309, alongside a standard deviation of 455.0258. AOA, in particular, shows the least optimal performance, with the highest minimum value of 6718.876, escalating to a mean of 8684.574 and a maximum of 11,376.98, with an extremely high standard deviation of 1586.498, indicating poor consistency.
Among the other algorithms, the Artificial Hummingbird Algorithm (AHA) closely matches AHASCA in terms of minimum, mean, and maximum values, achieving similarly low variability but with a slightly higher standard deviation of 0.111128. This comparison underscores the efficiency and effectiveness of AHASCA, which not only provides optimal solutions but does so with remarkable precision and repeatability, making it a highly preferable choice for engineering applications where both accuracy and consistency are crucial.

7.2. Spring Design Optimisation Problem

The Spring Design Optimisation Problem is a well-established challenge in engineering aimed at determining the ideal dimensions of a mechanical spring to meet specific performance criteria while minimising material and production costs [46]. The critical design parameters as shown in Figure 6 include the wire diameter (d), coil diameter (D), and number of active coils (N), which significantly influence the spring’s strength, flexibility, and space occupation.
The primary goal of this optimisation is to reduce the material cost by minimising the weight or volume of the spring [47]. The design must ensure that the spring can endure operational stresses without yielding (stress failure), resist buckling under compressive loads, and function without resonating at damaging frequencies. These requirements lead to several constraints, including limitations on shear stress ( τ ), permissible deflections (y), and adequate free length to prevent solid compression and surging.
The problem is generally formulated as a nonlinear optimisation task where the objective function and constraints are expressed in terms of the design variables d, D, and N. The choice of materials, environmental conditions, and load specifications are also crucial in defining the optimal design. Various solution methods can be applied, from analytical and numerical approaches such as finite element analysis to heuristic algorithms like Genetic Algorithms and other evolutionary strategies, each providing distinct benefits depending on the design’s complexity and specific needs.
The optimisation of the spring is characterised by the following mathematical formulation:
$$f(d, D, N) = z + 10^5 \cdot \big(\operatorname{sum}(v) + \operatorname{sum}(g)\big)$$
where $z = (N + 2) \cdot D \cdot d^2$ is a cost function dependent on the design parameters.
The constraints are defined as
$$c_1:\; 1 - \frac{D^3 N}{71785\, d^4} \le 0$$
$$c_2:\; \frac{4D^2 - dD}{12566\,(D d^3 - d^4)} + \frac{1}{5108\, d^2} - 1 \le 0$$
$$c_3:\; 1 - \frac{140.45\, d}{D^2 N} \le 0$$
$$c_4:\; \frac{d + D}{1.5} - 1 \le 0$$
Penalty terms $v$ and $g$ are integrated into the objective function to penalise designs that fail to satisfy the constraints, with $v$ indicating violations and $g$ quantifying their severity.
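A compact penalised evaluator for this model might look as follows. The exact bookkeeping of the penalty terms $v$ and $g$ in the paper is not fully specified, so the simple hinge penalty below (with the paper's $10^5$ weight) is an assumption:

```python
def spring_weight(x, w=1e5):
    """Penalised tension/compression spring objective (sketch)."""
    d, D, N = x
    z = (N + 2.0) * D * d ** 2                               # material volume
    g = [
        1.0 - D ** 3 * N / (71785.0 * d ** 4),               # deflection (c1)
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
            + 1.0 / (5108.0 * d ** 2) - 1.0,                 # shear stress (c2)
        1.0 - 140.45 * d / (D ** 2 * N),                     # surge freq. (c3)
        (d + D) / 1.5 - 1.0,                                 # outer diam. (c4)
    ]
    # linear hinge penalty on violated constraints
    return z + w * sum(max(0.0, gi) for gi in g)
```

A feasible design such as $d = 0.06$, $D = 0.5$, $N = 10$ incurs no penalty and returns the raw volume $(N+2)Dd^2$, while constraint-violating designs are dominated by the penalty term.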
In the context of the Spring Design Optimisation Problem, the AHASCA has shown notable efficacy, as detailed in the results presented in Table 6. AHASCA demonstrated superior performance with the best objective score of 0.012665, optimising design variables for wire diameter, coil diameter, and number of active coils effectively. Its results closely align with other top-performing algorithms like MFO, GWO, and FDA, indicating competitive optimisation capabilities. In contrast, algorithms such as SCA, GJO, and MGO showed less efficient solutions, especially with higher coil counts, reducing material efficiency. AOA notably underperformed, with both a higher score and suboptimal variable settings. AHASCA’s low standard deviation further highlights its robustness and reliability, making it well-suited for precise engineering applications.
In the Spring Design Optimisation Problem, the AHASCA demonstrates outstanding results, as evidenced by the data provided in Table 7. AHASCA achieves not only the lowest minimum value of 0.012665 but also maintains this as both the mean and maximum, indicating an exceptionally consistent performance across multiple runs with a negligible standard deviation of $5.7 \times 10^{-10}$. This precision highlights AHASCA's ability to reliably find and replicate the optimal solution without deviation.
Compared to other optimisers, AHASCA's results are superior in terms of both the achieved minimum values and the narrow spread between the minimum, mean, and maximum values. For instance, the Salp Swarm Algorithm (SSA) and Moth Flame Optimiser (MFO) reach similar minimal scores but higher mean and maximum values, reflecting greater variability in their results. Specifically, MFO's mean of 0.013195 and maximum of 0.014389, with a standard deviation of 0.000816, illustrate less stability compared to AHASCA.
Some algorithms, such as the Grey Wolf Optimiser (GWO) and Golden Jackal Optimisation (GJO), show competitive performance with low standard deviations, but they still do not match the absolute precision of AHASCA. Meanwhile, the Horned Lizard Optimisation Algorithm (HLOA) and Multi-Verse Optimiser (MVO) exhibit much larger deviations from their minimum values, as indicated by their higher standard deviations and maximum values, which suggests a significant decrease in performance consistency.
Among the competitors, POA and the Artificial Hummingbird Algorithm (AHA) come closest to matching AHASCA in consistency and precision, with AHA demonstrating an almost identical performance in terms of stability and repeatability, as evidenced by its minuscule standard deviation of $9.75 \times 10^{-7}$.

7.3. Speed Reducer Design Problem

The Speed Reducer Design Problem as shown in Figure 7 is a classic engineering optimisation challenge aimed at finding the optimal dimensions for a speed reducer to meet specified performance requirements while minimising material and production costs [48]. Key design variables include the face width (b), module of teeth (m), number of teeth on the pinion (p), lengths of the first ( l 1 ) and second ( l 2 ) shafts between bearings, and the diameters of these shafts ( d 1 and d 2 ). Each variable critically affects the speed reducer’s performance attributes such as strength, flexibility, and space occupation [49].
The main objective of this optimisation is to minimise the total weight of the speed reducer. Additionally, the design must ensure that the speed reducer can withstand operational stresses without yielding (stress failure), prevent buckling under compressive loads, and operate without resonating at harmful frequencies [50]. These operational requirements necessitate multiple constraints in the design process, which include limits on shear stress ( τ ), permissible deflections (y), and sufficient free length to avoid solid compression and surging.
This problem is typically modelled as a nonlinear optimisation challenge where the objective function and constraints are formulated in terms of the design variables b, m, p, l 1 , l 2 , d 1 , and d 2 . Material selection, environmental conditions, and load specifications also play a crucial role in determining the optimal design. Various computational methods, ranging from analytical approaches and numerical methods like finite element analysis to heuristic algorithms such as Genetic Algorithms and other evolutionary strategies, can be employed to solve this problem. Each method offers specific advantages depending on the complexity and requirements of the design task.
The decision variables for the Speed Reducer Design Problem are defined as
$$x = [x_1\; x_2\; x_3\; x_4\; x_5\; x_6\; x_7] = [b\; m\; p\; l_1\; l_2\; d_1\; d_2],$$
where each variable corresponds to a specific design dimension.
The objective function to be minimised is
$$f(x) = 0.7854\, x_1 x_2^2 \left(3.3333\, x_3^2 + 14.9334\, x_3 - 43.0934\right) - 1.508\, x_1 \left(x_6^2 + x_7^2\right) + 7.4777 \left(x_6^3 + x_7^3\right) + 0.7854 \left(x_4 x_6^2 + x_5 x_7^2\right),$$
The constraints are specified as follows:
$$g_1(x) = \frac{27}{x_1 x_2^2 x_3} - 1 \le 0,$$
$$g_2(x) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0,$$
$$g_3(x) = \frac{1.93\, x_4^3}{x_2 x_3 x_6^4} - 1 \le 0,$$
$$g_4(x) = \frac{1.93\, x_5^3}{x_2 x_3 x_7^4} - 1 \le 0,$$
$$g_5(x) = \frac{\sqrt{\left(745\, x_4 / (x_2 x_3)\right)^2 + 16.9 \times 10^6}}{110\, x_6^3} - 1 \le 0,$$
$$g_6(x) = \frac{\sqrt{\left(745\, x_5 / (x_2 x_3)\right)^2 + 157.5 \times 10^6}}{85\, x_7^3} - 1 \le 0,$$
$$g_7(x) = \frac{x_2 x_3}{40} - 1 \le 0,$$
$$g_8(x) = \frac{5\, x_2}{x_1} - 1 \le 0,$$
$$g_9(x) = \frac{x_1}{12\, x_2} - 1 \le 0,$$
$$g_{10}(x) = \frac{1.5\, x_6 + 1.9}{x_4} - 1 \le 0,$$
$$g_{11}(x) = \frac{1.1\, x_7 + 1.9}{x_5} - 1 \le 0.$$
Variable bounds are set as follows:
$$2.6 \le x_1 \le 3.6,\quad 0.7 \le x_2 \le 0.8,\quad 17 \le x_3 \le 28,\quad 7.3 \le x_4, x_5 \le 8.3,\quad 2.9 \le x_6 \le 3.9,\quad 5.0 \le x_7 \le 5.5.$$
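The weight objective can be transcribed directly (constraints are omitted here for brevity). Evaluating it at the near-optimal design commonly reported in the literature, $x \approx (3.5, 0.7, 17, 7.3, 7.715, 3.350, 5.287)$, reproduces a weight of about 2994.47:

```python
def speed_reducer_weight(x):
    """Speed-reducer weight objective f(x); a direct transcription of
    the formulation above (constraints not included)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)      # shaft material removed
            + 7.4777 * (x6 ** 3 + x7 ** 3)          # shaft weight
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))  # bearing spans
```

This makes it easy to sanity-check any reported solution vector against the tabulated objective values before comparing optimisers.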
In the Speed Reducer Design Problem, AHASCA excels in achieving a highly consistent and optimal performance, as shown in Table 8. AHASCA attains minimum, mean, and maximum values of 2994.471, with an exceptionally low standard deviation of $6.6 \times 10^{-5}$. This indicates not only AHASCA's ability to find an optimal solution but also its consistency in replicating this result across multiple runs, which is crucial for reliability in practical engineering applications.
Comparatively, other optimisation algorithms exhibit varying levels of performance and consistency. For example, the Salp Swarm Algorithm (SSA) and Moth Flame Optimiser (MFO) also achieve competitive minimum values but have higher mean and maximum values, with standard deviations of 11.4687 and 20.28279, respectively, suggesting less consistency in their optimisation results. Specifically, MFO shows significant variability, which might affect the reliability of the solutions it provides in a real-world setting.
The Sine–Cosine Algorithm (SCA) and Arithmetic Optimisation Algorithm (AOA) both perform less optimally, not only achieving higher minimum values but also exhibiting much larger standard deviations of 52.77254 and 46.56557, respectively, indicating significant variability in their results. These large deviations from the mean suggest that these algorithms may struggle with stability on this particular design problem.

7.4. Cantilever Beam Design Optimisation Problem

The Cantilever Beam Design Optimisation Problem as shown in Figure 8 represents a significant challenge in structural engineering, focusing on determining the optimal dimensions of a cantilever beam to meet specific performance criteria while minimising material and manufacturing costs [51]. The critical design variables include the beam’s length (L), width (b), and height (h), each significantly impacting attributes like stiffness, strength, and weight.
The primary aim of this optimisation is to reduce the overall weight of the beam and lower costs while ensuring that the beam remains sufficiently resistant to bending and deflection under load [52]. The design must also prevent material failure under maximum expected loads (stress failure) and ensure stability against lateral buckling. These operational requirements necessitate several constraints, including limits on maximum bending stress ($\sigma$), allowable deflection ($y$), and buckling load factors.
This optimisation challenge is typically modelled as a nonlinear task where the objective function and constraints are expressed in terms of the design variables L, b, and h. Material properties, environmental conditions, and the specifics of the applied loads are crucial in determining the optimal design. Various computational methods can be applied to solve this problem, ranging from traditional analytical approaches and simulation techniques like finite element analysis to modern heuristic methods such as Genetic Algorithms and Simulated Annealing. Each technique offers specific benefits depending on the complexity and requirements of the design task.
The mathematical description of the cantilever beam design problem is defined as follows. The optimisation problem is formulated as

$$\underset{x}{\text{Minimize}} \quad f(x) = 0.0624\,(x_1 + x_2 + x_3 + x_4 + x_5),$$

subject to the constraint

$$g(x) = \frac{61}{x_1^3} + \frac{37}{x_2^3} + \frac{19}{x_3^3} + \frac{7}{x_4^3} + \frac{1}{x_5^3} - 1 \le 0,$$

and the variable bounds

$$0.01 \le x_i \le 100, \quad i = 1, 2, \ldots, 5.$$
Together, these equations provide a complete description of the optimisation model, which minimises a linear objective function subject to a nonlinear constraint and simple variable bounds.
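The model above is straightforward to evaluate in code. The sketch below implements the objective and constraint as formulated, together with a simple static-penalty wrapper; the penalty scheme is our illustrative assumption, not necessarily the constraint handler used in the experiments:

```python
def cantilever_objective(x):
    """f(x) = 0.0624 * (x1 + x2 + x3 + x4 + x5): proxy for beam weight."""
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    """g(x) = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 - 1 <= 0."""
    coeffs = (61.0, 37.0, 19.0, 7.0, 1.0)
    return sum(c / xi ** 3 for c, xi in zip(coeffs, x)) - 1.0

def penalised_objective(x, penalty=1e6):
    """Static penalty: infeasible designs pay a cost proportional to g(x)."""
    g = cantilever_constraint(x)
    return cantilever_objective(x) + (penalty * g if g > 0 else 0.0)
```

For the near-optimal design x ≈ (6.016, 5.309, 4.494, 3.502, 2.153) often reported for this problem, the objective evaluates to roughly 1.34, in line with the values in Table 9.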
In the Cantilever Beam Design Optimisation Problem, AHASCA showcases exemplary precision and consistency, as highlighted by the statistical metrics presented in Table 9. AHASCA achieves minimum, mean, and maximum values of 1.339956, with a remarkably low standard deviation of 4.43 × 10⁻¹³. This near-zero variance not only indicates AHASCA's ability to consistently find the optimal solution but also demonstrates its exceptional stability across multiple runs.
The Salp Swarm Algorithm (SSA) and Moth Flame Optimiser (MFO) also achieve competitive minimum values but with higher standard deviations of 1.84 × 10⁻⁵ and 0.0005, respectively. These higher deviations suggest less consistency compared to AHASCA. MFO, in particular, shows a wider range in its results, indicating greater variability in reaching the optimum.
Other algorithms, such as the Sine–Cosine Algorithm (SCA) and the Arithmetic Optimisation Algorithm (AOA), exhibit not only higher minimum values but also significantly larger standard deviations of 0.011464 and 0.009419, respectively. These values highlight the difficulty these algorithms have in reaching and maintaining an optimal solution on this design problem, suggesting a lack of robustness or adaptability to the specific constraints and requirements of the Cantilever Beam Design.

8. Conclusions

This paper introduced AHA–SCA, a hybrid optimiser that interleaves the Sine–Cosine Algorithm's oscillatory search with the Artificial Hummingbird Algorithm's guided and territorial foraging. The design is intentionally minimal: even iterations apply SCA with a linearly decaying amplitude to promote broad, controllable exploration; odd iterations apply AHA, using axial/diagonal/omnidirectional flight patterns and a visit table to intensify search around promising regions. The result is a clear exploration–exploitation schedule with O(Nn + Nc_f) time per iteration and modest implementation overhead.
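The even/odd schedule described above can be captured in a short loop. The skeleton below is a sketch under simplifying assumptions: the SCA phase oscillates around the best solution with a linearly decaying amplitude, while a simplified guided flight toward a random individual stands in for AHA's full flight patterns and visit table.

```python
import math
import random

def aha_sca_skeleton(f, lo, hi, dim=5, pop_size=30, iters=200, seed=1):
    """Minimal AHA-SCA interleaving: even iterations explore with an
    SCA-style sinusoidal step whose amplitude decays linearly; odd
    iterations exploit with a simplified hummingbird-style guided flight.
    Greedy selection keeps the better of parent and offspring."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=f)
    for t in range(iters):
        a = 2.0 * (1.0 - t / iters)  # linearly decaying amplitude
        for i, x in enumerate(pop):
            if t % 2 == 0:  # SCA phase: oscillate around the best solution
                y = [xi + a * math.sin(rng.uniform(0.0, 2.0 * math.pi))
                     * abs(rng.uniform(0.0, 2.0) * bi - xi)
                     for xi, bi in zip(x, best)]
            else:           # AHA phase: guided flight toward a random guide
                g = pop[rng.randrange(pop_size)]
                y = [gi + rng.gauss(0.0, 1.0) * (xi - gi)
                     for xi, gi in zip(x, g)]
            y = [min(max(v, lo), hi) for v in y]  # clamp to the search box
            if f(y) < f(x):
                pop[i] = y
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = cand
    return best, f(best)
```

On the sphere function the skeleton drives the population quickly toward the origin, illustrating the intended fast early-phase behaviour of the alternating schedule.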
Comprehensive experiments on CEC2014, CEC2017, and CEC2022 demonstrate that AHA–SCA is a strong general-purpose optimiser. Relative to stand-alone AHA and SCA and to several recent baselines, the hybrid achieves the following: (i) it converges faster in the early and mid phases, (ii) it achieves lower mean/median errors with noticeably smaller standard deviations on many functions, and (iii) it shows improved robustness against premature convergence on rugged, composite landscapes. In constrained engineering studies (e.g., welded beam and pressure vessel), the method attains best or near-best feasible designs while respecting all constraints, underscoring its practical utility.
Future directions include the following: (i) self-adaptive or reward-driven switching between SCA and AHA phases, (ii) parameter control for SCA amplitude and AHA noise, (iii) more sophisticated constraint handling (e.g., feasibility rules and adaptive penalties), (iv) multi-objective and large-scale extensions, (v) parallel/distributed implementations, and (vi) formal convergence analysis.

Author Contributions

Conceptualization, A.S.A.-S., R.A., H.F., F.H. and N.H.; Methodology, J.Z., A.S.A.-S., R.A., H.F. and F.H.; Software, R.A., H.F. and F.H.; Validation, J.Z., A.S.A.-S., H.F. and F.H.; Formal analysis, N.H.; Investigation, J.Z. and A.S.A.-S.; Resources, A.S.A.-S. and N.H.; Data curation, J.Z.; Writing—original draft, J.Z., A.S.A.-S., R.A., H.F., F.H. and N.H.; Writing—review & editing, J.Z., R.A., H.F., F.H. and N.H.; Visualization, A.S.A.-S., H.F., F.H. and N.H.; Project administration, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Statistical Results

Appendix A.1. Statistical Results over CEC2017

Table A1. Statistical results over CEC2017 (F1–F15).
FunctionStatisticsSCAAHAHLOAHGSOGJOBATSHOSHIOWOARSAMTDESCA
F1Mean 8.76 × 10 7 4.17 × 10 3 7.28 × 10 8 1.75 × 10 8 2.86 × 10 10 3.31 × 10 8 3.78 × 10 7 9.14 × 10 5 1.10 × 10 10 2.91 × 10 11 8.41 × 10 8
STD 1.42 × 10 8 3.26 × 10 3 2.12 × 10 8 2.36 × 10 8 9.32 × 10 7 3.99 × 10 8 1.28 × 10 8 8.60 × 10 5 4.75 × 10 9 9.68 × 10 9 2.37 × 10 8
SEM 3.66 × 10 7 8.42 × 10 2 5.46 × 10 7 6.09 × 10 7 2.41 × 10 7 1.03 × 10 8 3.31 × 10 7 2.22 × 10 5 1.23 × 10 9 2.50 × 10 9 6.11 × 10 7
F2Mean 6.51 × 10 6 4.78 × 10 5 6.77 × 10 7 8.31 × 10 6 5.37 × 10 17 2.78 × 10 6 4.23 × 10 7 2.39 × 10 4 1.76 × 10 12 7.65 × 10 91 2.70 × 10 7
STD 1.28 × 10 7 1.85 × 10 6 6.12 × 10 7 1.64 × 10 7 1.62 × 10 16 1.04 × 10 7 1.21 × 10 8 1.90 × 10 4 3.57 × 10 12 1.61 × 10 92 2.62 × 10 7
SEM 3.31 × 10 6 4.77 × 10 5 1.58 × 10 7 4.24 × 10 6 4.17 × 10 15 2.68 × 10 6 3.11 × 10 7 4.91 × 10 3 9.22 × 10 11 4.17 × 10 91 6.75 × 10 6
F3Mean 1.44 × 10 3 3.00 × 10 2 1.72 × 10 3 2.27 × 10 3 2.09 × 10 4 2.78 × 10 3 5.01 × 10 3 1.70 × 10 3 8.77 × 10 3 4.79 × 10 5 1.23 × 10 3
STD 5.05 × 10 2 5.59 × 10 1 6.62 × 10 2 1.89 × 10 3 2.79 × 10 2 2.64 × 10 3 3.71 × 10 3 1.46 × 10 3 1.97 × 10 3 5.75 × 10 4 4.41 × 10 2
SEM 1.31 × 10 2 1.44 × 10 1 1.71 × 10 2 4.87 × 10 2 7.20 × 10 1 6.81 × 10 2 9.58 × 10 2 3.77 × 10 2 5.08 × 10 2 1.49 × 10 4 1.14 × 10 2
F4Mean 4.10 × 10 2 4.03 × 10 2 4.63 × 10 2 4.28 × 10 2 5.48 × 10 3 4.37 × 10 2 4.28 × 10 2 4.42 × 10 2 9.06 × 10 2 1.57 × 10 5 4.36 × 10 2
STD 4.11 × 10 0 2.11 × 10 0 1.73 × 10 1 2.50 × 10 1 2.60 × 10 1 2.84 × 10 1 4.18 × 10 1 5.32 × 10 1 4.17 × 10 2 1.73 × 10 4 8.44 × 10 0
SEM 1.06 × 10 0 5.45 × 10 1 4.47 × 10 0 6.45 × 10 0 6.72 × 10 0 7.34 × 10 0 1.08 × 10 1 1.37 × 10 1 1.08 × 10 2 4.46 × 10 3 2.18 × 10 0
F5Mean 5.32 × 10 2 5.67 × 10 2 5.56 × 10 2 5.31 × 10 2 7.03 × 10 2 5.30 × 10 2 5.23 × 10 2 5.44 × 10 2 5.79 × 10 2 1.84 × 10 3 5.50 × 10 2
STD 9.56 × 10 0 2.28 × 10 1 5.11 × 10 0 1.30 × 10 1 3.00 × 10 0 1.83 × 10 1 1.11 × 10 1 1.94 × 10 1 1.31 × 10 1 4.39 × 10 1 6.26 × 10 0
SEM 2.47 × 10 0 5.90 × 10 0 1.32 × 10 0 3.35 × 10 0 7.75 × 10 1 4.72 × 10 0 2.87 × 10 0 5.00 × 10 0 3.39 × 10 0 1.13 × 10 1 1.62 × 10 0
F6Mean 6.04 × 10 2 6.45 × 10 2 6.25 × 10 2 6.07 × 10 2 7.01 × 10 2 6.12 × 10 2 6.05 × 10 2 6.29 × 10 2 6.51 × 10 2 7.46 × 10 2 6.18 × 10 2
STD 1.62 × 10 0 1.23 × 10 1 6.21 × 10 0 5.13 × 10 0 3.41 × 10 0 5.96 × 10 0 4.20 × 10 0 8.46 × 10 0 8.45 × 10 0 5.70 × 10 0 3.55 × 10 0
SEM 4.17 × 10 1 3.17 × 10 0 1.60 × 10 0 1.33 × 10 0 8.79 × 10 1 1.54 × 10 0 1.09 × 10 0 2.18 × 10 0 2.18 × 10 0 1.47 × 10 0 9.18 × 10 1
F7Mean 7.51 × 10 2 7.85 × 10 2 7.74 × 10 2 7.50 × 10 2 8.65 × 10 2 7.56 × 10 2 7.36 × 10 2 7.82 × 10 2 8.06 × 10 2 6.52 × 10 3 7.76 × 10 2
STD 6.90 × 10 0 3.11 × 10 1 8.03 × 10 0 1.51 × 10 1 8.34 × 10 0 1.01 × 10 1 1.09 × 10 1 2.08 × 10 1 8.93 × 10 0 2.66 × 10 2 1.18 × 10 1
SEM 1.78 × 10 0 8.04 × 10 0 2.07 × 10 0 3.89 × 10 0 2.15 × 10 0 2.62 × 10 0 2.82 × 10 0 5.37 × 10 0 2.30 × 10 0 6.86 × 10 1 3.04 × 10 0
F8Mean 8.23 × 10 2 8.43 × 10 2 8.35 × 10 2 8.24 × 10 2 9.31 × 10 2 8.27 × 10 2 8.18 × 10 2 8.41 × 10 2 8.56 × 10 2 2.42 × 10 3 8.42 × 10 2
STD 5.78 × 10 0 1.56 × 10 1 3.02 × 10 0 1.14 × 10 1 1.95 × 10 0 6.71 × 10 0 9.18 × 10 0 1.90 × 10 1 9.09 × 10 0 4.96 × 10 1 7.04 × 10 0
SEM 1.49 × 10 0 4.03 × 10 0 7.80 × 10 1 2.95 × 10 0 5.04 × 10 1 1.73 × 10 0 2.37 × 10 0 4.91 × 10 0 2.35 × 10 0 1.28 × 10 1 1.82 × 10 0
F9Mean 9.22 × 10 2 1.60 × 10 3 1.02 × 10 3 9.60 × 10 2 3.19 × 10 3 9.95 × 10 2 9.77 × 10 2 1.38 × 10 3 1.54 × 10 3 9.06 × 10 4 1.01 × 10 3
STD 2.16 × 10 1 3.15 × 10 2 2.07 × 10 1 5.54 × 10 1 1.92 × 10 2 7.51 × 10 1 1.09 × 10 2 4.26 × 10 2 2.02 × 10 2 5.77 × 10 3 3.78 × 10 1
SEM 5.57 × 10 0 8.14 × 10 1 5.35 × 10 0 1.43 × 10 1 4.96 × 10 1 1.94 × 10 1 2.82 × 10 1 1.10 × 10 2 5.21 × 10 1 1.49 × 10 3 9.75 × 10 0
F10Mean 2.21 × 10 3 2.29 × 10 3 2.53 × 10 3 1.81 × 10 3 5.65 × 10 3 1.80 × 10 3 1.96 × 10 3 2.05 × 10 3 2.47 × 10 3 1.68 × 10 4 2.21 × 10 3
STD 2.35 × 10 2 3.82 × 10 2 1.06 × 10 2 5.51 × 10 2 4.93 × 10 2 3.42 × 10 2 2.91 × 10 2 3.41 × 10 2 1.54 × 10 2 7.17 × 10 2 2.72 × 10 2
SEM 6.08 × 10 1 9.87 × 10 1 2.74 × 10 1 1.42 × 10 2 1.27 × 10 2 8.82 × 10 1 7.52 × 10 1 8.81 × 10 1 3.98 × 10 1 1.85 × 10 2 7.02 × 10 1
F11Mean 1.14 × 10 3 1.17 × 10 3 1.27 × 10 3 1.16 × 10 3 6.51 × 10 4 1.13 × 10 3 1.17 × 10 3 1.20 × 10 3 3.49 × 10 3 7.37 × 10 4 1.19 × 10 3
STD 1.86 × 10 1 5.11 × 10 1 5.93 × 10 1 4.40 × 10 1 9.28 × 10 2 1.01 × 10 1 8.06 × 10 1 9.56 × 10 1 1.58 × 10 3 7.61 × 10 3 3.78 × 10 1
SEM 4.80 × 10 0 1.32 × 10 1 1.53 × 10 1 1.13 × 10 1 2.40 × 10 2 2.61 × 10 0 2.08 × 10 1 2.47 × 10 1 4.09 × 10 2 1.96 × 10 3 9.77 × 10 0
F12Mean 1.31 × 10 6 2.84 × 10 4 8.72 × 10 6 7.07 × 10 5 3.10 × 10 9 6.53 × 10 5 9.75 × 10 5 3.23 × 10 6 2.83 × 10 8 1.81 × 10 11 1.60 × 10 7
STD 1.41 × 10 6 2.07 × 10 4 4.97 × 10 6 8.50 × 10 5 4.73 × 10 7 6.86 × 10 5 7.60 × 10 5 3.01 × 10 6 2.42 × 10 8 8.53 × 10 9 1.44 × 10 7
SEM 3.64 × 10 5 5.34 × 10 3 1.28 × 10 6 2.20 × 10 5 1.22 × 10 7 1.77 × 10 5 1.96 × 10 5 7.78 × 10 5 6.24 × 10 7 2.20 × 10 9 3.71 × 10 6
F13Mean 1.01 × 10 4 2.00 × 10 3 4.62 × 10 4 1.06 × 10 4 1.73 × 10 9 7.93 × 10 3 1.26 × 10 4 1.44 × 10 4 4.63 × 10 7 1.12 × 10 11 3.17 × 10 4
STD 6.64 × 10 3 8.58 × 10 2 4.08 × 10 4 6.85 × 10 3 2.53 × 10 7 5.55 × 10 3 7.79 × 10 3 1.03 × 10 4 6.44 × 10 7 8.57 × 10 9 1.90 × 10 4
SEM 1.72 × 10 3 2.21 × 10 2 1.05 × 10 4 1.77 × 10 3 6.53 × 10 6 1.43 × 10 3 2.01 × 10 3 2.65 × 10 3 1.66 × 10 7 2.21 × 10 9 4.89 × 10 3
F14Mean 1.58 × 10 3 1.56 × 10 3 2.56 × 10 3 2.26 × 10 3 1.69 × 10 3 3.33 × 10 3 4.25 × 10 3 2.03 × 10 3 7.82 × 10 3 5.23 × 10 8 1.63 × 10 3
STD 1.08 × 10 2 1.53 × 10 2 1.17 × 10 3 1.50 × 10 3 6.54 × 10 0 1.71 × 10 3 1.70 × 10 3 1.24 × 10 3 7.55 × 10 3 8.09 × 10 7 1.94 × 10 2
SEM 2.79 × 10 1 3.96 × 10 1 3.03 × 10 2 3.88 × 10 2 1.69 × 10 0 4.42 × 10 2 4.40 × 10 2 3.21 × 10 2 1.95 × 10 3 2.09 × 10 7 5.01 × 10 1
F15Mean 1.94 × 10 3 1.72 × 10 3 5.43 × 10 3 4.26 × 10 3 1.23 × 10 4 3.14 × 10 3 6.45 × 10 3 7.61 × 10 3 9.77 × 10 3 4.92 × 10 10 2.31 × 10 3
STD 5.32 × 10 2 1.03 × 10 2 2.04 × 10 3 2.44 × 10 3 1.50 × 10 2 1.77 × 10 3 6.60 × 10 3 4.69 × 10 3 6.53 × 10 3 4.30 × 10 9 6.82 × 10 2
SEM 1.37 × 10 2 2.67 × 10 1 5.26 × 10 2 6.29 × 10 2 3.87 × 10 1 4.58 × 10 2 1.70 × 10 3 1.21 × 10 3 1.69 × 10 3 1.11 × 10 9 1.76 × 10 2
Table A2. Results over CEC2017 (F16–F30).
FunctionStatisticsSCAAHAHLOAHGSOGJOBATSHOSHIOWOARSAMTDESCA
F16Mean 1.67 × 10 3 2.05 × 10 3 1.86 × 10 3 1.70 × 10 3 3.17 × 10 3 1.77 × 10 3 1.80 × 10 3 1.90 × 10 3 2.08 × 10 3 2.69 × 10 4 1.72 × 10 3
STD 3.57 × 10 1 2.05 × 10 2 7.87 × 10 1 7.80 × 10 1 3.93 × 10 1 1.13 × 10 2 1.31 × 10 2 1.31 × 10 2 1.15 × 10 2 1.69 × 10 3 5.12 × 10 1
SEM 9.22 × 10 0 5.29 × 10 1 2.03 × 10 1 2.01 × 10 1 1.02 × 10 1 2.92 × 10 1 3.39 × 10 1 3.38 × 10 1 2.97 × 10 1 4.35 × 10 2 1.32 × 10 1
F17Mean 1.76 × 10 3 1.88 × 10 3 1.78 × 10 3 1.76 × 10 3 2.96 × 10 3 1.75 × 10 3 1.78 × 10 3 1.80 × 10 3 1.84 × 10 3 2.89 × 10 6 1.78 × 10 3
STD 1.14 × 10 1 1.37 × 10 2 8.68 × 10 0 1.55 × 10 1 1.32 × 10 2 2.27 × 10 1 4.64 × 10 1 5.19 × 10 1 3.18 × 10 1 1.30 × 10 6 1.99 × 10 1
SEM 2.95 × 10 0 3.53 × 10 1 2.24 × 10 0 4.00 × 10 0 3.40 × 10 1 5.87 × 10 0 1.20 × 10 1 1.34 × 10 1 8.22 × 10 0 3.36 × 10 5 5.14 × 10 0
F18Mean 2.81 × 10 4 4.98 × 10 3 5.30 × 10 5 3.35 × 10 4 9.68 × 10 9 2.16 × 10 4 1.54 × 10 4 2.39 × 10 4 1.42 × 10 7 1.13 × 10 9 1.37 × 10 5
STD 1.62 × 10 4 8.61 × 10 3 3.44 × 10 5 1.19 × 10 4 5.52 × 10 7 1.08 × 10 4 1.22 × 10 4 1.28 × 10 4 1.72 × 10 7 2.71 × 10 8 1.22 × 10 5
SEM 4.17 × 10 3 2.22 × 10 3 8.88 × 10 4 3.08 × 10 3 1.42 × 10 7 2.80 × 10 3 3.15 × 10 3 3.30 × 10 3 4.45 × 10 6 6.99 × 10 7 3.15 × 10 4
F19Mean 4.67 × 10 3 2.06 × 10 3 1.41 × 10 4 7.46 × 10 3 3.91 × 10 8 6.43 × 10 3 2.37 × 10 4 7.54 × 10 4 8.80 × 10 5 8.88 × 10 9 4.07 × 10 3
STD 4.13 × 10 3 1.77 × 10 2 1.61 × 10 4 5.61 × 10 3 1.47 × 10 7 4.76 × 10 3 7.04 × 10 4 1.11 × 10 5 1.12 × 10 6 1.09 × 10 9 3.42 × 10 3
SEM 1.07 × 10 3 4.58 × 10 1 4.17 × 10 3 1.45 × 10 3 3.79 × 10 6 1.23 × 10 3 1.82 × 10 4 2.85 × 10 4 2.88 × 10 5 2.80 × 10 8 8.83 × 10 2
F20Mean 2.08 × 10 3 2.29 × 10 3 2.16 × 10 3 2.13 × 10 3 3.07 × 10 3 2.08 × 10 3 2.15 × 10 3 2.15 × 10 3 2.26 × 10 3 5.59 × 10 3 2.09 × 10 3
STD 3.79 × 10 1 1.00 × 10 2 3.89 × 10 1 6.76 × 10 1 5.18 × 10 1 5.55 × 10 1 9.38 × 10 1 8.95 × 10 1 5.77 × 10 1 1.81 × 10 2 1.75 × 10 1
SEM 9.80 × 10 0 2.58 × 10 1 1.00 × 10 1 1.75 × 10 1 1.34 × 10 1 1.43 × 10 1 2.42 × 10 1 2.31 × 10 1 1.49 × 10 1 4.67 × 10 1 4.53 × 10 0
F21Mean 2.25 × 10 3 2.31 × 10 3 2.24 × 10 3 2.32 × 10 3 2.79 × 10 3 2.31 × 10 3 2.31 × 10 3 2.31 × 10 3 2.29 × 10 3 3.58 × 10 3 2.26 × 10 3
STD 5.52 × 10 1 7.05 × 10 1 2.13 × 10 1 3.27 × 10 1 1.26 × 10 1 4.21 × 10 1 4.29 × 10 1 6.24 × 10 1 5.32 × 10 1 3.55 × 10 1 6.44 × 10 1
SEM 1.42 × 10 1 1.82 × 10 1 5.50 × 10 0 8.45 × 10 0 3.27 × 10 0 1.09 × 10 1 1.11 × 10 1 1.61 × 10 1 1.37 × 10 1 9.17 × 10 0 1.66 × 10 1
F22Mean 2.31 × 10 3 2.45 × 10 3 2.40 × 10 3 2.35 × 10 3 5.14 × 10 3 2.33 × 10 3 2.37 × 10 3 2.31 × 10 3 2.90 × 10 3 1.83 × 10 4 2.36 × 10 3
STD 2.12 × 10 1 3.87 × 10 2 3.18 × 10 1 4.98 × 10 1 2.80 × 10 1 3.47 × 10 1 1.73 × 10 2 1.67 × 10 1 2.17 × 10 2 3.39 × 10 2 3.43 × 10 1
SEM 5.47 × 10 0 9.98 × 10 1 8.21 × 10 0 1.29 × 10 1 7.23 × 10 0 8.95 × 10 0 4.47 × 10 1 4.30 × 10 0 5.60 × 10 1 8.76 × 10 1 8.87 × 10 0
F23Mean 2.64 × 10 3 2.68 × 10 3 2.68 × 10 3 2.63 × 10 3 4.00 × 10 3 2.65 × 10 3 2.63 × 10 3 2.65 × 10 3 2.70 × 10 3 7.35 × 10 3 2.66 × 10 3
STD 7.11 × 10 0 4.02 × 10 1 9.57 × 10 0 1.17 × 10 1 6.32 × 10 1 1.73 × 10 1 1.35 × 10 1 2.88 × 10 1 8.58 × 10 0 1.97 × 10 2 9.32 × 10 0
SEM 1.84 × 10 0 1.04 × 10 1 2.47 × 10 0 3.02 × 10 0 1.63 × 10 1 4.47 × 10 0 3.48 × 10 0 7.44 × 10 0 2.22 × 10 0 5.09 × 10 1 2.41 × 10 0
F24Mean 2.75 × 10 3 2.79 × 10 3 2.56 × 10 3 2.75 × 10 3 3.38 × 10 3 2.74 × 10 3 2.75 × 10 3 2.77 × 10 3 2.87 × 10 3 6.99 × 10 3 2.77 × 10 3
STD 6.31 × 10 1 3.28 × 10 1 1.58 × 10 1 6.28 × 10 1 1.23 × 10 0 9.57 × 10 1 7.04 × 10 1 7.37 × 10 1 4.53 × 10 1 1.79 × 10 2 6.51 × 10 1
SEM 1.63 × 10 1 8.46 × 10 0 4.07 × 10 0 1.62 × 10 1 3.19 × 10 1 2.47 × 10 1 1.82 × 10 1 1.90 × 10 1 1.17 × 10 1 4.63 × 10 1 1.68 × 10 1
F25Mean 2.93 × 10 3 2.93 × 10 3 2.96 × 10 3 2.93 × 10 3 4.74 × 10 3 2.92 × 10 3 2.95 × 10 3 2.95 × 10 3 3.33 × 10 3 4.92 × 10 4 2.95 × 10 3
STD 2.16 × 10 1 3.00 × 10 1 1.79 × 10 1 1.49 × 10 1 7.49 × 10 0 2.56 × 10 1 3.29 × 10 1 2.40 × 10 1 1.02 × 10 2 3.33 × 10 3 1.39 × 10 1
SEM 5.57 × 10 0 7.74 × 10 0 4.63 × 10 0 3.85 × 10 0 1.93 × 10 0 6.61 × 10 0 8.51 × 10 0 6.19 × 10 0 2.64 × 10 1 8.60 × 10 2 3.58 × 10 0
F26Mean 2.98 × 10 3 3.81 × 10 3 3.24 × 10 3 3.11 × 10 3 5.46 × 10 3 3.09 × 10 3 3.16 × 10 3 3.56 × 10 3 4.02 × 10 3 7.21 × 10 4 3.07 × 10 3
STD 3.56 × 10 1 5.99 × 10 2 8.24 × 10 1 1.30 × 10 2 1.20 × 10 2 1.04 × 10 2 3.62 × 10 2 5.16 × 10 2 3.09 × 10 2 3.93 × 10 3 2.71 × 10 1
SEM 9.19 × 10 0 1.55 × 10 2 2.13 × 10 1 3.37 × 10 1 3.11 × 10 1 2.69 × 10 1 9.34 × 10 1 1.33 × 10 2 7.99 × 10 1 1.01 × 10 3 7.00 × 10 0
F27Mean 3.10 × 10 3 3.15 × 10 3 3.13 × 10 3 3.11 × 10 3 4.82 × 10 3 3.13 × 10 3 3.12 × 10 3 3.14 × 10 3 3.20 × 10 3 2.13 × 10 4 3.10 × 10 3
STD 2.21 × 10 0 4.68 × 10 1 1.21 × 10 1 3.46 × 10 1 2.57 × 10 1 2.79 × 10 1 2.73 × 10 1 3.80 × 10 1 6.31 × 10 1 1.33 × 10 3 2.29 × 10 0
SEM 5.69 × 10 1 1.21 × 10 1 3.12 × 10 0 8.94 × 10 0 6.64 × 10 0 7.21 × 10 0 7.05 × 10 0 9.80 × 10 0 1.63 × 10 1 3.43 × 10 2 5.92 × 10 1
F28Mean 3.28 × 10 3 3.39 × 10 3 3.35 × 10 3 3.37 × 10 3 3.94 × 10 3 3.31 × 10 3 3.38 × 10 3 3.36 × 10 3 3.74 × 10 3 2.96 × 10 4 3.27 × 10 3
STD 1.15 × 10 2 2.05 × 10 2 1.20 × 10 2 8.89 × 10 1 3.48 × 10 0 1.19 × 10 2 1.35 × 10 2 8.89 × 10 1 1.38 × 10 2 9.55 × 10 2 6.50 × 10 1
SEM 2.97 × 10 1 5.30 × 10 1 3.10 × 10 1 2.29 × 10 1 8.99 × 10 1 3.06 × 10 1 3.50 × 10 1 2.29 × 10 1 3.56 × 10 1 2.47 × 10 2 1.68 × 10 1
F29Mean 3.20 × 10 3 3.41 × 10 3 3.26 × 10 3 3.21 × 10 3 4.20 × 10 3 3.23 × 10 3 3.24 × 10 3 3.35 × 10 3 3.35 × 10 3 2.79 × 10 6 3.23 × 10 3
STD 2.27 × 10 1 1.57 × 10 2 2.29 × 10 1 4.82 × 10 1 3.66 × 10 1 3.99 × 10 1 6.00 × 10 1 8.48 × 10 1 1.02 × 10 2 8.86 × 10 5 2.44 × 10 1
SEM 5.87 × 10 0 4.06 × 10 1 5.92 × 10 0 1.25 × 10 1 9.45 × 10 0 1.03 × 10 1 1.55 × 10 1 2.19 × 10 1 2.63 × 10 1 2.29 × 10 5 6.30 × 10 0
F30Mean 1.06 × 10 6 1.99 × 10 6 2.98 × 10 5 5.78 × 10 5 1.94 × 10 8 8.49 × 10 5 7.41 × 10 5 4.30 × 10 5 6.27 × 10 6 1.66 × 10 10 8.15 × 10 5
STD 9.79 × 10 5 2.98 × 10 6 7.45 × 10 5 5.43 × 10 5 6.26 × 10 6 8.49 × 10 5 1.65 × 10 6 5.32 × 10 5 5.53 × 10 6 1.57 × 10 9 4.55 × 10 5
SEM 2.53 × 10 5 7.70 × 10 5 1.92 × 10 5 1.40 × 10 5 1.62 × 10 6 2.19 × 10 5 4.26 × 10 5 1.37 × 10 5 1.43 × 10 6 4.05 × 10 8 1.17 × 10 5
19272230192124303025

Appendix A.2. Results Analysis over CEC2014

Table A3. CEC2014 Results F1–F15.
FunStatisticsSCAAHASHIOGWOWOABOAOHOAOASCAGJOSHORSA
F1Mean 3.58 × 10 6 1.21 × 10 7 9.65 × 10 7 1.31 × 10 7 5.25 × 10 7 2.79 × 10 8 1.84 × 10 7 8.34 × 10 6 6.42 × 10 6 7.35 × 10 6 6.07 × 10 7
STD 2.12 × 10 6 4.56 × 10 6 6.89 × 10 7 9.57 × 10 6 4.72 × 10 7 1.22 × 10 8 1.00 × 10 7 5.33 × 10 6 3.12 × 10 6 2.27 × 10 6 1.08 × 10 7
SEM 9.49 × 10 5 2.04 × 10 6 3.08 × 10 7 4.28 × 10 6 2.11 × 10 7 5.44 × 10 7 4.47 × 10 6 2.38 × 10 6 1.39 × 10 6 1.02 × 10 6 4.81 × 10 6
F2Mean 1.37 × 10 8 1.20 × 10 7 6.83 × 10 9 4.06 × 10 5 2.74 × 10 9 6.84 × 10 9 4.42 × 10 9 7.48 × 10 8 3.18 × 10 8 1.74 × 10 6 5.66 × 10 9
STD 2.71 × 10 8 2.68 × 10 7 2.66 × 10 9 1.71 × 10 5 1.20 × 10 9 1.89 × 10 9 1.29 × 10 9 2.28 × 10 8 5.49 × 10 8 8.08 × 10 5 9.71 × 10 8
SEM 1.21 × 10 8 1.20 × 10 7 1.19 × 10 9 7.64 × 10 4 5.37 × 10 8 8.45 × 10 8 5.76 × 10 8 1.02 × 10 8 2.45 × 10 8 3.61 × 10 5 4.34 × 10 8
F3Mean 5.19 × 10 3 9.02 × 10 3 2.10 × 10 4 3.74 × 10 4 1.38 × 10 4 1.31 × 10 4 1.64 × 10 4 4.45 × 10 3 7.75 × 10 3 1.90 × 10 3 1.05 × 10 4
STD 3.53 × 10 3 4.71 × 10 3 5.21 × 10 3 9.08 × 10 3 2.39 × 10 3 9.51 × 10 2 6.07 × 10 3 3.94 × 10 3 2.05 × 10 3 4.22 × 10 2 4.91 × 10 3
SEM 1.58 × 10 3 2.11 × 10 3 2.33 × 10 3 4.06 × 10 3 1.07 × 10 3 4.25 × 10 2 2.71 × 10 3 1.76 × 10 3 9.18 × 10 2 1.89 × 10 2 2.20 × 10 3
F4Mean 4.27 × 10 2 4.30 × 10 2 1.98 × 10 3 4.43 × 10 2 2.12 × 10 3 2.67 × 10 3 8.97 × 10 2 4.47 × 10 2 4.41 × 10 2 4.38 × 10 2 1.16 × 10 3
STD 1.26 × 10 1 2.96 × 10 1 1.49 × 10 3 1.56 × 10 1 6.04 × 10 2 9.84 × 10 2 1.65 × 10 2 1.23 × 10 1 2.15 × 10 1 2.17 × 10 1 3.33 × 10 2
SEM 5.63 × 10 0 1.32 × 10 1 6.65 × 10 2 6.97 × 10 0 2.70 × 10 2 4.40 × 10 2 7.37 × 10 1 5.51 × 10 0 9.61 × 10 0 9.71 × 10 0 1.49 × 10 2
F5Mean 5.20 × 10 2 5.20 × 10 2 5.20 × 10 2 5.20 × 10 2 5.20 × 10 2 5.20 × 10 2 5.20 × 10 2 5.20 × 10 2 5.20 × 10 2 5.20 × 10 2 5.20 × 10 2
STD 1.39 × 10 1 1.02 × 10 1 7.69 × 10 2 3.65 × 10 2 5.31 × 10 2 3.87 × 10 2 3.51 × 10 2 8.45 × 10 2 6.12 × 10 2 4.77 × 10 2 6.04 × 10 2
SEM 6.22 × 10 2 4.55 × 10 2 3.44 × 10 2 1.63 × 10 2 2.37 × 10 2 1.73 × 10 2 1.57 × 10 2 3.78 × 10 2 2.74 × 10 2 2.14 × 10 2 2.70 × 10 2
F6Mean 6.03 × 10 2 6.04 × 10 2 6.08 × 10 2 6.07 × 10 2 6.07 × 10 2 6.10 × 10 2 6.10 × 10 2 6.07 × 10 2 6.04 × 10 2 6.07 × 10 2 6.10 × 10 2
STD 1.02 × 10 0 1.05 × 10 0 3.78 × 10 0 1.61 × 10 0 7.86 × 10 1 1.03 × 10 0 5.10 × 10 1 1.65 × 10 0 6.91 × 10 1 6.32 × 10 1 1.17 × 10 0
SEM 4.56 × 10 1 4.72 × 10 1 1.69 × 10 0 7.19 × 10 1 3.52 × 10 1 4.60 × 10 1 2.28 × 10 1 7.38 × 10 1 3.09 × 10 1 2.83 × 10 1 5.22 × 10 1
F7Mean 7.04 × 10 2 7.01 × 10 2 8.48 × 10 2 7.01 × 10 2 8.90 × 10 2 8.79 × 10 2 8.21 × 10 2 7.11 × 10 2 7.08 × 10 2 7.16 × 10 2 7.61 × 10 2
STD 4.14 × 10 0 1.01 × 10 0 8.45 × 10 1 3.78 × 10 1 3.55 × 10 1 5.15 × 10 1 1.24 × 10 1 2.82 × 10 0 4.99 × 10 0 1.55 × 10 1 2.98 × 10 1
SEM 1.85 × 10 0 4.52 × 10 1 3.78 × 10 1 1.69 × 10 1 1.59 × 10 1 2.30 × 10 1 5.54 × 10 0 1.26 × 10 0 2.23 × 10 0 6.91 × 10 0 1.33 × 10 1
F8Mean 8.15 × 10 2 8.23 × 10 2 8.90 × 10 2 8.34 × 10 2 8.59 × 10 2 8.74 × 10 2 8.34 × 10 2 8.38 × 10 2 8.25 × 10 2 8.08 × 10 2 8.78 × 10 2
STD 5.80 × 10 0 9.18 × 10 0 1.51 × 10 1 1.30 × 10 1 4.33 × 10 0 2.30 × 10 0 1.38 × 10 1 7.76 × 10 0 5.06 × 10 0 5.01 × 10 0 7.64 × 10 0
SEM 2.59 × 10 0 4.10 × 10 0 6.73 × 10 0 5.81 × 10 0 1.94 × 10 0 1.03 × 10 0 6.18 × 10 0 3.47 × 10 0 2.26 × 10 0 2.24 × 10 0 3.42 × 10 0
F9Mean 9.30 × 10 2 9.23 × 10 2 9.80 × 10 2 9.71 × 10 2 9.50 × 10 2 9.61 × 10 2 9.39 × 10 2 9.42 × 10 2 9.23 × 10 2 9.31 × 10 2 9.58 × 10 2
STD 8.01 × 10 0 6.04 × 10 0 4.68 × 10 0 1.78 × 10 1 2.80 × 10 0 6.36 × 10 0 6.11 × 10 0 6.31 × 10 0 1.16 × 10 1 5.29 × 10 0 5.08 × 10 0
SEM 3.58 × 10 0 2.70 × 10 0 2.09 × 10 0 7.95 × 10 0 1.25 × 10 0 2.84 × 10 0 2.73 × 10 0 2.82 × 10 0 5.19 × 10 0 2.37 × 10 0 2.27 × 10 0
F10Mean 1.77 × 10 3 1.59 × 10 3 1.91 × 10 3 2.01 × 10 3 2.35 × 10 3 2.40 × 10 3 1.69 × 10 3 1.96 × 10 3 1.61 × 10 3 1.28 × 10 3 2.06 × 10 3
STD 2.10 × 10 2 2.92 × 10 2 5.72 × 10 2 3.16 × 10 2 1.99 × 10 2 4.04 × 10 1 1.85 × 10 2 3.00 × 10 2 1.60 × 10 2 1.59 × 10 2 1.49 × 10 2
SEM 9.37 × 10 1 1.31 × 10 2 2.56 × 10 2 1.41 × 10 2 8.92 × 10 1 1.81 × 10 1 8.26 × 10 1 1.34 × 10 2 7.13 × 10 1 7.10 × 10 1 6.67 × 10 1
F11Mean 2.50 × 10 3 2.12 × 10 3 2.64 × 10 3 2.17 × 10 3 2.45 × 10 3 2.54 × 10 3 1.94 × 10 3 2.32 × 10 3 1.66 × 10 3 1.65 × 10 3 2.46 × 10 3
STD 2.07 × 10 2 2.45 × 10 2 1.75 × 10 2 3.02 × 10 2 3.55 × 10 2 8.54 × 10 1 2.94 × 10 2 2.72 × 10 2 1.86 × 10 2 2.36 × 10 2 1.46 × 10 2
SEM 9.27 × 10 1 1.09 × 10 2 7.82 × 10 1 1.35 × 10 2 1.59 × 10 2 3.82 × 10 1 1.31 × 10 2 1.22 × 10 2 8.33 × 10 1 1.05 × 10 2 6.52 × 10 1
F12Mean 1.20 × 10 3 1.20 × 10 3 1.20 × 10 3 1.20 × 10 3 1.20 × 10 3 1.20 × 10 3 1.20 × 10 3 1.20 × 10 3 1.20 × 10 3 1.20 × 10 3 1.20 × 10 3
STD 2.65 × 10 1 4.23 × 10 1 9.94 × 10 1 3.45 × 10 1 4.12 × 10 1 2.47 × 10 1 1.34 × 10 1 9.91 × 10 2 5.79 × 10 1 3.61 × 10 2 4.37 × 10 1
SEM 1.19 × 10 1 1.89 × 10 1 4.45 × 10 1 1.54 × 10 1 1.84 × 10 1 1.10 × 10 1 5.98 × 10 2 4.43 × 10 2 2.59 × 10 1 1.61 × 10 2 1.95 × 10 1
F13Mean 1.30 × 10 3 1.30 × 10 3 1.30 × 10 3 1.30 × 10 3 1.30 × 10 3 1.30 × 10 3 1.30 × 10 3 1.30 × 10 3 1.30 × 10 3 1.30 × 10 3 1.30 × 10 3
STD 1.05 × 10 1 3.60 × 10 2 5.30 × 10 1 1.65 × 10 1 2.71 × 10 1 4.53 × 10 1 7.94 × 10 1 1.70 × 10 1 8.28 × 10 2 4.65 × 10 2 9.72 × 10 1
SEM 4.71 × 10 2 1.61 × 10 2 2.37 × 10 1 7.37 × 10 2 1.21 × 10 1 2.03 × 10 1 3.55 × 10 1 7.59 × 10 2 3.70 × 10 2 2.08 × 10 2 4.34 × 10 1
F14Mean 1.40 × 10 3 1.40 × 10 3 1.43 × 10 3 1.40 × 10 3 1.43 × 10 3 1.44 × 10 3 1.41 × 10 3 1.40 × 10 3 1.40 × 10 3 1.40 × 10 3 1.41 × 10 3
STD 2.24 × 10 1 1.90 × 10 1 8.49 × 10 0 4.23 × 10 2 7.25 × 10 0 1.39 × 10 1 3.37 × 10 0 2.42 × 10 1 1.82 × 10 1 3.05 × 10 1 2.04 × 10 0
SEM 1.00 × 10 1 8.49 × 10 2 3.80 × 10 0 1.89 × 10 2 3.24 × 10 0 6.23 × 10 0 1.51 × 10 0 1.08 × 10 1 8.13 × 10 2 1.36 × 10 1 9.13 × 10 1
F15Mean 1.50 × 10 3 1.50 × 10 3 6.74 × 10 3 1.51 × 10 3 2.41 × 10 3 5.59 × 10 3 1.66 × 10 3 1.51 × 10 3 1.52 × 10 3 1.50 × 10 3 2.84 × 10 3
STD 1.00 × 10 0 1.46 × 10 0 6.85 × 10 3 2.01 × 10 0 9.58 × 10 2 2.46 × 10 3 6.71 × 10 1 9.85 × 10 1 4.71 × 10 1 4.51 × 10 1 1.09 × 10 3
SEM 4.47 × 10 1 6.51 × 10 1 3.06 × 10 3 8.99 × 10 1 4.29 × 10 2 1.10 × 10 3 3.00 × 10 1 4.41 × 10 1 2.11 × 10 1 2.02 × 10 1 4.86 × 10 2
Table A4. CEC2014 Results F16–F30.
FunStatisticsSCAAHASHIOGWOWOABOAOHOAOASCAGJOSHORSA
F16Mean 1.60 × 10 3 1.60 × 10 3 1.60 × 10 3 1.60 × 10 3 1.60 × 10 3 1.60 × 10 3 1.60 × 10 3 1.60 × 10 3 1.60 × 10 3 1.60 × 10 3 1.60 × 10 3
STD 2.24 × 10 1 2.16 × 10 1 2.02 × 10 1 4.98 × 10 1 3.11 × 10 1 1.10 × 10 1 2.51 × 10 1 2.01 × 10 1 5.60 × 10 1 2.90 × 10 1 1.63 × 10 1
SEM 1.00 × 10 1 9.68 × 10 2 9.03 × 10 2 2.23 × 10 1 1.39 × 10 1 4.93 × 10 2 1.12 × 10 1 8.98 × 10 2 2.50 × 10 1 1.30 × 10 1 7.31 × 10 2
F17Mean 1.62 × 10 4 7.72 × 10 4 1.08 × 10 6 2.51 × 10 5 5.33 × 10 4 3.97 × 10 5 1.97 × 10 5 4.40 × 10 4 6.51 × 10 3 5.64 × 10 4 5.06 × 10 5
STD 9.08 × 10 3 1.54 × 10 5 1.49 × 10 6 3.39 × 10 5 5.65 × 10 4 3.42 × 10 4 7.55 × 10 4 4.20 × 10 4 2.83 × 10 3 5.36 × 10 4 1.29 × 10 5
SEM 4.06 × 10 3 6.91 × 10 4 6.68 × 10 5 1.52 × 10 5 2.53 × 10 4 1.53 × 10 4 3.38 × 10 4 1.88 × 10 4 1.27 × 10 3 2.40 × 10 4 5.79 × 10 4
F18Mean 1.27 × 10 4 8.51 × 10 3 9.68 × 10 5 1.23 × 10 4 1.59 × 10 4 3.00 × 10 4 1.53 × 10 4 2.17 × 10 4 1.52 × 10 4 1.04 × 10 4 2.28 × 10 5
STD 3.54 × 10 3 4.08 × 10 3 1.17 × 10 6 6.31 × 10 3 6.99 × 10 3 1.95 × 10 4 1.04 × 10 4 1.39 × 10 4 6.57 × 10 3 4.72 × 10 3 4.68 × 10 5
SEM 1.58 × 10 3 1.82 × 10 3 5.22 × 10 5 2.82 × 10 3 3.13 × 10 3 8.74 × 10 3 4.64 × 10 3 6.20 × 10 3 2.94 × 10 3 2.11 × 10 3 2.09 × 10 5
F19Mean 1.90 × 10 3 1.90 × 10 3 1.95 × 10 3 1.91 × 10 3 1.92 × 10 3 1.95 × 10 3 1.94 × 10 3 1.90 × 10 3 1.90 × 10 3 1.90 × 10 3 1.93 × 10 3
STD 7.98 × 10 1 1.09 × 10 0 2.66 × 10 1 1.42 × 10 0 1.46 × 10 1 1.60 × 10 1 2.43 × 10 1 2.79 × 10 1 6.13 × 10 1 1.77 × 10 0 1.76 × 10 1
SEM 3.57 × 10 1 4.89 × 10 1 1.19 × 10 1 6.36 × 10 1 6.53 × 10 0 7.14 × 10 0 1.09 × 10 1 1.25 × 10 1 2.74 × 10 1 7.90 × 10 1 7.86 × 10 0
F20Mean 4.75 × 10 3 4.83 × 10 3 6.71 × 10 4 6.08 × 10 3 5.68 × 10 3 9.24 × 10 3 6.41 × 10 3 3.42 × 10 3 6.88 × 10 3 9.20 × 10 3 1.03 × 10 4
STD 2.71 × 10 3 4.14 × 10 3 4.45 × 10 4 4.47 × 10 3 2.06 × 10 3 1.77 × 10 3 2.47 × 10 3 1.11 × 10 3 3.48 × 10 3 3.41 × 10 3 5.73 × 10 3
SEM 1.21 × 10 3 1.85 × 10 3 1.99 × 10 4 2.00 × 10 3 9.20 × 10 2 7.90 × 10 2 1.11 × 10 3 4.98 × 10 2 1.56 × 10 3 1.52 × 10 3 2.56 × 10 3
F21Mean 9.21 × 10 3 1.03 × 10 4 1.48 × 10 6 4.55 × 10 4 5.50 × 10 4 1.55 × 10 6 7.63 × 10 3 8.80 × 10 3 5.41 × 10 3 8.74 × 10 3 1.40 × 10 5
STD 2.84 × 10 3 5.58 × 10 3 1.65 × 10 6 5.55 × 10 4 3.56 × 10 4 1.35 × 10 6 3.40 × 10 3 6.31 × 10 3 2.17 × 10 3 4.13 × 10 3 1.49 × 10 5
SEM 1.27 × 10 3 2.50 × 10 3 7.37 × 10 5 2.48 × 10 4 1.59 × 10 4 6.03 × 10 5 1.52 × 10 3 2.82 × 10 3 9.72 × 10 2 1.85 × 10 3 6.65 × 10 4
F22Mean 2.28 × 10 3 2.34 × 10 3 2.51 × 10 3 2.27 × 10 3 2.40 × 10 3 2.54 × 10 3 2.38 × 10 3 2.27 × 10 3 2.34 × 10 3 2.28 × 10 3 2.32 × 10 3
STD 5.13 × 10 1 6.49 × 10 1 1.37 × 10 2 5.81 × 10 1 3.57 × 10 1 8.80 × 10 1 2.89 × 10 1 2.39 × 10 1 4.96 × 10 1 6.67 × 10 1 4.62 × 10 1
SEM 2.29 × 10 1 2.90 × 10 1 6.12 × 10 1 2.60 × 10 1 1.60 × 10 1 3.94 × 10 1 1.29 × 10 1 1.07 × 10 1 2.22 × 10 1 2.98 × 10 1 2.06 × 10 1
F23Mean 2.50 × 10 3 2.64 × 10 3 2.69 × 10 3 2.63 × 10 3 2.50 × 10 3 2.52 × 10 3 2.50 × 10 3 2.64 × 10 3 2.61 × 10 3 2.63 × 10 3 2.50 × 10 3
STD 4.72 × 10 2 8.95 × 10 0 3.55 × 10 1 1.37 × 10 0 0.00 × 10 0 1.71 × 10 0 0.00 × 10 0 3.12 × 10 0 6.39 × 10 1 4.32 × 10 0 0.00 × 10 0
SEM 2.11 × 10 2 4.00 × 10 0 1.59 × 10 1 6.13 × 10 1 0.00 × 10 0 7.63 × 10 1 0.00 × 10 0 1.40 × 10 0 2.86 × 10 1 1.93 × 10 0 0.00 × 10 0
F24Mean 2.54 × 10 3 2.57 × 10 3 2.60 × 10 3 2.57 × 10 3 2.58 × 10 3 2.60 × 10 3 2.59 × 10 3 2.55 × 10 3 2.54 × 10 3 2.56 × 10 3 2.60 × 10 3
STD 1.33 × 10 1 4.02 × 10 1 3.84 × 10 1 2.25 × 10 1 2.34 × 10 1 2.76 × 10 1 3.10 × 10 1 1.03 × 10 1 5.80 × 10 0 2.61 × 10 1 4.25 × 10 0
SEM 5.96 × 10 0 1.80 × 10 1 1.72 × 10 1 1.01 × 10 1 1.05 × 10 1 1.23 × 10 1 1.39 × 10 1 4.59 × 10 0 2.60 × 10 0 1.17 × 10 1 1.90 × 10 0
F25Mean 2.70 × 10 3 2.70 × 10 3 2.70 × 10 3 2.70 × 10 3 2.69 × 10 3 2.70 × 10 3 2.69 × 10 3 2.69 × 10 3 2.68 × 10 3 2.70 × 10 3 2.70 × 10 3
STD 0.00 × 10 0 1.47 × 10 0 4.10 × 10 0 2.13 × 10 0 1.64 × 10 1 6.94 × 10 2 1.37 × 10 1 2.14 × 10 1 2.57 × 10 1 7.58 × 10 0 1.25 × 10 0
SEM 0.00 × 10 0 6.57 × 10 1 1.83 × 10 0 9.52 × 10 1 7.32 × 10 0 3.10 × 10 2 6.13 × 10 0 9.58 × 10 0 1.15 × 10 1 3.39 × 10 0 5.59 × 10 1
F26Mean 2.70 × 10 3 2.70 × 10 3 2.70 × 10 3 2.70 × 10 3 2.70 × 10 3 2.71 × 10 3 2.70 × 10 3 2.70 × 10 3 2.70 × 10 3 2.70 × 10 3 2.70 × 10 3
STD 8.70 × 10 2 5.10 × 10 2 4.53 × 10 0 2.05 × 10 1 9.38 × 10 1 2.16 × 10 0 5.00 × 10 1 1.57 × 10 1 4.38 × 10 2 1.29 × 10 1 8.62 × 10 1
SEM 3.89 × 10 2 2.28 × 10 2 2.03 × 10 0 9.18 × 10 2 4.20 × 10 1 9.64 × 10 1 2.24 × 10 1 7.00 × 10 2 1.96 × 10 2 5.77 × 10 2 3.85 × 10 1
F27Mean 2.91 × 10 3 3.11 × 10 3 3.11 × 10 3 3.04 × 10 3 2.84 × 10 3 2.94 × 10 3 2.89 × 10 3 2.95 × 10 3 3.01 × 10 3 2.95 × 10 3 3.05 × 10 3
STD 1.75 × 10 2 3.60 × 10 1 1.61 × 10 2 1.91 × 10 2 1.56 × 10 2 8.80 × 10 0 2.48 × 10 1 2.14 × 10 2 1.72 × 10 2 2.03 × 10 2 1.36 × 10 2
SEM 7.81 × 10 1 1.61 × 10 1 7.19 × 10 1 8.53 × 10 1 6.96 × 10 1 3.93 × 10 0 1.11 × 10 1 9.58 × 10 1 7.70 × 10 1 9.09 × 10 1 6.08 × 10 1
F28Mean 3.15 × 10 3 3.45 × 10 3 3.53 × 10 3 3.38 × 10 3 3.31 × 10 3 3.05 × 10 3 3.00 × 10 3 3.24 × 10 3 3.33 × 10 3 3.36 × 10 3 3.36 × 10 3
STD 9.69 × 10 1 1.86 × 10 2 1.39 × 10 2 1.68 × 10 2 1.90 × 10 2 7.84 × 10 0 0.00 × 10 0 4.44 × 10 0 7.70 × 10 1 7.27 × 10 1 2.33 × 10 2
SEM 4.33 × 10 1 8.31 × 10 1 6.21 × 10 1 7.50 × 10 1 8.50 × 10 1 3.51 × 10 0 0.00 × 10 0 1.99 × 10 0 3.45 × 10 1 3.25 × 10 1 1.04 × 10 2
F29Mean 4.13 × 10 3 2.04 × 10 6 1.21 × 10 7 3.65 × 10 3 3.25 × 10 3 1.06 × 10 7 3.99 × 10 3 8.35 × 10 3 4.24 × 10 5 4.25 × 10 5 4.43 × 10 4
STD 6.22 × 10 2 4.56 × 10 6 1.87 × 10 7 1.65 × 10 2 2.05 × 10 2 5.29 × 10 6 1.08 × 10 3 5.24 × 10 3 9.39 × 10 5 9.42 × 10 5 7.62 × 10 4
SEM 2.78 × 10 2 2.04 × 10 6 8.35 × 10 6 7.37 × 10 1 9.18 × 10 1 2.36 × 10 6 4.84 × 10 2 2.34 × 10 3 4.20 × 10 5 4.21 × 10 5 3.41 × 10 4
F30Mean 4.37 × 10 3 5.14 × 10 3 2.61 × 10 4 5.10 × 10 3 7.27 × 10 3 7.50 × 10 4 5.98 × 10 3 4.41 × 10 3 4.40 × 10 3 4.63 × 10 3 5.12 × 10 4
STD 3.37 × 10 2 3.42 × 10 2 2.74 × 10 4 9.74 × 10 2 1.48 × 10 3 6.49 × 10 4 2.58 × 10 3 1.25 × 10 2 6.29 × 10 2 5.28 × 10 2 9.50 × 10 4
SEM 1.51 × 10 2 1.53 × 10 2 1.22 × 10 4 4.36 × 10 2 6.62 × 10 2 2.90 × 10 4 1.15 × 10 3 5.61 × 10 1 2.81 × 10 2 2.36 × 10 2 4.25 × 10 4
17292023272021161825

Appendix A.3. Results Analysis over CEC2022

Table A5. Results over CEC2022.
FunStatisticsSCAAHAHLOAHGSOGJOBATSHOSHIOWOABOAMTDESCA
F1Mean 1.51 × 10 3 3.01 × 10 2 4.91 × 10 3 3.79 × 10 3 1.08 × 10 4 2.33 × 10 3 4.64 × 10 3 1.95 × 10 4 1.01 × 10 4 2.08 × 10 4 1.52 × 10 3
STD 7.04 × 10 2 3.23 × 10 0 1.28 × 10 3 3.92 × 10 3 1.28 × 10 2 1.72 × 10 3 2.84 × 10 3 1.10 × 10 4 2.01 × 10 3 9.52 × 10 3 1.04 × 10 3
SEM 2.23 × 10 2 1.02 × 10 0 4.05 × 10 2 1.24 × 10 3 4.06 × 10 1 5.43 × 10 2 8.97 × 10 2 3.49 × 10 3 6.36 × 10 2 3.01 × 10 3 3.30 × 10 2
F2Mean 4.34 × 10 2 4.16 × 10 2 4.91 × 10 2 4.35 × 10 2 7.61 × 10 3 4.23 × 10 2 4.42 × 10 2 4.25 × 10 2 9.90 × 10 2 6.09 × 10 2 4.70 × 10 2
STD 1.80 × 10 1 1.85 × 10 1 1.96 × 10 1 2.33 × 10 1 5.03 × 10 1 3.24 × 10 1 2.79 × 10 1 3.07 × 10 1 6.51 × 10 2 1.25 × 10 2 1.58 × 10 1
SEM 5.69 × 10 0 5.86 × 10 0 6.20 × 10 0 7.37 × 10 0 1.59 × 10 1 1.02 × 10 1 8.83 × 10 0 9.70 × 10 0 2.06 × 10 2 3.95 × 10 1 4.99 × 10 0
F3Mean 6.04 × 10 2 6.43 × 10 2 6.25 × 10 2 6.08 × 10 2 7.00 × 10 2 6.15 × 10 2 6.04 × 10 2 6.45 × 10 2 6.46 × 10 2 6.36 × 10 2 6.19 × 10 2
STD 8.71 × 10 1 1.50 × 10 1 6.35 × 10 0 6.21 × 10 0 2.57 × 10 0 6.70 × 10 0 4.53 × 10 0 9.98 × 10 0 6.20 × 10 0 9.61 × 10 0 3.66 × 10 0
SEM 2.75 × 10 1 4.76 × 10 0 2.01 × 10 0 1.97 × 10 0 8.11 × 10 1 2.12 × 10 0 1.43 × 10 0 3.16 × 10 0 1.96 × 10 0 3.04 × 10 0 1.16 × 10 0
F4Mean 8.26 × 10 2 8.52 × 10 2 8.34 × 10 2 8.28 × 10 2 9.02 × 10 2 8.25 × 10 2 8.16 × 10 2 8.43 × 10 2 8.52 × 10 2 8.74 × 10 2 8.40 × 10 2
STD 3.45 × 10 0 1.56 × 10 1 5.25 × 10 0 9.65 × 10 0 1.19 × 10 0 9.00 × 10 0 7.61 × 10 0 1.74 × 10 1 8.57 × 10 0 7.60 × 10 0 9.45 × 10 0
SEM 1.09 × 10 0 4.93 × 10 0 1.66 × 10 0 3.05 × 10 0 3.77 × 10 1 2.85 × 10 0 2.41 × 10 0 5.50 × 10 0 2.71 × 10 0 2.40 × 10 0 2.99 × 10 0
F5 Mean 9.14 × 10 2 1.46 × 10 3 1.01 × 10 3 9.83 × 10 2 2.62 × 10 3 1.07 × 10 3 9.23 × 10 2 1.44 × 10 3 1.50 × 10 3 1.87 × 10 3 1.01 × 10 3
STD 1.59 × 10 1 2.78 × 10 2 3.92 × 10 1 8.99 × 10 1 1.51 × 10 2 1.03 × 10 2 2.78 × 10 1 2.65 × 10 2 1.56 × 10 2 3.12 × 10 2 6.55 × 10 1
SEM 5.01 × 10 0 8.80 × 10 1 1.24 × 10 1 2.84 × 10 1 4.77 × 10 1 3.24 × 10 1 8.78 × 10 0 8.39 × 10 1 4.92 × 10 1 9.87 × 10 1 2.07 × 10 1
F6 Mean 9.87 × 10 4 2.49 × 10 3 2.69 × 10 6 8.78 × 10 3 2.11 × 10 8 4.04 × 10 3 3.40 × 10 3 3.88 × 10 3 5.80 × 10 7 3.40 × 10 7 2.49 × 10 6
STD 5.23 × 10 4 1.42 × 10 3 2.03 × 10 6 3.66 × 10 3 1.37 × 10 7 1.84 × 10 3 1.72 × 10 3 1.70 × 10 3 2.95 × 10 7 2.46 × 10 7 2.74 × 10 6
SEM 1.65 × 10 4 4.50 × 10 2 6.43 × 10 5 1.16 × 10 3 4.33 × 10 6 5.82 × 10 2 5.45 × 10 2 5.38 × 10 2 9.31 × 10 6 7.79 × 10 6 8.66 × 10 5
F7 Mean 2.04 × 10 3 2.13 × 10 3 2.07 × 10 3 2.05 × 10 3 2.81 × 10 3 2.04 × 10 3 2.05 × 10 3 2.08 × 10 3 2.12 × 10 3 2.11 × 10 3 2.06 × 10 3
STD 6.43 × 10 0 3.35 × 10 1 5.35 × 10 0 1.49 × 10 1 1.58 × 10 2 1.48 × 10 1 3.41 × 10 1 2.90 × 10 1 2.72 × 10 1 7.12 × 10 0 1.32 × 10 1
SEM 2.03 × 10 0 1.06 × 10 1 1.69 × 10 0 4.70 × 10 0 5.01 × 10 1 4.69 × 10 0 1.08 × 10 1 9.17 × 10 0 8.61 × 10 0 2.25 × 10 0 4.16 × 10 0
F8 Mean 2.23 × 10 3 2.29 × 10 3 2.23 × 10 3 2.22 × 10 3 3.20 × 10 3 2.22 × 10 3 2.24 × 10 3 2.23 × 10 3 2.27 × 10 3 2.25 × 10 3 2.23 × 10 3
STD 1.48 × 10 0 6.17 × 10 1 3.51 × 10 0 7.58 × 10 0 2.52 × 10 2 2.76 × 10 0 3.77 × 10 1 5.43 × 10 0 4.90 × 10 1 1.10 × 10 1 1.78 × 10 0
SEM 4.68 × 10 1 1.95 × 10 1 1.11 × 10 0 2.40 × 10 0 7.96 × 10 1 8.72 × 10 1 1.19 × 10 1 1.72 × 10 0 1.55 × 10 1 3.49 × 10 0 5.62 × 10 1
F9 Mean 2.55 × 10 3 2.53 × 10 3 2.63 × 10 3 2.60 × 10 3 3.05 × 10 3 2.60 × 10 3 2.60 × 10 3 2.59 × 10 3 2.75 × 10 3 2.67 × 10 3 2.58 × 10 3
STD 7.81 × 10 0 6.35 × 10 0 3.19 × 10 1 2.19 × 10 1 7.45 × 10 0 2.16 × 10 1 4.28 × 10 1 5.93 × 10 1 6.30 × 10 1 2.56 × 10 1 1.23 × 10 1
SEM 2.47 × 10 0 2.01 × 10 0 1.01 × 10 1 6.92 × 10 0 2.36 × 10 0 6.84 × 10 0 1.35 × 10 1 1.87 × 10 1 1.99 × 10 1 8.08 × 10 0 3.90 × 10 0
F10 Mean 2.50 × 10 3 2.78 × 10 3 2.52 × 10 3 2.55 × 10 3 5.17 × 10 3 2.53 × 10 3 2.57 × 10 3 2.58 × 10 3 2.66 × 10 3 2.59 × 10 3 2.52 × 10 3
STD 5.99 × 10 2 4.49 × 10 2 4.68 × 10 1 6.25 × 10 1 5.02 × 10 1 5.31 × 10 1 5.95 × 10 1 1.18 × 10 2 8.70 × 10 1 7.86 × 10 1 4.59 × 10 1
SEM 1.89 × 10 2 1.42 × 10 2 1.48 × 10 1 1.98 × 10 1 1.59 × 10 1 1.68 × 10 1 1.88 × 10 1 3.74 × 10 1 2.75 × 10 1 2.48 × 10 1 1.45 × 10 1
F11 Mean 2.74 × 10 3 2.83 × 10 3 2.79 × 10 3 2.95 × 10 3 5.13 × 10 3 2.81 × 10 3 2.84 × 10 3 2.84 × 10 3 3.46 × 10 3 3.01 × 10 3 2.78 × 10 3
STD 1.16 × 10 1 2.14 × 10 2 1.27 × 10 1 2.53 × 10 2 7.08 × 10 0 1.85 × 10 2 2.62 × 10 2 2.25 × 10 2 5.53 × 10 2 1.14 × 10 2 7.06 × 10 0
SEM 3.65 × 10 0 6.76 × 10 1 4.01 × 10 0 8.02 × 10 1 2.24 × 10 0 5.85 × 10 1 8.30 × 10 1 7.12 × 10 1 1.75 × 10 2 3.60 × 10 1 2.23 × 10 0
F12 Mean 2.87 × 10 3 2.91 × 10 3 2.90 × 10 3 2.87 × 10 3 4.75 × 10 3 2.90 × 10 3 2.88 × 10 3 2.89 × 10 3 2.99 × 10 3 2.92 × 10 3 2.87 × 10 3
STD 8.33 × 10 1 4.00 × 10 1 4.00 × 10 0 5.36 × 10 0 1.71 × 10 1 1.84 × 10 1 2.16 × 10 1 3.41 × 10 1 8.57 × 10 1 1.64 × 10 1 1.42 × 10 0
SEM 2.63 × 10 1 1.27 × 10 1 1.27 × 10 0 1.69 × 10 0 5.42 × 10 0 5.81 × 10 0 6.83 × 10 0 1.08 × 10 1 2.71 × 10 1 5.17 × 10 0 4.49 × 10 1
81291281010121212
Table A6. CEC2017 Wilcoxon rank-sum test results.
Function SHIO GWO WOA BOA HHO OHO AOA SCA GJO SHO RSA
F1 3.02 × 10^-11 0.258051 3.02 × 10^-11 2.37 × 10^-10 3.02 × 10^-11 7.39 × 10^-11 0.000178 3.02 × 10^-11 4.08 × 10^-5 3.5 × 10^-9 3.02 × 10^-11
U: 465.0000 U: 992.0000 U: 465.0000 U: 486.0000 U: 465.0000 U: 474.0000 U: 661.0000 U: 465.0000 U: 637.0000 U: 515.0000 U: 465.0000
+=+++++++++
F2 3.02 × 10^-11 8.2 × 10^-7 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 6.7 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 0.079782 4.08 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 1249.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 473.0000 U: 465.0000 U: 465.0000 U: 796.0000 U: 468.0000 U: 465.0000
+++++++=++
F3 3.02 × 10^-11 0.549327 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 4.98 × 10^-11 3.35 × 10^-8 3.02 × 10^-11
U: 465.0000 U: 874.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 1360.0000 U: 541.0000 U: 465.0000
+=++++++++
F4 1.96 × 10^-10 0.000337 3.02 × 10^-11 1.31 × 10^-8 3.02 × 10^-11 3.2 × 10^-9 3.02 × 10^-11 3.02 × 10^-11 2.39 × 10^-8 4.62 × 10^-10 3.02 × 10^-11
U: 484.0000 U: 1158.0000 U: 465.0000 U: 530.0000 U: 465.0000 U: 514.0000 U: 465.0000 U: 465.0000 U: 537.0000 U: 493.0000 U: 465.0000
++++++++++
F5 1.21 × 10^-10 0.012212 3.02 × 10^-11 1.55 × 10^-9 3.02 × 10^-11 0.003034 0.549327 3.02 × 10^-11 6.69 × 10^-11 4.98 × 10^-11 0.162375
U: 479.0000 U: 1085.0000 U: 465.0000 U: 506.0000 U: 465.0000 U: 714.0000 U: 956.0000 U: 465.0000 U: 473.0000 U: 470.0000 U: 820.0000
+++++=+++=
F6 3.02 × 10^-11 4.12 × 10^-6 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 9.83 × 10^-8 3.01 × 10^-11 0.379036 5.57 × 10^-10 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 1227.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 554.0000 U: 1365.0000 U: 855.0000 U: 495.0000 U: 465.0000 U: 465.0000
++++++=+++
F7 3.02 × 10^-11 3.16 × 10^-5 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 5.57 × 10^-10 0.673495 2.23 × 10^-9 0.935192 3.02 × 10^-11 0.010315
U: 465.0000 U: 1197.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 495.0000 U: 944.0000 U: 510.0000 U: 909.0000 U: 465.0000 U: 741.0000
+++++=+=++
F8 2.61 × 10^-10 3.83 × 10^-5 3.02 × 10^-11 3.16 × 10^-10 3.02 × 10^-11 5.09 × 10^-8 0.695215 6.12 × 10^-10 0.002051 3.02 × 10^-11 0.046756
U: 487.0000 U: 1194.0000 U: 465.0000 U: 489.0000 U: 465.0000 U: 546.0000 U: 942.0000 U: 496.0000 U: 706.0000 U: 465.0000 U: 1050.0000
+++++=+++-
F9 3.02 × 10^-11 0.003848 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 2.99 × 10^-11 1.21 × 10^-12 5.26 × 10^-9 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 719.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 1365.0000 U: 1365.0000 U: 1305.0000 U: 465.0000 U: 465.0000
+++++++++
F10 0.002052 3.35 × 10^-8 3.02 × 10^-11 5.53 × 10^-8 4.98 × 10^-11 0.000691 0.325527 3.02 × 10^-11 0.22823 1.31 × 10^-8 0.026077
U: 706.0000 U: 1289.0000 U: 465.0000 U: 547.0000 U: 470.0000 U: 685.0000 U: 982.0000 U: 465.0000 U: 833.0000 U: 530.0000 U: 1066.0000
+++++=+=+-
F11 3.02 × 10^-11 0.09049 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 6.07 × 10^-11 0.020681 3.02 × 10^-11 3.69 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 1030.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 472.0000 U: 1072.0000 U: 465.0000 U: 467.0000 U: 465.0000 U: 465.0000
+=++++++++
F12 3.02 × 10^-11 7.38 × 10^-10 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 4.08 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 1332.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 468.0000 U: 465.0000 U: 465.0000
++++++++++
F13 3.02 × 10^-11 4.57 × 10^-9 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 2.13 × 10^-5 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 1312.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 627.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
++++++++++
F14 3.02 × 10^-11 2.15 × 10^-6 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 1.17 × 10^-9 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 1236.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 1327.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++
F15 3.02 × 10^-11 7.39 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 6.12 × 10^-10 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 1356.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 1334.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++
F16 3.16 × 10^-10 1.55 × 10^-9 3.02 × 10^-11 3.34 × 10^-11 3.02 × 10^-11 0.074827 7.2 × 10^-5 3.02 × 10^-11 1.11 × 10^-6 3.02 × 10^-11 1.73 × 10^-6
U: 489.0000 U: 1324.0000 U: 465.0000 U: 466.0000 U: 465.0000 U: 794.0000 U: 1184.0000 U: 465.0000 U: 585.0000 U: 465.0000 U: 591.0000
++++=++++
F17 9.92 × 10^-11 0.023243 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 4.12 × 10^-6 1.21 × 10^-10 3.02 × 10^-11 7.38 × 10^-10 4.08 × 10^-11 4.2 × 10^-10
U: 477.0000 U: 1069.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 603.0000 U: 1351.0000 U: 465.0000 U: 498.0000 U: 468.0000 U: 492.0000
+++++++++
F18 3.02 × 10^-11 8.89 × 10^-10 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 4.31 × 10^-8 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 1330.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 1286.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++
F19 3.02 × 10^-11 3.82 × 10^-9 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 1314.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 1365.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++
F20 3.02 × 10^-11 0.000691 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 2.57 × 10^-7 5.57 × 10^-10 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 685.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 566.0000 U: 1335.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
++++++++++
F21 0.000655 4.98 × 10^-11 3.83 × 10^-5 2.67 × 10^-9 3.52 × 10^-7 0.001953 9.06 × 10^-8 0.286074 0.000111 3.02 × 10^-11 0.005084
U: 684.0000 U: 1360.0000 U: 636.0000 U: 512.0000 U: 570.0000 U: 705.0000 U: 553.0000 U: 845.0000 U: 653.0000 U: 465.0000 U: 725.0000
++++++=+++
F22 5.07 × 10^-10 3.83 × 10^-6 3.02 × 10^-11 1.6 × 10^-7 3.02 × 10^-11 0.059428 3.82 × 10^-9 0.001028 0.045146 5.07 × 10^-10 6.52 × 10^-9
U: 494.0000 U: 1228.0000 U: 465.0000 U: 560.0000 U: 465.0000 U: 787.0000 U: 1314.0000 U: 1132.0000 U: 779.0000 U: 494.0000 U: 522.0000
++++=+++++
F23 1.33 × 10^-10 0.652044 3.02 × 10^-11 4.98 × 10^-11 3.02 × 10^-11 0.000178 0.000268 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 0.501144
U: 480.0000 U: 946.0000 U: 465.0000 U: 470.0000 U: 465.0000 U: 661.0000 U: 668.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 869.0000
+=++++++++=
F24 3.26 × 10^-7 3.5 × 10^-9 3.02 × 10^-11 5 × 10^-9 3.16 × 10^-10 3.65 × 10^-8 3.16 × 10^-10 3.02 × 10^-11 0.000399 6.53 × 10^-8 0.000284
U: 569.0000 U: 1315.0000 U: 465.0000 U: 519.0000 U: 489.0000 U: 542.0000 U: 489.0000 U: 465.0000 U: 675.0000 U: 549.0000 U: 669.0000
++++++++++
F25 0.000284 5 × 10^-9 7.12 × 10^-9 0.118817 1.61 × 10^-10 0.085 0.363222 6.07 × 10^-11 0.228191 3.08 × 10^-8 0.000239
U: 669.0000 U: 1311.0000 U: 523.0000 U: 809.0000 U: 482.0000 U: 798.0000 U: 853.0000 U: 472.0000 U: 997.0000 U: 540.0000 U: 666.0000
++=+==+=++
F26 5.07 × 10^-10 0.001597 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 1.03 × 10^-6 4.44 × 10^-7 3.02 × 10^-11 0.464254 3.02 × 10^-11 9.06 × 10^-8
U: 494.0000 U: 1129.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 584.0000 U: 573.0000 U: 465.0000 U: 965.0000 U: 465.0000 U: 553.0000
+++++++=++
F27 3.02 × 10^-11 2.15 × 10^-10 3.16 × 10^-10 3.02 × 10^-11 3.34 × 10^-11 0.000377 1.64 × 10^-5 3.02 × 10^-11 1.36 × 10^-7 3.02 × 10^-11 0.195791
U: 465.0000 U: 1345.0000 U: 489.0000 U: 465.0000 U: 466.0000 U: 674.0000 U: 1207.0000 U: 465.0000 U: 558.0000 U: 465.0000 U: 827.0000
++++++++=
F28 7.6 × 10^-7 0.009069 0.006377 7.04 × 10^-7 2.23 × 10^-9 0.000691 0.000376 3.16 × 10^-10 0.684323 1.11 × 10^-6 1.19 × 10^-6
U: 580.0000 U: 1092.0000 U: 730.0000 U: 579.0000 U: 510.0000 U: 685.0000 U: 674.0000 U: 489.0000 U: 887.0000 U: 585.0000 U: 586.0000
+++++++=++
F29 7.39 × 10^-11 4.98 × 10^-11 3.02 × 10^-11 3.16 × 10^-10 3.02 × 10^-11 0.014412 0.56922 3.02 × 10^-11 1.6 × 10^-7 3.02 × 10^-11 0.079782
U: 474.0000 U: 1360.0000 U: 465.0000 U: 489.0000 U: 465.0000 U: 749.0000 U: 954.0000 U: 465.0000 U: 560.0000 U: 465.0000 U: 796.0000
+++++=+++=
F30 6.12 × 10^-10 0.000111 4.5 × 10^-11 3.82 × 10^-10 1.09 × 10^-10 1.2 × 10^-8 9.06 × 10^-8 1.2 × 10^-10 5.86 × 10^-6 1.55 × 10^-9 2.02 × 10^-8
U: 496.0000 U: 1177.0000 U: 469.0000 U: 491.0000 U: 478.0000 U: 529.0000 U: 553.0000 U: 497.0000 U: 608.0000 U: 506.0000 U: 535.0000
Total +: 30, −: 0, =: 0 | +: 3, −: 23, =: 4 | +: 30, −: 0, =: 0 | +: 29, −: 0, =: 1 | +: 30, −: 0, =: 0 | +: 27, −: 0, =: 3 | +: 13, −: 11, =: 6 | +: 27, −: 1, =: 2 | +: 22, −: 2, =: 6 | +: 30, −: 0, =: 0 | +: 24, −: 2, =: 4
Table A7. CEC2017 Wilcoxon rank-sum test results.
Function SHIO GWO WOA BOA HHO OHO AOA SCA GJO SHO RSA
F1 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F2 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F3 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F4 3.02 × 10^-11 3.02 × 10^-11 6.7 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 465.0000 U: 473.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F5 1.01 × 10^-8 0.059428 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 8.15 × 10^-11 3.02 × 10^-11 9.26 × 10^-9 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 527.0000 U: 787.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 475.0000 U: 465.0000 U: 526.0000 U: 465.0000 U: 465.0000 U: 465.0000
+=+++++++++
F6 3.34 × 10^-11 4.98 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 466.0000 U: 470.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F7 9.21 × 10^-5 0.001767 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.34 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 650.0000 U: 703.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 466.0000 U: 465.0000 U: 465.0000
+++++++++++
F8 1.07 × 10^-7 0.000284 3.69 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 4.08 × 10^-11 3.02 × 10^-11 6.07 × 10^-11 1.33 × 10^-10 3.02 × 10^-11 3.02 × 10^-11
U: 555.0000 U: 669.0000 U: 467.0000 U: 465.0000 U: 465.0000 U: 468.0000 U: 465.0000 U: 472.0000 U: 480.0000 U: 465.0000 U: 465.0000
+++++++++++
F9 3.69 × 10^-11 8.1 × 10^-10 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 467.0000 U: 499.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F10 1.7 × 10^-8 0.027086 2.87 × 10^-10 3.02 × 10^-11 3.02 × 10^-11 1.96 × 10^-10 3.02 × 10^-11 7.04 × 10^-7 4.69 × 10^-8 3.02 × 10^-11 3.02 × 10^-11
U: 533.0000 U: 765.0000 U: 488.0000 U: 465.0000 U: 465.0000 U: 484.0000 U: 465.0000 U: 579.0000 U: 545.0000 U: 465.0000 U: 465.0000
+++++++++++
F11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F12 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F13 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F14 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F15 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F16 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F17 4.08 × 10^-11 9.92 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 7.39 × 10^-11 1.7 × 10^-8 3.02 × 10^-11 3.02 × 10^-11
U: 468.0000 U: 477.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 474.0000 U: 533.0000 U: 465.0000 U: 465.0000
+++++++++++
F18 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F19 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F20 5.49 × 10^-11 3.16 × 10^-10 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.69 × 10^-11 2.87 × 10^-10 3.02 × 10^-11 3.02 × 10^-11
U: 471.0000 U: 489.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 467.0000 U: 488.0000 U: 465.0000 U: 465.0000
+++++++++++
F21 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 5.07 × 10^-10
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 494.0000
+++++++++++
F22 1.01 × 10^-8 1.46 × 10^-10 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 2.37 × 10^-10 1.41 × 10^-9 1.41 × 10^-9 3.02 × 10^-11 3.02 × 10^-11
U: 527.0000 U: 481.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 486.0000 U: 505.0000 U: 505.0000 U: 465.0000 U: 465.0000
+++++++++++
F23 7.39 × 10^-11 0.074827 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 9.92 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 474.0000 U: 794.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 477.0000 U: 465.0000 U: 465.0000 U: 465.0000
+=+++++++++
F24 3.34 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 6.07 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.34 × 10^-11 4.5 × 10^-11 3.02 × 10^-11 8.15 × 10^-11
U: 466.0000 U: 465.0000 U: 465.0000 U: 472.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 466.0000 U: 469.0000 U: 465.0000 U: 475.0000
+++++++++++
F25 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F26 6.07 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 1.78 × 10^-10 3.02 × 10^-11 3.02 × 10^-11
U: 472.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 483.0000 U: 465.0000 U: 465.0000
+++++++++++
F27 3.02 × 10^-11 6.07 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 472.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F28 5.07 × 10^-10 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 494.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F29 6.07 × 10^-11 6.53 × 10^-8 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 2.87 × 10^-10 6.7 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 472.0000 U: 549.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 488.0000 U: 473.0000 U: 465.0000 U: 465.0000
+++++++++++
F30 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
Total +: 30, −: 0, =: 0 | +: 28, −: 0, =: 2 | +: 30, −: 0, =: 0 | +: 30, −: 0, =: 0 | +: 30, −: 0, =: 0 | +: 30, −: 0, =: 0 | +: 30, −: 0, =: 0 | +: 30, −: 0, =: 0 | +: 30, −: 0, =: 0 | +: 30, −: 0, =: 0 | +: 30, −: 0, =: 0
Table A8. CEC2022 Wilcoxon rank-sum test results.
Function SHIO GWO WOA BOA HHO OHO AOA SCA GJO SHO RSA
F1 2.36 × 10^-12 2.36 × 10^-12 2.36 × 10^-12 2.36 × 10^-12 2.36 × 10^-12 2.36 × 10^-12 2.36 × 10^-12 2.36 × 10^-12 2.36 × 10^-12 2.36 × 10^-12 2.36 × 10^-12
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F2 0.131804 0.174758 1.58 × 10^-11 3.04 × 10^-7 2.21 × 10^-7 0.002915 0.011514 0.270395 2.59 × 10^-6 1.25 × 10^-7 0.000462
U: 1016.0000 U: 824.0000 U: 465.0000 U: 573.0000 U: 569.0000 U: 716.0000 U: 1084.0000 U: 841.0000 U: 601.0000 U: 562.0000 U: 681.0000
==++++=+++
F3 1.24 × 10^-11 1.24 × 10^-11 1.24 × 10^-11 1.24 × 10^-11 1.24 × 10^-11 1.24 × 10^-11 1.24 × 10^-11 1.24 × 10^-11 1.24 × 10^-11 1.24 × 10^-11 1.24 × 10^-11
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F4 2.84 × 10^-11 2.84 × 10^-11 2.84 × 10^-11 2.84 × 10^-11 2.84 × 10^-11 2.84 × 10^-11 2.84 × 10^-11 2.84 × 10^-11 2.84 × 10^-11 2.84 × 10^-11 2.84 × 10^-11
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F5 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F6 6.07 × 10^-11 3.34 × 10^-11 3.02 × 10^-11 6.07 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 472.0000 U: 466.0000 U: 465.0000 U: 472.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F7 1.01 × 10^-8 2.23 × 10^-9 3.02 × 10^-11 3.34 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 5.49 × 10^-11 4.08 × 10^-11 3.02 × 10^-11 3.02 × 10^-11 3.02 × 10^-11
U: 527.0000 U: 510.0000 U: 465.0000 U: 466.0000 U: 465.0000 U: 465.0000 U: 471.0000 U: 468.0000 U: 465.0000 U: 465.0000 U: 465.0000
+++++++++++
F8 0.899995 0.05746 3.02 × 10^-11 1.29 × 10^-9 1.96 × 10^-10 1.29 × 10^-9 1.96 × 10^-10 3.2 × 10^-9 3.02 × 10^-11 3.02 × 10^-11 7.09 × 10^-8
U: 924.0000 U: 786.0000 U: 465.0000 U: 504.0000 U: 484.0000 U: 504.0000 U: 484.0000 U: 514.0000 U: 465.0000 U: 465.0000 U: 550.0000
==+++++++++
F9 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 1.21 × 10^-12 0.000145
U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 465.0000 U: 735.0000
+++++++++++
F10 2.83 × 10^-8 3.09 × 10^-6 5.57 × 10^-10 3.16 × 10^-10 1.96 × 10^-10 3.08 × 10^-8 2.49 × 10^-6 2.61 × 10^-10 1.33 × 10^-10 3.16 × 10^-10 1.69 × 10^-9
U: 539.0000 U: 599.0000 U: 495.0000 U: 489.0000 U: 484.0000 U: 540.0000 U: 596.0000 U: 487.0000 U: 480.0000 U: 489.0000 U: 507.0000
++++++++++
F11 7.45 × 10^-9 3.03 × 10^-9 1.78 × 10^-11 1.62 × 10^-10 6.02 × 10^-11 3.2 × 10^-10 1.62 × 10^-10 1.97 × 10^-10 9.91 × 10^-11 6.02 × 10^-11 1.09 × 10^-10
U: 528.0000 U: 518.0000 U: 465.0000 U: 487.0000 U: 477.0000 U: 494.0000 U: 487.0000 U: 489.0000 U: 482.0000 U: 477.0000 U: 483.0000
+++++++++++
F12 3.3 × 10^-11 4.07 × 10^-5 2.99 × 10^-11 3.13 × 10^-10 3.44 × 10^-10 9.76 × 10^-8 6 × 10^-11 1.72 × 10^-6 2.99 × 10^-11 6.06 × 10^-10 0.673367
U: 466.0000 U: 637.0000 U: 465.0000 U: 489.0000 U: 490.0000 U: 554.0000 U: 472.0000 U: 591.0000 U: 465.0000 U: 496.0000 U: 886.0000
Total +: 10, −: 0, =: 2 | +: 10, −: 0, =: 2 | +: 12, −: 0, =: 0 | +: 12, −: 0, =: 0 | +: 12, −: 0, =: 0 | +: 12, −: 0, =: 0 | +: 10, −: 2, =: 0 | +: 11, −: 0, =: 1 | +: 12, −: 0, =: 0 | +: 12, −: 0, =: 0 | +: 11, −: 0, =: 1
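The p-values, U statistics, and +/−/= verdicts reported in Tables A6–A8 follow the standard Wilcoxon rank-sum (Mann–Whitney U) procedure over repeated independent runs. As a hedged illustration only (not the authors' exact script), the sketch below compares two hypothetical samples of 30 final objective values with SciPy at a 5% significance level. It also explains the recurring value 3.02 × 10^-11 in the tables: under the normal approximation, that is roughly the smallest two-sided p-value attainable when two untied samples of 30 runs are completely separated.

```python
import numpy as np
from scipy.stats import mannwhitneyu


def compare_runs(a, b, alpha=0.05):
    """Two-sided Wilcoxon rank-sum comparison of two sets of final
    objective values (minimisation). Returns (U, p, verdict), where the
    verdict is '+' if sample a is significantly better (lower), '-' if
    significantly worse, and '=' if no significant difference is found."""
    u, p = mannwhitneyu(a, b, alternative="two-sided")
    if p >= alpha:
        return u, p, "="
    return u, p, "+" if np.median(a) < np.median(b) else "-"


# Hypothetical data: 30 runs per algorithm; names and values are illustrative.
rng = np.random.default_rng(0)
proposed = rng.normal(100.0, 5.0, 30)   # e.g. final errors of the hybrid
baseline = rng.normal(120.0, 5.0, 30)   # e.g. final errors of a competitor

u, p, verdict = compare_runs(proposed, baseline)
print(f"U = {u:.1f}, p = {p:.3g}, verdict: {verdict}")
```

Summing the per-function verdicts column by column reproduces the "Total +/−/=" rows that close each table.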

References

  1. Kochenderfer, M.J.; Wheeler, T.A. Algorithms for Optimization; MIT Press: Cambridge, MA, USA, 2019. [Google Scholar]
  2. Diwekar, U.M. Introduction to Applied Optimization; Springer: Cham, Switzerland, 2020; Volume 22. [Google Scholar]
  3. Duchi, J.C.; Ruan, F. Stochastic methods for composite and weakly convex optimization problems. SIAM J. Optim. 2018, 28, 3229–3259. [Google Scholar] [CrossRef]
  4. Dokeroglu, T.; Sevinc, E.; Kucukyilmaz, T.; Cosar, A. A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 2019, 137, 106040. [Google Scholar] [CrossRef]
  5. Hamadneh, T.; Kaabneh, K.; Alssayed, O.; Eguchi, K.; Gochhait, S.; Leonova, I.; Dehghani, M. Addax Optimization Algorithm: A Novel Nature-Inspired Optimizer for Solving Engineering Applications. Int. J. Intell. Eng. Syst. 2024, 17, 732–743. [Google Scholar] [CrossRef]
  6. Abualhaj, M.M.; Al-Khatib, S.N.; Abu-Shareha, A.A.; Almomani, O.; Al-Mimi, H.; Al-Allawee, A.; Anbar, M. Spam Detection Boosted by Firefly-Based Feature Selection and Optimized Classifiers. Int. J. Adv. Soft Comput. Its Appl. 2025, 17. [Google Scholar] [CrossRef]
  7. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  8. Kaur, S.; Awasthi, L.K.; Sangal, A.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  9. Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  10. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar]
  11. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2020, 87, 103300. [Google Scholar] [CrossRef]
  12. Raidl, G.R.; Puchinger, J.; Blum, C. Metaheuristic hybrids. In Handbook of Metaheuristics; Springer: Cham, Switzerland, 2019; pp. 385–417. [Google Scholar]
  13. Sorensen, K.; Sevaux, M.; Glover, F. A history of metaheuristics. arXiv 2017, arXiv:1704.00853. [Google Scholar] [CrossRef]
  14. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  15. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  16. Mirjalili, S.; Lewis, A. Sine-Cosine Algorithm: A Novel Metaheuristic Optimization Algorithm. Future Gener. Comput. Syst. 2016, 61, 99–111. [Google Scholar] [CrossRef]
  17. Zhao, M.; Wang, S.; Zhang, L. Hummingbird Foraging Algorithm: A Novel Metaheuristic Algorithm. Appl. Soft Comput. 2022, 110, 107–118. [Google Scholar] [CrossRef]
  18. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  19. Abdollahzadeh, B.; Soleimanian Gharehchopogh, F.; Mirjalili, S. Artificial gorilla troops optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. Int. J. Intell. Syst. 2021, 36, 5887–5958. [Google Scholar] [CrossRef]
  20. Abdollahzadeh, M.; Moghaddam, M. Vultures Optimization Algorithm: A New Nature-Inspired Algorithm. Eng. Appl. Artif. Intell. 2021, 94, 103731. [Google Scholar] [CrossRef]
  21. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y. On the exploration and exploitation in popular swarm-based metaheuristic algorithms. Neural Comput. Appl. 2019, 31, 7665–7683. [Google Scholar] [CrossRef]
  22. Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A.K. Metaheuristic algorithms: A comprehensive review. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications; Academic Press: Cambridge, MA, USA, 2018; pp. 185–231. [Google Scholar]
  23. Mirjalili, S.Z.; Mirjalili, S.; Saremi, S.; Faris, H.; Aljarah, I. Grasshopper optimization algorithm for multi-objective optimization problems. Appl. Intell. 2018, 48, 805–820. [Google Scholar] [CrossRef]
  24. Hrinov, V.; Khorolskyi, A. Improving the Process of Coal Extraction Based on the Parameter Optimization of Mining Equipment. E3S Web Conf. 2018, 60, 00017. [Google Scholar] [CrossRef]
  25. Siarry, P. Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2016; Volume 71. [Google Scholar]
  26. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194. [Google Scholar] [CrossRef]
  27. Mirjalili, S.M.; Mirjalili, S.Z.; Saremi, S.; Mirjalili, S. Sine cosine algorithm: Theory, literature review, and application in designing bend photonic crystal waveguides. In Nature-Inspired Optimizers: Theories, Literature Reviews and Applications; Springer: Cham, Switzerland, 2020; pp. 201–217. [Google Scholar]
  28. Chopard, B.; Tomassini, M. An Introduction to Metaheuristics for Optimization; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
29. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551.
30. Zhao, S.; Zhang, T.; Ma, S.; Wang, M. Sea-horse optimizer: A novel nature-inspired meta-heuristic for global optimization problems. Appl. Intell. 2023, 53, 11833–11860.
31. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133.
32. Peraza-Vázquez, H.; Peña-Delgado, A.; Merino-Treviño, M.; Morales-Cepeda, A.B.; Sinha, N. A novel metaheuristic inspired by horned lizard defense tactics. Artif. Intell. Rev. 2024, 57, 59.
33. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249.
34. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
35. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513.
36. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile search algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158.
37. Rezaei, F.; Safavi, H.R.; Abd Elaziz, M.; Mirjalili, S. GMO: Geometric mean optimizer for solving engineering problems. Soft Comput. 2023, 27, 10571–10606.
38. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
39. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
40. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924.
41. Abdollahzadeh, B.; Gharehchopogh, F.S.; Khodadadi, N.; Mirjalili, S. Mountain gazelle optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. Adv. Eng. Softw. 2022, 174, 103282.
42. Karami, H.; Anaraki, M.V.; Farzin, S.; Mirjalili, S. Flow direction algorithm (FDA): A novel optimization approach for solving optimization problems. Comput. Ind. Eng. 2021, 156, 107224.
43. Sadeeq, H.T.; Abdulazeez, A.M. Giant trevally optimizer (GTO): A novel metaheuristic algorithm for global optimization and challenging engineering problems. IEEE Access 2022, 10, 121615–121640.
44. Sandgren, E. Nonlinear Integer and Discrete Programming in Mechanical Design Optimization. J. Mech. Des. 1990, 112, 223–229.
45. Kumar, A.; Wu, G.; Ali, M.Z.; Mallipeddi, R.; Suganthan, P.N.; Das, S. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 2020, 56, 100693.
46. Dehghani, M.; Montazeri, Z.; Dhiman, G.; Malik, O.; Morales-Menendez, R.; Ramirez-Mendoza, R.A.; Dehghani, A.; Guerrero, J.M.; Parra-Arroyo, L. A spring search algorithm applied to engineering optimization problems. Appl. Sci. 2020, 10, 6173.
47. Rahul, M.; Rameshkumar, K. Multi-objective optimization and numerical modelling of helical coil spring for automotive application. Mater. Today Proc. 2021, 46, 4847–4853.
48. Lin, M.H.; Tsai, J.F.; Hu, N.Z.; Chang, S.C. Design optimization of a speed reducer using deterministic techniques. Math. Probl. Eng. 2013, 2013, 419043.
49. Tudose, L.; Buiga, O.; Jucan, D.; Ştefanache, C. Optimal Design of Two-Stage Speed Reducer. 2008. Available online: https://www.semanticscholar.org/paper/Optimal-design-of-two-stage-speed-reducer-Tudose-Buiga/66138b3cb8e6752f0c4f2619fc6cff35faa2702e (accessed on 12 November 2025).
50. Datseris, P. Weight minimization of a speed reducer by heuristic and decomposition techniques. Mech. Mach. Theory 1982, 17, 255–262.
51. Essa, H.S.; Kennedy, D.L. Design of cantilever steel beams: Refined approach. J. Struct. Eng. 1994, 120, 2623–2636.
52. Canbaz, B.; Yannou, B.; Yvars, P.A. A new framework for collaborative set-based design: Application to the design problem of a hollow cylindrical cantilever beam. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Washington, DC, USA, 28–31 August 2011; Volume 54822, pp. 197–206.
Figure 1. Inspirational metaphors of the hybrid algorithm: (a) the Sine–Cosine Algorithm (SCA) uses decaying trigonometric waves to guide candidates; (b) the Artificial Hummingbird Algorithm (AHA) models axial, diagonal, and omnidirectional flights toward a target.
Figure 2. Movement strategy of the hybrid algorithm. (a) The SCA phase generates a decaying spiral around the best solution using sine and cosine functions (Equation (4)). (b) The AHA phase chooses among axial, diagonal, and omnidirectional flights, and performs guided or territorial foraging according to Equations (5) and (6).
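The movement strategy of Figure 2 can be sketched in code. The following is an illustrative Python sketch, not the authors' exact update equations: the SCA phase uses the classic sine–cosine position update with a linearly decaying amplitude r1, and the AHA phase imitates a guided flight toward a target solution using an axial, diagonal, or omnidirectional direction-switch vector. Parameter names, the mode-selection probabilities, and the choice of the best solution as flight target are assumptions made for illustration.

```python
import math
import random

def sca_move(x, best, t, T, a=2.0):
    """SCA-style move: oscillate around the best solution with a
    linearly decaying amplitude r1 = a - t*a/T."""
    r1 = a - t * a / T
    new = []
    for j in range(len(x)):
        r2 = random.uniform(0, 2 * math.pi)
        r3 = random.uniform(0, 2)
        wave = math.sin(r2) if random.random() < 0.5 else math.cos(r2)
        new.append(x[j] + r1 * wave * abs(r3 * best[j] - x[j]))
    return new

def aha_move(x, target):
    """AHA-style guided flight toward a target food source.

    The direction switch vector D selects axial (one dimension),
    diagonal (a random subset), or omnidirectional (all dimensions)
    flight; the guiding factor follows a standard normal distribution."""
    n = len(x)
    mode = random.choice(["axial", "diagonal", "omni"])
    if mode == "axial":
        D = [0] * n
        D[random.randrange(n)] = 1
    elif mode == "diagonal":
        k = random.randint(2, max(2, n - 1))
        idx = set(random.sample(range(n), min(k, n)))
        D = [1 if j in idx else 0 for j in range(n)]
    else:
        D = [1] * n
    a = random.gauss(0, 1)
    return [target[j] + a * D[j] * (x[j] - target[j]) for j in range(n)]

def step(pop, best, t, T):
    """Even iterations explore with SCA moves; odd iterations exploit
    with hummingbird flights -- the interleaving described in the text."""
    if t % 2 == 0:
        return [sca_move(x, best, t, T) for x in pop]
    return [aha_move(x, best) for x in pop]
```

Because the two phases share one population and one best-so-far solution, the only overhead beyond fitness evaluation is the per-dimension random draws shown above.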
Figure 3. Convergence curve analysis over selected functions of CEC2022 (F1–F6).
Figure 4. Convergence curve analysis over selected functions of CEC2022 (F7–F12).
Figure 5. Pressure Vessel Design Problem showing thicknesses Th and Ts, radius R, and vessel length L.
Figure 6. Illustration of Spring Design Optimization Problem.
Figure 7. Speed Reducer Design Problem, highlighting the design variables x1 to x7.
Figure 8. Cantilever Beam Design Optimisation Problem with segment lengths l1 to l5 and cross-sectional dimensions bi and hi.
Table 1. Recent metaheuristic optimisers (extended) and commonly reported characteristics in their original studies.

| Algorithm | Strengths | Limitations |
|---|---|---|
| SMA [7] | Adaptive feedback promotes global exploration; competitive on multimodal functions | Slow refinement near optima; risk of stagnation if step sizes shrink early |
| TSA [8] | Effective local exploitation in several cases; simple update rules | Limited exploration on complex landscapes; premature convergence under diversity loss |
| RKO [9] | Structured trajectory updates; competitive on continuous smooth functions | May struggle on rugged landscapes without additional diversification |
| EO [10] | Strong early exploration; broad search via equilibrium-based updates | Performance can be parameter/control-schedule-sensitive across problem classes |
| MRFO [11] | Multi-strategy foraging supports phase switching; good results on several suites | Convergence can be inconsistent across different landscapes and dimensions |
| HHO [14] | Fast exploitation and competitive accuracy on many benchmarks | Aggressive exploitation may overshoot or reduce robustness; exploration period may be insufficient |
| BOA [15] | Adequate exploration at moderate dimensions; easy implementation | Local entrapment in high-dimensional/multimodal problems without enhancements |
| SCA [16] | Mathematical sine–cosine oscillation successful on many classic benchmarks | Prone to premature convergence without modifications; requires careful parameter tuning to maintain diversity |
| HFA [17] | Multiple flight and foraging strategies with memory yield flexible search; promising results across various problems | Can stagnate in local optima; exploration capability may diminish on highly complex landscapes |
| AO [18] | Adaptive hunting phases (soaring/diving) dynamically balance global and local search; fast, efficient convergence on many benchmarks | Convergence precision can benefit from hybridisation; variants combining AO with other methods yield more stable and accurate results than basic AO |
| GTO [19] | Intelligent social moves (migration, competition, etc.) enable broad exploration; applied successfully to various engineering problems | Susceptible to diversity loss and premature convergence; exploration–exploitation balance can become unstable in later stages |
| AVOA [20] | Scavenging-inspired strategy balances exploration and exploitation effectively; efficient on various optimisation tasks across domains | Performance sensitive to control parameters; often improved by chaotic or adaptive variants to prevent premature convergence |
Table 2. CEC2017 summary performance across all functions. Average rank is computed over the suite (lower is better); wins counts the number of functions for which the algorithm achieves the best mean value. Additional columns summarise how often each method finishes among the top three and its average standard deviation (lower indicates greater robustness).

| Algorithm | Average Rank | Wins | Top-3 Count | Average Std |
|---|---|---|---|---|
| AHA–SCA | 2.1 | 8 | 10 | 1.5 |
| HLOA | 3.0 | 5 | 7 | 1.8 |
| HGSO | 3.2 | 4 | 6 | 2.0 |
| GJO | 3.0 | 5 | 5 | 2.1 |
| BAT | 4.5 | 2 | 4 | 2.5 |
| SHO | 4.8 | 2 | 3 | 2.6 |
| SHIO | 3.5 | 4 | 4 | 2.2 |
| WOA | 4.2 | 2 | 3 | 2.4 |
| BOA | 4.8 | 1 | 2 | 2.4 |
| MTDE | 5.0 | 1 | 2 | 2.6 |
| SCA | 5.0 | 1 | 2 | 2.6 |
Table 3. CEC2014 summary performance across all 30 functions. Average rank is computed over the suite (lower is better); wins counts the number of functions for which the algorithm achieves the best mean value. Additional columns summarise how often each method finishes among the top three and its average standard deviation (lower indicates greater robustness).

| Algorithm | Average Rank | Wins | Top-3 Count | Average Std |
|---|---|---|---|---|
| AHA–SCA | 1.9 | 20 | 27 | 1.1 |
| SHIO | 3.0 | 5 | 12 | 1.5 |
| GWO | 3.6 | 3 | 11 | 1.6 |
| WOA | 4.2 | 2 | 9 | 1.7 |
| BOA | 5.0 | 0 | 5 | 1.8 |
| OHO | 5.2 | 0 | 4 | 1.9 |
| AOA | 5.5 | 0 | 3 | 2.0 |
| SCA | 5.8 | 0 | 3 | 2.1 |
| GJO | 6.1 | 0 | 2 | 2.2 |
| SHO | 6.5 | 0 | 1 | 2.3 |
| RSA | 6.7 | 0 | 1 | 2.4 |
Table 4. CEC2022 summary performance across all 12 functions.

| Algorithm | Average Rank | Wins | Top-3 Count | Average Std |
|---|---|---|---|---|
| AHA–SCA | 2.5 | 6 | 10 | 4.42 × 10^3 |
| SHO | 3.8 | 2 | 4 | 3.34 × 10^2 |
| GJO | 4.2 | 2 | 5 | 6.73 × 10^2 |
| SHIO | 4.3 | 2 | 5 | 4.24 × 10^2 |
| SCA | 4.3 | 1 | 6 | 2.28 × 10^5 |
| HGSO | 5.8 | 0 | 3 | 1.69 × 10^5 |
| HLOA | 6.0 | 4 | 4 | 2.13 × 10^2 |
| WOA | 6.1 | 0 | 3 | 1.12 × 10^3 |
| MTDE | 8.9 | 0 | 0 | 2.05 × 10^6 |
| BOA | 9.4 | 0 | 0 | 2.46 × 10^6 |
| BAT | 10.8 | 0 | 0 | 1.14 × 10^6 |
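The summary columns reported in Tables 2–4 can be derived mechanically from the raw per-function results. A minimal sketch, assuming a matrix of mean errors (rows = functions, columns = algorithms) and a matching matrix of standard deviations; note that this sketch breaks rank ties arbitrarily, whereas a published analysis may assign shared ranks:

```python
def summarise(means, stds, names):
    """Compute average rank, wins, top-3 count, and average std per algorithm.

    means[f][a] / stds[f][a]: mean error and standard deviation of
    algorithm a on function f (lower is better for both)."""
    n_funcs, n_algs = len(means), len(names)
    rank_sum = [0.0] * n_algs
    wins = [0] * n_algs
    top3 = [0] * n_algs
    for row in means:
        # Rank algorithms on this function by mean error (1 = best).
        order = sorted(range(n_algs), key=lambda a: row[a])
        for r, a in enumerate(order, start=1):
            rank_sum[a] += r
        wins[order[0]] += 1          # best mean on this function
        for a in order[:3]:          # finished among the top three
            top3[a] += 1
    return {
        names[a]: {
            "avg_rank": rank_sum[a] / n_funcs,
            "wins": wins[a],
            "top3": top3[a],
            "avg_std": sum(row[a] for row in stds) / n_funcs,
        }
        for a in range(n_algs)
    }
```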
Table 5. Results of Pressure Vessel Design Problem, iterations = 200, runs = 30, agents = 50.

| Optimiser | Min | Mean | Max | Std |
|---|---|---|---|---|
| AHA–SCA | 5885.333 | 5885.333 | 5885.333 | 2.31 × 10^-5 |
| SSA | 5927.544 | 6087.173 | 6252.051 | 136.4673 |
| MFO | 5885.333 | 6449.973 | 6994.548 | 453.9395 |
| SCA | 6300.01 | 6720.458 | 7562.309 | 455.0258 |
| GWO | 5894.967 | 6050.338 | 6792.291 | 363.5019 |
| FDA | 5885.339 | 6186.832 | 6519.631 | 255.577 |
| GJO | 5910.203 | 6100.733 | 6965.753 | 424.0478 |
| GTO | 5885.336 | 6204.16 | 7319.001 | 555.8939 |
| MGO | 6272.287 | 6622.774 | 7080.441 | 271.7134 |
| HLOA | 5927.306 | 6584.644 | 7103.567 | 519.5903 |
| MVO | 6009.083 | 6769.247 | 7259.605 | 446.2025 |
| POA | 5885.333 | 5952.552 | 6288.646 | 164.6518 |
| COA | 5917.21 | 6040.985 | 6191.804 | 87.61273 |
| AHA | 5885.333 | 5885.383 | 5885.61 | 0.111128 |
| AOA | 6718.876 | 8684.574 | 11376.98 | 1586.498 |
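For reference, the pressure vessel problem of Table 5 is usually stated with four variables x = (Ts, Th, R, L), a fabrication-cost objective, and four inequality constraints. The sketch below follows the standard continuous formulation from the literature (the paper's exact bounds and constraint-handling scheme are not reproduced here); evaluating it at a near-optimal design reproduces the 5885.33 floor that the best optimisers in Table 5 reach.

```python
import math

def pressure_vessel(x):
    """Cost objective and constraints (g <= 0 is feasible) for the
    standard pressure vessel design problem; x = (Ts, Th, R, L)."""
    ts, th, r, l = x
    cost = (0.6224 * ts * r * l
            + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l
            + 19.84 * ts ** 2 * r)
    g = [
        -ts + 0.0193 * r,                     # shell thickness vs radius
        -th + 0.00954 * r,                    # head thickness vs radius
        -math.pi * r ** 2 * l
            - (4.0 / 3.0) * math.pi * r ** 3
            + 1_296_000,                      # minimum enclosed volume
        l - 240.0,                            # length limit
    ]
    return cost, g

# Near-optimal continuous design commonly reported in the literature:
cost, g = pressure_vessel((0.7781686, 0.3846492, 40.3196187, 200.0))
```

At this design the first three constraints are active (approximately zero), which is why small perturbations by weaker optimisers quickly raise the cost.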
Table 6. Best solutions of Spring Design Optimisation Problem, iterations = 200, runs = 30, agents = 50.

| Optimiser | Best Score | X1 | X2 | X3 |
|---|---|---|---|---|
| AHA–SCA | 0.012665 | 0.051693 | 0.356809 | 11.2836 |
| SSA | 0.012687 | 0.051 | 0.340147 | 12.34002 |
| MFO | 0.012666 | 0.051819 | 0.359845 | 11.10795 |
| SCA | 0.012761 | 0.051 | 0.340056 | 12.42766 |
| GWO | 0.012678 | 0.051683 | 0.35649 | 11.31422 |
| FDA | 0.012665 | 0.051725 | 0.357578 | 11.2387 |
| GJO | 0.01268 | 0.051 | 0.340276 | 12.32652 |
| GTO | 0.012667 | 0.051377 | 0.349261 | 11.73992 |
| MGO | 0.012674 | 0.051 | 0.340366 | 12.31619 |
| HLOA | 0.012674 | 0.051 | 0.340366 | 12.31619 |
| MVO | 0.012766 | 0.051 | 0.338843 | 12.48487 |
| POA | 0.012665 | 0.051789 | 0.359136 | 11.14861 |
| COA | 0.012731 | 0.05293 | 0.386647 | 9.753054 |
| AHA | 0.012665 | 0.051703 | 0.357054 | 11.26928 |
| AOA | 0.014104 | 0.051 | 0.318966 | 15 |
Table 7. Results of Spring Design Optimisation Problem, iterations = 200, runs = 30, agents = 50.

| Optimiser | Min | Mean | Max | Std |
|---|---|---|---|---|
| AHA–SCA | 0.012665 | 0.012665 | 0.012665 | 5.7 × 10^-10 |
| SSA | 0.012687 | 0.012816 | 0.013036 | 0.000125 |
| MFO | 0.012666 | 0.013195 | 0.014389 | 0.000816 |
| SCA | 0.012761 | 0.012915 | 0.013117 | 0.000139 |
| GWO | 0.012678 | 0.012726 | 0.012938 | 0.000104 |
| FDA | 0.012665 | 0.012804 | 0.013267 | 0.00024 |
| GJO | 0.01268 | 0.012699 | 0.012747 | 2.44 × 10^-5 |
| GTO | 0.012667 | 0.012703 | 0.012856 | 7.53 × 10^-5 |
| MGO | 0.012674 | 0.013179 | 0.014858 | 0.000887 |
| HLOA | 0.012674 | 0.014142 | 0.016913 | 0.001823 |
| MVO | 0.012766 | 0.016849 | 0.018104 | 0.002135 |
| POA | 0.012665 | 0.012668 | 0.012679 | 5.08 × 10^-6 |
| COA | 0.012731 | 0.012936 | 0.013146 | 0.000154 |
| AHA | 0.012665 | 0.012666 | 0.012668 | 9.75 × 10^-7 |
| AOA | 0.014104 | 0.014122 | 0.014153 | 2 × 10^-5 |
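Plugging AHA–SCA's best design from Table 6 into the standard tension/compression spring formulation reproduces the reported best score of about 0.012665. The sketch below uses the textbook objective and constraints, interpreting X1 as wire diameter d, X2 as mean coil diameter D, and X3 as the number of active coils N (the usual convention, stated here as an assumption about the paper's variable ordering):

```python
def spring(x):
    """Weight objective and constraints (g <= 0 is feasible) for the
    standard tension/compression spring problem; x = (d, D, N)."""
    d, D, N = x
    weight = (N + 2.0) * D * d ** 2
    g = [
        1.0 - D ** 3 * N / (71785.0 * d ** 4),              # shear stress
        (4.0 * D ** 2 - d * D)
            / (12566.0 * (D ** 3 * d - d ** 4))
            + 1.0 / (5108.0 * d ** 2) - 1.0,                # surge stress
        1.0 - 140.45 * d / (D ** 2 * N),                    # surge frequency
        (d + D) / 1.5 - 1.0,                                # outer diameter
    ]
    return weight, g

# AHA-SCA's best design from Table 6:
weight, g = spring((0.051693, 0.356809, 11.2836))
```

The first constraint is essentially active at this design, consistent with the tight clustering of near-optimal solutions in Table 6.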
Table 8. Results of Speed Reducer Design Problem, iterations = 200, runs = 30, agents = 50.

| Optimiser | Min | Mean | Max | Std |
|---|---|---|---|---|
| AHA–SCA | 2994.471 | 2994.471 | 2994.471 | 6.6 × 10^-5 |
| SSA | 3012.993 | 3022.971 | 3043.245 | 11.4687 |
| MFO | 2994.471 | 3007.564 | 3033.749 | 20.28279 |
| SCA | 3056.947 | 3127.292 | 3193.204 | 52.77254 |
| GWO | 3002.572 | 3006.517 | 3008.345 | 2.057789 |
| FDA | 2994.471 | 2994.471 | 2994.471 | 0 |
| GJO | 3002.65 | 3015.525 | 3026.07 | 8.751206 |
| GTO | 2994.471 | 2999.757 | 3016.77 | 9.129572 |
| MGO | 2994.471 | 2994.471 | 2994.471 | 0 |
| HLOA | 3003.342 | 3039.564 | 3204.65 | 80.90014 |
| MVO | 3014.329 | 3035.083 | 3055.645 | 16.40223 |
| POA | 2994.763 | 3000.953 | 3008.262 | 5.549717 |
| COA | 2994.939 | 2995.167 | 2995.681 | 0.27189 |
| AHA | 2994.471 | 2994.471 | 2994.471 | 1.97 × 10^-6 |
| AOA | 3109.034 | 3165.501 | 3224.778 | 46.56557 |
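The speed reducer weight objective behind Table 8 is long but mechanical. Evaluating the standard formulation at a near-optimal design commonly reported in the literature reproduces the 2994.47 floor that several optimisers reach; only the objective is sketched here, since the full problem adds eleven inequality constraints on stresses, deflections, and geometry that are omitted for brevity.

```python
def speed_reducer_weight(x):
    """Weight of the speed reducer; x = (b, m, z, l1, l2, d1, d2):
    face width, tooth module, pinion teeth, shaft lengths, shaft diameters."""
    b, m, z, l1, l2, d1, d2 = x
    return (0.7854 * b * m ** 2 * (3.3333 * z ** 2 + 14.9334 * z - 43.0934)
            - 1.508 * b * (d1 ** 2 + d2 ** 2)
            + 7.4777 * (d1 ** 3 + d2 ** 3)
            + 0.7854 * (l1 * d1 ** 2 + l2 * d2 ** 2))

# Near-optimal design commonly reported in the literature:
w = speed_reducer_weight((3.5, 0.7, 17, 7.3, 7.715319, 3.350214, 5.286654))
```

Note that z (the number of pinion teeth) is integer-valued in the standard formulation, which is one reason optimisers that ignore the discrete structure (e.g. AOA in Table 8) can stall well above the optimum.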
Table 9. Results of Cantilever Beam Design Optimisation Problem, iterations = 200, runs = 30, agents = 50.

| Optimiser | Min | Mean | Max | Std |
|---|---|---|---|---|
| AHA–SCA | 1.339956 | 1.339956 | 1.339956 | 4.43 × 10^-13 |
| SSA | 1.339957 | 1.339971 | 1.34 | 1.84 × 10^-5 |
| MFO | 1.339992 | 1.340546 | 1.341256 | 0.0005 |
| SCA | 1.362528 | 1.373674 | 1.392266 | 0.011464 |
| GWO | 1.339992 | 1.340041 | 1.340139 | 5.62 × 10^-5 |
| FDA | 1.33999 | 1.340083 | 1.340207 | 8.68 × 10^-5 |
| GJO | 1.340009 | 1.340155 | 1.340446 | 0.000156 |
| GTO | 1.339963 | 1.340462 | 1.342492 | 0.000999 |
| MGO | 1.340038 | 1.340165 | 1.34036 | 0.000124 |
| HLOA | 1.343721 | 1.396883 | 1.553992 | 0.080691 |
| MVO | 1.339973 | 1.340016 | 1.340102 | 5.04 × 10^-5 |
| POA | 1.339958 | 1.339973 | 1.34002 | 2.35 × 10^-5 |
| COA | 1.339963 | 1.339976 | 1.339985 | 8.44 × 10^-6 |
| AHA | 1.339958 | 1.339973 | 1.339997 | 1.64 × 10^-5 |
| AOA | 1.355392 | 1.370608 | 1.384236 | 0.009419 |
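The cantilever beam problem of Table 9 has a particularly compact standard form: minimise the total material of five segments subject to a single vertical-displacement constraint. The sketch below uses that classic five-variable formulation; the paper's exact parametrisation with bi and hi (Figure 8) may differ, so this is an assumption for illustration.

```python
def cantilever(x):
    """Material objective and displacement constraint (g <= 0 is feasible)
    for the classic five-segment cantilever beam benchmark."""
    weight = 0.0624 * sum(x)
    g = (61.0 / x[0] ** 3 + 37.0 / x[1] ** 3 + 19.0 / x[2] ** 3
         + 7.0 / x[3] ** 3 + 1.0 / x[4] ** 3) - 1.0
    return weight, g

# Near-optimal design commonly reported in the literature:
weight, g = cantilever((6.016, 5.309, 4.494, 3.501, 2.153))
```

Evaluating at this design gives a weight of about 1.3399 with the displacement constraint active, matching the floor that AHA–SCA and several competitors reach in Table 9.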

Share and Cite

MDPI and ACS Style

Zraqou, J.; Al-Shamayleh, A.S.; Alrousan, R.; Fakhouri, H.; Hamad, F.; Halalsheh, N. Hybrid Sine–Cosine with Hummingbird Foraging Algorithm for Engineering Design Optimisation. Computers 2026, 15, 35. https://doi.org/10.3390/computers15010035
