Article

Ripple Evolution Optimizer: A Novel Nature-Inspired Metaheuristic

1 Faculty of Artificial Intelligence, Al-Balqa Applied University, Al-Salt 19117, Jordan
2 Department of Computer Science, Al-Balqa Applied University, Al-Salt 19117, Jordan
3 Faculty of Information Technology, Aqaba University of Technology, Aqaba 77110, Jordan
4 Design and Visual Communication Department, School of Architecture and Built Environment (SABE), German Jordanian University (GJU), Amman 11180, Jordan
5 Information Studies Department, Sultan Qaboos University, Muscat 123, Oman
6 Information Sciences and Educational Technology Department, School of Educational Sciences, The University of Jordan, Amman 11196, Jordan
7 Faculty of Arts and Educational Sciences, Middle East University, Amman 11831, Jordan
* Author to whom correspondence should be addressed.
Computers 2025, 14(11), 486; https://doi.org/10.3390/computers14110486
Submission received: 5 October 2025 / Revised: 3 November 2025 / Accepted: 4 November 2025 / Published: 7 November 2025
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Abstract

This paper presents the Ripple Evolution Optimizer (REO), a population-based metaheuristic with adaptive and diversified movement that turns a coastal-dynamics metaphor into principled search operators. REO augments a JADE-style current-to-p-best/1 core with jDE self-adaptation and three complementary motions: (i) a rank-aware undertow that pulls candidates toward the best, (ii) a time-increasing tide that aligns agents with an elite mean, and (iii) a scale-aware sinusoidal swell that leads solutions with a decaying envelope; rare Lévy-flight kicks enable long escapes. A reflection/clamp rule preserves step direction while enforcing bound feasibility. On the CEC2022 single-objective suite (12 functions spanning unimodal, rotated multimodal, hybrid, and composition categories), REO attains 10 wins and 2 ties, never ranking below first among 34 state-of-the-art optimizers, with rapid early descent and stable late refinement. Population-size studies reveal predictable robustness gains for larger N. On constrained engineering designs, REO outperforms competing methods on the Welded Beam, Spring Design, Three-Bar Truss, Cantilever Stepped Beam, and 10-Bar Planar Truss problems. Altogether, REO couples adaptive guidance with diversified perturbations in a compact, transparent optimizer that is competitive on rugged benchmarks and transfers effectively to real engineering problems.

1. Introduction

Optimization plays a central role across the sciences, engineering, and management because many real-world decisions involve choosing the best configuration from an immense combinatorial or continuous search space. Problems such as structural design, energy management, parameter tuning in machine learning, and operations scheduling often lead to nonlinear, multimodal, or NP-hard formulations for which deterministic exact algorithms are computationally prohibitive [1]. To circumvent these limitations, researchers have developed a broad class of metaheuristic algorithms that combine stochastic exploration with problem-specific heuristics to efficiently approximate near-optimal solutions. Metaheuristics encompass a diverse family of strategies, including evolutionary algorithms, swarm intelligence, physics-based heuristics, and other population-based approaches, all designed to escape local minima and balance exploration and exploitation in the search space [2,3].
In the past decade, the landscape of metaheuristics has expanded rapidly, spurred by inspiration from natural processes and mathematical analogies. Nature-inspired optimizers model biological or physical phenomena such as the cooperative hunting behavior of animal groups [4], the social dynamics of jackals [5], or the foraging patterns of insects and birds [6,7]. These algorithms exploit analogies with ecosystems and physical laws to design simple update rules and information-sharing mechanisms. For example, the spotted hyena optimizer simulates group hunting to update candidate solutions and avoid premature convergence [4], whereas the golden jackal optimizer mimics the dispersal and prey-detection strategies of golden jackals to enhance diversification [5]. Inspired by natural resource allocation, economic and physical principles have also given rise to supply–demand-based optimizers [8], light spectrum optimizers [9], and tornado optimizers with Coriolis force [10], highlighting the versatility of metaheuristic design.
Another hallmark of contemporary metaheuristic research is the integration of metaheuristics with problem-specific models and hybrid frameworks to tackle complex engineering tasks. For instance, researchers have combined physics-inspired search with neural network architectures to build fault-diagnosis systems for transformers [11] and gas turbines [12]. Swarm-inspired algorithms have been deployed for proton exchange membrane fuel cell flow-field optimization [13], multilevel image segmentation [1], software-defined networking [14], and malware detection in Android systems [15]. These hybrid strategies illustrate how metaheuristics can be customized to exploit domain knowledge while preserving generic search capabilities.
As the number of metaheuristics grows, so does the need for comparative studies and performance analyses. Recent research has proposed numerically efficient variants and multi-objective frameworks aimed at improving convergence speed, robustness, and scalability. Examples include modified backtracking search algorithms informed by species evolution and simulated annealing [16], nomad–migration-inspired algorithms for global optimization [17], and quantum-inspired swarm optimizers [18]. Multi-objective approaches such as the geometric mean optimizer extend the search to Pareto frontiers [19], and advances in regularization and graph filtering have improved metaheuristic-based machine-learning models [20]. Collectively, these developments illustrate the vibrancy of research in metaheuristics and the need for continued innovation.
To address this gap, this paper introduces the ripple evolution optimizer, a novel nature-inspired metaheuristic designed to handle complex benchmark and engineering design problems. The new algorithm derives its inspiration from the propagation of ripples on water surfaces, exploiting the nonlinear interactions between neighboring waves to guide candidate solutions. Following a description of the algorithm, we present extensive experimental comparisons on benchmark and engineering design problems to demonstrate the competitiveness of the ripple evolution optimizer, and we summarize the main contributions of the present work.

2. Related Work

Metaheuristics inspired by animal behavior remain a fertile source of algorithmic innovation. The hippopotamus optimization algorithm proposed by Amiri et al. [21] models the social hierarchy and foraging strategies of hippopotamuses to strike a balance between exploration and exploitation, whereas the spotted hyena optimizer by Dhiman and Kumar [4] captures cooperative hunting to search the solution space efficiently. Similar inspirations include the grasshopper optimization algorithm applied to wireless communication networks [22], the Golden Jackal Optimization algorithm for engineering applications [5], and the Humboldt squid optimization algorithm, which imitates the collective intelligence of squids [23]. Researchers have also drawn from avian behavior to develop pigeon-inspired and hummingbird-based schemes for structural health monitoring and communication systems [24,25]. These works highlight the trend of designing population-based search rules grounded in zoological observations.
Another line of research develops metaheuristics based on the dynamics of physical and economic systems. Wang et al. [16] introduced a modified backtracking search algorithm that integrates species-evolution principles and simulated annealing to improve convergence in constrained engineering problems. Zhao et al. [8] proposed a supply–demand-based optimization where search agents negotiate and trade resources to achieve equilibrium in the objective landscape. Abdel-Basset et al. [9] formulated a light spectrum optimizer that leverages properties of light wavelengths, while Braik et al. [10] incorporated the Coriolis force into a tornado-inspired algorithm to enhance diversification. Other novel metaphors include the garden balsam algorithm [6], the nomad–migration-inspired algorithm [17], the hunting search heuristic [2], and the migration search algorithm [26]. Each of these approaches introduces distinct mechanisms for information sharing and reinforcement, illustrating the creative use of analogies from disparate domains.
Hybrid and problem-specific strategies continue to expand the scope of metaheuristics. Tao et al. [11] combined a probabilistic neural network with a bio-inspired optimizer for transformer fault diagnosis, and Hou et al. [12] developed a dynamic recurrent fuzzy neural network trained via a chaotic quantum pigeon-inspired optimizer for gas turbine modelling. In power systems, Kumar et al. [27] devised a nutcracker optimizer for congestion control, while Rathee et al. [28] used a quantum-inspired genetic algorithm to deploy sink nodes in wireless sensor networks. Metaheuristics have also been applied to structural design and fuel cell engineering: Ghanbari et al. [13] optimized bio-inspired flow fields for proton exchange membrane fuel cells, and Nemati et al. [29] employed a banking-system-inspired algorithm for truss size and layout design. Within the realm of image processing, Haddadi et al. [30] restored medical images using an enhanced regularized inverse filtering approach guided by a bio-inspired search, while Bhandari et al. [1] designed a multilevel image thresholding method using nature-inspired algorithms. These examples show how metaheuristics are being integrated into domain-specific models to solve real-world tasks beyond conventional benchmarks.
Many studies focus on tailoring metaheuristics to address multiple objectives and improve numerical efficiency. Pandya et al. [19] proposed a multi-objective geometric mean optimizer that eschews metaphors and uses mathematical transformations to locate Pareto-optimal solutions. Dalla Vedova et al. [10] applied bio-inspired algorithms to prognostics of electromechanical actuators, while Li and Sun [6] introduced a numerical optimization algorithm based on garden balsam biology. Anaraki and Farzin [23] reported the Humboldt squid optimizer, Maroosi and Muniyandi [31] developed a membrane-inspired multiverse optimizer for web service composition, and Panigrahi et al. [15] used nature-inspired optimization with support vector machines to identify malicious Android data. Recent work also explored quantum-inspired swarm algorithms [18], orca predation models [32], and improved graph neural networks with nonconvex norms inspired by unified optimization frameworks [20]. The diversity of applications—from scheduling and communication networks to health monitoring and machine learning—demonstrates the versatility of metaheuristics when adapted with suitable objective functions and constraints.
Several comparative studies provide insights into the relative strengths of different algorithms and offer guidelines for metaheuristic design. Attaran et al. [33] introduced an evolutionary algorithm inspired by hunter spiders and compared its performance against established heuristics. Bhandari et al. [1] evaluated nature-inspired optimizers for color image segmentation, while Sivakumar and Kanagasabapathy [34] benchmarked multiple bio-inspired algorithms for cantilever beam parameter optimization. Chiang et al. [14] combined artificial bee colony algorithms with support vector machines for software-defined networking, and Chou and Truong [7] proposed a jellyfish-inspired optimizer and showed its competitiveness on benchmark functions. In structural engineering, Zhou et al. [35] examined bamboo-inspired multi-cell structures optimized by particle swarm algorithms, while Janizadeh et al. [36] used three novel optimizers to tune LightGBM models for wildfire susceptibility prediction. These studies reveal that no single algorithm universally outperforms others; rather, the effectiveness of a metaheuristic depends on the problem domain, search landscape, and parameter settings.
Overall, the literature demonstrates a vigorous pursuit of novel metaphors and hybrid architectures to enhance the capability of metaheuristics. Despite the proliferation of algorithms, challenges remain in balancing exploration and exploitation, avoiding premature convergence, and ensuring robustness across diverse problems. The proposed ripple evolution optimizer contributes to this ongoing evolution by introducing new interaction dynamics based on ripple propagation, aiming to deliver competitive performance on complex and engineering optimization tasks.

3. Ripple Evolution Optimizer: A Novel Nature-Inspired Metaheuristic

3.1. Inspiration from Nature (Ocean Ripples, Tides, and Undertow)

The three core inspirations in REO are: concentric ripples traveling across the sea surface, a tide that gradually pulls the water mass in a consistent direction, and an undertow that draws material back toward the sea after waves break on the shore. We use these notions to encode stochastic (ripples), time-aware (tides), and rank-aware (undertow) forces on each search agent.

3.1.1. Ripple Superposition

We model ripples as sinusoidal, decaying perturbations that encourage agents to probe along multiple directions, analogous to overlapping wavefronts. This intuition is later stated analytically in Equation (7). The visual metaphor is shown in Figure 1, where circular wavefronts fade in thickness and opacity with radius to depict amplitude decay (Equation (6)) and directional probing.

3.1.2. Tide and Undertow

The tide makes exploitation gradually stronger (Equation (5)), while the undertow rewards high-quality agents with a stronger pull toward the global best (Equation (4)). Together, they bias movement without removing diversity. Figure 2 renders the ocean/shore interface with a slow rightward tide arrow (growing with time t / T ) and a leftward near-shore undertow arrow (stronger for better-ranked agents).

3.1.3. Elite Crest

The top-performing agents form a crest whose mean (Equation (10)) provides a stable, multi-point guide, mitigating premature convergence to a single attractor. Figure 3 marks the elite agents in green, the best with a star, and overlays the crest mean as a highlighted marker.
For interference between multiple ripple sources, Figure 4 adds a second source and a global swell (sinusoid), reflecting the superposition implied by Equation (7) and the multiple guides in Equation (11).

3.2. Mathematical Model

We formalize REO on a bounded domain $D = [l, u] \subset \mathbb{R}^d$ with population size $N$ and iteration budget $T$. Let $\Delta = u - l$ and $x_i(t) \in D$ be agent $i$ at iteration $t$.

3.3. Initialization and Fitness

We state the randomized initialization in Equation (1), where $\odot$ is the Hadamard product:
$$x_i(0) = l + \mathrm{rand}_i(d) \odot \Delta, \quad i = 1, \ldots, N. \tag{1}$$
Agent quality is given by the objective evaluation in Equation (2):
$$f_i(t) = f\big(x_i(t)\big). \tag{2}$$
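To make Equations (1) and (2) concrete, a minimal Python sketch (the `sphere` objective, bounds, and population size are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def initialize(N, d, l, u):
    """Equation (1): uniform random initialization; the elementwise
    product with the span Delta = u - l plays the role of the Hadamard product."""
    return l + rng.random((N, d)) * (u - l)

def evaluate(f, X):
    """Equation (2): objective value of each agent."""
    return np.array([f(x) for x in X])

# Toy sphere objective on a symmetric box (illustrative choice)
sphere = lambda x: float(np.sum(x ** 2))
l, u = np.full(3, -10.0), np.full(3, 10.0)
X = initialize(5, 3, l, u)
fit = evaluate(sphere, X)
```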

3.4. Rank- and Time-Aware Pulls

After sorting agents by ascending fitness, let $r_i(t) \in \{0, \ldots, N-1\}$ denote the rank (0 for the best). We define the normalized rank share in Equation (3) and the undertow strength in Equation (4):
$$\mathrm{rankshare}_i(t) = \frac{r_i(t)}{\max(1, N-1)}. \tag{3}$$
$$\eta_i(t) = \eta_0 \big(1 - \mathrm{rankshare}_i(t)\big), \quad \eta_0 \in [0, 1]. \tag{4}$$
The tide is a time-aware exploitation factor that grows linearly with $t$, as shown in Equation (5):
$$\tau(t) = \tau_0 \, \frac{t}{T}, \quad \tau_0 > 0. \tag{5}$$
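A minimal sketch of Equations (3)–(5) in Python (the values of $\eta_0$ and $\tau_0$ are illustrative assumptions):

```python
import numpy as np

def rank_share(fitness):
    """Equation (3): rank 0 for the best agent, normalized to [0, 1]."""
    N = len(fitness)
    ranks = np.empty(N, dtype=int)
    ranks[np.argsort(fitness)] = np.arange(N)
    return ranks / max(1, N - 1)

def undertow(fitness, eta0=0.5):
    """Equation (4): stronger pull toward the best for better-ranked agents."""
    return eta0 * (1.0 - rank_share(fitness))

def tide(t, T, tau0=0.8):
    """Equation (5): exploitation factor growing linearly with t."""
    return tau0 * t / T
```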

3.5. Ripple (Swell) and Amplitude Decay

To encode decaying sinusoidal perturbations, we define the amplitude in Equation (6) and the swell vector in Equation (7); here $\sigma \in (0, 1)$ scales the swell relative to the search span and $\omega > 0$ is the angular frequency:
$$A(t) = A_0 \, \delta^t, \quad 0 < \delta < 1, \tag{6}$$
$$s(t) = A(t) \, \sigma \sin\!\left(\omega \frac{t}{T} + \phi(t)\right) \odot \Delta. \tag{7}$$
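The amplitude and swell of Equations (6) and (7) can be sketched as follows ($\sigma$, $\omega$, $A_0$, and the decay rate $\delta$ are illustrative assumptions; the phase $\phi(t)$ is drawn uniformly at random):

```python
import numpy as np

rng = np.random.default_rng(1)

def amplitude(t, A0=1.0, decay=0.99):
    """Equation (6): geometric amplitude decay A(t) = A0 * delta^t."""
    return A0 * decay ** t

def swell(t, T, span, sigma=0.1, omega=2 * np.pi, A0=1.0, decay=0.99):
    """Equation (7): zero-mean, range-scaled sinusoid with random phase."""
    phase = rng.uniform(0, 2 * np.pi, size=span.shape)
    return amplitude(t, A0, decay) * sigma * np.sin(omega * t / T + phase) * span
```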

3.6. Self-Adaptive Parameters (jDE)

We adopt the jDE mechanism [37] to update the mutation scale $F_i$ and crossover rate $\mathrm{Cr}_i$. The $F$ update in Equation (8) and the $\mathrm{Cr}$ update in Equation (9) use probabilities $\tau_F, \tau_{Cr} \in (0, 1)$:
$$F_i(t+1) = \begin{cases} U(F_{\min}, F_{\max}), & \text{if } u_F < \tau_F, \\ F_i(t), & \text{otherwise}, \end{cases} \tag{8}$$
$$\mathrm{Cr}_i(t+1) = \begin{cases} U(0, 1), & \text{if } u_{Cr} < \tau_{Cr}, \\ \mathrm{Cr}_i(t), & \text{otherwise}. \end{cases} \tag{9}$$
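A sketch of the jDE resampling rule in Equations (8) and (9), assuming the commonly used bounds and rates ($F \in [0.1, 0.9]$, $\tau_F = \tau_{Cr} = 0.1$):

```python
import numpy as np

rng = np.random.default_rng(2)

def jde_update(F, Cr, F_min=0.1, F_max=0.9, tau_F=0.1, tau_Cr=0.1):
    """Equations (8)-(9): each agent resamples its F / Cr with a small
    probability; otherwise the previous value is kept."""
    F, Cr = F.copy(), Cr.copy()
    resample_F = rng.random(len(F)) < tau_F
    resample_Cr = rng.random(len(Cr)) < tau_Cr
    F[resample_F] = rng.uniform(F_min, F_max, resample_F.sum())
    Cr[resample_Cr] = rng.uniform(0.0, 1.0, resample_Cr.sum())
    return F, Cr
```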

3.7. Elite Crest and Mutant Construction

Let the top-$k$ agents ($k = pN$) constitute the crest. We use the mean of the top-$m$ ($m = \rho N$) for stability, defined in Equation (10):
$$\bar{c}(t) = \frac{1}{m} \sum_{j=1}^{m} x_{(j)}(t), \quad (\cdot) \text{ sorts by fitness ascending}. \tag{10}$$
We combine Equations (4)–(7) with a JADE-like current-to-p-best/1 [38]. The mutant vector is defined by Equation (11), where $p \in \{1, \ldots, k\}$ indexes a random member of the crest, $r_1 \neq r_2 \neq i$ are random indices, and $x^*(t)$ is the global best:
$$v_i(t) = x_i(t) + F_i(t)\big(x_p(t) - x_i(t)\big) + F_i(t)\big(x_{r_1}(t) - x_{r_2}(t)\big) + \eta_i(t)\big(x^*(t) - x_i(t)\big) + \tau(t)\big(\bar{c}(t) - x_i(t)\big) + s(t). \tag{11}$$
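Putting Equations (10) and (11) together, a hedged Python sketch of the mutant construction (the crest fractions `p_frac` and `m_frac` are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def mutant(X, i, fitness, F, eta, tau, s, p_frac=0.2, m_frac=0.3):
    """Equation (11): current-to-p-best/1 plus undertow, tide, and swell."""
    N, d = X.shape
    order = np.argsort(fitness)                  # ascending fitness
    k = max(1, int(p_frac * N))                  # crest size
    m = max(1, int(m_frac * N))                  # elite-mean size
    x_best = X[order[0]]                         # global best x*
    x_p = X[rng.choice(order[:k])]               # random crest member
    c_bar = X[order[:m]].mean(axis=0)            # elite mean, Equation (10)
    r1, r2 = rng.choice([j for j in range(N) if j != i], size=2, replace=False)
    return (X[i]
            + F * (x_p - X[i])
            + F * (X[r1] - X[r2])
            + eta * (x_best - X[i])
            + tau * (c_bar - X[i])
            + s)
```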

3.8. Crossover, Lévy Drift, and Selection

Binomial crossover forms the trial vector per dimension $j$ as in Equation (12) (with a forced dimension $j_{\mathrm{rand}}$):
$$u_{i,j}(t) = \begin{cases} v_{i,j}(t), & \text{if } \mathrm{rand}() < \mathrm{Cr}_i(t) \text{ or } j = j_{\mathrm{rand}}, \\ x_{i,j}(t), & \text{otherwise}. \end{cases} \tag{12}$$
With the probability in Equation (13), we apply a Lévy-flight kick [39,40] as in Equation (14), using Mantegna's recipe, Equations (15) and (16), with stability $\alpha \in (1, 2)$ and scale $\kappa > 0$:
$$p_{\mathrm{drift}}(t) = p_0 \left(1 - \frac{t}{T}\right), \quad p_0 \in (0, 1), \tag{13}$$
$$u_i(t) \leftarrow u_i(t) + \kappa \, \ell(t) \odot \Delta, \tag{14}$$
$$\ell_j(t) = \frac{u_j}{|v_j|^{1/\alpha}}, \quad u_j \sim \mathcal{N}(0, \sigma_u^2), \; v_j \sim \mathcal{N}(0, 1), \tag{15}$$
$$\sigma_u = \left[ \frac{\Gamma(1+\alpha)\,\sin(\pi\alpha/2)}{\Gamma\!\big(\tfrac{1+\alpha}{2}\big)\,\alpha\,2^{(\alpha-1)/2}} \right]^{1/\alpha}. \tag{16}$$
We use reflection for boundary handling [41], dimension-wise as in Equation (17) (applied twice for long jumps):
$$\tilde{x}_{i,j}(t) = \begin{cases} 2 l_j - u_{i,j}(t), & u_{i,j}(t) < l_j, \\ 2 u_j - u_{i,j}(t), & u_{i,j}(t) > u_j, \\ u_{i,j}(t), & \text{otherwise}. \end{cases} \tag{17}$$
Finally, greedy selection in Equation (18) ensures non-worsening replacement:
$$x_i(t+1) = \begin{cases} \tilde{u}_i(t), & f\big(\tilde{u}_i(t)\big) < f\big(x_i(t)\big), \\ x_i(t), & \text{otherwise}. \end{cases} \tag{18}$$
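The variation tail of the algorithm, crossover (Equation (12)), the Lévy step (Equations (15) and (16)), reflection (Equation (17)), and greedy selection (Equation (18)), can be sketched as follows (a minimal sketch; default parameter values are illustrative):

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(4)

def binomial_crossover(x, v, Cr):
    """Equation (12): per-dimension mix with one forced dimension j_rand."""
    mask = rng.random(len(x)) < Cr
    mask[rng.integers(len(x))] = True
    return np.where(mask, v, x)

def levy_step(d, alpha=1.5):
    """Equations (15)-(16): Mantegna's recipe for a heavy-tailed step."""
    sigma_u = (gamma(1 + alpha) * sin(pi * alpha / 2)
               / (gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
    u = rng.normal(0.0, sigma_u, d)
    v = rng.normal(0.0, 1.0, d)
    return u / np.abs(v) ** (1 / alpha)

def reflect(x, lb, ub):
    """Equation (17): mirror out-of-bound coordinates, then clamp leftovers
    from very long jumps."""
    x = np.where(x < lb, 2 * lb - x, x)
    x = np.where(x > ub, 2 * ub - x, x)
    return np.clip(x, lb, ub)

def select(x, u_trial, f):
    """Equation (18): greedy, non-worsening replacement."""
    return u_trial if f(u_trial) < f(x) else x
```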

3.9. High-Level Pseudocode

Algorithm 1 shows the high-level pseudocode.
Algorithm 1 REO: Ripple Evolution with adaptive and diversified movement
Require: N (pop), T (Max_iter), l, u (lb, ub), d (dim), f (fobj)
Ensure: Best_score, Best_pos, curve
1: Initialize Δ ← u − l and agents x_i(0) by Equation (1); evaluate f_i(0) by Equation (2)
2: Set jDE parameters and initialize F_i, Cr_i [37] (updates via Equations (8) and (9))
3: Initialize A(0) = A_0 and δ for amplitude decay, Equation (6)
4: for t = 0 to T − 1 do
5:   Sort population by f_i(t); compute rankshare_i(t) by Equation (3)
6:   Compute undertow η_i(t) using Equation (4) and tide τ(t) by Equation (5)
7:   Update amplitude A(t) by Equation (6) and swell s(t) by Equation (7)
8:   Form crest set and elite mean c̄(t) via Equation (10)
9:   Update F_i, Cr_i using jDE Equations (8) and (9)
10:  for i = 1 to N do
11:    Construct mutant v_i(t) by Equation (11)
12:    Perform binomial crossover to obtain u_i(t) by Equation (12)
13:    With probability (13), apply Lévy drift using Equations (14)–(16)
14:    Reflect to bounds by Equation (17); evaluate and select via Equation (18)
15:  end for
16:  Record best fitness in curve[t]
17: end for
18: return global best, best fitness
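For completeness, the whole loop of Algorithm 1 can be condensed into a short, self-contained Python sketch. This is a minimal illustration, not the reference implementation: the Lévy kicks of lines 13 are omitted for brevity, and all constants ($\eta_0$, $\tau_0$, $A_0$, $\delta$, $\sigma$, $\omega$, the jDE rates, and the crest fractions) are illustrative assumptions rather than the paper's tuned values.

```python
import numpy as np

def reo_minimal(f, lb, ub, d, N=30, T=200, seed=0):
    """Compact sketch of Algorithm 1 (Levy drift omitted for brevity)."""
    rng = np.random.default_rng(seed)
    delta = ub - lb
    X = lb + rng.random((N, d)) * delta                        # Eq. (1)
    fit = np.array([f(x) for x in X])                          # Eq. (2)
    F, Cr = np.full(N, 0.5), np.full(N, 0.9)                   # jDE state
    A0, decay, sigma, omega = 1.0, 0.99, 0.05, 2 * np.pi
    eta0, tau0, p_frac, m_frac = 0.5, 0.8, 0.2, 0.3
    curve = []
    for t in range(T):
        order = np.argsort(fit)
        ranks = np.empty(N, int); ranks[order] = np.arange(N)
        eta = eta0 * (1 - ranks / max(1, N - 1))               # Eqs. (3)-(4)
        tau = tau0 * t / T                                     # Eq. (5)
        A = A0 * decay ** t                                    # Eq. (6)
        k, m = max(1, int(p_frac * N)), max(1, int(m_frac * N))
        c_bar = X[order[:m]].mean(axis=0)                      # Eq. (10)
        x_best = X[order[0]]
        rF = rng.random(N) < 0.1; rC = rng.random(N) < 0.1     # Eqs. (8)-(9)
        F[rF] = rng.uniform(0.1, 0.9, rF.sum()); Cr[rC] = rng.random(rC.sum())
        for i in range(N):
            s = A * sigma * np.sin(omega * t / T + rng.uniform(0, 2 * np.pi, d)) * delta
            x_p = X[rng.choice(order[:k])]
            r1, r2 = rng.choice(np.delete(np.arange(N), i), 2, replace=False)
            v = (X[i] + F[i] * (x_p - X[i]) + F[i] * (X[r1] - X[r2])
                 + eta[i] * (x_best - X[i]) + tau * (c_bar - X[i]) + s)  # Eq. (11)
            mask = rng.random(d) < Cr[i]; mask[rng.integers(d)] = True   # Eq. (12)
            u = np.where(mask, v, X[i])
            u = np.where(u < lb, 2 * lb - u, u)                # Eq. (17)
            u = np.clip(np.where(u > ub, 2 * ub - u, u), lb, ub)
            fu = f(u)
            if fu < fit[i]:                                    # Eq. (18)
                X[i], fit[i] = u, fu
        curve.append(fit.min())
    best = int(np.argmin(fit))
    return fit[best], X[best], np.array(curve)
```

Greedy selection makes the convergence curve monotone non-increasing by construction, which matches the stable late refinement reported in the experiments.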

3.10. Movement Strategy

The REO update decomposes into five forces: (i) a pull toward a random crest member and (ii) a random differential term (the first two terms of Equation (11)), (iii) the undertow toward the global best (Equation (4)), (iv) the tide toward the crest mean (Equations (5) and (10)), and (v) the sinusoidal swell (Equation (7)). Their vector superposition is the mutant step in Equation (11). Figure 5 illustrates these components as arrows from the current agent.

3.11. Exploration and Exploitation Behavior

REO balances exploration and exploitation via the amplitude decay in Equation (6), the drift probability in Equation (13), and the growing tide in Equation (5). Early iterations have larger $A(t)$, higher $p_{\mathrm{drift}}(t)$, and weaker tide, yielding long-range, multi-directional search; late iterations invert these proportions. We depict the progression in two separate figures: Figure 6 (early) and Figure 7 (late).

3.12. Ripple Evolution Optimizer (REO) Novelty

The metaphor of ripples serves only as an intuitive analogy; the novelty of REO lies in three explicit operators and their interaction schedules, which collectively generate a two-anchor vector field atop a Differential Evolution (DE) backbone. As formalized in Equation (11), REO’s mutation decomposes the step for agent i at iteration t as follows:
$$v_i(t) = x_i(t) + F_i(t)\big(x_p(t) - x_i(t)\big) + F_i(t)\big(x_{r_1}(t) - x_{r_2}(t)\big) + \eta_i(t)\big(x^*(t) - x_i(t)\big) + \tau(t)\big(\bar{c}(t) - x_i(t)\big) + s(t). \tag{19}$$

3.12.1. Rank-Aware Undertow

The first unique operator is the rank-aware undertow $\eta_i(t)\big(x^*(t) - x_i(t)\big)$, which scales the exploitation pressure according to each agent's fitness rank. As defined in Equations (3) and (4), the undertow coefficient $\eta_i(t) = \eta_0 \big(1 - \mathrm{rankshare}_i(t)\big)$ ensures that high-quality agents are pulled more strongly toward the global best, while low-quality ones retain more freedom to explore. Unlike the scalar selection pressure in Genetic Algorithms (GA) or the uniform global attraction in Particle Swarm Optimization (PSO), REO's undertow operates as a vector field that modulates both direction and magnitude per agent and per coordinate.

3.12.2. Time-Growing Tide Toward Elite Mean

The second mechanism is the tide, a time-dependent attraction toward the mean of the top-$m$ individuals (the "crest"). As shown in Equations (5)–(10), the term $\tau(t)\big(\bar{c}(t) - x_i(t)\big)$ grows linearly with time $t$. This creates a second attractor that represents collective elite knowledge, contrasting with DE's reliance on a single sampled p-best or PSO's dual global/local anchors. By incorporating two simultaneous attractors, the global best and the elite mean, REO mitigates premature convergence and stabilizes the search trajectory.

3.12.3. Zero-Mean Range-Scaled Swell

The third operator is the swell term $s(t)$, a zero-mean sinusoidal perturbation whose amplitude decays geometrically as $A(t) = A_0 \delta^t$. Defined in Equations (6) and (7), it introduces structured, range-scaled oscillations with random phase, providing controlled exploration early in the run while vanishing as convergence progresses. Unlike the constant stochastic variation in GA, PSO, or DE, REO's swell adds a deterministic, frequency-controlled component that improves early coverage of the search space without destabilizing exploitation.

3.12.4. Complementary Schedules and Two-Anchor Dynamics

Each operator follows a complementary schedule: the undertow increases with solution quality, the tide increases with iteration count, the swell amplitude decreases with time, and Lévy kicks are applied only early in the search. These opposing schedules define a consistent exploration–exploitation trajectory: broad exploration in early iterations followed by controlled exploitation near convergence. The combined "two-anchor" behavior, a bias toward both $x^*$ and $\bar{c}$, is absent in GA, PSO, and classical DE.

3.12.5. Analytical Expectation of the Search Bias

Taking expectations over the random terms in Equation (19), and using $\mathbb{E}[x_{r_1} - x_{r_2}] = 0$ and $\mathbb{E}[s(t)] = 0$, the mean step simplifies to:
$$\mathbb{E}\big[v_i(t) - x_i(t)\big] \triangleq P(t) = \eta_i(t)\big(x^*(t) - x_i(t)\big) + \tau(t)\big(\bar{c}(t) - x_i(t)\big). \tag{20}$$
This demonstrates that exploration arises from the variance of the random terms (differential mutation, swell, and Lévy drift), while exploitation is driven by two biased forces toward distinct anchors. Hence, REO’s exploration/exploitation balance is structurally different from GA, DE, and PSO.
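The zero-mean argument behind Equation (20) can be checked numerically. The Monte Carlo sketch below (with an assumed $F = 0.5$ and an arbitrary population snapshot) averages the differential term over many random index pairs; by symmetry the average approaches the zero vector, so only the undertow and tide biases survive in the mean step:

```python
import numpy as np

rng = np.random.default_rng(5)

# Fixed population snapshot (values are arbitrary for the demonstration)
X = rng.normal(size=(50, 3))
F = 0.5  # assumed mutation scale

# Average F * (x_r1 - x_r2) over many uniformly drawn distinct index pairs
samples = []
for _ in range(20000):
    r1, r2 = rng.choice(50, 2, replace=False)
    samples.append(F * (X[r1] - X[r2]))
mean_diff = np.mean(samples, axis=0)  # should be close to the zero vector
```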

3.13. Standard-Terms Presentation and Comparative Analysis with DE and PSO

This subsection presents REO strictly in standard optimization terms (populations, candidate solutions, and move operators), and compares its update rule side by side with well-established algorithms. We avoid metaphorical phrasing and focus on functional operators, schedules, and resulting biases/variances.

3.13.1. Notation and Baseline Updates

Let $x_i(t) \in [l, u] \subset \mathbb{R}^d$ denote the position of individual $i$ at iteration $t$; $x^*(t)$ is the best individual, and $\bar{c}(t)$ is the mean of the top-$m$ elites. The JADE-style DE mutant and PSO updates are:
JADE (Current-to-p-best/1) [38]
$$v_i(t) = x_i(t) + F_i(t)\big(x_p(t) - x_i(t)\big) + F_i(t)\big(x_{r_1}(t) - x_{r_2}(t)\big). \tag{21}$$
  • A binomial crossover (12) and greedy selection (18) produce the next population.
PSO (Global Best Variant)
$$v_i(t+1) = \omega v_i(t) + c_1 r_1(t)\big(p_i(t) - x_i(t)\big) + c_2 r_2(t)\big(x^*(t) - x_i(t)\big), \tag{22}$$
$$x_i(t+1) = x_i(t) + v_i(t+1). \tag{23}$$
  • Here $p_i(t)$ is the personal best, $\omega$ is the inertia weight, $c_1, c_2$ are cognitive/social coefficients, and $r_1, r_2$ are i.i.d. uniform vectors.
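For reference, the PSO update of Equations (22) and (23) in the same style (the coefficient values are common defaults, used here only for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """Equations (22)-(23): velocity memory plus personal/global attraction."""
    r1 = rng.random(len(x))
    r2 = rng.random(len(x))
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new
```

Note the explicit velocity state `v`, which REO does not maintain.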

3.13.2. REO Update in Standard Terms

REO builds on the JADE core, adding two bias terms and one structured, scheduled perturbation:
$$v_i(t) = x_i(t) + F_i(t)\big(x_p(t) - x_i(t)\big) + F_i(t)\big(x_{r_1}(t) - x_{r_2}(t)\big) + \underbrace{\eta_i(t)\big(x^*(t) - x_i(t)\big)}_{\text{rank-aware undertow; Eqs. (3) and (4)}} + \underbrace{\tau(t)\big(\bar{c}(t) - x_i(t)\big)}_{\text{time-growing tide; Eqs. (5) and (10)}} + \underbrace{s(t)}_{\text{range-scaled, zero-mean swell; Eqs. (6) and (7)}}. \tag{24}$$
REO then applies the same binomial crossover (12) and greedy selection (18) as DE, with reflection/clamp for bounds (17). Parameters F i ( t ) and Cr i ( t ) self-adapt via jDE (8) and (9). Optional Lévy kicks are scheduled early (Equations (13)–(16)).

3.13.3. REO Mechanism in Comparison to PSO or JADE

REO’s operator in comparison to PSO or JADE:
In comparison with PSO: (i) No velocity state: REO does not maintain a velocity state $v_i$ or inertia $\omega$ (Equations (22) and (23)); it is a direct position update with DE-style crossover and greedy selection. (ii) Anchors: PSO uses $p_i$ (personal best) and $x^*$; REO uses $x^*$ and $\bar{c}$ (elite mean), a multi-point anchor not present in PSO. (iii) Scheduling: REO's undertow depends on rank (quality) and its tide grows with time; PSO's attraction coefficients $c_1, c_2$ are static (or annealed, but not rank-conditioned). (iv) Structured perturbation: REO's $s(t)$ is a range-scaled, zero-mean sinusoid with decaying amplitude, unlike PSO's stochastic terms, which enter as multiplicative random scalars of point-to-point differences and velocity memory.
In comparison with JADE (or classic DE): (i) Two anchors vs. one: JADE's step uses a single sampled p-best (Equation (21)); REO adds a rank-aware pull to $x^*$ and a time-growing pull to $\bar{c}$, producing a two-anchor bias not expressible as $F\big(x_p - x_i\big)$ with any scalar $F$. (ii) Rank/time coupling: $\eta_i(t)$ depends on rank and $\tau(t)$ depends on time; these cannot be absorbed into a constant or jDE-updated $F_i(t)$ without changing the anchor point(s) and introducing quality conditioning. (iii) Swell: $s(t)$ adds a zero-mean, span-aware, scheduled oscillation that is not part of DE's differential term and cannot be re-parameterized as $F\big(x_{r_1} - x_{r_2}\big)$.

3.13.4. REO Operator Mapping with Standard Terms

Table 1 summarizes the operator-level comparison in standard metaheuristic terms.

3.14. Rationale for Component Design and Expected Advantages

While REO follows a standard population-based optimizer structure—initialization, variation, evaluation, and selection—each of its components is deliberately constructed to address known weaknesses of existing metaheuristics. This subsection explains the individual purpose of each operator and why their combination is expected to outperform other optimizers on specific classes of problems.

3.14.1. Initialization and Diversity Preservation

The population is initialized uniformly over the search domain to ensure unbiased coverage and maximal entropy at iteration 0. Because REO introduces no velocity state or historical memory, the diversity of the initial sampling directly affects global exploration. The subsequent swell term (Equation (7)) maintains controlled diversity through range-scaled oscillations even after several generations, preventing early clustering that often limits PSO or GA in high dimensions.

3.14.2. Differential and Rank-Conditioned Exploitation

The DE core (Equation (11)) provides efficient local search through difference vectors between randomly chosen agents. To enhance convergence stability, REO augments this with a rank-aware undertow $\eta_i(t)\big(x^* - x_i\big)$ that adaptively scales exploitation intensity: high-ranking individuals are drawn strongly toward the global best, while lower-ranking ones explore more widely. This selective pressure accelerates convergence on unimodal landscapes and maintains exploration on multimodal ones.

3.14.3. Time-Dependent Collective Guidance

The tide component $\tau(t)\big(\bar{c}(t) - x_i(t)\big)$ increases over time, guiding all agents toward the elite mean once the population begins to converge. This collective attractor mitigates oscillations around the best solution, a common issue in PSO, and allows cooperative exploitation of multiple promising regions. On composite functions or constrained engineering designs where multiple local minima exist, this term stabilizes the search trajectory and ensures steady improvement.

3.14.4. Structured Exploration via the Swell

The swell term s ( t ) injects sinusoidal, range-scaled perturbations with a geometrically decaying amplitude. Its early large amplitude promotes long-range exploration akin to Lévy flights but in a deterministic and bounded manner; its late small amplitude supports fine-grained local refinement. Empirically, this yields faster initial descent than DE and smoother final convergence than PSO.

3.14.5. Complementary Scheduling of Operators

The three dynamic coefficients— η i ( t ) (rank-based), τ ( t ) (time-based), and A ( t ) (amplitude decay)—are complementary:
  • early iterations: large A ( t ) , small τ ( t ) ⇒ exploration dominant;
  • mid-phase: balanced undertow and tide ⇒ exploration–exploitation equilibrium;
  • late iterations: small A ( t ) , large τ ( t ) ⇒ refined exploitation.
This predictable scheduling produces the rapid early progress and stable late convergence observed in CEC2022 experiments.
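The complementary schedules described above can be traced numerically (a minimal sketch; the constants $\eta_0$, $\tau_0$, $A_0$, $\delta$, and $p_0$ are illustrative assumptions):

```python
import numpy as np

T = 100
t = np.arange(T)

A = 1.0 * 0.97 ** t          # amplitude decay, Eq. (6): exploration shrinks
tau = 0.8 * t / T            # tide, Eq. (5): exploitation grows
p_drift = 0.1 * (1 - t / T)  # Levy probability, Eq. (13): kicks fade out
```

The three curves move in opposite directions, which is exactly the early-exploration / late-exploitation trajectory claimed above.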

3.14.6. Theoretical Analysis of the Expected Performance Advantage

REO’s integration of a two-anchor bias (best and elite mean), rank-dependent adaptation, and structured perturbation provides an adaptive exploration–exploitation trajectory absent in GA, DE, and PSO. GA’s stochastic mutation can lose gradient direction; PSO’s global–local duality can oscillate; DE’s fixed differential scale can stagnate near optima. REO’s coordinated design offers:
  • faster descent on smooth unimodal functions due to rank-aware undertow;
  • improved robustness on rotated multimodal and hybrid functions through oscillatory exploration;
  • superior consistency on constrained engineering problems via the tide’s collective stabilization.
Hence, although REO conforms to the generic population-based template, its operator coupling and scheduling yield demonstrable advantages for landscapes where both global coverage and precise final refinement are required.

3.15. Computational Cost

Operation Counts per Iteration

Let N be the population size, d the dimensionality, and T the iteration budget. Denote by C_f the time to evaluate the objective f(·) once on a single agent. One iteration of REO (Section 1) performs: (i) a population sort (to obtain the best and the elite set), (ii) N objective evaluations, and (iii) Θ(Nd) vector operations for variation, recombination, and bound handling. Using Θ(·)-notation,
C_sort = Θ(N log N), C_eval = Θ(N C_f), C_arith = Θ(N d).
Hence the per-iteration and total costs are
C_iter = Θ(N log N + N C_f + N d),
C_total = Θ(T (N log N + N C_f + N d)) (plus N C_f for initialization).
Space usage is linear in the population:
S = Θ(N d).
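The relative weight of the three terms can be explored with a toy cost model. The constants below are illustrative placeholders, not measured timings:

```python
import math

def reo_iter_cost(N, d, Cf, c_sort=1.0, c_arith=1.0):
    """Toy per-iteration cost: sort + N objective evaluations + Theta(N*d)
    vector operations. c_sort and c_arith are illustrative constants."""
    return c_sort * N * math.log2(N) + N * Cf + c_arith * N * d

cheap = reo_iter_cost(N=50, d=10, Cf=1.0)       # sort/arith comparable to eval
expensive = reo_iter_cost(N=50, d=10, Cf=1e6)   # evaluation term dominates
```

For cheap objectives all three terms matter; once C_f is large, the N·C_f term swamps the rest, which motivates the two regimes discussed next.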

Cheap Objectives (Closed-Form Mathematical Relations)

When f is a deterministic, closed-form mapping with low arithmetic cost, C_f = Θ(d) or C_f = Θ(1), the iteration time is dominated by N log N + N d. In this regime, two engineering choices keep overhead small: (i) partial selection instead of full sorting. REO only needs the global best and the top-m agents to compute the elite mean; replacing a full sort by an m-element heap or nth_element yields
C_select = Θ(N log m) (m ≪ N),
which tightens (26) to Θ(N log m + N d) for cheap f; and (ii) vectorization. The variation and crossover steps are BLAS-1/2 style and run efficiently in SIMD/GPU, making C_arith bandwidth-bound in practice. Empirically, for common analytic benchmarks, the wall-clock is then driven by memory traffic rather than transcendental calls (e.g., sin in the perturbation), and REO scales near-linearly with N.
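The partial-selection idea can be sketched with the standard library's bounded-heap selection (heapq.nsmallest runs in Θ(N log m)); the variable names are illustrative, not from the paper's code:

```python
import heapq
import random

rng = random.Random(42)
N, m = 1000, 20                          # population size and elite-set size, m << N
fitness = [rng.random() for _ in range(N)]

# Theta(N log m): indices of the m lowest-fitness agents, without a full sort.
elite = heapq.nsmallest(m, range(N), key=fitness.__getitem__)

best_idx = elite[0]                                  # global best agent
elite_mean = sum(fitness[i] for i in elite) / m      # elite mean used by the tide
```

Swapping the full sort for this selection changes nothing in REO's behavior, since only the best agent and the elite set feed the update equations.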

Expensive Objectives (Simulators, Black-Box Models, or API Calls)

When C_f dominates, (26) reduces to C_iter = Θ(N C_f), and the wall-clock is controlled by how evaluations are dispatched. Let P be the number of parallel workers (threads/processes/remote slots). Under synchronous batches,
C_iter,wall ≈ Θ((N/P) C_f) + Θ(N log m + N d),
where m is the elite-set size used by REO. For remote services (APIs), decompose the evaluation time per agent as
C_f^api = L_net + C_srv + L_parse,
with network latency L_net, service processing time C_srv, and client-side serialization/parsing L_parse. If the platform imposes a rate limit R_max (requests/s) and a maximum concurrency P_max, the effective parallelism is P = min(P_local, P_max), and the sustainable throughput is bounded by R_max. In this case, REO behaves like a standard population-based optimizer whose wall-clock is dominated by external I/O; the algorithmic overhead (selection and vector ops) is negligible.

Asynchronous and Batched Execution

To reduce idle time when C_f varies across agents (e.g., heterogeneous API latencies), an asynchronous variant evaluates agents as workers become free and updates the elite set on arrival; in practice, this preserves REO's behavior while improving utilization. If the external platform supports batched queries, grouping the N candidates per iteration into a few large requests amortizes L_net and L_parse, effectively shrinking C_f^api.
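A minimal asynchronous dispatcher along these lines can be written with the Python standard library alone. The objective function and its latencies are simulated; this is a sketch of the dispatch pattern, not the paper's implementation:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import random
import time

def expensive_f(x):
    """Simulated black-box objective with heterogeneous latency."""
    time.sleep(random.uniform(0.0, 0.01))
    return sum(v * v for v in x)

rng = random.Random(0)
population = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(16)]

results = {}
with ThreadPoolExecutor(max_workers=4) as pool:            # P = 4 workers
    futures = {pool.submit(expensive_f, x): i for i, x in enumerate(population)}
    for fut in as_completed(futures):                      # consume on arrival,
        results[futures[fut]] = fut.result()               # not submission order

best_i = min(results, key=results.get)                     # update incumbent best
```

Because results are consumed as they arrive, a slow agent never blocks the fast ones, which is exactly the utilization gain described above.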

4. Experimental Setup

All experiments were conducted in MATLAB R2024 using a common protocol across REO and all baselines: a population of N = 50 agents, a budget of T = 1000 iterations per run, and 30 independent runs per problem with distinct seeds (rng(seed,'twister')). We evaluate on the CEC2022 single-objective suite (12 functions spanning unimodal, rotated multimodal, hybrid, and composition categories) and five constrained engineering designs (Welded Beam, Spring Design, Three-Bar Truss, Cantilever Stepped Beam, Ten-Bar Planar Truss), treating all as minimization with canonical bounds/constraints. Initialization is uniform in [l, u]; bound violations are handled by reflection/clamp (long jumps reflected twice); feasibility has priority in replacement (feasible ≻ infeasible; among infeasible, lower total violation); and required discrete variables are projected to the nearest admissible value before evaluation. REO employs a JADE current-to-p-best/1 core with jDE self-adaptation (τ_F = τ_Cr = 0.1, F_min = 0.1, F_max = 0.9), elite sampling p = 0.1 and elite-mean fraction ρ = 0.2, a rank-aware undertow of strength η_0 = 0.6, a time-growing tide τ(t) = τ_0 (t/T) with τ_0 = 0.6, a range-scaled swell A(t) = A_0 δ^t with A_0 = 0.2, δ = 0.995, ω = π, σ = 0.05, and early Lévy kicks with p_drift(t) = p_0 (1 − t/T), p_0 = 0.2, α = 1.5, κ = 0.01; these settings are fixed across problems unless sensitivity analyses are explicitly reported. Baselines (e.g., DE/JADE, PSO, CMA-ES, GA, ABC, CS, and recent nature-inspired methods) follow canonical parameterizations from their primary sources but strictly match the shared budget/population/runs, stopping at T or when the suite's tolerance to the known optimum is achieved.
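The reflection/clamp rule used in this protocol can be sketched as a scalar helper. This is a minimal version written for illustration: the double reflection for long jumps follows the description above, and the final clamp is a safeguard that guarantees feasibility.

```python
def reflect_clamp(x, lo, hi):
    """Reflect a violating coordinate about the crossed bound (twice for long
    jumps), then clamp to guarantee bound feasibility."""
    for _ in range(2):            # long jumps are reflected a second time
        if x < lo:
            x = 2 * lo - x        # mirror about the lower bound
        elif x > hi:
            x = 2 * hi - x        # mirror about the upper bound
        else:
            break
    return min(max(x, lo), hi)    # final clamp
```

Unlike plain clamping, reflection preserves the direction of the step near the boundary, which is the property the protocol relies on.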
For each algorithm and problem, we report mean, standard deviation (when known), error to the optimum, and a per-function rank (lower-is-better) with tie-breaking by better mean, then better best, then lower std; global summaries include average rank, Top-k counts, and head-to-head tallies. Statistical significance is assessed via two-sided Wilcoxon signed-rank tests over paired per-function results ( n = 12 for CEC2022). Convergence is visualized with mean best-so-far curves and shaded variability bands, complemented by boxplots of final scores, ECDFs across functions, and a Dolan–Moré performance profile.
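The per-function ranking with its tie-break chain (better mean, then better best, then lower std) reduces to a tuple sort, sketched below on made-up statistics:

```python
def per_function_ranks(stats):
    """stats: list of (mean, best, std) per optimizer; lower is better on all.
    Returns a rank per optimizer (1 = best). Python's lexicographic tuple
    comparison implements the tie-break chain mean -> best -> std."""
    order = sorted(range(len(stats)), key=lambda i: stats[i])
    ranks = [0] * len(stats)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

# Hypothetical per-function statistics for three optimizers: the first two
# share a mean, so the tie is broken by the better best value.
stats = [(300.0, 300.0, 0.0),
         (300.0, 299.9, 0.1),
         (317.0, 310.0, 17.0)]
```

Averaging these ranks across all 12 functions yields the global summaries (average rank, Top-k counts) reported in the result tables.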

5. Statistical Comparison Results over CEC2022

The CEC2022 test suite comprises 12 benchmark functions (F1–F12) designed to evaluate single-objective optimization algorithms. Each function provides a challenging landscape with different modalities and complexities, requiring algorithms to explore and exploit the search space effectively. In dynamic multimodal optimization research, the presence of multiple global or local optima is well recognized, and benchmark test suites combine multimodal functions with different change modes to generate diverse environments for algorithm evaluation. The problem sets used in CEC competitions often include both “simple” environments with several global and local peaks and “composition” functions whose landscapes contain a large number of local peaks that can mislead evolutionary search. The results provided here are static single-objective data from the CEC2022 suite; however, the insights from dynamic multimodal problems illustrate the importance of robustness and adaptability across heterogeneous functions.

5.1. Evaluation Metrics

For each test function, four statistics were recorded for every optimizer: the mean objective value over multiple runs, the standard deviation (Std.), an error measure (difference between the observed mean value and the known optimum), and a rank. Lower mean and error values are better, and lower standard deviations indicate more consistent performance. Ranks are assigned per function; the best mean receives rank 1, and higher numbers denote worse performance. Average ranks across all functions allow holistic comparison among optimizers.

5.2. Performance of the REO Optimizer Compared with Other Optimizers

The REO algorithm consistently achieved the best performance across all 12 CEC2022 functions. As shown in the following tables, REO obtained the lowest mean objective value, zero or very small error, and negligible standard deviation on every function. Consequently, it was ranked first on all functions, giving it an average rank of 1 (Table 2). The second-best algorithms overall were RUN, ALO, MVO, DO and COA. Nonetheless, their average ranks were much higher (6.42–8.92), and their mean objective values and variances were noticeably worse than those of REO.
On each benchmark function F i , the mean objective value of REO was either equal to or lower than that of every competitor. For example, on the shifted and fully rotated Zakharov function F 1 , the mean of REO was 300.0 with zero standard deviation, whereas the nearest competitors (RTH, DO, and several others) obtained the same mean value but had non-zero standard deviations. For F 2 , REO achieved a mean of 402.58 , while the second-best algorithm (ALO) produced 403.98 ; REO also had an almost zero error measure compared with larger errors for other methods. Similarly, on F 3 , the mean of REO was 600.0 (with zero error), whereas the best competitor (GWO) attained 600.30 with an error of approximately 0.30 . These small yet consistent margins illustrate the superiority of REO.
A more pronounced gap appears on functions with higher dimensions or more complex landscapes. On F 6 REO yielded a mean of 1809.77 , whereas RTH achieved 1826.65 and COA produced 1916.73 . For F 7 , the mean of REO was 2004.24 , while the next best algorithm (COA) delivered 2020.41 , and ALO produced 2053.60 . Similar patterns are observed on F 8 F 12 , where REO maintained the lowest means and standard deviations; for instance, on F 12 the mean of REO was 2860.20 , whereas MVO (second) achieved 2862.36 and RUN produced 2876.17 . Across all functions, the error measures of REO were minimal (often below 40), while many competing algorithms recorded errors in the hundreds or thousands (Table A1, Table A2 and Table A3).
The stability of an optimizer can be inferred from the standard deviation across runs. REO had an exceptionally low average standard deviation (about 7.75 ) compared with other algorithms; the second-best method (RTH) had an average standard deviation of approximately 88.19 , and most algorithms exhibited much larger variances (hundreds to millions). The low variance of REO demonstrates that it consistently converged to high-quality solutions across repeated runs.
The next strongest optimizers were RUN, ALO, and MVO. RUN achieved an average rank of 6.42 and an average mean value of 1762.26 . ALO had a slightly higher average mean ( 1780.95 ) and rank ( 6.92 ). While these algorithms occasionally approached REO on simpler functions (for instance, RUN equaled REO on F 1 and produced only slightly higher means on F 4 and F 12 ), none matched REO’s uniform dominance across the entire suite. Algorithms such as SMA, GBO, SPBO, SSOA, and OHO performed poorly, with extremely large error measures and standard deviations, leading to average ranks greater than 29.
As can be seen in Table 3, REO does not lose to any competitor on any function; only RTH ties REO (on two functions: F1 and F9). The most frequent non-REO top-3 finishes are observed for GWO (5), followed by MVO (4), and COA/RTH (3 each).
The rank-based summaries in Table 4 and Table 5 show a clear overall winner: REO attains an average rank of 1.00 with 12/12 wins, indicating consistent dominance across all CEC2022 functions considered. The only ties at the top occur on F1 and F9, where RTH matches REO’s best rank.
Among the compared optimizers other than REO, the strongest overall performers by average rank are RUN (R̄ = 6.42), ALO (6.92), MVO (7.75), COA (8.92), DO (8.92), GWO (9.17), and RTH (9.42). Notably, GWO and MVO accumulate the largest number of Top-3 finishes among non-REO algorithms (five and four, respectively). In contrast, OHO, SSOA, and SPBO reside at the bottom of the ranking distribution, with high Bottom-3 counts (8, 11, and 8, respectively) and average ranks above 30.
Stability varies substantially across optimizers, as reflected by the SD of ranks. Some methods are consistently good or bad: e.g., RUN (SD 2.93) is tightly clustered in the single-digit ranks, while SSOA (SD 0.71) is consistently near the bottom. Others are more volatile: FOX shows the largest variability (SD 11.17), switching between mid-pack and near-bottom ranks depending on the function, whereas HLOA (SD 10.14) also exhibits wide fluctuations.

5.3. REO vs. Traditional Optimizers on CEC2022

Table A4 compares REO against traditional optimizers, including PSO, CMAES, GA, DE, CS, and ABC, on the CEC2022 suite. REO attains the top rank on 10 of 12 functions (with ties on F3 with DE and on F5 with CMAES), yielding the best average rank of 1.25 (versus DE 2.17, ABC 3.50, PSO 4.08, CS 4.50, CMAES 5.58, and GA 6.75). In terms of solution quality and robustness, REO achieves the smallest error measure on nearly all problems—zero error on F1, F3, F5, and F11—and exhibits very low variability (Std. = 0 on F1, F3, F5, F9, F11 and 0.38 on F12), while substantially outperforming classical methods on challenging instances such as F6. The only clear exceptions are F8 and F10, where DE leads (with ABC second on F8), yet REO remains competitive (third and second, respectively). Overall, the evidence indicates that REO provides consistently superior accuracy and stability across diverse landscapes.

6. Visual Results over CEC2022

This section provides a discussion of the visual results obtained by running the optimizer over the CEC2022 benchmark functions. The following subsections will cover different aspects of the optimization process, including convergence behavior, performance distribution, and the impact of population size on optimization.

6.1. Search History, Trajectory, Fitness, and Convergence Curve Comparison Across Different Population Sizes

Figure 8 and Figure 9 present search-history, trajectory, fitness, and convergence-curve comparisons, together with plots comparing the final objective values across population sizes for each benchmark function. The plots confirm that larger population sizes generally lead to better performance, especially for complex functions. This result further emphasizes the importance of selecting an appropriate population size to balance computational cost and optimization quality.
The optimizer’s convergence performance is illustrated across the 12 CEC2022 benchmark functions. Figure 10 shows the convergence of the best objective value with respect to the number of iterations for each of the 12 functions. These plots illustrate the optimizer’s ability to find solutions that improve over time. Generally, it can be observed that the optimizer exhibits rapid convergence initially, especially for simpler functions such as F1, F2, and F5, where the best objective rapidly approaches the optimal value within a few hundred iterations.
For more complex functions like F3, F7, and F12, the convergence slows significantly, and the optimizer requires more iterations to improve its objective value. For these functions, the algorithm’s exploration phase seems to dominate, as it spends more time searching for better solutions. Notably, functions such as F10 and F11 show a plateau after initial improvement, suggesting that the optimizer may become stuck in local optima before continuing to improve after several iterations. The performance of the optimizer across these functions is generally consistent, but the complexity of the objective landscape plays a significant role in the rate of convergence.

6.2. Final Objective Values

Figure 11 shows the Empirical Cumulative Distribution Function (ECDF) of the final objective values for the different population sizes (pop = 20, 50, 100). This plot provides insight into how the optimizer performs across different population sizes. The results show that larger populations (pop = 100) generally result in a higher fraction of benchmark functions achieving better final objective values. The ECDF indicates that the larger population tends to explore the search space more thoroughly, leading to improved optimization outcomes. Conversely, the smaller population sizes (pop = 20) often result in suboptimal final objective values, as they are less capable of escaping local optima.
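The ECDF itself is straightforward to compute; the sketch below uses hypothetical final objective values for illustration:

```python
def ecdf(values):
    """Return (x, F(x)) pairs: the fraction of observations at or below x."""
    xs = sorted(values)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

finals = [300.0, 402.6, 600.0, 1809.8]   # hypothetical final objectives
curve = ecdf(finals)                     # last point always reaches F = 1.0
```

Plotting one such curve per population size yields exactly the comparison shown in Figure 11: a curve that rises earlier corresponds to a configuration reaching better final values on a larger fraction of functions.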
Interestingly, while larger populations have a clear advantage in terms of final objective values, the overall trend is relatively consistent across population sizes, with a noticeable shift towards better performance with increased population size. This result emphasizes the trade-off between computational expense and solution quality, suggesting that while a larger population improves performance, it requires more resources.

6.3. Final Objective Scatter Plot

In Figure 12, a scatter plot of the final objective values is presented across all benchmark functions. This plot shows the best objective values achieved by the optimizer for different population sizes. From the scatter plot, it is clear that the optimizer’s performance varies significantly across functions, with certain functions showing excellent optimization results regardless of population size, while others exhibit a more erratic performance.
For instance, functions such as F1 and F2 show a very tight clustering of objective values for all population sizes, indicating that the optimizer consistently finds high-quality solutions. However, functions like F3 and F12 show a wider spread of results, particularly for smaller population sizes, which suggests that the optimizer struggles to find optimal solutions on these more complex functions. The scatter plot also confirms that larger population sizes (pop = 100) generally lead to better optimization results, but the magnitude of improvement is not uniform across all functions.

6.4. Impact of Population Size on Optimization

Figure 13 and Figure 14 present boxplots for the final objective values across different population sizes. These boxplots offer a summary of the distribution of objective values achieved by the optimizer for each function. From the plots, it is evident that the optimizer’s performance is significantly affected by population size. Larger population sizes lead to a higher probability of achieving lower objective values, as reflected by the lower median values and narrower interquartile ranges in the boxplots for pop = 50 and pop = 100 compared to pop = 20.
However, the effect of population size varies depending on the complexity of the function. For simpler functions (F1, F2), the performance differences between population sizes are minimal, suggesting that these functions do not require large populations to achieve near-optimal solutions. In contrast, more complex functions (F3, F6, F10) show a clear advantage for larger population sizes, as the larger populations allow for better exploration and more consistent convergence.

6.5. Performance Profile

Figure 15 shows the performance profile, which tracks the fraction of functions solved within a given factor of the best objective value across all population sizes. The plot illustrates the relative performance of the optimizer for each population size. The population size of 100 consistently outperforms the other sizes, showing that it solves a higher fraction of functions within a given performance threshold. The population size of 50 shows moderate performance, while the size of 20 is the least effective.

6.6. Sensitivity Analysis

Population-based optimization algorithms often have several control parameters that regulate the balance between exploration and exploitation. As noted in the literature, proper parameter tuning is critical for competitive performance yet is notoriously difficult. To assess this aspect of the Ripple Evolution Optimizer (REO) thoroughly, we conducted a sensitivity analysis across all 12 benchmark functions. Two key parameters were investigated:
  • Amplitude decay rate ρ : controls how quickly the ripple amplitude diminishes over iterations.
  • Pull strength γ : scales the attraction of each individual toward the current global best.
For each parameter, three values were tested (ρ ∈ {0.80, 0.95, 0.99} and γ ∈ {0.30, 0.50, 0.80}). The REO algorithm was run with population size 50, 50 iterations, dimension 10, and bounds [−100, 100]. Each configuration was repeated three times, and the final objective values were averaged.

6.7. Sensitivity to the Amplitude Decay Rate ρ

Table 6 lists the averaged best objective values for each function and value of ρ . Figure 16 visualizes the same data. Overall, a faster decay ( ρ = 0.80 ) generally leads to better performance on most functions, particularly Sphere, Rastrigin, Griewank, and Alpine. On the Schwefel, Rosenbrock, and Zakharov functions, the differences are more nuanced, but very slow decay ( ρ = 0.99 ) tends to degrade performance by injecting too much stochasticity.

6.8. Sensitivity to the Pull Strength γ

Table 7 and Figure 17 summarize the effect of varying the pull strength. A moderate pull ( γ = 0.50 ) yields the best or near-best performance on several functions (Rastrigin, Griewank, Ackley, Alpine, and Levy). Too low a pull ( γ = 0.30 ) leads to under-exploitation and higher errors on many problems, whereas an excessively strong pull ( γ = 0.80 ) causes premature convergence and poor performance on Sphere, Schwefel, Rosenbrock, and Dixon–Price.
The sensitivity analysis confirms that REO is highly sensitive to its control parameters across diverse problem landscapes. On most functions, a relatively fast amplitude decay (ρ = 0.80) is beneficial because it reduces oscillations quickly and concentrates search effort; however, on functions with complex topologies such as Levy and Styblinski–Tang, a slightly slower decay retains diversity and prevents premature convergence. Overall, very slow decay (ρ = 0.99) tends to inject too much random perturbation, causing stagnation or divergence.
The pull strength γ regulates exploitation. Moderate values ( γ = 0.50 ) offer a good balance between exploring new regions and exploiting the current best solutions. Very small pulls ( γ = 0.30 ) fail to guide the population effectively on many functions, while excessive pulls ( γ = 0.80 ) can drive the swarm toward local optima prematurely, producing large errors on several test problems. These observations mirror general findings in the metaheuristic community: algorithms that rely solely on exploitation risk premature convergence, whereas those that overemphasize exploration may converge too slowly. Similar trade-offs are seen in genetic algorithms with Lévy flights, where increased exploration improves escape from local minima but introduces parameter sensitivity and may hinder convergence.

6.9. Wilcoxon Signed-Rank Summary for REO

As can be seen in Table A5, Table A6 and Table A7 (12 test functions × 32 peer optimizers), REO shows overwhelmingly superior performance: out of 12 functions, REO is significantly better on almost all of them with no significant losses and only 7 ties—six on F2 against RUN, ALO, MVO, DO, COA, and AVOA, plus one on F7 against COA—while every other case favors REO, typically with very small p-values (p = 8.86 × 10⁻⁵) and large negative test statistics (Z ≤ −3.92; e.g., on F1, F3–F6, and F9–F11 we observe T = 0 versus every competitor). Aggregating per competitor, REO achieves 12–0–0 (wins–losses–ties) against most compared optimizers, 11–0–1 against RUN, ALO, MVO, DO, and AVOA, and 10–0–2 against COA, confirming that REO’s advantage is consistent, statistically robust, and practically large across the entire benchmark suite.

7. Applications of REO in Solving Engineering Design Problems

7.1. Welded Beam Engineering Design Problem

The welded beam design problem seeks the minimum fabrication cost of a welded beam subject to constraints on shear stress, bending stress, buckling load, and end deflection. Typical decision variables are weld size ( x 1 ), weld length ( x 2 ), beam thickness ( x 3 ), and beam width ( x 4 ) (see Figure 18).
REO attains the top rank (Rank 1) on Welded Beam with best objective = 1.72485 and mean = 1.72485 (std = 6.84 × 10⁻⁷). Compared with the next best method, POA, REO improves the best objective by 0.30% (from 1.73003 to 1.72485) and reduces the mean by 0.36% (from 1.73117 to 1.72485). Relative to the median across the remaining optimizers, REO yields a 42.46% lower mean objective, indicating stable performance. These figures (see Table 8) consistently highlight the superior performance of REO over the competing optimizers on this problem.

7.2. Spring Engineering Design Problem

The tension/compression spring design problem (See Figure 19) minimizes the spring weight while satisfying constraints on shear stress, surge frequency, and limits on geometry. The decision variables are wire diameter (x1), mean coil diameter (x2), and the number of active coils (x3).
REO attains the top rank (Rank 1) on Spring Design with best objective = 180,806 and mean = 180,806 (std = 0.0263). Compared with the next best method, POA, REO improves the best objective by 0.00% (from 180,806 to 180,806) and reduces the mean by 0.00% (from 180,806 to 180,806). Relative to the median across the remaining optimizers, REO yields a 19.26% lower mean objective, indicating stable performance. These figures (see Table 9) consistently highlight the superior performance of REO over the competing optimizers on this problem.

7.3. Three-Bar Truss Engineering Design Problem

The three-bar truss problem (See Figure 20) minimizes structural weight with stress and displacement constraints. A symmetric V-shaped truss supports a vertical load; design variables typically parameterize the cross-sectional areas.
REO attains the top rank (Rank 1) on Three-Bar Truss with best objective = 263.896 and mean = 263.896 (std = 1.32 × 10⁻⁷). Compared with the next best method, POA, REO improves the best objective by 0.00% (from 263.896 to 263.896) and reduces the mean by 0.00% (from 263.896 to 263.896). Relative to the median across the remaining optimizers, REO yields a 0.31% lower mean objective, indicating stable performance. These figures (see Table 10) consistently highlight the superior performance of REO over the competing optimizers on this problem.

7.4. Cantilever Stepped Beam Engineering Design Problem

The cantilever stepped beam problem (See Figure 21) minimizes the material weight (or cost) of a beam divided into a number of segments, under stress and deflection constraints. The design variables (x1–x10) are the cross-sectional dimensions of each segment.
REO attains the top rank (Rank 1) on Cantilever Stepped Beam with best objective = 62,772.8 and mean = 62,795.2 (std = 31.8). Compared with the next best method, POA, REO improves the best objective by 1.97% (from 64,036 to 62,772.8) and reduces the mean by 2.05% (from 64,110.5 to 62,795.2). Relative to the median across the remaining optimizers, REO yields a 22.62% lower mean objective, indicating stable performance. These figures (see Table 11) consistently highlight the superior performance of REO over the competing optimizers on this problem.

7.5. Ten Bar Planar Truss Engineering Design Problem

The classical 10-bar planar truss problem (See Figure 22) minimizes the truss weight under stress and displacement constraints. The design variables are the cross-sectional areas of the ten members.
REO attains the top rank (Rank 1) on Ten-Bar Planar Truss with best objective = 597.999 and mean = 598.153 (std = 0.217). Compared with the next best method, REA, REO improves the best objective by 1.91% (from 609.626 to 597.999) and reduces the mean by 2.65% (from 614.41 to 598.153). Relative to the median across the remaining optimizers, REO yields a 29.03% lower mean objective, indicating stable performance. These figures (see Table 12) consistently highlight the superior performance of REO over the competing optimizers on this problem.

7.6. Pressure-Vessel Design Problem

A pressure vessel (See Figure 23) stores fluid under pressure; it consists of a cylindrical shell capped by two hemispherical heads. The optimization task is to choose the shell thickness T s , head thickness T h , inner radius R, and cylindrical section length L so that the cost of material, forming, and welding is minimized. Rolled steel plate is only available in increments of 0.0625 in, so T s must be a multiple of 0.0625 in and T h must be selected from a discrete set, while R and L are continuous. NEORL’s problem description summarizes the variables and bounds.
The cost function (in U.S. dollars) includes material, forming, and welding costs and is defined as
min_x f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3² + 3.1661 x1² x4 + 19.84 x1² x3,
where x = (x1, x2, x3, x4) represents (Ts, Th, R, L) in inches. The constraints ensure minimum thickness, sufficient volume, and a maximum length:
−x1 + 0.0193 x3 ≤ 0, −x2 + 0.00954 x3 ≤ 0, −π x3² x4 − (4/3) π x3³ + 1,296,000 ≤ 0, x4 − 240 ≤ 0,
with bounds 0.0625 ≤ x1 ≤ 6.1875 (multiples of 0.0625), x2 ∈ {0.0625, 0.125, …, 0.625}, 10 ≤ x3 ≤ 200, and 10 ≤ x4 ≤ 200.
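The objective and constraints translate directly into code. This is a sketch of the standard benchmark formulation; the sample point is the widely cited near-optimal design for this problem, not a result taken from this paper:

```python
import math

def pv_cost(x):
    """Pressure-vessel fabrication cost; x = (Ts, Th, R, L) in inches."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pv_constraints(x):
    """Inequality constraints, feasible when every g_i(x) <= 0."""
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,                                         # shell thickness
        -x2 + 0.00954 * x3,                                        # head thickness
        -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3
        + 1_296_000.0,                                             # working volume
        x4 - 240.0,                                                # length limit
    ]

# Widely cited near-optimal design, for illustration only:
x_ref = (0.8125, 0.4375, 42.0984, 176.6366)
```

Note that the discrete bounds on x1 and x2 (multiples of 0.0625 in) still have to be enforced by projection before evaluation, as described in the experimental protocol.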
As can be seen in Table 13, among the 16 compared metaheuristic algorithms, REO achieved the lowest mean objective value (about 6106.46) and the best single solution (around 6043.92). Its modest standard deviation implies reliable convergence. The second best (TTHHO) had a mean cost of approximately 6389.28, about 5% higher than REO. Algorithms such as MFO and ChOA had larger variability and higher costs. The poorest performers (SCA, FLO, ROA, WOA, TSO, SMA, RSA, SSOA) exhibited mean costs 40–420% higher than the best and large standard deviations. REO’s best design uses a thin shell (x1 ≈ 0.83 in), a moderate head thickness (x2 ≈ 0.42 in), an inner radius x3 ≈ 43.13 in, and a cylinder length x4 ≈ 164.73 in, satisfying all constraints and minimizing cost.

7.7. Stepped Transmission-Shaft Design

A stepped shaft transmits power through gears and pulleys mounted on different diameters (See Figure 24). The benchmark problem considers a three-segment shaft with fixed lengths L 1 , L 2 , L 3 and design variables ( d 1 , d 2 , d 3 ) = ( x 1 , x 2 , x 3 ) for the diameters. The goal is to minimize the shaft weight while satisfying fatigue strength, minimum step size, and maximum-deflection constraints. Rodríguez-Cabal et al. formulated the model using the Modified Goodman fatigue criterion.
The objective function is the total weight of the three cylindrical segments, given by
min_x f(x) = γ (π/4) (L1 d1² + L2 d2² + L3 d3²),
where γ = 0.2834 lb/in³ and L1, L2, L3 are fixed segment lengths. Six constraints enforce safety and manufacturability. First, each diameter must exceed a minimum safe diameter k(d_i) derived from the Modified Goodman criterion, expressed as g_i = d_i − k(d_i) ≥ 0 for i = 1, 2, 3. Second, bearing manufacturers require the middle diameter to exceed each adjacent diameter by at least 0.0787 in, giving g4 = d2 − d1 − 0.0787 ≥ 0 and g5 = d2 − d3 − 0.0787 ≥ 0. Third, the deflection of the shaft must not exceed 0.005 in: |y_max| ≤ 0.005. Bounds on the diameters are 1 ≤ d_i ≤ 4 in.
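The weight objective and the step constraints are easily coded. This is a sketch: the segment lengths L1–L3 are fixed in the benchmark but not listed in the text, so the values below are placeholders rather than the benchmark's actual lengths.

```python
import math

# Placeholder segment lengths (in); the benchmark fixes these values but the
# text does not list them, so they are illustrative only.
L_SEG = (2.0, 3.0, 2.0)
GAMMA = 0.2834  # lb/in^3, specific weight from the model

def shaft_weight(d, lengths=L_SEG, gamma=GAMMA):
    """Total weight of three cylindrical segments: gamma*(pi/4)*sum(L_i*d_i^2)."""
    return gamma * (math.pi / 4.0) * sum(L * di ** 2 for L, di in zip(lengths, d))

def step_constraints(d, step=0.0787):
    """g4, g5 >= 0: middle diameter exceeds both neighbors by the bearing step."""
    d1, d2, d3 = d
    return [d2 - d1 - step, d2 - d3 - step]
```

The fatigue constraints g1–g3 require the Modified Goodman computation of k(d_i) from the original model and are omitted here.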
As can be seen in Table 14, REO obtains the lowest mean objective value (about 0.002086) and the best single solution (≈0.001903), indicating the lightest shaft. Algorithms such as FLO, MFO, TSO, TTHHO, WOA, and SHO converge to a common mean objective of about 0.002268 with negligible variance; their designs use smaller diameters (d1 ≈ 0.06736 in, d2 = 0.5 in). REO’s design uses a slightly larger d1 ≈ 0.08725 in but a smaller d2 = 0.25 in, reducing weight by roughly 8%. Lower-ranked algorithms show gradually larger mean objectives and variability, suggesting difficulties in handling the nonlinear constraints. The worst performers (SMA, ROA, RSA) yield substantially heavier shafts and higher standard deviations. Overall, REO offers superior performance on both benchmark problems, delivering the best solutions with low variability.

8. Conclusions

We introduced REO (Ripple Evolution Optimizer), a nature-inspired, population-based optimizer that combines a JADE-style current-to-p-best/1 core with jDE self-adaptation and three complementary motions—a rank-aware undertow toward the incumbent best, a time-varying tide toward the elite mean, and a scale-aware swell with occasional Lévy kicks—supplemented by reflection/clamp boundary handling; we provided a compact, equation-labeled model, pseudocode tied to those equations, visual diagrams, and a complexity analysis showing evaluation-dominated cost. Empirically, REO achieved first or tied-first rank across the CEC2022 functions considered, with notable strength on rotated multimodal, hybrid, and composition landscapes; convergence-band and performance-profile plots showed rapid early descent and stable late refinement, and population studies indicated that larger populations (e.g., 100) improve robustness at predictable computational expense. On constrained engineering problems (Welded Beam, Spring Design, Three-Bar Truss, Cantilever Stepped Beam, Ten-Bar Planar Truss), REO attained the best objectives among compared methods, demonstrating transfer from benchmarks to real designs. Collectively, these results underscore REO’s contributions in (i) design—a principled composition that couples adaptive guidance with diversified perturbations atop a strong DE backbone; (ii) transparency—fully specified equations, pseudocode, and bounds; and (iii) effectiveness—consistent top-tier performance with practical guidance on population sizing and schedules. For practitioners, we recommend moderate-to-large populations when budgets allow, an increasing tide weight with concurrent decay of swell amplitude and Lévy probability, and jDE self-adaptation to reduce manual tuning; when evaluations are costly, consider smaller populations with restarts or early stopping on stagnation.
Limitations include the sensitivity to evaluation budget typical of wrapper methods. Promising extensions include discrete/mixed encodings, surrogate-assisted and early-acceptance schemes, multi-objective variants, restart and niching strategies for multimodality, and scalable vectorized/GPU implementations, alongside formal convergence analyses under stochastic schedules.

Author Contributions

Conceptualization, H.N.F. and H.R.; methodology, H.N.F. and H.R.; software, H.N.F.; validation, H.N.F., H.R. and R.A.; formal analysis, H.N.F. and F.H.; investigation, H.N.F. and H.R.; resources, R.A., F.H. and Z.K.; data curation, F.H. and H.N.F.; writing—original draft preparation, H.N.F.; writing—review and editing, H.R., R.A., F.H. and Z.K.; visualization, R.A.; supervision, H.R.; project administration, F.H. and Z.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Results in Comparison with State-of-the-Art Optimizers over CEC2022

Table A1. REO Results in comparison with state-of-the-art optimizers over CEC2022 (Group 1).
FunctionMeasureREOSMAGBORTHCPOCOASCSODOAZOASPBOTSOAOTTHHO
F1mean300.00016,900.5977868.615300.000981.508317.0241801.9661895.276697.86226,176.8374750.826872.700312.415
Std.0.00016,600.5977568.6150.000681.50817.0241501.9661595.276397.86225,876.8374450.826572.70012.415
error measure0.00011,874.1824956.4940.0001451.84526.0102273.2352553.890935.6146927.8665508.664463.01919.171
F1rank133251151216171334241411
F2mean402.581452.966444.742411.699423.869406.033433.995491.903431.4091094.518424.986418.084470.581
Std.3.94052.96644.74211.69923.8696.03333.99591.90331.409694.51824.98618.08470.581
error measure2.58131.29925.92420.35332.3452.94926.176106.03030.078274.22832.51526.19494.370
F2rank12322712319261830131024
F3mean600.000620.655620.413611.848637.061603.206615.503623.220616.166668.404631.587612.471628.735
Std.0.00020.65520.41311.84837.0613.20615.50323.22016.16668.40431.58712.47128.735
error measure0.00013.1969.87410.92311.7704.87412.03211.1627.2989.78812.6986.65912.167
F3rank117161026414181534231221
F4mean810.083838.870836.529822.457832.587828.879827.825824.514812.466902.295844.571823.668825.772
Std.1.43238.87036.52922.45732.58728.87927.82524.51412.466102.29544.57123.66825.772
error measure10.08311.4209.9799.8121.7645.8206.4309.5263.61112.13716.3668.6817.384
F4rank125231019171612234291113
F5mean900.0001484.6521050.1451049.8761551.4121019.7471010.1111173.4831014.6903451.3151827.040997.8731388.892
Std.0.000584.652150.145149.876651.412119.747110.111273.483114.6902551.315927.04097.873488.892
error measure0.000472.104152.08090.371199.784215.990117.853195.99266.519667.842874.95052.252143.077
F5rank128151430131117123433923
F6mean1809.7655677.84033,919.6731826.6543120.8344423.6983854.20967,336,761.7373597.939426,332,821.7624031.06611,087.1316258.135
Std.13.8523877.84032,119.67326.6541320.8342623.6982054.20967,334,961.7371797.939426,331,021.7622231.0669287.1314458.135
error measure9.7652616.17735816.91722.4031543.7961990.3471606.781301127840.4441798.520282,125,839.5801352.1326922.3443597.598
F6rank1192523151031633122421
F7mean2004.2352052.6502069.4202037.4012115.9332020.4082046.8902063.6332039.3072161.4092073.1962039.0082071.799
Std.8.82752.65069.42037.401115.93320.40846.89063.63339.307161.40973.19639.00871.799
error measure4.23526.49918.99915.01352.2644.31224.97941.43714.05440.98522.24913.07923.743
F7rank11722829215191334241123
F8mean2219.0912228.4752240.0692233.4422278.1992223.3702227.9162228.7052231.5482348.4222237.6102226.4842234.139
Std.5.55228.47540.06933.44278.19923.37027.91628.70531.548148.42237.61026.48434.139
error measure19.0916.87627.16036.71165.0277.6854.28013.28127.091139.7499.1333.11712.334
F8rank11423202821215163122921
F9mean2529.2842594.8212571.1852529.2842552.2042536.6312572.1292560.7662599.9202766.4422567.0882567.6942609.746
Std.0.000294.821271.185229.284252.204236.631272.129260.766299.920466.442267.088267.694309.746
error measure229.28447.87427.7070.00046.95032.85533.58547.17946.93963.02861.84033.09255.262
F9rank12420111821152532161727
F10mean2531.0342581.9952541.2772550.5042634.4262573.4032555.7382579.0612559.9072573.8862603.9912536.1912572.175
Std.59.074181.995141.277150.504234.426173.403155.738179.061159.907173.886203.991136.191172.175
error measure153.03467.97263.62562.531215.03861.56362.70872.96860.98134.190258.39955.02894.052
F10rank1225928191221142026318
F11mean2600.0002874.2152926.6632815.9792785.2292732.7892775.7182885.0802757.8083716.1262796.4742693.7872755.431
Std.0.000274.215326.663215.979185.229132.789175.718285.080157.8081116.126196.47493.787155.431
error measure0.000191.420212.963171.395181.294121.744184.306313.356153.406303.444195.56289.838135.942
F11rank12326201581325123318411
F12mean2860.1962872.7292871.5442869.0852898.1362864.9212866.0412911.8592932.8462884.3432892.8562866.0182896.950
Std.0.382172.729171.544169.085198.136164.921166.041211.859232.846184.343192.856166.018196.950
error measure65.19611.69013.42613.58234.7221.9124.84199.27527.0376.85533.9021.91629.487
F12rank1151411234926281720821
Table A2. REO Results in comparison with state-of-the-art optimizers over CEC2022 (Group 2).
FunctionMeasureHHOSSOARUNGWOMVOAOAGJOHLOAWOARSASHOFLODO
F1mean301.80611,293.075300.0002029.523300.0108697.4362066.980300.01416,022.1328777.1052539.0398917.237300.003
Std.1.80610,993.0750.0001729.5230.0108397.4361766.9800.01415,722.1328477.1052239.0398617.2370.003
error measure1.0712433.4320.0001697.3690.0056884.0581447.9680.0258428.5732347.4691764.6911273.7960.005
F1rank1030618827199322821297
F2mean426.3181438.447409.757420.801406.559826.861438.023406.872427.4501102.226435.4041634.352413.445
Std.26.3181038.4479.75720.8016.559426.86138.0236.87227.450702.22635.4041234.35213.445
error measure33.210400.64321.26620.9702.671323.79541.1513.12630.346653.65957.937423.50723.044
F2rank1432611429215153120338
F3mean630.755655.976612.156600.303601.182637.163606.660648.059634.659645.812609.226645.670604.583
Std.30.75555.97612.1560.3031.18237.1636.66048.05934.65945.8129.22645.6704.583
error measure8.7437.4345.6890.5161.5457.7224.49611.39713.6743.3355.5109.5295.516
F3rank223211232783025299285
F4mean826.076855.488821.889814.461817.482832.485821.232844.108841.133848.923820.322849.216825.810
Std.26.07655.48821.88914.46117.48232.48521.23244.10841.13348.92320.32249.21625.810
error measure8.52211.6156.2266.7507.8238.3217.65518.06816.7936.3735.83411.96111.096
F4rank153383518728263163214
F5mean1319.9101604.662978.973907.954900.0571308.511962.5281402.1421423.9811441.3851072.1661456.632983.810
Std.419.910704.66278.9737.9540.057408.51162.528502.142523.981541.385172.166556.63283.810
error measure145.528193.92136.63112.7820.154157.51370.099232.996286.952158.522113.335191.607118.720
F5rank213273220524252616278
F6mean3806.335166,665,619.6253182.3516395.7975971.4584037.8377945.6593861.1264434.16153,751,268.6314464.44022,703,425.4194761.776
Std.2006.335166,663,819.6251382.3514595.7974171.4582237.8376145.6592061.1262634.16153,749,468.6312664.44022,701,625.4192961.776
error measure1747.588221,589,176.9041335.9251871.9692008.8881192.6881293.8222850.7062188.78524,335,005.2081647.10727,640,219.1681754.379
F6rank932422201323111630172918
F7mean2052.2472131.1982037.9712024.4982035.5162093.5842041.2902105.7082064.5702124.8842026.4202103.2492027.173
Std.52.247131.19837.97124.49835.51693.58441.290105.70864.570124.88426.420103.24927.173
error measure24.26226.3489.1379.18539.67726.76316.68029.13623.91222.84810.41520.3786.669
F7rank1632103726142820314275
F8mean2232.4762355.4812223.8992224.0832228.1382248.7082226.5232296.5052233.0592253.8772223.9422244.2602223.770
Std.32.476155.48123.89924.08328.13848.70826.52396.50533.05953.87723.94244.26023.770
error measure13.22696.6881.2785.16328.36837.8953.17062.6126.45621.8951.83024.8915.450
F8rank1832471326102919275253
F9mean2568.0462787.2972529.2852560.0932536.6472690.0402569.1912531.5022557.9042719.0792587.3312758.2612529.284
Std.268.046487.297229.285260.093236.647390.040269.191231.502257.904419.079287.331458.261229.284
error measure30.92349.7530.00039.03732.85138.99523.4736.14235.69827.55422.18244.2560.000
F9rank1833514929197123023314
F10mean2597.9872708.8622552.3282535.1332569.8782609.4372561.4662679.8962536.7862643.8232547.3932658.6472549.937
Std.197.987308.862152.328135.133169.878209.437161.466279.896136.786243.823147.393258.647149.937
error measure133.827107.95758.76262.16787.109115.88861.797342.14363.867128.50558.188127.82962.602
F10rank2532112172715314297308
F11mean2754.2033675.5862635.0642810.8172675.4513217.8822818.1812724.1922780.0183175.5032823.4003345.2982791.527
Std.154.2031075.58635.064210.81775.451617.882218.181124.192180.018575.503223.400745.298191.527
error measure177.997364.81097.513173.872155.101372.814188.813139.182183.967459.242253.988367.387201.031
F11rank9322193292171428223117
F12mean2898.8513069.7552863.4502865.3822862.3612986.6332870.3832901.8512891.0042952.1502891.1853077.3392867.937
Std.198.851369.755163.450165.382162.361286.633170.383201.851191.004252.150191.185377.339167.937
error measure44.92677.5701.5804.2352.29449.54810.40443.70936.930109.83126.590104.4736.752
F12rank24323623012251829193310
Table A3. REO Results in comparison with state-of-the-art optimizers over CEC2022 (Group 3).
FunctionMeasureFOXROAALOAVOAChimpSHIOOHOHGSO
F1mean300.0008477.647300.000300.0002274.4433701.79114,724.6794246.992
Std.0.0008177.6470.0000.0001974.4433401.79114,424.6793946.992
error measure0.0001280.0740.0000.0001154.2143884.7696187.4181236.568
F1rank5263420223123
F2mean415.301753.296403.980431.224567.695431.2572451.052482.848
Std.15.301353.2963.98031.224167.69531.2572051.05282.848
error measure24.232324.9504.08933.680117.30430.103996.90615.775
F2rank92821627173425
F3mean649.560633.858606.490614.177627.504605.393660.693626.433
Std.49.56033.8586.49014.17727.5045.39360.69326.433
error measure9.92012.2576.7399.5347.1694.1514.0134.909
F3rank31247132063319
F4mean838.068844.107821.908833.631836.525816.676845.389832.948
Std.38.06844.10721.90833.63136.52516.67645.38932.948
error measure10.8578.86510.44812.2018.1957.9722.9334.052
F4rank24279212243020
F5mean1486.4741370.511974.6251215.9331269.705948.6491581.2901000.787
Std.586.474470.51174.625315.933369.70548.649681.290100.787
error measure60.739219.527136.945219.161182.51994.23083.25329.627
F5rank29226181943110
F6mean4294.815988,112.6793333.4683718.850855,099.2883759.744813,540,668.0791,638,867.010
Std.2494.815986,312.6791533.4681918.850853,299.2881959.744813,538,868.0791,637,067.010
error measure2067.6552,044,013.3251541.8201853.750548,275.2571754.143501,380,245.0391,071,348.661
F6rank1427572683428
F7mean2152.4452074.3982037.7022031.0012059.5012039.1922123.1322067.928
Std.152.44574.39837.70231.00159.50139.192123.13267.928
error measure53.50934.53215.6918.50810.68819.5969.0358.958
F7rank33259618123021
F8mean2405.7982243.1232227.5732223.9522304.7312226.4462457.0292232.235
Std.205.79843.12327.57323.952104.73126.446257.02932.235
error measure116.96317.0016.1473.21261.6025.090139.0942.024
F8rank33241163083417
F9mean2549.2452660.8202529.9282529.2842558.4732585.6552824.9902608.553
Std.249.245360.820229.928229.284258.473285.655524.990308.553
error measure36.70957.0061.8420.00020.85440.159107.42330.530
F9rank10286313223426
F10mean2875.7422596.3022559.3642568.0172592.9852551.9342965.7902544.649
Std.475.742196.302159.364168.017192.985151.934565.790144.649
error measure556.84986.16760.67269.926349.90458.524266.22061.169
F10rank332413162310346
F11mean2717.8082978.9272710.7842754.5403317.3332879.0244231.2452786.157
Std.117.808378.927110.784154.540717.333279.0241631.245186.157
error measure128.091168.036148.480150.486149.370203.864435.99813.195
F11rank62751030243416
F12mean2994.9192923.4122865.5552865.0092871.4822880.1023238.5492896.966
Std.294.919223.412165.555165.009171.482180.102538.549196.966
error measure87.26446.9292.5651.29712.83823.264135.9055.975
F12rank31277513163422
Table A4. REO Results vs. Traditional Optimizers over CEC2022.
FunctionMeasureREOPSOCMAESGADECSABC
F1mean300.000625.40227,818.80335,989.4831927.60417,652.8825828.933
Std.0.0001029.01313,531.1728055.384564.14013,285.9732065.394
error measure0.000325.40227,518.80335,689.4831627.60417,352.8825528.933
F1rank1267354
F2mean402.581460.904651.355640.462402.615411.132405.085
Std.3.94034.32793.846204.2991.73110.3610.069
error measure2.58160.904251.355240.4622.61511.1325.085
F2rank1576243
F3mean600.000601.413614.347655.913600.000607.309600.000
Std.0.0001.35321.04414.1310.0007.7380.000
error measure0.0001.41314.34755.9130.0007.3090.000
F3rank1467153
F4mean810.083818.356821.340878.501817.500855.322823.101
Std.1.4328.9937.80411.3142.61225.3307.211
error measure10.08318.35621.34078.50117.50055.32223.101
F4rank1347265
F5mean900.000923.360900.0001570.241900.1591210.3751088.901
Std.0.00073.1730.000372.0090.121471.622124.480
error measure0.00023.3600.000670.2410.159310.375188.901
F5rank1417365
F6mean1809.7655967.52010,971,599.78647,007,215.6462295.720245,240.6071922.310
Std.13.8521781.66426,588,179.50879,306,727.240275.759612,571.849101.382
error measure9.7654167.52010,969,799.78647,005,415.646495.720243,440.607122.310
F6rank1467352
F7mean2004.2352022.1442093.5532124.3362005.5672035.5992007.375
Std.8.8271.14965.35750.2920.62618.3550.455
error measure4.23522.14493.553124.3365.56735.5997.375
F7rank1467253
F8mean2219.0912222.9012253.0492252.0682212.1262230.5222212.797
Std.5.5527.84215.46027.5583.3677.5957.194
error measure19.09122.90153.04952.06812.12630.52212.797
F8rank3476152
F9mean2529.2842556.6922569.6732697.6562529.2842531.6572556.563
Std.0.00051.95952.86242.9420.0006.32770.619
error measure229.284256.692269.673397.656229.284231.657256.563
F9rank1567234
F10mean2531.0342542.2402790.6932742.1172500.7482545.0712570.119
Std.59.07457.933463.781374.0600.24669.9880.046
error measure153.034142.240390.693342.117100.748145.071170.119
F10rank2376145
F11mean2600.0002964.1932963.7253277.0932648.5442788.5172601.718
Std.0.000248.061134.297346.90954.081219.0912.413
error measure0.000364.193363.725677.09348.544188.5171.718
F11rank1657342
F12mean2860.1962871.8082875.1603037.5452864.5182864.0132865.237
Std.0.38210.5084.30741.2640.4110.7001.373
error measure65.196171.808175.160337.545164.518164.013165.237
F12rank1567324
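Tables A5–A7 report pairwise Wilcoxon signed-rank statistics (two-sided p-value, the rank sums T+ and T−, the normal-approximation Z, and the smaller rank sum SRN). Since T+ + T− = 210 = n(n + 1)/2, the tables imply n = 20 paired runs. As a sanity check, the following sketch (illustrative helper, no tie/zero correction) recovers the tabulated extreme case T+ = 210, T− = 0, Z = −3.9199:

```python
import math

def wilcoxon_normal_approx(t_plus, n):
    """Two-sided p-value for the Wilcoxon signed-rank test via the
    normal approximation (no continuity or tie correction)."""
    t_minus = n * (n + 1) / 2 - t_plus
    w = min(t_plus, t_minus)                 # smaller rank sum (SRN)
    mu = n * (n + 1) / 4                     # mean of W under H0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mu) / sigma                     # z <= 0 by construction
    p = 2 * 0.5 * (1 + math.erf(z / math.sqrt(2)))  # 2 * Phi(z)
    return z, p

z, p = wilcoxon_normal_approx(t_plus=210, n=20)
# z ≈ -3.9199, p ≈ 8.86e-5, matching the table entries
```

This confirms that the p-value entries of 8.86 × 10⁻⁵ correspond to the saturated case in which REO wins all 20 runs against the compared algorithm.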
Table A5. Wilcoxon Signed Results part 1.
FunctionRUNALOMVODOCOAGWOAVOAAOSHIOSCSOSHO
F1: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F2: 0.501591, 0.411465, 0.851925, 0.433048, 0.167184, 0.000681, 0.147416, 0.000593, 0.002821, 0.005734, 0.000681
T+: 87, T-: 123T+: 83, T-: 127T+: 100, T-: 110T+: 126, T-: 84T+: 142, T-: 68T+: 196, T-: 14T+: 146, T-: 64T+: 197, T-: 13T+: 185, T-: 25T+: 179, T-: 31T+: 196, T-: 14
Z: 0.6720, SRN: 123.0000Z: 0.8213, SRN: 127.0000Z: 0.1867, SRN: 110.0000Z: −0.7840, SRN: 84.0000Z: −1.3813, SRN: 68.0000Z: −3.3973, SRN: 14.0000Z: −1.4487, SRN: 59.0000Z: −3.4346, SRN: 13.0000Z: −2.9866, SRN: 25.0000Z: −2.7626, SRN: 31.0000Z: −3.3973, SRN: 14.0000
F3: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F4: 8.86 × 10⁻⁵, 0.000219, 0.000593, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 0.000293, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 0.00012, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵
T+: 210, T-: 0T+: 204, T-: 6T+: 197, T-: 13T+: 210, T-: 0T+: 210, T-: 0T+: 202, T-: 8T+: 210, T-: 0T+: 210, T-: 0T+: 208, T-: 2T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.6959, SRN: 6.0000Z: −3.4346, SRN: 13.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.6213, SRN: 8.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.8453, SRN: 2.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F5: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F6: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F7: 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 0.000338, 0.000103, 0.178956, 0.00078, 0.00014, 8.86 × 10⁻⁵, 0.000103, 0.00014, 0.000219
T+: 210, T-: 0T+: 210, T-: 0T+: 201, T-: 9T+: 209, T-: 1T+: 141, T-: 69T+: 195, T-: 15T+: 207, T-: 3T+: 210, T-: 0T+: 209, T-: 1T+: 207, T-: 3T+: 204, T-: 6
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.5839, SRN: 9.0000Z: −3.8826, SRN: 1.0000Z: −1.3440, SRN: 69.0000Z: −3.3599, SRN: 15.0000Z: −3.8079, SRN: 3.0000Z: −3.9199, SRN: 0.0000Z: −3.8826, SRN: 1.0000Z: −3.8079, SRN: 3.0000Z: −3.6959, SRN: 6.0000
F8: 0.00078, 0.000103, 0.001507, 0.000254, 0.00014, 0.000254, 0.000254, 0.000681, 0.000103, 8.86 × 10⁻⁵, 0.00039
T+: 195, T-: 15T+: 209, T-: 1T+: 190, T-: 20T+: 203, T-: 7T+: 207, T-: 3T+: 203, T-: 7T+: 203, T-: 7T+: 196, T-: 14T+: 209, T-: 1T+: 210, T-: 0T+: 200, T-: 10
Z: −3.3599, SRN: 15.0000Z: −3.8826, SRN: 1.0000Z: −3.1733, SRN: 20.0000Z: −3.6586, SRN: 7.0000Z: −3.8079, SRN: 3.0000Z: −3.6586, SRN: 7.0000Z: −3.6586, SRN: 7.0000Z: −3.3973, SRN: 14.0000Z: −3.8826, SRN: 1.0000Z: −3.9199, SRN: 0.0000Z: −3.5466, SRN: 10.0000
F9: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F10: 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 0.000103, 0.00012, 0.000189, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 0.000103, 0.000103, 8.86 × 10⁻⁵
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 209, T-: 1T+: 208, T-: 2T+: 205, T-: 5T+: 210, T-: 0T+: 210, T-: 0T+: 209, T-: 1T+: 209, T-: 1T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.8826, SRN: 1.0000Z: −3.8453, SRN: 2.0000Z: −3.7333, SRN: 5.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.8826, SRN: 1.0000Z: −3.8826, SRN: 1.0000Z: −3.9199, SRN: 0.0000
F11: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F12: 0.000681, 8.86 × 10⁻⁵, 0.011129, 0.00012, 8.86 × 10⁻⁵, 0.000103, 0.000517, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵
T+: 196, T-: 14T+: 210, T-: 0T+: 173, T-: 37T+: 208, T-: 2T+: 210, T-: 0T+: 209, T-: 1T+: 198, T-: 12T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Total+:11, -:0, =:1+:11, -:0, =:1+:11, -:0, =:1+:11, -:0, =:1+:10, -:0, =:2+:12, -:0, =:0+:11, -:0, =:1+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0
Table A6. Wilcoxon Signed Results part 2.
FunctionZOAGJOHHOWOAHGSOHLOATTHHOGBOCPODOAFOX
F1: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F2: 0.001019, 8.86 × 10⁻⁵, 0.002204, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 0.001713, 0.005111, 8.86 × 10⁻⁵, 0.016881, 8.86 × 10⁻⁵, 0.036561
T+: 193, T-: 17T+: 210, T-: 0T+: 187, T-: 23T+: 210, T-: 0T+: 210, T-: 0T+: 189, T-: 21T+: 180, T-: 30T+: 210, T-: 0T+: 169, T-: 41T+: 210, T-: 0T+: 161, T-: 49
Z: −3.2853, SRN: 17.0000Z: −3.9199, SRN: 0.0000Z: −3.0613, SRN: 23.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.1359, SRN: 21.0000Z: −2.8000, SRN: 30.0000Z: −3.9199, SRN: 0.0000Z: −2.3893, SRN: 41.0000Z: −3.9199, SRN: 0.0000Z: −2.0906, SRN: 49.0000
F3: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F4: 0.000681, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 0.000103, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 0.000103, 8.86 × 10⁻⁵
T+: 196, T-: 14T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 209, T-: 1T+: 210, T-: 0T+: 210, T-: 0T+: 209, T-: 1T+: 210, T-: 0
Z: −3.3973, SRN: 14.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.8826, SRN: 1.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.8826, SRN: 1.0000Z: −3.9199, SRN: 0.0000
F5: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F6: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F7: 8.86 × 10⁻⁵, 0.000103, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵
T+: 210, T-: 0T+: 209, T-: 1T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.8826, SRN: 1.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F8: 0.000219, 0.000189, 8.86 × 10⁻⁵, 0.00014, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 0.000103, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵
T+: 204, T-: 6T+: 205, T-: 5T+: 210, T-: 0T+: 207, T-: 3T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 209, T-: 1T+: 210, T-: 0T+: 210, T-: 0
Z: −3.6959, SRN: 6.0000Z: −3.7333, SRN: 5.0000Z: −3.9199, SRN: 0.0000Z: −3.8079, SRN: 3.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.8826, SRN: 1.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F9: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F10: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F11: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F12: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Total+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0
Table A7. Wilcoxon Signed Results part 3.
FunctionSMATSOChimpAOAROARSAFLOSPBOSSOAOHO
F1: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F2: 0.004045, 0.027621, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵, 8.86 × 10⁻⁵
T+: 182, T-: 28T+: 164, T-: 46T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −2.8746, SRN: 28.0000Z: −2.2026, SRN: 46.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F3: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F4: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F5: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F6: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F7: 8.86 × 10⁻⁵ (all columns)
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F88.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F98.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F108.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F118.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000Z: −3.9199, SRN: 0.0000
F128.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5 8.86   ×   10 5
T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0T+: 210, T-: 0
Total+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0+:12, -:0, =:0
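The identical Wilcoxon entries above follow directly from the reported rank sums: with T+ = 210 and T− = 0, the number of paired runs and the normal-approximation Z (and hence the two-sided p on the order of 8.9 × 10⁻⁵) can be recovered. A reader-side sanity check, stdlib only (our sketch, not part of the REO code):

```python
import math

def wilcoxon_normal_approx(t_plus: float, t_minus: float):
    """Normal approximation to the Wilcoxon signed-rank test, given the
    positive/negative rank sums (assumes no ties or zero differences)."""
    total = int(t_plus + t_minus)                        # n(n+1)/2
    n = (math.isqrt(8 * total + 1) - 1) // 2             # recover n from the rank sum
    mu = n * (n + 1) / 4                                 # mean of W under H0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)    # std of W under H0
    w = min(t_plus, t_minus)
    z = (w - mu) / sigma
    p_two_sided = math.erfc(abs(z) / math.sqrt(2))       # = 2 * Phi(-|z|)
    return n, z, p_two_sided

n, z, p = wilcoxon_normal_approx(210, 0)
print(n, round(z, 4), f"{p:.2e}")
```

With T+ + T− = 210 this yields n = 20 runs and Z = −3.9199, matching the table.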

Appendix A.1. Cantilever Stepped Beam

Decision variables: $x = (x_1, x_2, x_3, x_4, x_5) \in \mathbb{R}_{>0}^5$.
Objective:
$$\min_x f(x) = 0.0624\,(x_1 + x_2 + x_3 + x_4 + x_5).$$
Constraint:
$$g_1(x) = \frac{61}{x_1^3} + \frac{37}{x_2^3} + \frac{19}{x_3^3} + \frac{7}{x_4^3} + \frac{1}{x_5^3} - 1 \le 0.$$
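For concreteness, the objective and constraint above can be evaluated directly. The sample point below is illustrative only (chosen to be feasible), not the optimum reported in the results tables:

```python
def cantilever_obj(x):
    """Weight objective f(x) = 0.0624 * (x1 + x2 + x3 + x4 + x5)."""
    return 0.0624 * sum(x)

def cantilever_g1(x):
    """g1(x) = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 - 1 <= 0."""
    coeffs = (61.0, 37.0, 19.0, 7.0, 1.0)
    return sum(c / xi**3 for c, xi in zip(coeffs, x)) - 1.0

x = (6.1, 5.4, 4.6, 3.6, 2.2)            # illustrative feasible point
f_val, g_val = cantilever_obj(x), cantilever_g1(x)
print(round(f_val, 4), g_val <= 0.0)     # 1.3666 True
```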

Appendix A.2. Three-Bar Truss

Decision variables: $x = (x_1, x_2) \in \mathbb{R}_{>0}^2$.
Parameters: $l = 100$, $P = 2$, $\sigma_{\max} = 2$.
Objective (weight/volume):
$$\min_x f(x) = \left(2\sqrt{2}\,x_1 + x_2\right) l.$$
Stress constraints:
$$g_1(x) = \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\,P - \sigma_{\max} \le 0,\qquad
g_2(x) = \frac{x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\,P - \sigma_{\max} \le 0,\qquad
g_3(x) = \frac{1}{\sqrt{2}\,x_2 + x_1}\,P - \sigma_{\max} \le 0.$$
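The truss formulation above reduces to a few lines of code. The point used here is illustrative (feasible but not the reported optimum):

```python
import math

L_TRUSS, P, SIGMA = 100.0, 2.0, 2.0      # parameters from Appendix A.2

def truss_obj(x1, x2):
    """Weight/volume objective (2*sqrt(2)*x1 + x2) * l."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L_TRUSS

def truss_cons(x1, x2):
    """Stress constraints g1..g3, each required <= 0."""
    s2 = math.sqrt(2.0)
    g1 = (s2 * x1 + x2) / (s2 * x1**2 + 2.0 * x1 * x2) * P - SIGMA
    g2 = x2 / (s2 * x1**2 + 2.0 * x1 * x2) * P - SIGMA
    g3 = 1.0 / (s2 * x2 + x1) * P - SIGMA
    return (g1, g2, g3)

x1, x2 = 0.80, 0.42                      # illustrative feasible point
print(round(truss_obj(x1, x2), 2), all(g <= 0 for g in truss_cons(x1, x2)))  # 268.27 True
```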

Appendix A.3. Welded Beam

Decision variables: $x = (x_1, x_2, x_3, x_4) \in \mathbb{R}_{>0}^4$.
Parameters:
$$P = 6000,\quad L = 14,\quad \delta_{\max} = 0.25,\quad E = 30 \times 10^6,\quad G = 12 \times 10^6,\quad \tau_{\max} = 13{,}600,\quad \sigma_{\max} = 30{,}000.$$
Objective (cost):
$$\min_x f(x) = 1.10471\,x_1^2 x_2 + 0.04811\,x_3 x_4 (14 + x_2).$$
Derived quantities:
$$\tau' = \frac{P}{\sqrt{2}\,x_1 x_2},\qquad
M = P\!\left(L + \frac{x_2}{2}\right),\qquad
R = \sqrt{\frac{x_2^2}{4} + \frac{(x_1 + x_3)^2}{4}},\qquad
J = 2\sqrt{2}\,x_1 x_2 \left[\frac{x_2^2}{4} + \frac{(x_1 + x_3)^2}{4}\right],$$
$$\tau'' = \frac{M R}{J},\qquad
\tau = \sqrt{(\tau')^2 + \frac{x_2}{R}\,\tau'\tau'' + (\tau'')^2},\qquad
\sigma = \frac{6 P L}{x_4 x_3^2},\qquad
\delta = \frac{6 P L^3}{E x_3^2 x_4},\qquad
P_c = \frac{4.013\,E\sqrt{x_3^2 x_4^6/36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right).$$
Constraints:
$$g_1(x) = \tau - \tau_{\max} \le 0,\quad
g_2(x) = \sigma - \sigma_{\max} \le 0,\quad
g_3(x) = x_1 - x_4 \le 0,\quad
g_4(x) = \delta - \delta_{\max} \le 0,\quad
g_5(x) = P - P_c \le 0.$$
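The derived quantities and the five constraints can be packaged as a single evaluator. This sketch follows the appendix's variant of $J$ and $\delta$ as written there; the sample point is illustrative (near the frequently reported welded-beam optimum, slightly adjusted to stay feasible):

```python
import math

# Parameters from Appendix A.3
P, L = 6000.0, 14.0
E, G = 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam(x1, x2, x3, x4):
    """Return (cost, constraints) for the welded-beam formulation above."""
    cost = 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)
    tau_p = P / (math.sqrt(2.0) * x1 * x2)                 # tau'
    M = P * (L + x2 / 2.0)
    R = math.sqrt(x2**2 / 4.0 + (x1 + x3)**2 / 4.0)
    J = 2.0 * math.sqrt(2.0) * x1 * x2 * (x2**2 / 4.0 + (x1 + x3)**2 / 4.0)
    tau_pp = M * R / J                                     # tau''
    tau = math.sqrt(tau_p**2 + (x2 / R) * tau_p * tau_pp + tau_pp**2)
    sigma = 6.0 * P * L / (x4 * x3**2)
    delta = 6.0 * P * L**3 / (E * x3**2 * x4)
    p_c = (4.013 * E * math.sqrt(x3**2 * x4**6 / 36.0) / L**2
           * (1.0 - x3 / (2.0 * L) * math.sqrt(E / (4.0 * G))))
    g = (tau - TAU_MAX, sigma - SIGMA_MAX, x1 - x4,
         delta - DELTA_MAX, P - p_c)
    return cost, g

cost, g = welded_beam(0.2057, 3.4705, 9.0366, 0.2060)      # illustrative point
print(round(cost, 4), all(gi <= 0 for gi in g))
```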

Appendix A.4. Planetary Gear Train

Decision variables:
$$N_1, \ldots, N_6 \in \mathbb{Z}_{\ge 0},\qquad p \in \{3, 4, 5\},\qquad m_1, m_2 \in \{1.75, 2, 2.25, 2.5, 2.75, 3.0\}.$$
Parameters:
$$i_{01} = 3.11,\quad i_{02} = 1.84,\quad i_{0R} = 3.11,\quad D_{\max} = 220,\quad \delta_{22} = \delta_{33} = \delta_{55} = \delta_{35} = \delta_{34} = \delta_{56} = 0.5.$$
Speed ratios:
$$i_1 = \frac{N_6}{N_4},\qquad
i_2 = \frac{N_6 (N_1 N_3 + N_2 N_4)}{N_1 N_3 (N_6 - N_4)},\qquad
i_R = \frac{N_2 N_6}{N_1 N_3}.$$
Objective (minimize the largest deviation of the realized ratios from their targets):
$$\min f = \max\left\{\,|i_1 - i_{01}|,\; |i_2 - i_{02}|,\; |i_R - i_{0R}|\,\right\}.$$
Auxiliary angle:
$$\beta = \arccos\!\left[\frac{(N_6 - N_3)^2 + (N_4 + N_5)^2 - (N_3 + N_5)^2}{2 (N_6 - N_3)(N_4 + N_5)}\right].$$
Constraints:
$$g_1 = m_2 (N_6 + 2.5) - D_{\max} \le 0,\qquad
g_2 = m_1 (N_1 + N_2) + m_1 (N_2 + 2) - D_{\max} \le 0,$$
$$g_3 = m_2 (N_4 + N_5) + m_2 (N_5 + 2) - D_{\max} \le 0,\qquad
g_4 = \left| m_1 (N_1 + N_2) - m_2 (N_6 - N_3) \right| - (m_1 + m_2) \le 0,$$
$$g_5 = (N_1 + N_2) \sin\frac{\pi}{p} - N_2 - 2 - \delta_{22} \le 0,\qquad
g_6 = (N_6 - N_3) \sin\frac{\pi}{p} - N_3 - 2 - \delta_{33} \le 0,\qquad
g_7 = (N_4 + N_5) \sin\frac{\pi}{p} - N_5 - 2 - \delta_{55} \le 0,$$
$$g_8 = (N_3 + N_5 + 2 + \delta_{35})^2 - \left[(N_6 - N_3)^2 + (N_4 + N_5)^2 - 2 (N_6 - N_3)(N_4 + N_5)\cos\!\left(\frac{2\pi}{p} - \beta\right)\right] \le 0,$$
$$g_9 = -(N_6 - 2 N_3 - N_4 - 4 - 2\delta_{34}) \le 0,\qquad
g_{10} = -(N_6 - N_4 - 2 N_5 - 4 - 2\delta_{56}) \le 0,$$
$$g_{11} = \operatorname{rem}(N_6 - N_4,\, p) - 10^{-4} \le 0 \quad \text{(i.e., } N_6 - N_4 \equiv 0 \pmod{p} \text{ approximately).}$$
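The ratio definitions, the min-max objective, and the approximate-divisibility constraint $g_{11}$ can be checked with a few lines. The tooth counts below are illustrative placeholders, not a solution of the problem:

```python
def gear_ratios(N1, N2, N3, N4, N5, N6):
    """Realized speed ratios i1, i2, iR from Appendix A.4."""
    i1 = N6 / N4
    i2 = N6 * (N1 * N3 + N2 * N4) / (N1 * N3 * (N6 - N4))
    iR = N2 * N6 / (N1 * N3)
    return i1, i2, iR

def gear_obj(N, targets=(3.11, 1.84, 3.11)):
    """f = max_k |i_k - i_0k| over the three target ratios."""
    return max(abs(i - t) for i, t in zip(gear_ratios(*N), targets))

def g11(N4, N6, p):
    """Approximate divisibility: rem(N6 - N4, p) - 1e-4 <= 0 required."""
    return (N6 - N4) % p - 1e-4

N = (40, 20, 20, 27, 20, 90)             # illustrative tooth counts
print(round(gear_obj(N), 4), g11(N[3], N[5], 3) <= 0)   # 0.86 True
```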

Appendix A.5. Robot Gripper

Decision variables: $x = (a, b, c, e, f, \ell, \delta) \in \mathbb{R}^7$.
Parameters:
$$P = 100,\quad Z_{\max} = 99.9999,\quad Y_{\min} = 50,\quad Y_{\max} = 100,\quad Y_G = 150.$$
Kinematic angles:
$$\alpha_0 = \arccos\frac{a^2 + \ell^2 + e^2 - b^2}{2 a \sqrt{\ell^2 + e^2}} + \arctan\frac{e}{\ell},\qquad
\beta_0 = \arccos\frac{b^2 + \ell^2 + e^2 - a^2}{2 b \sqrt{\ell^2 + e^2}} - \arctan\frac{e}{\ell},$$
$$\alpha_m = \arccos\frac{a^2 + (\ell - Z_{\max})^2 + e^2 - b^2}{2 a \sqrt{(\ell - Z_{\max})^2 + e^2}} + \arctan\frac{e}{\ell - Z_{\max}},\qquad
\beta_m = \arccos\frac{b^2 + (\ell - Z_{\max})^2 + e^2 - a^2}{2 b \sqrt{(\ell - Z_{\max})^2 + e^2}} - \arctan\frac{e}{\ell - Z_{\max}}.$$
Openings:
$$Y_{x\min} = 2\left[e + f + c \sin(\beta_m + \delta)\right],\qquad
Y_{x\max} = 2\left[e + f + c \sin(\beta_0 + \delta)\right].$$
Gripping force along the stroke: for $z \in [0, Z_{\max}]$, let
$$\theta_a(z) = \arccos\frac{a^2 + (\ell - z)^2 + e^2 - b^2}{2 a \sqrt{(\ell - z)^2 + e^2}},\qquad
\theta_b(z) = \arccos\frac{b^2 + (\ell - z)^2 + e^2 - a^2}{2 b \sqrt{(\ell - z)^2 + e^2}},$$
$$G(z) = \frac{P\, b \sin\!\left(\theta_a(z) + \theta_b(z)\right)}{2 c \cos\!\left(\theta_a(z) + \arctan\frac{e}{\ell - z}\right)}.$$
Objective (minimize force nonuniformity over the stroke):
$$\min_x F(x) = \max_{0 \le z \le Z_{\max}} G(z) - \min_{0 \le z \le Z_{\max}} G(z).$$
Constraints:
$$g_1(x) = Y_{x\min} - Y_{\min} \le 0,\quad
g_2(x) = -Y_{x\min} \le 0,\quad
g_3(x) = Y_{\max} - Y_{x\max} \le 0,\quad
g_4(x) = Y_{x\max} - Y_G \le 0,$$
$$g_5(x) = \ell^2 + e^2 - (a + b)^2 \le 0,\quad
g_6(x) = b^2 - (a - e)^2 - (\ell - Z_{\max})^2 \le 0,\quad
g_7(x) = Z_{\max} - \ell \le 0.$$
(As in the code, arguments of $\arccos(\cdot)$ and square roots must remain real; complex values are penalized.)
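The force profile $G(z)$ and the nonuniformity objective, including the penalty for complex-valued arguments mentioned above, can be sketched as follows. The geometry and the penalty constant `BIG` are illustrative assumptions, not the paper's settings:

```python
import math

P, ZMAX = 100.0, 99.9999          # parameters from Appendix A.5
BIG = 1e10                        # assumed penalty when arccos/sqrt args leave the real domain

def grip_force(z, a, b, c, e, f, l, delta):
    """G(z) for actuator displacement z; f and delta enter only the opening
    constraints, so they are unused here. Returns BIG on domain violations."""
    u = (l - z) ** 2 + e ** 2
    if u <= 0.0:
        return BIG
    g = math.sqrt(u)
    ca = (a * a + u - b * b) / (2.0 * a * g)
    cb = (b * b + u - a * a) / (2.0 * b * g)
    if not (-1.0 <= ca <= 1.0 and -1.0 <= cb <= 1.0):
        return BIG                # arccos argument outside [-1, 1]
    theta_a, theta_b = math.acos(ca), math.acos(cb)
    denom = 2.0 * c * math.cos(theta_a + math.atan2(e, l - z))
    if abs(denom) < 1e-12:
        return BIG
    return P * b * math.sin(theta_a + theta_b) / denom

def force_nonuniformity(x, samples=200):
    """F(x) = max_z G(z) - min_z G(z), sampled over the stroke."""
    vals = [grip_force(ZMAX * k / samples, *x) for k in range(samples + 1)]
    return max(vals) - min(vals)

x = (150.0, 150.0, 200.0, 10.0, 150.0, 100.0, 2.0)   # illustrative geometry
print(round(grip_force(0.0, *x), 2))                 # force at the start of the stroke
```

`force_nonuniformity(x)` is then the quantity the optimizer minimizes, subject to the opening and geometry constraints $g_1$ through $g_7$.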

Figure 1. Ripple superposition. Rain creates multiple overlapping ripples that decay with distance. Energy radiates outward in all directions, with debris particles showing the multi-directional flow patterns.
Figure 2. Tide and undertow dynamics. Surface debris and birds move with the incoming tide (rightward), while sediment near the beach is pulled back by undertow (leftward). Wave angle and particle trajectories reveal the dual flow system.
Figure 3. Elite crest formation. Energy concentrates at the breaking wave peak where elite agents (green spheres) cluster. The brightest point marks the global best, while the orange center represents the stable crest mean. Spray and turbulence show the dynamic convergence.
Figure 4. Wave interference and superposition. Two raindrop sources create overlapping ripple patterns that interfere constructively (bright zones) and destructively (calm zones).
Figure 5. Vector decomposition of the movement strategy. The current agent position $x_i(t)$ experiences five force components that combine to form the mutant vector $v_i(t)$. Contour-like regions indicate fitness-landscape structure.
Figure 6. Early exploration phase. Large-amplitude movements with frequent long-range jumps (Lévy flights) enable the discovery of diverse regions. Dashed circles indicate exploration radii. Multiple evaluations (small dots) span the search space.
Figure 7. Late exploitation phase: steps with decreasing amplitude converge toward the global optimum. The gradient field (gray arrows) and tight evaluation cluster (small dots) indicate focused local search. The distance ϵ to the optimum decreases rapidly.
Figure 8. Search history, trajectory, fitness and convergence curve comparison of the final objective values achieved by different population sizes for the CEC2022 benchmark functions.
Figure 9. Search history, trajectory, fitness and convergence curve comparison of the final objective values achieved by different population sizes for the CEC2022 benchmark functions.
Figure 10. Convergence behavior of the optimizer across the CEC2022 benchmark functions. The best objective values are plotted against iterations for different functions.
Figure 11. Empirical Cumulative Distribution Function (ECDF) of final objective values for different population sizes.
Figure 12. Scatter plot of the final objective values achieved by the optimizer for different population sizes across the CEC2022 benchmark functions.
Figure 13. Boxplot showing the distribution of final objective values across different population sizes for various CEC2022 benchmark functions.
Figure 14. Boxplot showing the distribution of final objective values for different population sizes, emphasizing the impact of population size on optimization performance.
Figure 15. Performance profile comparing the fraction of functions solved within a given factor of the best objective value for different population sizes.
Figure 16. Bar plots (log scale) showing the influence of ρ on REO performance for all 12 CEC2022 functions. Faster decay (ρ = 0.80) generally provides better convergence on most functions, while very slow decay (ρ = 0.99) often degrades performance.
Figure 17. Bar plots (log scale) of the effect of γ on REO performance. A moderate pull (γ = 0.50) generally balances exploration and exploitation across many functions, while too strong or too weak attraction hampers performance on specific problems.
Figure 18. Welded connection with design parameters and load application.
Figure 19. Tension–compression spring with axial load P, overall coil diameter D, and wire diameter d at the central valley.
Figure 20. Illustration of the three-bar truss design.
Figure 21. Cantilever stepped beam with ten segments clamped at the left wall, subjected to a tip load P. The total span is L. Cross-section dimensions are width W and height H.
Figure 22. Ten-bar planar truss design.
Figure 23. Pressure vessel with cylindrical shell (length L) and hemispherical heads. Inner radius R; shell thickness T_s; head thickness T_h. Colored rings indicate wall regions; dark disks show the inner cavity at the seam planes.
Figure 24. Schematic of a stepped transmission shaft with symmetric shoulders. Red callouts denote diameters $d_1$–$d_{12}$; teal callouts denote radii $r_1$–$r_8$.
Table 1. Operator-level comparison in standard terms.

Aspect | JADE (DE) | PSO | REO
State | Positions only | Positions + velocities | Positions only
Primary move | $F(x_{pbest} - x_i) + F(x_{r1} - x_{r2})$ | $\omega v + c_1 r_1 (p_i - x_i) + c_2 r_2 (x^{*} - x_i)$ | JADE core + rank-aware $\eta_i (x^{*} - x_i)$ + time-growing $\tau (\bar{c} - x_i)$ + sinusoidal $s(t)$
Anchors | Single sampled p-best | Personal best + global best | Global best + elite mean (two anchors)
Scheduling | jDE for $F$, $Cr$ | Optional annealing of $\omega$ | jDE for $F$, $Cr$; $\eta_i$ rank-conditioned; $\tau$ increases with $t$; $s(t)$ decays
Exploration source | Differential term | Random coefficients + inertia | Differential term + structured $s(t)$ + early Lévy
Exploitation source | p-best term | Cognitive/social pulls | Rank-aware best pull + time-growing elite-mean pull
Selection | Greedy, binomial crossover | None (direct state update) | Greedy, binomial crossover
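Table 1's "Primary move" row can be sketched operationally. The schedules below (linear rank scaling for $\eta_i$, linear growth of $\tau$, and a $\rho^t$ envelope for $s(t)$) are illustrative assumptions consistent with the table, not the paper's exact coefficients:

```python
import math
import random

def reo_mutant(x, i, fitness, x_best, elite_mean, x_pbest, r1, r2,
               F, t, T, eta_max=0.9, gamma=0.5, rho=0.95):
    """Sketch of REO's primary move (illustrative schedules):
      JADE core          F*(x_pbest - x_i) + F*(x_r1 - x_r2)
      rank-aware pull    eta_i * (x_best - x_i), stronger for worse ranks
      elite-mean pull    tau(t) * (elite_mean - x_i), tau grows with t
      sinusoidal ripple  s(t) = rho**t * sin(2*pi*rand), decaying envelope
    """
    n = len(fitness)
    rank = sorted(range(n), key=lambda k: fitness[k]).index(i)  # 0 = best
    eta = eta_max * rank / max(n - 1, 1)    # rank-aware coefficient
    tau = gamma * t / T                      # time-growing elite-mean pull
    v = []
    for d in range(len(x[i])):
        core = F * (x_pbest[d] - x[i][d]) + F * (x[r1][d] - x[r2][d])
        pull = eta * (x_best[d] - x[i][d]) + tau * (elite_mean[d] - x[i][d])
        ripple = rho**t * math.sin(2.0 * math.pi * random.random())
        v.append(x[i][d] + core + pull + ripple)
    return v
```

The mutant $v_i$ would then undergo binomial crossover and greedy selection, exactly as in the last row of Table 1.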
Table 2. Average performance metrics across all CEC2022 functions.

Algorithm | Avg. Mean | Avg. Std | Avg. Error | Avg. Rank
REO | 1630.522 | 7.755 | 41.106 | 1.000
RUN | 1762.260 | 178.927 | 131.167 | 6.417
ALO | 1780.948 | 197.615 | 161.287 | 6.917
MVO | 1992.062 | 408.728 | 197.207 | 7.750
DO | 1906.588 | 323.255 | 182.939 | 8.917
COA | 1879.176 | 295.843 | 206.338 | 8.917
GWO | 2182.404 | 599.070 | 325.335 | 9.167
RTH | 1671.519 | 88.186 | 37.758 | 9.417
AVOA | 1840.468 | 257.135 | 196.813 | 10.417
AO | 2478.426 | 895.092 | 639.601 | 11.000
SHIO | 2118.822 | 535.489 | 510.489 | 12.750
SCSO | 1965.670 | 382.337 | 363.100 | 14.000
SHO | 2086.689 | 503.356 | 330.634 | 14.083
ZOA | 1857.656 | 274.322 | 264.262 | 14.500
GJO | 2327.343 | 744.010 | 264.127 | 14.500
HHO | 1867.917 | 284.584 | 197.485 | 16.750
WOA | 3237.238 | 1653.905 | 942.996 | 18.833
HGSO | 138,432.792 | 136,849.459 | 89,396.787 | 19.417
HLOA | 1891.831 | 308.498 | 311.604 | 19.500
TTHHO | 2085.397 | 502.064 | 352.049 | 19.500
GBO | 4996.690 | 3413.356 | 3444.596 | 19.667
CPO | 1900.950 | 317.616 | 319.717 | 19.917
DOA | 5,613,083.270 | 5,611,499.936 | 25,094,275.378 | 20.167
FOX | 1973.348 | 390.015 | 262.732 | 21.500
SMA | 3431.706 | 1848.372 | 1280.892 | 21.667
TSO | 2306.774 | 723.441 | 698.201 | 21.667
Chimp | 73,031.639 | 71,448.306 | 45,862.493 | 21.750
AOA | 2682.215 | 1098.881 | 768.000 | 25.083
ROA | 84,639.090 | 83,055.757 | 170,522.389 | 25.750
RSA | 4,481,662.783 | 4,480,079.450 | 2,028,245.370 | 29.083
FLO | 1,894,426.298 | 1,892,842.965 | 2,303,568.232 | 29.583
SPBO | 35,531,797.147 | 35,530,213.813 | 23,511,193.308 | 30.500
SSOA | 13,891,516.288 | 13,889,932.954 | 18,466,078.923 | 32.000
OHO | 67,798,230.993 | 67,796,647.660 | 41,782,384.436 | 32.750
Table 3. Optimizers results vs. REO across 12 functions and number of Top-3 finishes (rank ≤ 3). W/D/L counts functions where an algorithm had a better/same/worse rank than REO.

Algorithm | W | D | L | Top-3 (rank ≤ 3)
RUN | 0 | 0 | 12 | 2
ALO | 0 | 0 | 12 | 2
MVO | 0 | 0 | 12 | 4
DO | 0 | 0 | 12 | 1
COA | 0 | 0 | 12 | 3
GWO | 0 | 0 | 12 | 5
RTH | 0 | 2 | 10 | 3
AVOA | 0 | 0 | 12 | 1
AO | 0 | 0 | 12 | 1
SHIO | 0 | 0 | 12 | 0
SCSO | 0 | 0 | 12 | 0
SHO | 0 | 0 | 12 | 0
ZOA | 0 | 0 | 12 | 1
GJO | 0 | 0 | 12 | 0
HHO | 0 | 0 | 12 | 0
WOA | 0 | 0 | 12 | 0
HGSO | 0 | 0 | 12 | 0
HLOA | 0 | 0 | 12 | 0
TTHHO | 0 | 0 | 12 | 0
GBO | 0 | 0 | 12 | 0
CPO | 0 | 0 | 12 | 1
DOA | 0 | 0 | 12 | 0
FOX | 0 | 0 | 12 | 0
SMA | 0 | 0 | 12 | 0
TSO | 0 | 0 | 12 | 0
Chimp | 0 | 0 | 12 | 0
AOA | 0 | 0 | 12 | 0
ROA | 0 | 0 | 12 | 0
RSA | 0 | 0 | 12 | 0
FLO | 0 | 0 | 12 | 0
SPBO | 0 | 0 | 12 | 0
SSOA | 0 | 0 | 12 | 0
OHO | 0 | 0 | 12 | 0
Table 4. Global summary over the 12 CEC2022 functions using the reported ranks (lower is better). Columns show: Avg. rank, median rank, SD of ranks across functions, the number of wins (#rank = 1), Top-3/Top-5 counts, Bottom-3 (#rank ≥ 32), and the average-rank gap to REO (Δ REO).

Algorithm | Avg. | Med. | SD | Wins | Top-3 | Top-5 | Bottom-3 | Δ REO
REO | 1.00 | 1.0 | 0.00 | 12 | 12 | 12 | 0 | 0.00
RUN | 6.42 | 6.0 | 2.93 | 0 | 2 | 5 | 0 | 5.42
ALO | 6.92 | 6.5 | 3.04 | 0 | 2 | 4 | 0 | 5.92
MVO | 7.75 | 6.0 | 5.76 | 0 | 4 | 6 | 0 | 6.75
COA | 8.92 | 8.0 | 5.85 | 0 | 3 | 5 | 0 | 7.92
DO | 8.92 | 8.0 | 4.75 | 0 | 1 | 4 | 0 | 7.92
GWO | 9.17 | 6.5 | 7.06 | 0 | 5 | 5 | 0 | 8.17
RTH | 9.42 | 9.5 | 6.14 | 2 | 3 | 3 | 0 | 8.42
AVOA | 10.42 | 8.5 | 5.88 | 0 | 1 | 3 | 0 | 9.42
AO | 11.00 | 10.5 | 5.37 | 0 | 1 | 2 | 0 | 10.00
SHIO | 12.75 | 11.0 | 6.94 | 0 | 0 | 2 | 0 | 11.75
SCSO | 14.00 | 13.5 | 3.44 | 0 | 0 | 0 | 0 | 13.00
CPO | 15.17 | 15.0 | 8.36 | 0 | 0 | 0 | 0 | 14.17
TSO | 21.67 | 22.5 | 5.99 | 0 | 0 | 0 | 1 | 20.67
SMA | 21.67 | 22.5 | 5.34 | 0 | 0 | 0 | 1 | 20.67
Chimp | 21.75 | 21.0 | 5.51 | 0 | 0 | 0 | 0 | 20.75
ROA | 25.75 | 26.5 | 1.83 | 0 | 0 | 0 | 0 | 24.75
AOA | 25.08 | 27.0 | 5.04 | 0 | 0 | 0 | 0 | 24.08
RSA | 29.08 | 29.0 | 1.55 | 0 | 0 | 0 | 0 | 28.08
FLO | 29.58 | 29.5 | 2.43 | 0 | 0 | 0 | 3 | 28.58
SPBO | 30.50 | 33.0 | 5.55 | 0 | 0 | 0 | 8 | 29.50
SSOA | 32.00 | 32.0 | 0.71 | 0 | 0 | 0 | 11 | 31.00
OHO | 32.75 | 34.0 | 1.64 | 0 | 0 | 0 | 8 | 31.75
FOX | 21.50 | 26.5 | 11.17 | 0 | 0 | 1 | 3 | 20.50
DOA | 20.17 | 18.5 | 5.44 | 0 | 0 | 0 | 0 | 19.17
ZOA | 18.50 | 15.0 | 6.79 | 0 | 0 | 1 | 0 | 17.50
TTHHO | 19.50 | 21.0 | 4.39 | 0 | 0 | 0 | 0 | 18.50
HHO | 18.92 | 17.0 | 7.02 | 0 | 0 | 0 | 0 | 17.92
GJO | 17.42 | 16.0 | 6.01 | 0 | 0 | 0 | 0 | 16.42
HLOA | 19.92 | 18.0 | 10.14 | 0 | 0 | 0 | 0 | 18.92
WOA | 23.92 | 23.5 | 7.16 | 0 | 0 | 0 | 1 | 22.92
SHO | 24.50 | 24.0 | 6.99 | 0 | 0 | 0 | 0 | 23.50
GBO | 16.75 | 16.0 | 4.83 | 0 | 0 | 1 | 0 | 15.75
HGSO | 17.75 | 16.5 | 7.08 | 0 | 0 | 1 | 0 | 16.75
Table 5. Best algorithm(s) per function according to the global rank rows in Table A1, Table A2 and Table A3. Ties shown comma-separated.

Function | Winner(s)
F1 | REO, RTH
F2 | REO
F3 | REO
F4 | REO
F5 | REO
F6 | REO
F7 | REO
F8 | REO
F9 | REO, RTH
F10 | REO
F11 | REO
F12 | REO
Table 6. Effect of the amplitude decay rate ρ on the REO algorithm. Each entry reports the mean final objective value over three runs. Smaller values indicate better performance.

Function | ρ = 0.80 | ρ = 0.95 | ρ = 0.99
Sphere (F1) | 0.0890 | 9.6981 | 46.7919
Schwefel (F2) | 3648.03 | 3655.35 | 3656.01
Rosenbrock (F3) | 1654.72 | 0.3182 | 16,250.39
Rastrigin (F4) | 48.01 | 62.20 | 109.09
Griewank (F5) | 0.4143 | 0.5039 | 0.6817
Ackley (F6) | 19.32 | 19.96 | 19.98
Alpine (F7) | 10.67 | 14.26 | 15.86
Schaffer (F8) | 2.70 | 2.73 | 2.64
Zakharov (F9) | 914.20 | 974.44 | 964.53
Levy (F10) | 141.76 | 269.82 | 570.13
Dixon–Price (F11) | 629.94 | 1838.14 | 2338.68
Styblinski–Tang (F12) | 1.09 | 0.59 | 0.44
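One way to read Table 6 is through the half-life of the ripple envelope: if the amplitude decays as ρ^t, it halves every ln 2 / ln(1/ρ) iterations, so ρ = 0.99 keeps large perturbations alive an order of magnitude longer than ρ = 0.80 (a reading-aid sketch, not the authors' code):

```python
import math

def half_life(rho):
    """Number of iterations for the envelope rho**t to halve."""
    return math.log(2.0) / -math.log(rho)

for rho in (0.80, 0.95, 0.99):
    print(rho, round(half_life(rho), 1))   # 0.8 -> 3.1, 0.95 -> 13.5, 0.99 -> 69.0
```

The long-lived envelope under ρ = 0.99 is consistent with its degraded late-stage refinement on most functions above.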
Table 7. Effect of the pull strength γ on the REO algorithm. Smaller values indicate better performance.

Function | γ = 0.30 | γ = 0.50 | γ = 0.80
Sphere (F1) | 5.30 | 5.68 | 6.23
Schwefel (F2) | 3663.03 | 3658.88 | 3684.40
Rosenbrock (F3) | 1626.17 | 0.3217 | 12,256.93
Rastrigin (F4) | 67.48 | 59.22 | 63.12
Griewank (F5) | 0.5501 | 0.2693 | 0.2768
Ackley (F6) | 19.45 | 19.38 | 15.98
Alpine (F7) | 20.11 | 14.06 | 10.50
Schaffer (F8) | 3.17 | 3.03 | 2.72
Zakharov (F9) | 1138.79 | 598.88 | 262.22
Levy (F10) | 357.16 | 144.19 | 227.18
Dixon–Price (F11) | 645.50 | 186.41 | 625.33
Styblinski–Tang (F12) | 0.29 | 137.30 | 536.70
Table 8. Optimization results for Welded Beam.
Algorithm | Best | Mean | Std | Sem | Ranking
REO | 1.72485 | 1.72485 | 6.843448 × 10⁻⁷ | 4.839048 × 10⁻⁷ | 1
POA | 1.73003 | 1.73117 | 0.00161066 | 0.00113891 | 2
ChOA | 1.73599 | 1.74053 | 0.00641954 | 0.0045393 | 3
MFO | 1.7303 | 1.74362 | 0.0188384 | 0.0133208 | 4
TTHHO | 2.02435 | 2.18608 | 0.228711 | 0.161723 | 5
ZOA | 1.89849 | 2.31309 | 0.586327 | 0.414596 | 6
SCA | 2.07043 | 2.31321 | 0.343332 | 0.242772 | 7
TSO | 1.8501 | 2.42664 | 0.815356 | 0.576544 | 8
RSA | 2.95651 | 2.96356 | 0.00997739 | 0.00705508 | 9
SHO | 2.47667 | 3.03139 | 0.7845 | 0.554725 | 10
SMA | 3.07915 | 3.0883 | 0.0129357 | 0.0091469 | 11
FLO | 2.90831 | 3.22846 | 0.45276 | 0.320149 | 12
MPA | 3.32489 | 3.33212 | 0.0102287 | 0.00723282 | 13
BOA | 2.83457 | 3.49045 | 0.927553 | 0.655879 | 14
WOA | 3.8404 | 4.81798 | 1.38252 | 0.977588 | 15
ROA | 2.41104 | 5.40773 | 4.23796 | 2.99669 | 16
SSOA | 112,592 | 188,975 | 108,022 | 76,382.9 | 17
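The welded-beam objective is the standard weld-plus-bar material cost f(h, l, t, b) = 1.10471 h²l + 0.04811 t b (14.0 + l). As a sanity check (using a near-optimal design commonly reported in the literature, since Table 8 lists only objective statistics, not design variables), evaluating it reproduces a cost close to REO's best of 1.72485:

```python
# Standard welded-beam cost: weld material + bar material.
# The design point (h, l, t, b) is a near-optimal design from the
# literature, an assumption rather than a value from Table 8.
def welded_beam_cost(h, l, t, b):
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

print(welded_beam_cost(0.2057, 3.4705, 9.0366, 0.2057))  # ~1.72
```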
Table 9. Optimization results for Spring Design.
Algorithm | Best | Mean | Std | Sem | Ranking
REO | 180,806 | 180,806 | 0.0263029 | 0.018599 | 1
POA | 180,806 | 180,806 | 0.959752 | 0.678647 | 2
ChOA | 180,945 | 181,057 | 157.649 | 111.474 | 3
WOA | 181,491 | 182,103 | 864.784 | 611.495 | 4
MFO | 180,865 | 182,564 | 2402.46 | 1698.8 | 5
ZOA | 183,473 | 183,547 | 104.832 | 74.1274 | 6
TSO | 180,806 | 198,341 | 24,798.2 | 17,535 | 7
SCA | 197,590 | 198,388 | 1129.08 | 798.383 | 8
TTHHO | 206,916 | 211,573 | 6586.4 | 4657.29 | 9
MPA | 236,266 | 236,304 | 54.0556 | 38.2231 | 10
SHO | 268,748 | 276,016 | 10,279.4 | 7268.63 | 11
ROA | 221,451 | 333,833 | 158,931 | 112,381 | 12
SMA | 284,594 | 428,570 | 203,612 | 143,976 | 13
RSA | 269,537 | 431,878 | 229,585 | 162,341 | 14
BOA | 428,572 | 439,556 | 15,533.2 | 10,983.6 | 15
FLO | 483,334 | 505,207 | 30,933.2 | 21,873.1 | 16
SSOA | 489,755 | 583,727 | 132,897 | 93,972.6 | 17
Table 10. Optimization results for Three-Bar Truss.
Algorithm | Best | Mean | Std | Sem | Ranking
REO | 263.896 | 263.896 | 1.32176 × 10⁻⁷ | 9.34624 × 10⁻⁸ | 1
POA | 263.896 | 263.896 | 8.93859 × 10⁻⁷ | 6.32054 × 10⁻⁷ | 2
MPA | 263.898 | 263.901 | 0.00315227 | 0.00222899 | 3
ZOA | 263.899 | 263.907 | 0.0118833 | 0.00840275 | 4
ChOA | 263.951 | 263.972 | 0.0292866 | 0.0207087 | 5
MFO | 263.905 | 263.988 | 0.117595 | 0.083152 | 6
SCA | 263.933 | 264.087 | 0.21749 | 0.153788 | 7
TTHHO | 263.978 | 264.286 | 0.436233 | 0.308464 | 8
BOA | 264.282 | 264.609 | 0.461766 | 0.326518 | 9
FLO | 264.624 | 264.828 | 0.287783 | 0.203493 | 10
SHO | 265.164 | 265.35 | 0.263387 | 0.186243 | 11
SSOA | 265.062 | 265.478 | 0.588168 | 0.415898 | 12
ROA | 265.129 | 265.923 | 1.12373 | 0.794598 | 13
WOA | 264.383 | 266.118 | 2.45444 | 1.73555 | 14
RSA | 267.161 | 267.293 | 0.186617 | 0.131958 | 15
SMA | 266.368 | 268.523 | 3.04693 | 2.15451 | 16
TSO | 264.158 | 273.5 | 13.2124 | 9.34259 | 17
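The three-bar truss objective is the structure weight f(A1, A2) = (2√2 · A1 + A2) · L with L = 100. Evaluating it at the widely reported optimum (a literature design point, assumed here because Table 10 lists only objective statistics) gives a value matching the leading entries:

```python
import math

# Standard three-bar truss weight, f = (2*sqrt(2)*A1 + A2) * L, L = 100.
# The cross-sectional areas below are the literature optimum, an
# assumption rather than values taken from Table 10.
def truss_weight(A1, A2, L=100.0):
    return (2.0 * math.sqrt(2.0) * A1 + A2) * L

print(truss_weight(0.7887, 0.4082))  # ~263.9
```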
Table 11. Optimization results for Cantilever Stepped Beam.
Algorithm | Best | Mean | Std | Sem | Ranking
REO | 62,772.8 | 62,795.2 | 31.7591 | 22.457 | 1
POA | 64,036 | 64,110.5 | 105.417 | 74.541 | 2
ChOA | 64,047.7 | 64,722.5 | 954.266 | 674.768 | 3
ZOA | 64,971.2 | 65,964.9 | 1405.33 | 993.715 | 4
MPA | 70,538.5 | 71,321.5 | 1107.4 | 783.047 | 5
MFO | 63,969.8 | 71,935.2 | 11,264.7 | 7965.37 | 6
TTHHO | 71,948.2 | 72,510 | 794.495 | 561.793 | 7
ROA | 76,904.5 | 78,652.3 | 2471.8 | 1747.83 | 8
FLO | 78,149.8 | 80,258.7 | 2982.47 | 2108.93 | 9
SHO | 70,568.4 | 82,051 | 16,238.9 | 11,482.6 | 10
SMA | 82,673.8 | 86,060 | 4788.74 | 3386.15 | 11
SCA | 86,389.5 | 86,715.6 | 461.139 | 326.074 | 12
WOA | 82,236.8 | 87,037.6 | 6789.28 | 4800.75 | 13
TSO | 87,611.3 | 92,027.4 | 6245.26 | 4416.07 | 14
SSOA | 88,391.3 | 103,404 | 21,230.5 | 15,012.2 | 15
RSA | 107,990 | 121,050 | 18,469.2 | 13,059.7 | 16
BOA | 107,209 | 385,248 | 393,207 | 278,040 | 17
Table 12. Optimization results for Ten-Bar Planar Truss.
Algorithm | Best | Mean | Std | Sem | Ranking
REO | 597.999 | 598.153 | 0.217428 | 0.153745 | 1
REA | 609.626 | 614.41 | 6.76587 | 4.78419 | 2
ChOA | 629.66 | 631.075 | 2.00103 | 1.41495 | 3
SCA | 618.151 | 638.098 | 28.2097 | 19.9472 | 4
POA | 639.131 | 690.144 | 72.1435 | 51.0132 | 5
TTHHO | 726.011 | 772.247 | 65.3875 | 46.2359 | 6
BOA | 756.704 | 796.892 | 56.8347 | 40.1882 | 7
ZOA | 802.904 | 822.195 | 27.2823 | 19.2915 | 8
MFO | 598.581 | 829.543 | 326.63 | 230.962 | 9
RSA | 816.156 | 842.873 | 37.783 | 26.7166 | 10
FLO | 908.464 | 910.248 | 2.52336 | 1.78428 | 11
ROA | 941.918 | 968.643 | 37.796 | 26.7258 | 12
SHO | 910.257 | 1043.38 | 188.268 | 133.126 | 13
WOA | 606.65 | 1059.41 | 640.303 | 452.763 | 14
MPA | 1652.48 | 1699.17 | 66.0339 | 46.693 | 15
TSO | 1754.83 | 2092.94 | 478.167 | 338.115 | 16
SSOA | 2947.66 | 3022.2 | 105.415 | 74.5397 | 17
SMA | 3896.64 | 4206.47 | 438.152 | 309.82 | 18
Table 13. Pressure-Vessel Design Problem.
Algorithm | Mean | Std | Sem | Best Objective | x1 | x2 | x3 | x4 | Ranking
REO | 6106.462 | 88.44507 | 62.54011 | 6043.922 | 0.834645 | 0.421283 | 43.13177 | 164.7269 | 1
TTHHO | 6389.283 | 80.42098 | 56.86622 | 6332.417 | 0.937164 | 0.489922 | 48.51611 | 110.572 | 2
MFO | 6468.769 | 308.4362 | 218.0973 | 6250.672 | 0.950479 | 0.469907 | 49.24617 | 104.4488 | 3
ChOA | 6615.45 | 173.0552 | 122.3685 | 6493.082 | 0.893977 | 0.488128 | 46.7041 | 133.4076 | 4
MPA | 6728.932 | 902.8383 | 638.4031 | 6090.529 | 0.871793 | 0.433362 | 45.13376 | 142.7826 | 5
ZOA | 6850.71 | 259.4634 | 183.4684 | 6667.241 | 1.08779 | 0.539701 | 56.33734 | 54.87912 | 6
BOA | 7431.184 | 318.6951 | 225.3515 | 7205.833 | 1.123468 | 0.565372 | 55.30213 | 64.37156 | 7
SHO | 7457.611 | 221.8144 | 156.8465 | 7300.764 | 1.092471 | 0.644523 | 56.00177 | 56.87267 | 8
SCA | 8563.674 | 527.8143 | 373.2211 | 8190.453 | 0.945005 | 0.669178 | 41.81536 | 195.794 | 9
FLO | 9325.825 | 253.9611 | 179.5776 | 9146.247 | 1.199675 | 0.850448 | 52.88779 | 77.33427 | 10
ROA | 10,007.93 | 392.8583 | 277.7927 | 9730.138 | 1.090717 | 1.070255 | 55.8731 | 59.24966 | 11
WOA | 10,691.29 | 4116.735 | 2910.971 | 7780.319 | 1.221813 | 0.491293 | 50.61004 | 93.5787 | 12
TSO | 12,007.79 | 7287.183 | 5152.817 | 6854.969 | 0.959659 | 0.597251 | 49.20803 | 104.7554 | 13
SMA | 12,727.88 | 8309.569 | 5875.752 | 6852.123 | 0.900551 | 0.518306 | 45.92008 | 147.3161 | 14
RSA | 17,493.81 | 6137.341 | 4339.756 | 13,154.06 | 1.387056 | 1.355401 | 61.74348 | 27.09978 | 15
SSOA | 31,657.96 | 5171.079 | 3656.505 | 28,001.45 | 1.666317 | 3.576956 | 54.34977 | 95.46167 | 16
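The pressure-vessel objective is the standard cost f = 0.6224 x1 x3 x4 + 1.7781 x2 x3² + 3.1661 x1² x4 + 19.84 x1² x3, where x1 is shell thickness, x2 head thickness, x3 inner radius, and x4 cylinder length. Plugging in REO's best design variables as reported in Table 13 reproduces its best objective of 6043.92, a useful consistency check on the table:

```python
# Standard pressure-vessel cost function (material + forming + welding).
# The design point below is REO's reported best design from Table 13.
def vessel_cost(x1, x2, x3, x4):
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

print(vessel_cost(0.834645, 0.421283, 43.13177, 164.7269))  # ~6043.9
```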
Table 14. Stepped Transmission-Shaft Design.
Algorithm | Mean | Std | Sem | Best Objective | x1 | x2 | Ranking
REO | 0.002086 | 0.000259 | 0.000183 | 0.001903 | 0.087247 | 0.25 | 1
FLO | 0.002268 | 0 | 0 | 0.002268 | 0.067356 | 0.5 | 2
MFO | 0.002268 | 4.33 × 10⁻¹² | 3.06 × 10⁻¹² | 0.002268 | 0.067356 | 0.5 | 3
TSO | 0.002268 | 4.68 × 10⁻¹⁰ | 3.31 × 10⁻¹⁰ | 0.002268 | 0.067356 | 0.5 | 4
TTHHO | 0.002268 | 2.4 × 10⁻⁹ | 1.7 × 10⁻⁹ | 0.002268 | 0.067356 | 0.5 | 5
WOA | 0.002268 | 1.39 × 10⁻⁸ | 9.83 × 10⁻⁹ | 0.002268 | 0.067356 | 0.5 | 6
SHO | 0.002268 | 9.39 × 10⁻⁸ | 6.64 × 10⁻⁸ | 0.002268 | 0.067356 | 0.5 | 7
MPA | 0.002269 | 2.06 × 10⁻⁷ | 1.46 × 10⁻⁷ | 0.002268 | 0.067356 | 0.5 | 8
ZOA | 0.002269 | 8.95 × 10⁻⁷ | 6.33 × 10⁻⁷ | 0.002269 | 0.067359 | 0.5 | 9
BOA | 0.002272 | 3.22 × 10⁻⁶ | 2.28 × 10⁻⁶ | 0.002269 | 0.067371 | 0.5 | 10
ChOA | 0.002272 | 1.19 × 10⁻⁶ | 8.4 × 10⁻⁷ | 0.002272 | 0.067402 | 0.5 | 11
SCA | 0.002292 | 2.62 × 10⁻⁵ | 1.85 × 10⁻⁵ | 0.002274 | 0.067437 | 0.5 | 12
SSOA | 0.002308 | 4.16 × 10⁻⁵ | 2.94 × 10⁻⁵ | 0.002279 | 0.06751 | 0.5 | 13
SMA | 0.002317 | 1.24 × 10⁻⁶ | 8.79 × 10⁻⁷ | 0.002316 | 0.067888 | 0.502561 | 14
ROA | 0.002763 | 0.000684 | 0.000484 | 0.00228 | 0.067521 | 0.5 | 15
RSA | 0.005168 | 0.000204 | 0.000144 | 0.005024 | 0.081003 | 0.765617 | 16
Share and Cite

Fakhouri, H.N.; Rashaideh, H.; Alrousan, R.; Hamad, F.; Khrisat, Z. Ripple Evolution Optimizer: A Novel Nature-Inspired Metaheuristic. Computers 2025, 14, 486. https://doi.org/10.3390/computers14110486
