Article

Rain-Cloud Condensation Optimizer: Novel Nature-Inspired Metaheuristic for Solving Engineering Design Problems

1 Faculty of King Abdullah II, School of Information Technology, The University of Jordan, Amman 11196, Jordan
2 Data Science and Artificial Intelligence Department, University of Petra, Amman 11196, Jordan
* Author to whom correspondence should be addressed.
Eng 2025, 6(10), 281; https://doi.org/10.3390/eng6100281
Submission received: 7 September 2025 / Revised: 25 September 2025 / Accepted: 13 October 2025 / Published: 21 October 2025
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research)

Abstract

This paper presents the Rain-Cloud Condensation Optimizer (RCCO), a nature-inspired metaheuristic that maps cloud microphysics to population-based search. Candidate solutions ("droplets") evolve under a dual-attractor dynamic toward both a global leader and a rank-weighted cloud core, with time-decaying coefficients that progressively shift emphasis from exploration to exploitation. Diversity is preserved via domain-aware coalescence and opposition-based mirroring sampled within the coordinate-wise band defined by two parents. Rare heavy-tailed "turbulence gusts" (Cauchy perturbations) enable long jumps, while a wrap-and-reflect scheme enforces feasibility near the bounds. A sine-map initializer improves early coverage with negligible overhead. RCCO exposes a small hyperparameter set, and its per-iteration time and memory scale linearly with population size and problem dimension. RCCO has been compared with 21 state-of-the-art optimizers on the CEC 2022 benchmark suite, where it achieves competitive to superior accuracy and stability and attains the top result on eight functions, including in high-dimensional regimes. We further demonstrate constrained, real-world effectiveness on five structural engineering problems—cantilever stepped beam, pressure vessel, planetary gear train, ten-bar planar truss, and three-bar truss. These results suggest that a hydrology-inspired search framework, coupled with simple state-dependent schedules, yields a robust, low-tuning optimizer for black-box, nonconvex problems.

1. Introduction

Optimization lies at the core of scientific discovery and engineering design, where the goal is to identify decision variables that minimize or maximize an objective under constraints [1]. Real-world problems often exhibit nonconvex landscapes, high dimensionality, nonlinearity, noise, and expensive evaluations. Classical deterministic methods—while powerful for smooth, convex problems with available gradients—struggle when objectives are black-box, discontinuous, or multi-modal [2]. In such cases, stochastic search methods are preferred because they require minimal assumptions about the objective and can flexibly incorporate constraints, multiple objectives, and domain heuristics.
Metaheuristics are high-level search strategies that orchestrate neighborhood exploration, adaptive sampling, and information sharing to guide a population (or trajectory) toward high-quality solutions [3]. They emphasize two complementary forces—diversification (exploration) and intensification (exploitation)—and use randomness to avoid bias and premature convergence [4]. The No Free Lunch (NFL) results assert that no single optimizer dominates across all problems, motivating a continuing need for methods that offer robust performance across classes of problems and that can be tailored to specific structures (e.g., separability, sparsity, or expensive constraints) [5]. Contemporary metaheuristics further incorporate parameter control, adaptive memory, restart strategies, and surrogate modeling to improve efficiency and reliability.
Within metaheuristics, nature-inspired methods have become a prominent family. These include evolutionary algorithms [6] (e.g., genetic algorithms and evolution strategies), swarm intelligence [7] (e.g., particle swarm optimization and ant colony optimization), physics- and chemistry-inspired approaches [8] (e.g., simulated annealing and electromagnetic-like mechanisms), and ecology-/epidemiology-inspired dynamics. Properly designed, nature-inspired algorithms offer intuitive metaphors for information flow, local and global guidance, and population diversity management. At the same time, rigorous algorithmic design—clear operator definitions, principled parameterization, computational complexity analysis, and transparent benchmarking—remains crucial to ensure that metaphors translate into genuine search advantages.
Despite substantial progress, longstanding challenges persist: balancing exploration and exploitation in rugged landscapes; preventing stagnation and loss of diversity; scaling to higher dimensions; handling constraints efficiently; and maintaining performance in noisy or dynamically changing objectives [9]. Many practical problems also exhibit funneling structures and clustered basins, suggesting potential gains from multi-scale search and state-dependent step adaptation. These observations motivate the development of a new, hydrological cycle-inspired metaheuristic that operationalizes evaporation, condensation, cloud drift, and precipitation as coordinated search mechanisms.
This work introduces the Rain-Cloud Condensation Optimizer (RCCO) [10], a novel nature-inspired metaheuristic grounded in the microphysics of cloud formation and rainfall. RCCO models candidate solutions as moisture parcels (droplets) that evolve through phases analogous to evaporation, condensation, coalescence, cloud drift, and precipitation. Each phase is mapped to a specific search function: initialization and large-step diversification, elite-guided attraction with adaptive step size, information mixing via pairwise differentials, directional bias at the population level, and restart-like reinjection near promising regions. A state variable—supersaturation—regulates the transition between phases, providing an adaptive, problem-agnostic mechanism to balance exploration and exploitation.
The key contribution of this work is a hydrology-inspired optimizer that was deliberately designed to reduce tuning burden while addressing common pathologies of population search. By tying all search pressures to simple, linearly decaying schedules in $t/T$, RCCO preserves adaptivity with only a minimal hyperparameter set (N, T, and two operator probabilities), thereby lowering the barrier to practical deployment across heterogeneous tasks where problem-specific calibration is costly or impractical. We demonstrate RCCO's effectiveness on the CEC2022 benchmarks (unimodal, multi-modal, hybrid, and composite), including scalable, high-dimensional cases.

2. Related Work

The past decade has witnessed the introduction of numerous metaheuristic optimizers that draw inspiration from natural, biological, and mathematical processes [11]. Such algorithms have been designed to cope with the nonconvex and multi-modal nature of modern engineering problems, often emulating cooperative behavior in animal groups or abstracting the physics of complex phenomena [12]. In one of the earliest examples from our sample, the Optics-Inspired Optimization algorithm (OIO) treats the search space as a mirror whose peaks and valleys are reflected to construct a concave counterpart, thereby guiding candidate solutions towards promising regions [13]. Later work adapted concepts from bird navigation: the high-level target navigation pigeon-inspired optimization (HTNPIO) uses strategies such as selective mutation and enhanced landmark search to speed convergence and escape local minima [14]. Similar ideas were used in the tree seed optimization technique, where the dispersal behavior of seeds determines how features are selected for support vector machines to detect malicious Android data [15].
Many optimizers explicitly mimic the survival strategies of animals. The gazelle optimization algorithm models the escape and pursuit phases of gazelles confronted with predators; by alternating between exploration and exploitation, it shows competitive performance on benchmark functions [16]. A complementary example is the nutcracker optimizer, which translates the mechanism by which nutcrackers crack shells into a rescheduling method for congestion management in power systems [17]. Another bio-inspired design uses the proliferation of cancer cells: the Liver Cancer Algorithm (LCA) simulates tumor growth and takeover processes to balance local and global searches [18]. Even microorganisms have inspired algorithms: the coronavirus metaheuristic algorithm (CMOA) uses metabolic transformation under various conditions to model candidate interactions and preserves diversity to avoid premature convergence [19].
Predatory behavior continues to motivate new search schemes. The migration search algorithm imitates the leader–follower dynamics within animal groups and divides the population into leaders, followers, and joiners to enhance information dissemination [20]. The bacterial foraging optimization algorithm has been adapted to optimize the cantilever beam parameters by emulating the chemotactic search patterns of bacteria [21]. Equally intriguing are the predator–prey models inspired by marine life. The orca predation algorithm assigns different weights to driving, encircling, and attacking phases in order to balance exploration and exploitation [22]. The Humboldt squid optimization algorithm uses attacks and fish-school escape patterns to orchestrate cooperation of subpopulations [23]. The Walrus optimizer adapts social signalling behaviors in walrus colonies to tune the trade-off between intensification and diversification [24].
Other animal-inspired approaches include the boosted African vulture optimization algorithm, which incorporates opposite learning and dynamic chaotic scaling [25]. The gooseneck barnacle optimization algorithm abstracts the hermaphroditic mating cycle of barnacles [26]. The Hunter algorithm models cooperative hunting to localize multiple tumours in biomedical imaging [27]. The squirrel search algorithm imitates the gliding behavior of squirrels to perform dynamic foraging [28].
Swarm-intelligence algorithms take inspiration from collective motion. The jellyfish search optimizer uses ocean current patterns and food attraction to steer solutions [29]. The tiki-taka algorithm models passing and positioning in football games to maintain ball possession and explore the search space [30]. Social interaction is also modeled in the membrane-inspired multiverse optimizer, where each membrane evolves in its own subpopulation and shares best solutions with others [31]. The leader–follower particle swarm optimizer (LFPSO) divides particles into leaders and followers to maintain diversity [32]. The modified krill herd algorithm integrates Lévy flight and crossover operators for economic dispatch problems [33].
The breadth of novel optimizers introduced in recent years underscores the vibrant state of metaheuristic research. Designers have drawn inspiration from optics and acoustics, animals and microorganisms, sports and sociocultural processes, epidemiological models, and chaotic maps. The common thread among these methods is the pursuit of a balance between exploration and exploitation through dynamic behavioral rules or hybrid combinations. Many algorithms demonstrate superior performance on benchmark problems and real-world applications, highlighting the potential of nature-inspired computation for tackling complex optimization tasks.
Beyond the single-paradigm metaheuristics surveyed above, a substantial body of work pursues hybridization, either at the algorithm level (coupling two full optimizers) or at the operator level (importing targeted mechanisms). For example, Akopov proposes a matrix-based hybrid genetic algorithm (MBHGA) tailored to agent-based models of controlled trade, demonstrating that an encoding aligned with model structure can materially improve search efficiency [34]. A complementary line is RCGA–PSO hybrids: a real-coded GA interleaved with particle-swarm updates (and, in some instances, surrogate modeling) to combine GA’s recombination with PSO’s fast social drift [35]. Earlier hybrids also embed local ACO procedures inside GA to intensify search around promising alignments—e.g., the classical GA-ACO for multiple sequence alignment—illustrating the long-standing value of coupling global recombination with pheromone-guided refinement [36].
Recent work continues to explore hybridization at scale. Learning-to-rank-driven automated hybrid design composes behaviors from multiple bases (e.g., WOA, HHO, GA) on the fly [37]; deep-RL-enhanced variants of WOA address resource scheduling in industrial operating systems by guiding WOA's move selection with value estimates [38]; improved WOA variants introduce dynamic gain sharing and other schedules to tighten convergence without sacrificing global reach [39]. Parallel developments in new swarm designs and bio-inspired mechanisms (e.g., horned-lizard optimization) further enrich the design space that hybrids can draw from [40]. To sharpen the comparison, we classify representative algorithms by the mechanism used to balance exploration and exploitation and map each mechanism to its analog in RCCO.
Relative to algorithm-level hybrids such as MBHGA or RCGA–PSO, RCCO is an operator-level hybrid with a hydrology metaphor that fuses leader/core attraction (PSO-like), axis-aligned band sampling (GA/DE), opposition learning, and heavy-tailed jumps under a single, linearly decaying schedule, yielding few hyperparameters while preserving diversification. In line with recent taxonomies emphasizing mechanism-level reporting, principled parameter control, and standardized benchmarks/constraint handling [41,42], RCCO targets five gaps: low-overhead adaptivity (simple time-decays), late-stage stability (rank-weighted cloud core), escape from funnels (rare Cauchy gusts with decay), transparent constraint handling (wrap-and-reflect with penalties), and comparability (CEC-style suites and classical constrained designs).

3. Rain-Cloud Condensation Optimizer (RCCO)

Inspiration

RCCO draws an analogy between the optimization process and the life cycle of droplets inside a rain cloud. In the atmosphere, moist air rises and condenses on aerosols, forming droplets that are carried by updrafts toward a dense cloud core; entrainment mixes ambient air, while collisions cause coalescence, and sporadic turbulence injects energetic gusts. The algorithm mirrors these mechanisms in the search space: candidate solutions are droplets, the incumbent best acts as a leader, a rank-weighted average of the top fraction forms the cloud core, and stochastic perturbations act as thermal noise, entrainment, and turbulence.
Figure 1 refines this metaphor with a stylized schematic. Panel (a) shows droplets advected by buoyancy and shear toward the leader and the core within a softly shaded cloud. Panel (b) depicts domain-aware coalescence via axis-aligned band sampling between two parent droplets, an entrainment mirror across the domain center, and rare heavy-tailed gusts that propel droplets across distant regions. This visual framing motivates the mathematical operators introduced in the next section.
RCCO also seeds its initial population using a sine map, a chaotic map related to the tent map. Starting from a random point $z_0 \in [0, 1]$, the sine map iteratively applies $z_{k+1} = \sin(\pi z_k)$; for chaotic parameters, it yields a sequence that visits the unit interval densely, similar to the tent map, which folds and stretches the interval to produce a chaotic sequence. Figure 2 plots the sine map function and the orbit of one seed point. The sine map ensures that initial droplets cover the domain evenly, improving diversity during early search.
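As a minimal illustration (not the authors' reference implementation), the sine-map seeding can be sketched in NumPy; the burn-in length and the interval used for the initial random seeds are assumptions:

import numpy as np

def sine_map_init(N, n, lb, ub, burn_in=10, rng=None):
    # Seed N droplets in [lb, ub]^n with the chaotic sine map z_{k+1} = sin(pi * z_k).
    rng = np.random.default_rng() if rng is None else rng
    z = rng.uniform(0.05, 0.95, size=(N, n))  # start away from the fixed point at 0
    for _ in range(burn_in):                  # a few iterations spread the seeds chaotically
        z = np.sin(np.pi * z)
    return lb + z * (ub - lb)                 # affine rescaling to the search box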
Finally, RCCO occasionally adds heavy-tailed turbulence bursts using a Cauchy distribution. The Cauchy distribution has undefined mean and variance and exhibits much heavier tails than a Gaussian distribution [10]. Figure 3 compares the probability density functions of a standard normal and a Cauchy distribution. The heavy tails increase the probability of large jumps, enabling the optimizer to escape local optima.

4. Mathematical Model

This section formalizes the update rules underlying RCCO. Let N denote the population size, T the number of iterations, and n the dimensionality of the search space. Each candidate solution is a droplet $x_i \in \mathbb{R}^n$ with fitness $f(x_i) = f_{\mathrm{obj}}(x_i)$. Lower fitness values correspond to better solutions.

4.1. Condensation: Updraft to the Leader and Core

At the start of iteration t, the droplets are sorted in ascending order of fitness. The best droplet becomes the leader $x_{\mathrm{best}}$, and the top $q = \max\{2,\, 0.2N\}$ droplets form the cloud core. Weighted summation produces the core point
$$c(t) = \frac{\sum_{j=1}^{q} w_j\, x_{(j)}}{\sum_{j=1}^{q} w_j},$$
where $(j)$ indicates the j-th best droplet and $w_j = q - j + 1$ assigns greater weight to higher-ranked droplets. The core acts as a secondary attractor besides the leader.
Each droplet $x_i$ is then perturbed toward both the leader and the core. The buoyancy and shear coefficients decay over time to shift the algorithm from exploration to exploitation:
$$\beta(t) = 0.70\left(1 - \frac{t}{T}\right) + 0.05\,\xi_1,$$
$$\sigma(t) = 0.30\left(1 - \frac{t}{T}\right),$$
where $\xi_1 \sim U(0, 1)$ introduces randomness. The thermal variance vector scales with the search range and decays as
$$\tau(t) = 0.04\left(1 - \frac{t}{T}\right)(ub - lb),$$
where $lb$ and $ub$ are the lower and upper bounds. For droplet i, the condensation update is
$$x_i' = x_i + \beta(t)\,(x_{\mathrm{best}} - x_i) + \sigma(t)\,(c(t) - x_i) + \eta,$$
where $\eta \sim \mathcal{N}(0, \mathrm{diag}(\tau(t)^2))$ is thermal noise. Before acceptance, the candidate is wrapped within bounds using a wrap-and-reflect operator $\mathrm{Wrap}(\cdot)$ to model recirculation at cloud edges. If the new fitness improves upon the droplet's current fitness, the update is accepted.
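For concreteness, a minimal NumPy sketch of one condensation pass is given below; the array layout, the greedy acceptance bookkeeping, and the use of clipping in place of the full wrap-and-reflect operator are simplifying assumptions:

import numpy as np

def condensation_step(X, fit, f, lb, ub, t, T, rng):
    # One condensation pass over the population X (shape N x n) with fitness fit.
    N, n = X.shape
    order = np.argsort(fit)                     # ascending: best droplet first
    x_best = X[order[0]]
    q = max(2, int(round(0.2 * N)))             # top fraction forms the cloud core
    w = np.arange(q, 0, -1, dtype=float)        # rank weights w_j = q - j + 1
    core = w @ X[order[:q]] / w.sum()
    for i in range(N):
        beta = 0.70 * (1 - t / T) + 0.05 * rng.uniform()
        sigma = 0.30 * (1 - t / T)
        tau = 0.04 * (1 - t / T) * (ub - lb)    # per-coordinate thermal scale
        eta = rng.normal(0.0, tau, size=n)      # thermal noise
        cand = X[i] + beta * (x_best - X[i]) + sigma * (core - X[i]) + eta
        cand = np.clip(cand, lb, ub)            # stand-in for Wrap(.)
        f_new = f(cand)
        if f_new < fit[i]:                      # greedy acceptance
            X[i], fit[i] = cand, f_new
    return X, fit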

4.2. Coalescence, Entrainment, and Turbulence

After condensation, each droplet undergoes an exploration phase. Two distinct parents $x_p$ and $x_q$ are selected uniformly at random. A trial point is sampled from the coalescence band defined by the component-wise minimum and maximum of the parents:
$$y_i = L + r \odot (U - L),$$
where $L = \min(x_p, x_q)$, $U = \max(x_p, x_q)$, and $r \sim U(0, 1)^n$ is a uniform random vector. With probability 0.35, an entrainment mirror is applied to model the ingestion of external air into the cloud:
$$y_i \leftarrow lb + ub - y_i,$$
and the mirrored point is used if its fitness is superior. With probability 0.18, a turbulence gust injects heavy-tailed noise drawn from a Cauchy distribution:
$$y_i \leftarrow y_i + \left(1 - \frac{t}{T}\right) 0.02\,(ub - lb) \odot \tan\!\left(\pi\left(u - \tfrac{1}{2}\right)\right),$$
where $u \sim U(0, 1)^n$ component-wise. The tangent transform generates standard Cauchy variates. The wrap-and-reflect operator ensures the candidate remains in bounds. The update is accepted if it yields a lower fitness.
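A matching sketch of the exploration pass follows; the clipping bound handler and the extra evaluations spent on the mirror test are again simplifying assumptions:

import numpy as np

def exploration_step(X, fit, f, lb, ub, t, T, rng, p_mirror=0.35, p_gust=0.18):
    # Coalescence band sampling with optional mirroring and Cauchy gusts.
    N, n = X.shape
    for i in range(N):
        p, q = rng.choice(N, size=2, replace=False)   # two distinct parents
        L = np.minimum(X[p], X[q])
        U = np.maximum(X[p], X[q])
        y = L + rng.uniform(size=n) * (U - L)         # sample inside the band
        if rng.uniform() < p_mirror:                  # entrainment mirror
            y_m = lb + ub - y
            if f(y_m) < f(y):
                y = y_m
        if rng.uniform() < p_gust:                    # heavy-tailed turbulence gust
            u = rng.uniform(size=n)
            y = y + (1 - t / T) * 0.02 * (ub - lb) * np.tan(np.pi * (u - 0.5))
        y = np.clip(y, lb, ub)                        # stand-in for Wrap(.)
        f_y = f(y)
        if f_y < fit[i]:
            X[i], fit[i] = y, f_y
    return X, fit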
To overcome the limitation of purely time-decaying coefficients in Equations (4)–(27), we optionally adapt $\beta(t)$ and $\sigma(t)$ using feedback from the search state. Let the normalized population diversity be
$$\Delta(t) = \frac{1}{n} \sum_{d=1}^{n} \frac{\mathrm{std}\{x_{i,d}\}_{i=1}^{N}}{ub_d - lb_d},$$
and define a short-horizon improvement rate over a window h as
$$\rho(t) = \frac{\mathrm{RainCurve}[t-h] - \mathrm{RainCurve}[t]}{|\mathrm{RainCurve}[t-h]| + \varepsilon}, \qquad s(t) = \gamma\, s(t-1) + (1 - \gamma)\,\mathbb{1}\{\rho(t) \le 0\}.$$
We then form adaptive coefficients
$$\tilde{\beta}(t) = \mathrm{clip}\!\left(\beta(t)\left[1 - \lambda_{\mathrm{div}}(1 - \Delta(t)) + \lambda_{\mathrm{prog}} \max\{0, \rho(t)\}\right],\ \beta_{\min},\ \beta_{\max}\right),$$
$$\tilde{\sigma}(t) = \mathrm{clip}\!\left(\sigma(t)\left[1 + \lambda_{\mathrm{div}}(1 - \Delta(t)) + \lambda_{\mathrm{stag}}\, s(t)\right],\ \sigma_{\min},\ \sigma_{\max}\right).$$
We use $(\tilde{\beta}(t), \tilde{\sigma}(t))$ in place of $(\beta(t), \sigma(t))$ inside Equation (28). Here, $\mathrm{clip}(u, a, b) = \min\{\max\{u, a\}, b\}$ enforces bounds (e.g., $\beta_{\min} = 0.02$, $\beta_{\max} = 0.80$, $\sigma_{\min} = 0$, $\sigma_{\max} = 0.50$); typical settings $h \in [5, 10]$, $\gamma = 0.9$, $\varepsilon = 10^{-12}$, and small gains $\lambda_{\mathrm{div}}, \lambda_{\mathrm{stag}}, \lambda_{\mathrm{prog}} \in [0, 0.5]$ work well. Intuitively, when diversity shrinks or progress stalls, $\tilde{\sigma}(t)$ increases (more exploration) and $\tilde{\beta}(t)$ is restrained; when progress is strong, $\tilde{\beta}(t)$ is reinforced. This state-aware control parallels DRL-guided parameter tuning reported in the metaheuristics literature [38].
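A compact sketch of this state-aware rescaling, using the illustrative gains and clip values quoted above, could read as follows (the window handling is an assumption):

import numpy as np

def adapt_coeffs(beta, sigma, X, lb, ub, rain_curve, t, s_prev,
                 h=7, gamma=0.9, lam_div=0.3, lam_stag=0.3, lam_prog=0.3, eps=1e-12):
    # Rescale beta(t)/sigma(t) from normalized diversity and recent progress.
    diversity = float(np.mean(X.std(axis=0) / (ub - lb)))
    rho = 0.0
    if t >= h:
        rho = (rain_curve[t - h] - rain_curve[t]) / (abs(rain_curve[t - h]) + eps)
    s = gamma * s_prev + (1 - gamma) * float(rho <= 0)   # smoothed stagnation signal
    beta_t = np.clip(beta * (1 - lam_div * (1 - diversity) + lam_prog * max(0.0, rho)), 0.02, 0.80)
    sigma_t = np.clip(sigma * (1 + lam_div * (1 - diversity) + lam_stag * s), 0.0, 0.50)
    return float(beta_t), float(sigma_t), s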
We also expose the secondary operators—band sampling, mirroring, and gusts—as a portfolio $\Omega$ with selection probabilities $p_o(t)$ for $o \in \Omega$. After each iteration, we update a smoothed reward and the selection distribution:
$$r_o(t) \leftarrow (1 - \alpha)\, r_o(t-1) + \alpha\, \hat{\delta}_o(t), \qquad p_o(t+1) \leftarrow (1 - \eta)\, p_o(t) + \eta\, \mathrm{softmax}\!\left(\frac{r_o(t)}{\tau}\right),$$
where $\hat{\delta}_o(t)$ is the normalized improvement credited to operator o (zero if no improvement), and $\alpha, \eta \in (0, 1)$, $\tau > 0$ are smoothing/temperature hyperparameters. This adaptive operator selection biases RCCO toward operators that are empirically effective on the current landscape, aligning with hybrid scheduling and learning-to-rank composition strategies [37]. Both adaptive modules are optional; when disabled, RCCO recovers the original fixed linear schedules for full reproducibility.
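The reward and probability updates fit in a few lines of NumPy; the smoothing, blending, and temperature values below are illustrative assumptions:

import numpy as np

def update_operator_probs(r, p, delta_hat, alpha=0.3, eta=0.3, temp=0.5):
    # EWMA of per-operator rewards, then blend p toward softmax(r / temp).
    r = (1 - alpha) * r + alpha * delta_hat
    z = np.exp(r / temp - np.max(r / temp))      # numerically stable softmax
    p = (1 - eta) * p + eta * z / z.sum()
    return r, p / p.sum()                        # renormalize for safety

An operator for the next iteration can then be drawn with, e.g., rng.choice(len(p), p=p) over the portfolio (band, mirror, gust).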

5. Pseudocode

Algorithm 1 summarizes the RCCO procedure. It uses the equations defined above to update each droplet. Condensation (lines 7–14) uses the leader and core attraction defined in Equation (28). Coalescence and entrainment (lines 18–29) draw new candidates from the band (Equation (6)) and optionally apply mirroring (Equation (7)). Turbulence bursts (line 23) follow Equation (8). After each update, the candidate is wrapped into the domain. The iteration-best value is recorded in the curve RainCurve[t].
Algorithm 1 Rain-Cloud Condensation Optimizer (RCCO)
Require: population size N, iterations T, bounds lb, ub, dimension n, objective f
Ensure: best rain rate f_best, best droplet x_best, convergence curve RainCurve
  1: Initialize N droplets {x_i} via sine-map seeding
  2: Evaluate fitness f_i = f(x_i) and identify x_best
  3: for t = 1 to T do
  4:     Sort droplets by f_i and recompute x_best                      ▹ Condensation
  5:     Compute core point c(t) using Equation (26)
  6:     for i = 1 to N do
  7:         Compute coefficients β(t) and σ(t) from Equations (4)–(27)
  8:         Generate thermal noise η ~ N(0, diag(τ(t)²))
  9:         x_i' ← x_i + β(t)(x_best − x_i) + σ(t)(c(t) − x_i) + η from Equation (28)
 10:         x_i' ← Wrap(x_i')                                          ▹ wrap–reflect bounds
 11:         f' ← f(x_i')
 12:         if f' < f_i then
 13:             x_i ← x_i'; f_i ← f'; update x_best if needed
 14:         end if
 15:     end for
 16:     for i = 1 to N do                ▹ Coalescence, entrainment, and turbulence
 17:         Select parents p, q uniformly at random
 18:         Sample y ← L + r ⊙ (U − L) using Equation (6)
 19:         if rand < 0.35 then
 20:             Apply mirror y ← lb + ub − y as in Equation (7)
 21:         end if
 22:         if rand < 0.18 then
 23:             Add gust y ← y + (1 − t/T) · 0.02(ub − lb) ⊙ tan(π(u − ½)) (Equation (8))
 24:         end if
 25:         y ← Wrap(y)
 26:         Evaluate f_y = f(y)
 27:         if f_y < f_i then
 28:             x_i ← y; f_i ← f_y; update x_best if needed
 29:         end if
 30:     end for
 31:     RainCurve[t] ← f_best
 32: end for
To strengthen the theoretical basis for the RCCO design given in the algorithm, we note that the condensation update in Equation (28), together with the time-decaying stochastic terms from Equations (4)–(27), can be interpreted as a stochastic contraction toward a moving attractor formed by the current leader and the rank-weighted cloud core. As the buoyancy, shear, and thermal scales decay in time, the effective contraction rate increases while the noise variance decreases, which provides a principled explanation for RCCO's stable late-stage exploitation and smooth convergence profiles.
The rank-weighted cloud core defined in Equation (26) is the minimizer of a weighted least-squares potential over the population. Because it aggregates information from the top-ranked droplets with explicit weights, its sampling variance is smaller than that of any individual point near the same basin. Pulling droplets simultaneously toward the leader and this low-variance core reduces estimator noise in the target direction and damps oscillations, thereby accelerating practical descent and improving robustness on narrow basins.
The sine-map initialization in the algorithm yields an ergodic, low-autocorrelation sequence which, after affine rescaling to $[lb, ub]$, improves space-filling and reduces the initial covering radius of the population. This "chaos-aided" seeding increases the probability that at least one droplet starts close to a high-quality basin, a well-documented benefit in nature-inspired optimization [8].
RCCO exploration operators supply complementary mechanisms. Band sampling in Equation (6) performs geometry-aware recombination: within locally convex basins, segments between good parents stay inside the basin and advance exploitation; along rugged fronts, the same segments probe informative directions between incumbents. The mirror move in Equation (7) implements opposition-based learning in bounded domains and has been shown to increase improvement probability by testing complementary locations [43]. The Cauchy gusts in Equation (8) introduce heavy-tailed jumps for basin escape, with an amplitude that is explicitly annealed over iterations so that global exploration gradually fades into local refinement.
Taken together with wrap-reflect feasibility handling as specified in the algorithm, these components define a time-inhomogeneous Markov process that is broadly exploratory early and progressively contractive later. This view provides methodological justification—beyond metaphor—for why the weighted cloud core, sine-map seeding, band sampling, opposition-based mirroring, and annealed heavy-tailed gusts contribute to RCCO's strong performance on CEC-style landscapes.

6. Movement Strategy

The movement strategy of RCCO combines attraction to the leader and core with random perturbations. Figure 4 visualizes how a droplet at position $x_i$ (blue) is pulled toward the leader $x_{\mathrm{best}}$ (red) and the core $c(t)$ (orange). The vectors representing buoyancy and shear from Equation (28) are shown by arrows of diminishing length as iterations progress. Thermal noise sampled from a normal distribution adds jitter to prevent premature convergence.

7. Exploration and Exploitation Behavior

RCCO balances exploration and exploitation through its two-stage update. Condensation focuses exploitation: droplets move toward the leader and weighted core, and the magnitude of buoyancy and shear coefficients decreases over time, concentrating search near promising regions. Coalescence and entrainment drive exploration: sampling along the band (Equation (6)) generates diverse combinations of parent solutions, mirroring across the domain (Equation (7)) introduces opposition-based learning [43], and turbulence bursts (Equation (8)) occasionally produce large leaps due to the heavy-tailed Cauchy distribution.
Figure 5 illustrates the contrast between the exploitation phase (left) and the exploration phase (right). During exploitation, points cluster around the leader and core (dense cloud of green points). In exploration, sampling between parents, mirror operations, and gusts scatter points widely (blue markers), enabling the algorithm to escape local minima. Over iterations, the algorithm gradually reduces exploration intensity by decaying the coefficients in Equations (4)–(27) and the weighting of turbulence bursts.
To justify the main design choices, note that the condensation step (Equation (28)) can be rewritten as $x' = (1 - \alpha_t)x + \alpha_t z_t + \eta_t$, where $\alpha_t = \beta(t) + \sigma(t) \in (0, 1)$ and $z_t = [\beta(t)\, x_{\mathrm{best}} + \sigma(t)\, c(t)]/\alpha_t$. Under the time-decaying noise of Equations (4)–(27), this induces a stochastic contraction toward the moving attractor $z_t$ in the sense that
$$\mathbb{E}\left[\|x' - z_t\|^2 \mid \mathcal{F}_t\right] \le (1 - \alpha_t)^2 \|x - z_t\|^2 + \mathrm{tr}\,\mathrm{Cov}(\eta_t),$$
yielding stable late-stage exploitation. The weighted cloud core $c(t)$ (Equation (26)) is the minimizer of $Q(u) = \sum_i w_i \|u - x_i\|^2$ with rank-based weights $w_i$, hence a low-variance estimator of the location of the promising region; if the top-k points around $x$ have covariance $\Sigma_k$, then $\mathrm{Var}[c(t)] = \frac{\sum_i w_i^2}{(\sum_i w_i)^2}\, \Sigma_k$, which is smaller than the variance of any single droplet. Combining $c(t)$ with $x_{\mathrm{best}}$ therefore reduces estimator variance for $z_t$, dampens oscillations, and accelerates descent. Chaotic sine-map seeding produces an ergodic, low-autocorrelation set after affine rescaling to $[lb, ub]$, improving space filling and reducing the initial covering radius so that at least one droplet is more likely to fall inside a good basin (consistent with chaos-aided metaheuristics [8]). Band sampling (Equation (6)) provides geometry-aware recombination: in locally convex basins, convex combinations of parents remain near the basin; in rugged zones, segments between historically good points probe informative directions. The mirror move (Equation (7)) implements opposition-based learning, which in bounded domains increases the probability of improvement by testing the complementary location of a candidate [43]. Finally, Cauchy gusts (Equation (8)) introduce a heavy-tailed chance of long jumps that helps escape deep funnels, while the amplitude factor $(1 - t/T)$ anneals exploration so the dynamics become increasingly contractive. Together with wrap-reflect bounds, these components define a feasible, time-inhomogeneous Markov process that is exploratory early and progressively exploitative, providing a principled mechanism—beyond the guiding metaphor—by which RCCO balances global search with stable convergence.

8. Complexity Analysis

The computational complexity of RCCO can be derived by analyzing the cost per iteration. Let $c_f$ denote the cost of evaluating the objective function. During condensation, each of the N droplets computes coefficients and noise (constant cost) and evaluates the objective once for its proposed update. The complexity of condensation is therefore
$$O(N\, c_f).$$
During coalescence and entrainment, each droplet samples two parents and performs up to two additional objective evaluations (for the mirrored and gusted candidates), so the exploration phase is likewise $O(N\, c_f)$. Since both phases run in each iteration, the total time complexity over T iterations is
$$O(T\, N\, c_f).$$
The algorithm stores N droplets of dimension n and their fitness values, leading to a memory complexity of $O(Nn)$. Additional temporary vectors such as $c(t)$ and random variates have lower order and can be neglected. Thus, RCCO scales linearly with population size and dimensionality and is suitable for high-dimensional problems when N is kept moderate.
Beyond this motivation, RCCO's strength lies in three complementary design choices that directly target premature convergence, boundary brittleness, and inefficient exploration. First, intensification is guided by a dual pull—toward both the leader and a rank-weighted cloud core of the top quintile—which stabilizes progress and reduces over-commitment to a single anchor. Second, domain-aware coalescence within the coordinate-wise band spanned by two droplets, augmented by a mirror test across the domain center, exploits discovered structure while probing complementary regions at constant cost. Third, rare, time-damped Cauchy gusts provide inexpensive long jumps that improve basin-escape probability without explicit restarts, and wrap-and-reflect boundary handling maintains dynamic feasibility near the walls.

Compared Algorithms

In this paper, we benchmarked a diverse suite of optimization algorithms to evaluate their performance, including the following: Slime Mould Algorithm (SMA) adapts oscillatory foraging intensity to modulate direction and step size for stochastic search [44]; Gradient-Based Optimizer (GBO) couples a gradient-inspired update with a local-escaping operator to balance intensification and diversification [45]; Sand Cat Swarm Optimization (SCSO) emulates sand cat hunting (digging/low-frequency sensing) to switch between global exploration and local exploitation [46]; Whale Optimization Algorithm (WOA) models humpback bubble-net foraging via encircling and logarithmic-spiral moves to intensify around elites [47]; Jellyfish Search Optimizer (JSO) alternates passive ocean-current drift (global) with active food attraction (local) [48]; Leader–Follower Particle Swarm Optimizer (LFPSO) partitions particles into leaders and followers to enhance information sharing while preserving diversity [49]; Artificial Protozoa Optimizer (APO) abstracts protozoa foraging and reproduction behaviors for continuous optimization [50]; Golden Jackal Optimization (GJO) uses pack encircling and attacking to intensify search around promising regions [51]; Moth–Flame Optimization (MFO) guides agents along logarithmic spirals toward "flames" (best solutions) to trade off exploration and exploitation [52]; Artificial Ecosystem-Based Optimization (AEO) coordinates producer/consumer/decomposer interactions as complementary move operators [53]; Orca Predation Algorithm (OPA) implements cooperative driving, encircling, and attacking phases to balance search [54]; Walrus Optimizer (WO) leverages colony social signalling to sustain diversity while refining elites [55]; Sea-Horse Optimizer (SHO) derives movement rules from sea-horse predation and mating patterns to navigate complex landscapes [56]; Particle Swarm Optimization (PSO) simulates the social behavior of bird flocking by adjusting particle velocities toward both personal and global best positions to converge to optimal solutions [57]; Genetic Algorithm (GA) evolves a population of candidate solutions through selection, crossover, and mutation, mimicking natural evolution to search for optimal solutions [58]; Real-Coded Genetic Algorithm (RCGA) encodes solutions as real-valued vectors and applies crossover and mutation directly on continuous variables to solve continuous optimization problems [59]; and Enhanced Horned Lizard Optimization Algorithm (EHLOA) augments the standard HLOA with strategies such as round initialization, escape operators, and burst attacks to escape from local optima and effectively handle high-dimensional problems [30].

9. Assessment of the CEC2022 Benchmark Functions

Table 1 and Table 2 present the comparative evaluation of RCCO against twenty-one state-of-the-art algorithms on the full CEC2022 benchmark suite. The results highlight the consistent superiority of RCCO across unimodal, multimodal, hybrid, and composition functions.
For the unimodal functions F1–F4, RCCO attains first rank in all cases, with the lowest mean values and error measures. On F1, it records $3.00 \times 10^{2}$ with negligible variance, far outperforming SMA, SPBO, and WOA. Similarly, on F2–F4, RCCO remains the most accurate and stable, showing tight standard deviations compared to alternatives like SSOA and BOA that suffer from instability. These results demonstrate RCCO's decisive exploitation capability on simple landscapes.
On the multimodal and hybrid functions F5–F8, RCCO continues to perform strongly. It ranks first on F5 with a mean of $9.00 \times 10^{2}$ and error $1.47 \times 10^{-3}$, while many algorithms such as SPBO and OHO diverge. For the rugged F6, RCCO ranks second overall, maintaining a competitive mean ($2.68 \times 10^{3}$) and variance far below the $10^{7}$–$10^{8}$ range of weaker methods. On F7, it again secures the top rank with stable convergence, and on F8, it remains competitive (rank 5), confirming robustness across challenging hybrid/composition landscapes.
Finally, on the composition functions F9–F12, RCCO consolidates its performance. It ranks first on F9 and F11, achieving the lowest mean values and minimal variance, significantly outperforming less stable algorithms such as ROA and SSOA. On F10, RCCO delivers mid-tier results (rank 5) yet still maintains better stability than high-error competitors. For F12, RCCO secures second place with a mean of $2.87 \times 10^{3}$ and low variance, almost identical to the top performer. These outcomes underline RCCO's adaptability to complex landscapes that blend multiple functional properties.
In summary, RCCO demonstrates state-of-the-art performance across the CEC2022 suite. It consistently achieves first place on unimodal problems, dominates multimodal and hybrid functions with excellent accuracy and stability, and performs strongly on the most challenging composition functions. This confirms its effectiveness as a robust and reliable optimizer capable of balancing exploration and exploitation across diverse optimization landscapes.
As can be seen in Table 3, RCCO attains the best mean on 10/12 problems (F1, F3–F5, F6–F9, F11–F12), finishes runner-up on F10, and places third on F2, yielding the strongest average rank (1.25) versus PSO (2.83), EHLOA (3.00), RCGA (3.08), and GA (4.83). On unimodal F1–F4, RCCO consistently dominates (e.g., F1 mean $3.00 \times 10^{2}$ with STD $= 4.48 \times 10^{-3}$ vs. PSO $3.01 \times 10^{2}$, EHLOA $3.23 \times 10^{2}$, RCGA $2.64 \times 10^{4}$, GA $3.50 \times 10^{4}$), evidencing fast and stable exploitation. On the rugged F6, RCCO achieves $2.68 \times 10^{3}$, improving over EHLOA ($3.31 \times 10^{3}$) and PSO ($7.60 \times 10^{3}$) while avoiding the catastrophic errors of RCGA/GA ($4.99 \times 10^{4}$ and $1.41 \times 10^{8}$). RCCO is also conspicuously stable on composite cases; for example, on F9, it matches or slightly improves the best mean ($2.53 \times 10^{3}$) but with a much tighter spread (STD $= 4.69 \times 10^{-2}$, orders of magnitude below PSO's $6.17 \times 10^{1}$), and on F12, it attains the lowest mean ($2.87 \times 10^{3}$) with the smallest variance. The only clear exceptions are F2, where EHLOA leads by 6% ($4.01 \times 10^{2}$ vs. RCCO $4.25 \times 10^{2}$), and F10, where RCGA edges RCCO by 0.8% ($2.53 \times 10^{3}$ vs. $2.55 \times 10^{3}$). Overall, RCCO combines top accuracy with consistently low dispersion, outperforming PSO and the genetic baselines (RCGA/GA) and matching or surpassing the enhanced hybrid (EHLOA) on most functions.

10. Qualitative Assessment of the CEC2022 Benchmark Functions

For RCCO with populations $N \in \{20, 50, 100\}$, the search-history panels show that, for all three population sizes, RCCO begins with a broad scatter of samples and then collapses rapidly into a compact cluster around the global basin. On the easier, nearly convex landscapes (F1, F3–F5, F7, F10–F12), the cluster forms close to the origin; on the more deceptive, multimodal surfaces (F6, F8–F9), several "islands" appear and RCCO concentrates on the best island. Increasing the population from 20 to 100 mainly densifies this cluster and smooths the spatial coverage, while the overall contraction pattern remains the same, as seen in Figure 6 and Figure 7.
In the trajectory plots, the parameter traces exhibit a short transient during the first few tens of iterations for N = 20, 50, and 100, after which the motion becomes nearly flat, indicating early exploitation. On narrow or deceptive landscapes (notably F2 and F9), the trajectories display a handful of step-like jumps, reflecting deliberate relocations into better basins rather than random wandering. The length of the transient is similar across the three population sizes, with larger N producing slightly smoother paths.
The average-fitness curves (log scale) fall steeply at the start for all three populations—often by several orders of magnitude—followed by a slower, steady decline. A salient feature across N = 20, 50, and 100 is that the population mean tracks the best-so-far closely, evidencing population-wide progress rather than improvement confined to a single elite. Larger populations make the mean curves smoother but do not change the overall descent pattern.
The convergence (best-so-far) curves continue to decrease after the initial drop and show additional step reductions when RCCO hops between basins, indicating resilience to premature stagnation. This behavior is consistent for N = 20, 50, and 100, with larger N yielding slightly finer late-stage refinements but similar final accuracy.
Problem-wise, RCCO converges quickly and monotonically on F1, F3–F5, F7, and F10–F12 for all three population sizes, with the cloud of samples tightening near the optimum and both average and best fitness decreasing smoothly. On F2, the algorithm undergoes a few pronounced relocations before settling, leading to a stepwise convergence profile that still attains low terminal error. The most challenging multimodal cases (F6, F8, F9) highlight RCCO's ability to escape local minima: the search history shows migration toward the most promising island, and the best-so-far curve keeps descending throughout the run. F11 exhibits a dramatic early reduction—several orders of magnitude—followed by steady refinement across N = 20, 50, and 100. For F12, all three populations reach near-zero error almost immediately.
Across populations of 20, 50, and 100, RCCO consistently demonstrates fast basin identification, purposeful exploration when needed, and strong population-level improvement, with low risk of premature convergence and particularly robust performance on the deceptive multimodal functions (F6 and F11).

10.1. Box Plots Across All Benchmark Functions

Figure 8 and Figure 9 illustrate the box-plot analysis of the Rain-Cloud Condensation Optimizer (RCCO) across all twelve CEC2022 benchmark functions, considering population sizes of 20, 50, and 100. For unimodal functions (F1–F4), the distributions show tight clustering, with median values improving significantly as population size increases, confirming the optimizer's ability to exploit the search space efficiently. In contrast, multimodal cases (F5–F8) reveal broader spreads, yet the larger populations maintain superior median performance and reduce variability, which highlights the benefit of population diversity in escaping local optima. Finally, for hybrid and composition functions (F9–F12), RCCO demonstrates robust improvements when population size is increased to 100, with sharper reductions in the final objective values and narrower variability, underscoring the importance of balancing exploration and exploitation. Collectively, the box-plot summary in Figure 8 and Figure 9 validates RCCO's consistent scalability and adaptability across problem categories, with population size acting as a decisive factor for stability and convergence.

10.2. Convergence Bands Across the Benchmark Suite

Figure 10 presents the convergence bands of the RCCO optimizer across all twelve CEC2022 benchmark functions. Each plot illustrates the median best objective over iterations for different population sizes, with the shaded regions denoting the interquartile ranges, thereby capturing the variability of performance across runs.
On the unimodal functions (F1–F4), RCCO exhibits consistent and rapid declines in the best objective values, indicating effective exploitation and strong convergence properties. The narrow bands further emphasize robust stability, especially for larger population sizes. In the multimodal functions (F5–F8), the optimizer maintains competitive trajectories with reduced variability compared to smaller populations, demonstrating its capacity to navigate rugged landscapes while preserving diversity. The hybrid and composition functions (F9–F12) show a broader spread in the bands, reflecting higher problem complexity, yet RCCO consistently outperforms or matches competing strategies by achieving steady improvements and avoiding premature stagnation.
Overall, the convergence-band analysis underscores RCCO's robustness in balancing exploration and exploitation. Its ability to sustain stable performance across unimodal, multimodal, hybrid, and compositional functions highlights its adaptability and resilience in tackling diverse optimization challenges.

10.3. Performance Profile for RCCO

Figure 11 presents the performance profile of the Rain-Cloud Condensation Optimizer (RCCO) across different population sizes on the CEC2022 benchmark suite. The horizontal axis τ denotes the performance factor relative to the best result, while the vertical axis represents the fraction of functions for which the optimizer achieves a solution within this factor. The results clearly highlight the benefits of larger population sizes. With a population of 100, RCCO consistently achieves near-complete coverage at very small τ values, underscoring its robustness and ability to maintain diversity. The medium population of 50 demonstrates a balanced trade-off between efficiency and accuracy, covering more functions effectively than the smallest population while using fewer resources than 100. In contrast, the population size of 20 lags behind, with significantly fewer functions solved near optimality, reflecting limited exploration capacity. Overall, these findings indicate that RCCO benefits strongly from larger populations, which improve global search capability and mitigate premature convergence.

10.4. Diagnostic Visualizations

Figure 12 presents the empirical cumulative distribution function (ECDF) of the final best objective values for different population sizes. The plot demonstrates that larger populations, in particular 100, consistently achieve lower objective values across a greater fraction of the benchmark functions. This leftward shift of the ECDF highlights improved reliability and robustness when a larger pool of candidate solutions is maintained.
Complementing this, Figure 13 shows the distribution of final best objective values as a function of the benchmark index. Although population size does not drastically alter outcomes on every function, the scatter lines indicate clear benefits for multimodal and composition functions, where higher diversity prevents premature convergence. Notably, functions such as F5, F9, and F11 exhibit substantial improvements with population 100 compared to smaller populations. Together, these diagnostic views confirm that RCCO gains in both robustness and accuracy when operating with larger population sizes, particularly in more complex search landscapes.

11. Adaptive Strategies for RCCO

The adaptive RCCO monitors population diversity to reshape leadership guidance, tracks acceptance rate to regulate thermal noise, and couples entrainment mirroring and turbulence bursts to the observed effectiveness and stagnation state. A diversity-/stagnation-aware condensation step with buoyancy and shear performs exploitation while keeping time-decayed exploration. When progress stalls and diversity collapses, a cloudburst partially re-seeds poor solutions and briefly boosts exploration knobs. We use $\phi_t = 1 - t/T$ for time decay, $\odot$ for element-wise products, $\Pi_{[L,U]}$ for wrap/reflect projection to the box $[L, U]$, $s := U - L$ for the span, and $\mathrm{clip}(a, b, x) = \min\{\max\{x, a\}, b\}$.
Population diversity (monitors exploration pressure). We quantify the normalized spread across coordinates; Equation (15) drives leadership size/curvature updates and turbulence triggers.
$$D_t = \frac{1}{d} \sum_{k=1}^{d} \frac{\mathrm{std}\{x_{ik}(t)\}_{i=1}^{N}}{(U_k - L_k) + \varepsilon}, \qquad s := U - L \in \mathbb{R}^d.$$
Per-iteration improvement rate (feedback signal). We estimate the fraction of accepted proposals; Equation (16) is the raw acceptance used by the thermal controller.
$$A_t = \frac{\#\{\text{accepted replacements at iteration } t\}}{2N}.$$
Smoothed acceptance (robust control state). We maintain an EWMA of acceptance; Equation (17) stabilizes decisions against noise.
$$a_t = \beta\, a_{t-1} + (1 - \beta) A_t, \qquad a_0 = a^{\ast}, \quad \beta \in (0, 1).$$
One-fifth-style thermal controller (step-size adaptation). When the acceptance rate exceeds the target $a^{\ast}$, the thermal scale grows; otherwise, it shrinks (see Equation (18)):
$$\sigma_{t+1} = \mathrm{clip}\big(\sigma_{\min}, \sigma_{\max},\ \sigma_t \cdot u_t\big), \qquad u_t = \begin{cases} 1.05, & a_t > a^{\ast}, \\ 0.95, & a_t \le a^{\ast}. \end{cases}$$
Per-coordinate thermal variance (annealed noise floor). Equation (19) sets the Gaussian jitter used in condensation with a late-stage floor.
$$v_t = \max\big\{\sigma_{\mathrm{floor}},\ \sigma_t\, \phi_t\big\}\, s \in \mathbb{R}^d.$$
Mirror probability tracking (operator self-tuning). Equation (20) nudges the mirror probability toward its empirical acceptance $\hat{m}_t$ (clipped).
$$p^{\mathrm{mir}}_{t+1} = 0.7\, p^{\mathrm{mir}}_t + 0.3\, \mathrm{clip}\big(p^{\mathrm{mir}}_{\min}, p^{\mathrm{mir}}_{\max},\ \hat{m}_t + 0.10\big), \qquad \hat{m}_t = \frac{\#\{\text{mirror accepts}\}}{\#\{\text{mirror attempts}\}}.$$
Turbulence probability under stagnation (exploration on demand). Equation (21) increases $p^{\mathrm{tur}}$ during stagnation/low diversity and decays it otherwise.
$$p^{\mathrm{tur}}_{t+1} = \mathrm{clip}\big(p^{\mathrm{tur}}_{\min}, p^{\mathrm{tur}}_{\max},\ \rho_p(C_t)\, p^{\mathrm{tur}}_t + \delta_p(C_t)\big), \qquad \begin{cases} \rho_p = 0.90,\ \delta_p = 0.02, & C_t\ (\text{stagnation or low diversity}), \\ \rho_p = 0.95,\ \delta_p = 0, & \text{otherwise}. \end{cases}$$
Gust scale adaptation (strength of heavy-tailed kicks). Equation (22) coordinates the magnitude $g_t$ with the same condition $C_t$.
$$g_{t+1} = \mathrm{clip}\big(g_{\min}, g_{\max},\ \rho_g(C_t)\, g_t + \delta_g(C_t)\big), \qquad \begin{cases} \rho_g = 1.10,\ \delta_g = 0.002, & C_t, \\ \rho_g = 0.97,\ \delta_g = 0, & \neg C_t, \end{cases}$$
where $C_t := \big(S_t > s_{\mathrm{th}}\big) \lor \big(D_t < d_{\mathrm{lo}}\big)$ and $S_t$ counts consecutive no-improvement iterations.
Turbulence sampling (Cauchy bursts). Equation (23) injects rare, heavy-tailed moves scaled by $\phi_t$ and $g_t$.
$$(\text{turbulence}) \quad x \leftarrow x + \phi_t\, g_t\, s \odot \zeta, \qquad \zeta = \tan\!\big(\pi(u - \tfrac{1}{2}\mathbf{1})\big), \quad u \sim U(0, 1)^d.$$
Leader fraction adaptation (breadth of guidance). Equation (24) widens the leader set when diversity is low and narrows it when high.
$$q^{\mathrm{frac}}_{t+1} = \mathrm{clip}\big(q_{\min}, q_{\max},\ q^{\mathrm{frac}}_t + \Delta_q(D_t)\big), \qquad \Delta_q(D) = \begin{cases} +0.03, & D < 0.05, \\ -0.02, & D > 0.15, \\ 0, & \text{else}; \end{cases} \qquad q_t = \max\{2,\ \mathrm{round}(N\, q^{\mathrm{frac}}_{t+1})\}.$$
Rank-weight curvature adaptation (strength of elitism). Equation (25) sharpens weights when diversity is scarce and flattens them when abundant.
$$\alpha_{t+1} = \mathrm{clip}\big(\alpha_{\min}, \alpha_{\max},\ \alpha_t \cdot r_\alpha(D_t)\big), \qquad r_\alpha(D) = \begin{cases} 1.10, & D < 0.05, \\ 0.90, & D > 0.15, \\ 1, & \text{else}. \end{cases}$$
Cloud core (diversity-aware centroid of leaders). Equation (26) forms the core as an $\alpha_t$-curved rank-weighted mean over the best $q_t$ droplets.
$$c_t = \frac{\sum_{j=1}^{q_t} w_j\, x^{(j)}_t}{\sum_{j=1}^{q_t} w_j}, \qquad w_j = (q_t - j + 1)^{\alpha_t},$$
where $x^{(j)}_t$ is the j-th best droplet.
Buoyancy and shear (time-/state-modulated pulls). Equation (27) strengthens pulls during stagnation and when diversity is low, with time decay $\phi_t$.
$$b_t = \mathrm{clip}\big(0.05,\ 0.90,\ 0.50\, \phi_t + 0.20\, \phi_t B_t + 0.05\, \xi_t\big), \qquad s_t = \mathrm{clip}\big(0.05,\ 0.60,\ 0.22\, \phi_t + 0.10\, \phi_t\, \mathbb{I}[D_t < 0.08]\big),$$
with $B_t = 1 + 0.5\, \mathbb{I}[S_t > 7] + 0.5\, \mathbb{I}[D_t < 0.05]$ and $\xi_t \sim U(0, 1)$.
Condensation update (guided Gaussian exploitation). Equation (28) moves each droplet toward the leader and core with Gaussian thermal noise and wraps to the box.
$$x_i \leftarrow \Pi_{[L,U]}\big(x_i + b_t(x^{\ast}_t - x_i) + s_t(c_t - x_i) + \varepsilon_t\big), \qquad \varepsilon_t \sim \mathcal{N}\big(0,\ \mathrm{diag}(v_t^2)\big),$$
where $x^{\ast}_t$ is the current best droplet.
Cloudburst (soft restart under collapse). Equation (29) partially re-seeds the worst droplets with a uniform/local Gaussian mixture when $S_t$ is large and $D_t$ is small.
$$\text{If } S_t \ge s_{\mathrm{burst}},\ D_t < d_{\mathrm{burst}},\ \text{and cooldown} = 0: \quad n_{\mathrm{reset}} = 0.30N, \quad n_{\mathrm{uni}} = 0.60\, n_{\mathrm{reset}},$$
$$x \sim U(L, U) \text{ for the first } n_{\mathrm{uni}} \text{ worst droplets}, \qquad x \sim \mathcal{N}\big(x^{\ast}_t,\ (0.15\, s)^2\big) \text{ for the remainder},$$
and $\Pi_{[L,U]}$ is applied to all re-seeded points.
Post-burst nudges (temporary exploration boost). Equation (30) briefly increases turbulence and thermal scale (with clipping to bounds).
$$p^{\mathrm{tur}}_{t+1} \leftarrow \mathrm{clip}\big(p^{\mathrm{tur}}_{\min}, p^{\mathrm{tur}}_{\max},\ p^{\mathrm{tur}}_{t+1} + 0.05\big), \qquad g_{t+1} \leftarrow \mathrm{clip}\big(g_{\min}, g_{\max},\ g_{t+1} + 0.005\big), \qquad \sigma_{t+1} \leftarrow \mathrm{clip}\big(\sigma_{\min}, \sigma_{\max},\ 1.10\, \sigma_{t+1}\big).$$
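A minimal sketch of how the acceptance-driven thermal controller (Equations (17) and (18)) and the cloudburst trigger (Equation (29)) fit together is given below; the target acceptance, clip limits, and thresholds are illustrative assumptions rather than the tuned values:

import numpy as np

def thermal_and_burst_control(sigma, a_smooth, A_t, S_t, D_t,
                              beta=0.9, a_star=0.2, sigma_min=1e-4, sigma_max=0.5,
                              s_burst=15, d_burst=0.02):
    # One-fifth-style step-size control plus the stagnation/diversity
    # condition that fires a cloudburst re-seeding.
    a_smooth = beta * a_smooth + (1 - beta) * A_t        # EWMA of raw acceptance
    sigma *= 1.05 if a_smooth > a_star else 0.95         # grow or shrink thermal scale
    sigma = float(np.clip(sigma, sigma_min, sigma_max))
    burst = (S_t >= s_burst) and (D_t < d_burst)         # collapse detected
    return sigma, a_smooth, burst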
As can be seen in Table 4, the adaptive strategy delivers selective gains rather than uniform improvements: it outperforms the baseline on 4/12 functions—F2, F6, F8, and F12—most notably on F6 (mean −32.53%, std −87.30%) and F2 (mean −2.90%, std −88.08%); F8 shows a negligible mean gain (−0.005%) with slightly higher dispersion (+5.86%), and F12 yields a small mean gain (−0.033%) with a sizable stability benefit (std −40.09%). Conversely, the baseline dominates on 8/12 functions, with especially large variance inflation on F9 and F10 (std rising from 0.07 to 65), consistent with turbulence bursts overshooting on smoother landscapes; degradations on F1–F5 and F7 are modest in mean (+0.12% to +1.06%, mixed std). Overall, the average rank shifts from 1.33 (RCCO) to 1.67 (RCCO_Adaptive; lower is better), suggesting the adaptive mechanisms chiefly help rugged, high-variance problems (e.g., F6) and may warrant conservative gust caps or higher stagnation thresholds on smooth cases.

12. Engineering Design Problems

The complete mathematical formulations for these engineering design problems are given in Appendix A.

12.1. Cantilever Stepped Beam

The cantilever stepped beam design problem seeks optimal heights x 1 , , x 5 and widths x 6 , , x 10 for five beam segments such that the total volume of the beam is minimized while satisfying stress and deflection constraints. Each segment of length 100 cm carries a load P at the free end. The optimization variables are bounded between 1 and 5 cm for the heights and between 30 and 65 cm for the widths. The objective and constraints are given below.
Figure 14 depicts a five-segment cantilever with a concentrated load at the free end.
The optimization problem can be stated as follows:
$$\begin{aligned} \min_x \quad & f(x) = 100 \sum_{i=1}^{5} x_i x_{i+5}, \\ \text{s.t.} \quad & g_1(x) = \frac{100P}{x_5 x_{10}^2} - 14{,}000 \le 0, \qquad g_2(x) = \frac{200P}{x_4 x_9^2} - 14{,}000 \le 0, \\ & g_3(x) = \frac{300P}{x_3 x_8^2} - 14{,}000 \le 0, \qquad g_4(x) = \frac{400P}{x_2 x_7^2} - 14{,}000 \le 0, \\ & g_5(x) = \frac{500P}{x_1 x_6^2} - 14{,}000 \le 0, \\ & g_6(x) = \frac{x_{10}}{x_5} - 20 \le 0, \qquad g_7(x) = \frac{x_9}{x_4} - 20 \le 0, \qquad g_8(x) = \frac{x_8}{x_3} - 20 \le 0, \\ & g_9(x) = \frac{x_7}{x_2} - 20 \le 0, \qquad g_{10}(x) = \frac{x_6}{x_1} - 20 \le 0, \\ & g_{11}(x) = \frac{P l^3}{3E} \left( \frac{1}{I_5} + \frac{7}{I_4} + \frac{19}{I_3} + \frac{37}{I_2} + \frac{61}{I_1} \right) - 2.7 \le 0. \end{aligned}$$
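A penalized objective for this problem can be sketched as follows; the load value P, the penalty weight mu, and the omission of the deflection constraint g11 (which requires the segment inertias I_i defined in Appendix A) are simplifying assumptions:

import numpy as np

def cantilever_penalized(x, P=50_000.0, mu=1e6):
    # x holds the five heights x1..x5 followed by the five widths x6..x10.
    x = np.asarray(x, dtype=float)
    volume = 100.0 * np.sum(x[:5] * x[5:])                # f(x) = 100 * sum x_i * x_{i+5}
    g = []
    for k in range(5):                                    # stress constraints g1..g5
        g.append(100.0 * (k + 1) * P / (x[4 - k] * x[9 - k] ** 2) - 14_000.0)
    for k in range(5):                                    # aspect-ratio constraints g6..g10
        g.append(x[9 - k] / x[4 - k] - 20.0)
    viol = np.sum(np.maximum(0.0, np.array(g)) ** 2)      # quadratic exterior penalty
    return volume + mu * viol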
Table 5 summarizes the optimization results for the cantilever stepped beam. The RCCO algorithm achieved the lowest mean volume (63,465) with a relatively small standard deviation (589), demonstrating robust performance. It outperformed the second best optimizer (POA) by about 646 units (roughly a 1 % improvement) and produced a significantly better objective value than the average of the remaining algorithms. The RCCO design variables lie in the middle of their bounds, indicating a balanced beam profile.

12.2. Pressure Vessel

The pressure-vessel design problem requires choosing the shell thickness x 1 , head thickness x 2 , inner radius x 3 , and shell length x 4 such that the manufacturing cost of a cylindrical vessel with hemispherical heads is minimized. The steel plates available for the shell and heads come in increments of 0.0625 in. The objective function includes material, forming, and welding costs, subject to stresses and manufacturing constraints on thicknesses and volume.
Figure 15 sketches a thin-walled pressure vessel with a cylindrical shell of length L = x 4 , inner radius R = x 3 , shell thickness T s = x 1 , and head thickness T h = x 2 .
The cost function and constraints are given by [60]:
$$\begin{aligned} \min_x \quad & f(x) = 0.6224\, x_1 x_3 x_4 + 1.7781\, x_2 x_3^2 + 3.1661\, x_1^2 x_4 + 19.84\, x_1^2 x_3, \\ \text{s.t.} \quad & g_1(x) = -x_1 + 0.0193\, x_3 \le 0, \\ & g_2(x) = -x_2 + 0.00954\, x_3 \le 0, \\ & g_3(x) = -\pi x_3^2 x_4 - \tfrac{4}{3} \pi x_3^3 + 1{,}296{,}000 \le 0, \\ & g_4(x) = x_4 - 240 \le 0. \end{aligned}$$
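A corresponding penalized cost function is sketched below; the penalty weight mu is an assumption, and rounding the two thicknesses to 0.0625-in multiples reflects the plate increments noted above:

import numpy as np

def pressure_vessel_penalized(x, mu=1e6):
    # x = (shell thickness x1, head thickness x2, inner radius x3, shell length x4).
    x1, x2, x3, x4 = x
    x1 = round(x1 / 0.0625) * 0.0625                      # snap to discrete plate sizes
    x2 = round(x2 / 0.0625) * 0.0625
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)
    g = np.array([
        -x1 + 0.0193 * x3,                                # g1: shell thickness vs. radius
        -x2 + 0.00954 * x3,                               # g2: head thickness vs. radius
        -np.pi * x3 ** 2 * x4 - (4.0 / 3.0) * np.pi * x3 ** 3 + 1_296_000.0,  # g3: volume
        x4 - 240.0,                                       # g4: length limit
    ])
    return cost + mu * np.sum(np.maximum(0.0, g) ** 2)    # quadratic exterior penalty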
Table 6 contains the numerical results. The RCCO optimizer delivered the lowest mean cost (about 6077) with the smallest standard deviation (82). Its cost is roughly 29 units cheaper than the next best algorithm (ChOA), a relative improvement of nearly 0.5%, and it significantly outperforms the average of the remaining methods. Moreover, the RCCO solution respects the discrete thickness increments and achieves a balanced vessel design with radius $x_3 \approx 43$ and length $x_4 \approx 165$.

12.3. Planetary Gear Train

A planetary gear train consists of a ring gear with R teeth, a sun gear with S teeth, and n p identical planet gears with P teeth each. The ring gear meshes internally with the planets while the planets mesh externally with the sun gear. The design objective considered here is to match a target transmission ratio γ target by choosing integer values for R , S , P , and n p that satisfy meshing and spacing requirements. The gear ratio for a fixed ring gear is γ = S / ( R + S ) , as described in Figure 16.
The meshing condition requires that R equals the sum of the sun teeth and twice the planet teeth, R = S + 2 P . Fixing the ring gear implies a transmission ratio γ = S / ( R + S ) [61]. To match a target ratio γ target , we minimize the squared deviation
$$
\min_{R,S,P,n_p} f(x) = \left( \frac{S}{R+S} - \gamma_{\text{target}} \right)^{2}
$$
subject to $R = S + 2P$ and to the planet spacing requirement that $R + S$ is evenly divisible by the number of planets $n_p$ [61]. The variables $R, S, P, n_p$ are constrained to positive integers within prescribed bounds.
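Because the search space is integer-valued and small, the formulation can be checked by brute force, as in the sketch below; the tooth-count bounds, the planet-number options, and the target ratio 0.35 are placeholders rather than the paper's experimental settings.

```python
def ratio_error(S, P, n_p, gamma_target):
    """Squared deviation from the target ratio, honoring R = S + 2P
    and the planet-spacing rule that (R + S) is divisible by n_p."""
    R = S + 2 * P
    if (R + S) % n_p != 0:
        return None  # infeasible planet spacing
    return (S / (R + S) - gamma_target) ** 2

# brute force over hypothetical integer bounds
best = min(
    (err, S, P, n_p)
    for S in range(12, 80)
    for P in range(12, 60)
    for n_p in (3, 4, 5)
    if (err := ratio_error(S, P, n_p, 0.35)) is not None
)
print(best)  # (squared error, S, P, n_p)
```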
The optimization results in Table 7 show that the RCCO algorithm attained the smallest mean objective value (0.52585), surpassing the next best algorithm (WOA) by about 0.00136 (0.26%). Although the differences between methods are relatively small, RCCO also exhibited a low standard deviation, indicating consistency. Its selected gear tooth counts ($x_1 \approx 23.6$, $x_2 \approx 19.4$, $x_3 \approx 30.4$) and planet number yield a ratio closest to the target.

12.4. Ten-Bar Planar Truss

The ten-bar planar truss is a classical benchmark problem for weight minimization. Ten bars of known lengths $L_j$ are connected to form a plane truss subject to two load cases. The design variables are the cross-sectional areas $x_j$ ($0.1 \le x_j \le 35\ \text{in}^2$) of each bar. The objective is to minimize the total weight $\rho \sum_j x_j L_j$ while satisfying stress limits $|\sigma_j| \le 25$ ksi and nodal displacement limits $|\delta_i| \le 2$ in, as described in [62]. The Young's modulus is $E = 10{,}000$ ksi and the material density is $0.1\ \text{lb/in}^3$ [62].
Figure 17 illustrates a planar truss comprising six nodes and ten bars. This sketch conveys the idea of multiple interconnected bars and is not drawn to scale.
The optimization formula is as follows:
$$
\begin{aligned}
\min_{x}\ & f(x) = \rho \sum_{j=1}^{10} x_j L_j, \\
\text{s.t.}\quad
& 0.1 \le x_j \le 35\ \text{in}^2, \quad j = 1, \dots, 10, \\
& |\sigma_j| \le 25\ \text{ksi}, \quad j = 1, \dots, 10, \\
& |\delta_i| \le 2\ \text{in} \quad \text{for each free node } i.
\end{aligned}
$$
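The weight objective and the penalty treatment of the limits are straightforward to express; the member stresses and nodal displacements must come from a linear truss analysis, which is omitted here, so the sketch below assumes they are precomputed. The penalty weight mu is an illustrative choice.

```python
import numpy as np

RHO = 0.1  # material density, lb/in^3

def penalized_weight(areas, lengths, stresses, displacements, mu=1e6):
    """Total truss weight plus static penalties for |sigma| > 25 ksi
    and |delta| > 2 in. `stresses` and `displacements` are assumed to be
    outputs of a separate finite-element analysis (not shown)."""
    weight = RHO * float(np.dot(areas, lengths))
    stress_viol = np.clip(np.abs(np.asarray(stresses)) - 25.0, 0.0, None).sum()
    disp_viol = np.clip(np.abs(np.asarray(displacements)) - 2.0, 0.0, None).sum()
    return weight + mu * (stress_viol + disp_viol)
```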
From Table 8, we see that the RCCO algorithm obtains the lowest mean weight ( 631.15 ). It improves upon the runner-up (SCA) by approximately 6.9 units ( 1.09 % ) and beats the average performance of the remaining algorithms by more than 50 % . The RCCO design exhibits moderate cross-sectional areas across most bars, indicating an efficient weight distribution within the allowable bounds.

12.5. Three-Bar Truss

In the three-bar truss design problem, the cross-sectional areas of two member groups are optimized: $x_1$ controls the area of the two diagonal bars and $x_2$ controls the area of the central vertical bar. The structure supports a vertical load P at its apex. The aim is to minimize the volume while satisfying stress constraints in each member. The height and load are fixed ($H = 100$ cm, $P = 2\ \text{kN/cm}^2$). Both design variables are bounded between 0 and 1 [60].
A three-bar truss is shown in Figure 18. Two inclined members of area x 1 support the top node, and a vertical member of area x 2 completes the triangular structure.
The volume to be minimized is
$$
\begin{aligned}
\min_{x}\ & f(x) = \left( 2\sqrt{2}\,x_1 + x_2 \right) \times 100, \\
\text{s.t.}\quad
& g_1(x) = \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^{2} + 2 x_1 x_2}\,P - \sigma \le 0, \\
& g_2(x) = \frac{x_2}{\sqrt{2}\,x_1^{2} + 2 x_1 x_2}\,P - \sigma \le 0, \\
& g_3(x) = \frac{1}{x_1 + \sqrt{2}\,x_2}\,P - \sigma \le 0, \\
& 0 \le x_1, x_2 \le 1, \qquad P = \sigma = 2\ \text{kN/cm}^2.
\end{aligned}
$$
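This problem is fully closed-form, so the entire evaluation fits in a few lines; the sketch below uses the $g(x) \le 0$ convention with $P = \sigma = 2$.

```python
import math

def three_bar_truss(x1, x2, P=2.0, sigma=2.0, H=100.0):
    """Volume objective and the three stress constraints (feasible when g <= 0)."""
    f = (2.0 * math.sqrt(2.0) * x1 + x2) * H
    denom = math.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    g1 = P * (math.sqrt(2.0) * x1 + x2) / denom - sigma
    g2 = P * x2 / denom - sigma
    g3 = P / (x1 + math.sqrt(2.0) * x2) - sigma
    return f, (g1, g2, g3)

# the RCCO design from Table 9 should give a volume near 263.90
print(three_bar_truss(0.791546, 0.400197))
```

At the RCCO design from Table 9, the first stress constraint is essentially active ($g_1 \approx 0$), which is consistent with a volume-minimal truss.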
According to Table 9, the RCCO algorithm achieves the smallest mean volume (263.9036), narrowly beating ZOA by 0.0035 units. Although the improvement is tiny in absolute terms (approximately 0.0013%), RCCO also displays the smallest standard deviation, indicating high consistency. The optimal design variables ($x_1 \approx 0.79$, $x_2 \approx 0.40$) satisfy all stress constraints while minimizing material usage.

13. Conclusions

This paper presented the Rain-Cloud Condensation Optimizer (RCCO), a hydrology-inspired metaheuristic that operationalizes condensation, coalescence, entrainment, and turbulence as complementary search operators. A dual-attractor mechanism, pulling droplets toward both the global leader and a rank-weighted cloud core, provides stable intensification, while band sampling with mirroring and occasional heavy-tailed gusts sustain diversity and offer inexpensive basin escape. Sine-map seeding strengthens early coverage; a wrap-and-reflect rule maintains feasibility at the boundaries. RCCO exposes a few hyperparameters and exhibits linear time and memory growth with population size and problem dimension.
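For concreteness, a minimal sketch of two of these ingredients, the sine-map seeding of Figure 2 and a reflect-style bound repair, is given below in Python. This is an illustration of the rules as described, not the authors' released MATLAB implementation; the starting value z0 and the per-coordinate iteration scheme are assumptions.

```python
import numpy as np

def sine_map_init(n_pop, dim, lo, hi, z0=0.7):
    """Seed droplets with the chaotic sine map z_{k+1} = sin(pi * z_k),
    rescaled into [lo, hi]; z0 is an arbitrary non-fixed-point start."""
    z = z0
    pop = np.empty((n_pop, dim))
    for i in range(n_pop):
        for j in range(dim):
            z = np.sin(np.pi * z)
            pop[i, j] = lo + abs(z) * (hi - lo)
    return pop

def reflect(v, lo, hi):
    """Reflect an out-of-bounds coordinate back into [lo, hi]."""
    if v < lo:
        return min(hi, 2 * lo - v)
    if v > hi:
        return max(lo, 2 * hi - v)
    return v
```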
Extensive tests on the CEC2022 suite show that RCCO achieves competitive-to-superior accuracy with low variance and strong stability across unimodal, multimodal, hybrid, and composition functions, including high-dimensional cases, while five engineering studies (cantilever stepped beam, pressure vessel, planetary gear train, ten-bar planar truss, and three-bar truss) confirm effectiveness under practical constraints. Future work includes formal convergence analysis, adaptive/parameter-free variants, multiobjective and discrete extensions, and scalable parallel implementations.

14. Source Code

The RCCO source code is available for download from the following link: https://www.mathworks.com/matlabcentral/fileexchange/182053-phoenix-rebirth-algorithm (accessed on 6 September 2025).

Author Contributions

Conceptualization, H.N.F.; Methodology, S.F.; Validation, A.S.; Formal analysis, H.N.F.; Writing (original draft), A.S.; Writing (review and editing), A.H.; Project administration, A.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Engineering Design Formulas

The objective is to minimize the volume of a five-segment cantilever beam with segment heights $x_1, \dots, x_5$ and widths $x_6, \dots, x_{10}$. Each segment has length $l_i = 100$ cm. The optimization problem reads
$$
\begin{aligned}
\min_{x}\ & f(x) = \sum_{i=1}^{5} x_i x_{i+5}\, l_i, \\
\text{subject to}\quad
& g_1(x) = \frac{P\,l_5}{x_5 x_{10}^{2}} - 14{,}000 \le 0, \\
& g_2(x) = \frac{P\,(l_5 + l_4)}{x_4 x_9^{2}} - 14{,}000 \le 0, \\
& g_3(x) = \frac{P\,(l_5 + l_4 + l_3)}{x_3 x_8^{2}} - 14{,}000 \le 0, \\
& g_4(x) = \frac{P\,(l_5 + l_4 + l_3 + l_2)}{x_2 x_7^{2}} - 14{,}000 \le 0, \\
& g_5(x) = \frac{P\,(l_5 + l_4 + l_3 + l_2 + l_1)}{x_1 x_6^{2}} - 14{,}000 \le 0, \\
& g_6(x) = \frac{x_{10}}{x_5} - 20 \le 0, \qquad
  g_7(x) = \frac{x_9}{x_4} - 20 \le 0, \qquad
  g_8(x) = \frac{x_8}{x_3} - 20 \le 0, \\
& g_9(x) = \frac{x_7}{x_2} - 20 \le 0, \qquad
  g_{10}(x) = \frac{x_6}{x_1} - 20 \le 0, \\
& g_{11}(x) = \frac{P l^{3}}{3E}\left(\frac{1}{I_5} + \frac{7}{I_4} + \frac{19}{I_3} + \frac{37}{I_2} + \frac{61}{I_1}\right) - 2.7 \le 0.
\end{aligned}
$$
The pressure-vessel design problem seeks the shell thickness ( x 1 ), head thickness ( x 2 ), inner radius ( x 3 ), and shell length ( x 4 ) that minimize the manufacturing cost. The cost function is
$$
\begin{aligned}
\min_{x}\ & f(x) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^{2} + 3.1661\,x_1^{2} x_4 + 19.84\,x_1^{2} x_3, \\
\text{s.t.}\quad
& g_1(x) = -x_1 + 0.0193\,x_3 \le 0, \qquad
  g_2(x) = -x_2 + 0.00954\,x_3 \le 0, \\
& g_3(x) = -\pi x_3^{2} x_4 - \tfrac{4}{3}\pi x_3^{3} + 1{,}296{,}000 \le 0, \qquad
  g_4(x) = x_4 - 240 \le 0.
\end{aligned}
$$
The decision variables obey $0.0625 \le x_1, x_2 \le 6.1875$ (in steps of 0.0625 in) and $10 \le x_3, x_4 \le 200$.
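Since $x_1$ and $x_2$ are only manufacturable in 0.0625 in increments, one common post-processing step (an assumption here, not necessarily the procedure used in the paper) is to snap continuous candidates to the grid before evaluating the cost:

```python
def snap_thickness(t, step=0.0625, lo=0.0625, hi=6.1875):
    """Round a continuous thickness to the nearest manufacturable
    multiple of `step`, clamped to the stated bounds."""
    return min(max(round(t / step) * step, lo), hi)

print(snap_thickness(0.8339))  # -> 0.8125
```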

Appendix A.1. Planetary Gear Train

A simple model of a planetary gear train consists of a ring gear with R teeth, a sun gear with S teeth, and several planet gears, each with P teeth. The geometry requires that the ring gear have exactly the sum of the sun gear teeth plus twice the planet gear teeth,
$$
g_1(x) = R - (S + 2P) = 0,
$$
as described in [61]. When the ring gear is fixed, the turns ratio between the sun gear and the carrier ( T s and T y , respectively) obeys
$$
(R + S)\,T_y = S\,T_s,
$$
leading to a transmission ratio of $T_y / T_s = S/(R+S)$ [61]. To match a prescribed target ratio $\gamma_{\text{target}}$, one may minimize the squared deviation
$$
\min_{R,S,P,n_p} f(x) = \left( \frac{S}{R+S} - \gamma_{\text{target}} \right)^{2},
$$
subject to the meshing requirement $g_1 = 0$ above and the planet spacing constraint that $R + S$ is an integer multiple of the number of planets $n_p$ [61]. The variables $R$, $S$, $P$, and $n_p$ are restricted to positive integers within practical bounds.

Appendix A.2. Ten-Bar Planar Truss

The ten-bar planar truss problem aims to select cross-sectional areas $x_1, \dots, x_{10}$ for each bar to minimize the total structural weight. Assuming a constant material density $\rho$, the weight is proportional to
$$
\min_{x} f(x) = \rho \sum_{j=1}^{10} x_j L_j,
$$
where $L_j$ denotes the known length of bar $j$ and the design variables are bounded by side constraints $0.1 \le x_j \le 35\ \text{in}^2$ [62]. Structural safety is enforced through stress and displacement constraints:
$$
|\sigma_j| \le 25\ \text{ksi}, \quad j = 1, \dots, 10, \qquad
|\delta_i| \le 2\ \text{in} \quad \text{for each free node } i.
$$
Two load cases with forces P 1 and P 2 are considered during analysis [62].

Appendix A.3. Three-Bar Truss

For the three-bar truss, the design variables are the cross-sectional areas $x_1$ and $x_2$, with $A_1 = A_3 = x_1$ and $A_2 = x_2$. The objective is to minimize the structural volume:
$$
\min_{x} f(x) = \left( 2\sqrt{2}\,x_1 + x_2 \right) H,
$$
where $H = 100$ cm [60]. The stress constraints are
$$
\begin{aligned}
& g_1(x) = P\,\frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^{2} + 2 x_1 x_2} - \sigma \le 0, \qquad
  g_2(x) = P\,\frac{x_2}{\sqrt{2}\,x_1^{2} + 2 x_1 x_2} - \sigma \le 0, \\
& g_3(x) = P\,\frac{1}{x_1 + \sqrt{2}\,x_2} - \sigma \le 0,
\end{aligned}
$$
with bounds $0 \le x_1 \le 1$ and $0 \le x_2 \le 1$. Here, $P = 2\ \text{kN/cm}^2$ and $\sigma = 2\ \text{kN/cm}^2$ [60].

References

  1. Arora, R.K. Optimization: Algorithms and Applications; CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
  2. Heermann, D.W. Deterministic methods. In Computer Simulation Methods in Theoretical Physics; Springer: Berlin/Heidelberg, Germany, 1986; pp. 13–50. [Google Scholar]
  3. Ólafsson, S. Metaheuristics. In Handbooks in Operations Research and Management Science; Elsevier: Amsterdam, The Netherlands, 2006; Volume 13, pp. 633–654. [Google Scholar]
  4. Fagan, F.; Van Vuuren, J.H. A unification of the prevalent views on exploitation, exploration, intensification and diversification. Int. J. Metaheuristics 2013, 2, 294–327. [Google Scholar] [CrossRef]
  5. Adam, S.P.; Alexandropoulos, S.A.N.; Pardalos, P.M.; Vrahatis, M.N. No free lunch theorem: A review. In Approximation and Optimization: Algorithms, Complexity and Applications; Springer: Cham, Switzerland, 2019; pp. 57–82. [Google Scholar]
  6. Bartz-Beielstein, T.; Branke, J.; Mehnen, J.; Mersmann, O. Evolutionary algorithms. Data Min. Knowl. Discov. 2014, 4, 178–195. [Google Scholar] [CrossRef]
  7. Chakraborty, A.; Kar, A.K. Swarm intelligence: A review of algorithms. In Nature-Inspired Computing and Optimization; Springer: Cham, Switzerland, 2017; pp. 475–494. [Google Scholar]
  8. Siddique, N.H.; Adeli, H. Nature-Inspired Computing: Physics and Chemistry-Based Algorithms; Chapman and Hall/CRC: Boca Raton, FL, USA, 2017. [Google Scholar]
  9. Črepinšek, M.; Liu, S.H.; Mernik, M. Exploration and exploitation in evolutionary algorithms: A survey. ACM Comput. Surv. (CSUR) 2013, 45, 1–33. [Google Scholar] [CrossRef]
  10. Cui, Y.; Ruan, L.; Dong, H.C.; Li, Q.; Wu, Z.; Zeng, T.; Fan, F.L. Cloud-rain: Point cloud analysis with reflectional invariance. arXiv 2023, arXiv:2305.07814. [Google Scholar] [CrossRef]
  11. Zhou, B.; Zhang, H.; Han, S.; Ji, X. Crashworthiness analysis and optimization of a novel thin-walled multi-cell structure inspired by bamboo. Structures 2024, 59, 105827. [Google Scholar] [CrossRef]
  12. Almazroi, A.A.; Hassan, C.A.U. Nature-inspired solutions for energy sustainability using novel optimization methods. PLoS ONE 2023, 18, e0288490. [Google Scholar] [CrossRef]
  13. Janizadeh, S.; Thi Kieu Tran, T.; Bateni, S.M.; Jun, C.; Kim, D.; Trauernicht, C.; Heggy, E. Advancing the LightGBM approach with three novel nature-inspired optimizers for predicting wildfire susceptibility in Kaua’i and Moloka’i Islands, Hawaii. Expert Syst. Appl. 2024, 258, 124963. [Google Scholar] [CrossRef]
  14. Lakshmi, H. Demand Side Management Using a Novel Nature-Inspired Pelican Optimization Algorithm in a Smart Grid Environment. Int. J. Electr. Electron. Eng. 2024, 11, 238–246. [Google Scholar] [CrossRef]
  15. Subha, E.; Jothi Prakash, V.; Antran Vijay, S.A. A novel arctic fox survival strategy inspired optimization algorithm. J. Comb. Optim. 2025, 49, 1. [Google Scholar] [CrossRef]
  16. Abdel-Basset, M.; Mohamed, R.; Sallam, K.M.; Chakrabortty, R.K. Light Spectrum Optimizer: A Novel Physics-Inspired Metaheuristic Optimization Algorithm. Mathematics 2022, 10, 3466. [Google Scholar] [CrossRef]
  17. Panigrahi, B.S.; Nagarajan, N.; Prasad, K.D.V.; Sathya; Salunkhe, S.S.; Kumar, P.D.; Kumar, M.A. Novel nature-inspired optimization approach-based svm for identifying the android malicious data. Multimed. Tools Appl. 2024, 83, 71579–71597. [Google Scholar] [CrossRef]
  18. Maroosi, A.; Muniyandi, R.C. A novel membrane-inspired multiverse optimizer algorithm for quality of service-aware cloud web service composition with service level agreements. Int. J. Commun. Syst. 2023, 36, e5483. [Google Scholar] [CrossRef]
  19. Diab, H.Y.; Abdelsalam, M. A Novel Technique for the Optimization of Energy Cost Management and Operation of Microgrids Inspired from the Behavior of Egyptian Stray Dogs. Inventions 2024, 9, 88. [Google Scholar] [CrossRef]
  20. Chandran, V.; Mohapatra, P. A novel reinforcement learning-inspired tunicate swarm algorithm for solving global optimization and engineering design problems. J. Ind. Manag. Optim. 2025, 21, 565–612. [Google Scholar] [CrossRef]
  21. Lang, Y.; Gao, Y. Dream Optimization Algorithm (DOA): A novel metaheuristic optimization algorithm inspired by human dreams and its applications to real-world engineering problems. Comput. Methods Appl. Mech. Eng. 2025, 436, 117718. [Google Scholar] [CrossRef]
  22. Vais, R.I.; Sahay, K.; Chiranjeevi, T.; Devarapalli, R.; Knypiński, Ł. Parameter Extraction of Solar Photovoltaic Modules Using a Novel Bio-Inspired Swarm Intelligence Optimisation Algorithm. Sustainability 2023, 15, 8407. [Google Scholar] [CrossRef]
  23. Omari, M.; Kaddi, M.; Salameh, K.; Alnoman, A.; Benhadji, M. Atomic Energy Optimization: A Novel Meta-Heuristic Inspired by Energy Dynamics and Dissipation. IEEE Access 2025, 13, 2801–2828. [Google Scholar] [CrossRef]
  24. Ji, J.; Wu, T.; Yang, C. Neural population dynamics optimization algorithm: A novel brain-inspired meta-heuristic method. Knowl.-Based Syst. 2024, 300, 112194. [Google Scholar] [CrossRef]
  25. Salimi, K.; Dadashzadeh, S.; Aghaie, M. Optimization of isotopic binary and multi-component separation cascades using a novel nature-inspired horse herd algorithm. Sep. Sci. Technol. 2023, 58, 2988–3013. [Google Scholar] [CrossRef]
  26. Abdel-Basset, M.; El-Shahat, D.; Jameel, M.; Abouhawwash, M. Exponential distribution optimizer (EDO): A novel math-inspired algorithm for global optimization and engineering problems. Artif. Intell. Rev. 2023, 56, 9329–9400. [Google Scholar] [CrossRef]
  27. Sherif, A.; Haci, H. A Novel Bio-Inspired Energy Optimization for Two-Tier Wireless Communication Networks: A Grasshopper Optimization Algorithm (GOA)-Based Approach. Electronics 2023, 12, 1216. [Google Scholar] [CrossRef]
  28. Kanagasabai, L. Novel mate preferences in human beings inspired optimization and hybridized pomsky-pygoscelis antarcticus-phengodes swarm algorithm. Suranaree J. Sci. Technol. 2024, 31, 1–13. [Google Scholar] [CrossRef]
  29. Wang, J.; Yang, B.; Chen, Y.; Zeng, K.; Zhang, H.; Shu, H.; Chen, Y. Novel phasianidae inspired peafowl (Pavo muticus/cristatus) optimization algorithm: Design, evaluation, and SOFC models parameter estimation. Sustain. Energy Technol. Assess. 2022, 50, 101825. [Google Scholar] [CrossRef]
  30. Peraza-Vázquez, H.; Peña-Delgado, A.; Merino-Treviño, M.; Morales-Cepeda, A.B.; Sinha, N. A novel metaheuristic inspired by horned lizard defense tactics. Artif. Intell. Rev. 2024, 57, 59. [Google Scholar] [CrossRef]
  31. Kanagasabai, L. Novel Enriched Basil Seed Optimization, Little Child Imagination and Learning Inspired, Malignant Neoplasm of Uterine Algorithm. Int. J. Autom. Smart Technol. 2023, 13, 1–15. [Google Scholar] [CrossRef]
  32. Alomari, S.; Kaabneh, K.; AbuFalahah, I.; Gochhait, S.; Leonova, I.; Montazeri, Z.; Dehghani, M.; Eguchi, K. Carpet Weaver Optimization: A Novel Simple and Effective Human-Inspired Metaheuristic Algorithm. Int. J. Intell. Eng. Syst. 2024, 17, 230–242. [Google Scholar] [CrossRef]
  33. Arya, P.; Pandey, A.K.; Gopal Krishna Patro, S.; Tiwari, K.; Panigrahi, N.; Naveed, Q.N.; Lasisi, A.; Khan, W.A. MSCMGTB: A Novel Approach for Multimodal Social Media Content Moderation Using Hybrid Graph Theory and Bio-Inspired Optimization. IEEE Access 2024, 12, 73700–73718. [Google Scholar] [CrossRef]
  34. Akopov, A.S. MBHGA: A Matrix-Based Hybrid Genetic Algorithm for Solving an Agent-Based Model of Controlled Trade Interactions. IEEE Access 2025, 13, 26843–26863. [Google Scholar] [CrossRef]
  35. Akopov, A.S.; Beklaryan, A.L.; Zhukova, A.A. Optimization of Characteristics for a Stochastic Agent-Based Model of Goods Exchange with the Use of Parallel Hybrid Genetic Algorithm. Cybern. Inf. Technol. 2023, 23, 87–104. [Google Scholar] [CrossRef]
  36. Lee, Z.J.; Su, S.F.; Chuang, C.C.; Liu, K.H. Genetic algorithm with ant colony optimization (GA-ACO) for multiple sequence alignment. Appl. Soft Comput. 2008, 8, 55–78. [Google Scholar] [CrossRef]
  37. Xue, X.; Shu, T.; Xia, J. Automated Generation of Hybrid Metaheuristics Using Learning-to-Rank. Algorithms 2025, 18, 316. [Google Scholar] [CrossRef]
  38. Shu, T.; Pan, Z.; Ding, Z.; Zu, Z. Resource scheduling optimization for industrial operating system using deep reinforcement learning and WOA algorithm. Expert Syst. Appl. 2024, 255, 124765. [Google Scholar] [CrossRef]
  39. Liang, Z.; Shu, T.; Ding, Z. A Novel Improved Whale Optimization Algorithm for Global Optimization and Engineering Applications. Mathematics 2024, 12, 636. [Google Scholar] [CrossRef]
  40. Hu, G.; Huang, F.; Chen, K.; Wei, G. MNEARO: A meta swarm intelligence optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2024, 419, 116664. [Google Scholar] [CrossRef]
  41. Sharma, P.; Raju, S. Metaheuristic optimization algorithms: A comprehensive overview and classification of benchmark test functions. Soft Comput.-A Fusion Found. Methodol. Appl. 2024, 28, 3123. [Google Scholar] [CrossRef]
  42. Pan, J.S.; Hu, P.; Snášel, V.; Chu, S.C. A survey on binary metaheuristic algorithms and their engineering applications. Artif. Intell. Rev. 2023, 56, 6101–6167. [Google Scholar] [CrossRef]
  43. Tizhoosh, H.R. Opposition-Based Learning: A New Scheme for Machine Intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation (CIMCA), Vienna, Austria, 28–30 November 2005; IEEE: New York, NY, USA, 2005; Volume 1, pp. 695–701. [Google Scholar] [CrossRef]
  44. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  45. Ahmadianfar, I.; Bozorg-Haddad, O.; Chu, X. Gradient-Based Optimizer: A New Metaheuristic Optimization Algorithm. Inf. Sci. 2020, 540, 131–159. [Google Scholar] [CrossRef]
  46. Seyyedabbasi, A.; Kiani, F. Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Eng. Comput. 2023, 39, 1601–1637. [Google Scholar] [CrossRef]
  47. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  48. Chou, J.S.; Truong, D.N. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl. Math. Comput. 2021, 389, 125535. [Google Scholar] [CrossRef]
  49. Wang, C.; Wang, Z.; Han, Q.L.; Han, F.; Dong, H. Novel Leader-Follower-Based Particle Swarm Optimizer Inspired by Multiagent Systems: Algorithm, Experiments, and Applications. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 1322–1334. [Google Scholar] [CrossRef]
  50. Wang, X.; Snášel, V.; Mirjalili, S.; Pan, J.S.; Kong, L.; Shehadeh, H.A. Artificial Protozoa Optimizer (APO): A novel bio-inspired metaheuristic algorithm for engineering optimization. Knowl.-Based Syst. 2024, 295, 111737. [Google Scholar] [CrossRef]
  51. Chopra, N.; Mohsin Ansari, M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  52. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  53. Zhao, W.; Wang, L.; Zhang, Z. Artificial ecosystem-based optimization: A novel nature-inspired meta-heuristic algorithm. Neural Comput. Appl. 2020, 32, 9383–9425. [Google Scholar] [CrossRef]
  54. Jiang, Y.; Wu, Q.; Zhu, S.; Zhang, L. Orca predation algorithm: A novel bio-inspired algorithm for global optimization problems. Expert Syst. Appl. 2022, 188, 116026. [Google Scholar] [CrossRef]
  55. Han, M.; Du, Z.; Yuen, K.F.; Zhu, H.; Li, Y.; Yuan, Q. Walrus optimizer: A novel nature-inspired metaheuristic algorithm. Expert Syst. Appl. 2024, 239, 122413. [Google Scholar] [CrossRef]
  56. Zhao, S.; Zhang, T.; Ma, S.; Wang, M. Sea-horse optimizer: A novel nature-inspired meta-heuristic for global optimization problems. Appl. Intell. 2023, 53, 11833–11860. [Google Scholar] [CrossRef]
  57. Kennedy, J.; Eberhart, R.C. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  58. Holland, J.H. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975. [Google Scholar]
  59. Herrera, F.; Lozano, M.; Verdegay, J.L. Tackling real-coded genetic algorithms: Operators and tools. Artif. Intell. Rev. 1998, 12, 265–319. [Google Scholar] [CrossRef]
  60. NEORL Team. NEORL: Neural Evolutionary Optimization and Reinforcement Learning Framework. 2023. Available online: https://neorl.readthedocs.io/en/latest/ (accessed on 11 September 2025).
  61. Wandel, M. Planetary Gears: Principles and Functions. 2023. Available online: https://woodgears.ca/gear/planetary.html (accessed on 11 September 2025).
  62. XLOptimizer Team. XLOptimizer: Optimization Software for Engineering Design. 2023. Available online: https://www.xloptimizer.com/ (accessed on 11 September 2025).
Figure 1. Stylized RCCO metaphor: (a) droplets rise toward the leader and core due to updrafts; (b) droplets interact through coalescence, mirror across the domain, and are perturbed by turbulence.
Figure 2. The sine map $z_{k+1} = \sin(\pi z_k)$ used to seed the population.
Figure 3. Comparison of a standard normal and standard Cauchy density.
Figure 4. Condensation movement of a droplet.
Figure 5. Exploration vs. exploitation in RCCO.
Figure 6. Qualitative analysis (search history, trajectory, average fitness, and convergence) of the RCCO optimizer on functions F1–F6.
Figure 7. Qualitative analysis (search history, trajectory, average fitness, and convergence) of the RCCO optimizer on functions F7–F12.
Figure 8. Box plots summarizing the final best objective values for populations 20, 50, and 100 on functions F1–F6.
Figure 9. Box plots of final best objective values for populations 20, 50, and 100 on functions F7–F12.
Figure 10. RCCO optimizer convergence across 12 functions.
Figure 11. Performance profile of RCCO population sizes across the twelve benchmark functions. The horizontal axis τ denotes the performance factor relative to the best result, while the vertical axis shows the proportion of functions for which the population’s final objective is within a factor τ of the best.
Figure 12. Empirical cumulative distribution function of final best objective values for populations 20, 50, and 100 across all functions. Points further to the left indicate better performance.
Figure 13. Final best objective vs. function index by population size.
Figure 14. Five-segment cantilever stepped beam. Each rectangle represents a segment of height $x_i$ and width $x_{i+5}$; the right end carries a downward load P.
Figure 15. Pressure vessel with cylindrical shell (length L) and hemispherical heads. Inner radius R; shell thickness $T_s$; head thickness $T_h$.
Figure 16. Planetary gear train schematic showing ring, sun, and planet gears with the carrier.
Figure 17. Ten-bar planar truss with distinct styling for bottom, top, vertical, and diagonal members.
Figure 18. Three-bar truss with diagonals of area $x_1$ and a vertical member of area $x_2$. The left support is pinned and the right support is a roller. A vertical load P is applied at the top node.
Table 1. Comprehensive quantitative assessment of the CEC2022 benchmark functions, run = 30, iterations = 1000, part 1.

Function | Measure | RCCO | SMA | GBO | SCSO | DOA | SPBO | TTHHO | HHO | SSOA | BOA | SCA
F1 | Rank | 1 | 21 | 12 | 4 | 5 | 22 | 3 | 2 | 19 | 15 | 6
F1 | Mean | 3.00 × 10^2 | 2.32 × 10^4 | 7.11 × 10^3 | 7.52 × 10^2 | 1.52 × 10^3 | 3.01 × 10^4 | 3.08 × 10^2 | 3.02 × 10^2 | 1.43 × 10^4 | 8.29 × 10^3 | 1.63 × 10^3
F1 | Error | 1.60 × 10^−2 | 2.29 × 10^4 | 6.81 × 10^3 | 4.52 × 10^2 | 1.22 × 10^3 | 2.98 × 10^4 | 7.74 × 10^0 | 1.54 × 10^0 | 1.40 × 10^4 | 7.99 × 10^3 | 1.33 × 10^3
F1 | Std | 4.48 × 10^−3 | 1.25 × 10^4 | 3.74 × 10^3 | 9.10 × 10^2 | 1.80 × 10^3 | 7.61 × 10^3 | 1.48 × 10^1 | 7.71 × 10^−1 | 8.58 × 10^3 | 2.68 × 10^3 | 1.19 × 10^3
F2 | Rank | 1 | 8 | 10 | 3 | 12 | 18 | 9 | 2 | 20 | 21 | 11
F2 | Mean | 4.25 × 10^2 | 4.41 × 10^2 | 4.65 × 10^2 | 4.28 × 10^2 | 4.83 × 10^2 | 1.17 × 10^3 | 4.49 × 10^2 | 4.27 × 10^2 | 1.39 × 10^3 | 2.23 × 10^3 | 4.65 × 10^2
F2 | Error | 2.54 × 10^1 | 4.14 × 10^1 | 6.51 × 10^1 | 2.81 × 10^1 | 8.35 × 10^1 | 7.67 × 10^2 | 4.94 × 10^1 | 2.67 × 10^1 | 9.85 × 10^2 | 1.83 × 10^3 | 6.52 × 10^1
F2 | Std | 3.35 × 10^1 | 3.47 × 10^1 | 6.46 × 10^1 | 3.04 × 10^1 | 1.17 × 10^2 | 2.57 × 10^2 | 6.95 × 10^1 | 3.22 × 10^1 | 4.21 × 10^2 | 8.68 × 10^2 | 2.12 × 10^1
F3 | Rank | 1 | 8 | 6 | 5 | 10 | 22 | 13 | 12 | 20 | 14 | 7
F3 | Mean | 6.05 × 10^2 | 6.19 × 10^2 | 6.19 × 10^2 | 6.12 × 10^2 | 6.24 × 10^2 | 6.68 × 10^2 | 6.34 × 10^2 | 6.28 × 10^2 | 6.58 × 10^2 | 6.36 × 10^2 | 6.19 × 10^2
F3 | Error | 4.90 × 10^0 | 1.94 × 10^1 | 1.87 × 10^1 | 1.21 × 10^1 | 2.44 × 10^1 | 6.81 × 10^1 | 3.35 × 10^1 | 2.82 × 10^1 | 5.75 × 10^1 | 3.61 × 10^1 | 1.91 × 10^1
F3 | Std | 4.60 × 10^0 | 1.20 × 10^1 | 1.01 × 10^1 | 9.34 × 10^0 | 1.20 × 10^1 | 1.03 × 10^1 | 1.17 × 10^1 | 1.32 × 10^1 | 8.61 × 10^0 | 8.36 × 10^0 | 3.25 × 10^0
F4 | Rank | 1 | 13 | 11 | 5 | 8 | 22 | 4 | 6 | 21 | 19 | 15
F4 | Mean | 8.16 × 10^2 | 8.35 × 10^2 | 8.34 × 10^2 | 8.25 × 10^2 | 8.28 × 10^2 | 8.97 × 10^2 | 8.24 × 10^2 | 8.26 × 10^2 | 8.60 × 10^2 | 8.49 × 10^2 | 8.39 × 10^2
F4 | Error | 1.63 × 10^1 | 3.51 × 10^1 | 3.45 × 10^1 | 2.52 × 10^1 | 2.82 × 10^1 | 9.75 × 10^1 | 2.41 × 10^1 | 2.61 × 10^1 | 6.05 × 10^1 | 4.85 × 10^1 | 3.86 × 10^1
F4 | Std | 1.25 × 10^1 | 1.03 × 10^1 | 8.57 × 10^0 | 7.17 × 10^0 | 1.39 × 10^1 | 1.34 × 10^1 | 7.56 × 10^0 | 9.60 × 10^0 | 8.80 × 10^0 | 7.09 × 10^0 | 6.54 × 10^0
F5 | Rank | 1 | 19 | 9 | 7 | 6 | 22 | 13 | 15 | 20 | 11 | 4
F5 | Mean | 9.00 × 10^2 | 1.48 × 10^3 | 1.11 × 10^3 | 1.08 × 10^3 | 1.07 × 10^3 | 3.70 × 10^3 | 1.32 × 10^3 | 1.37 × 10^3 | 1.56 × 10^3 | 1.28 × 10^3 | 9.85 × 10^2
F5 | Error | 1.47 × 10^−3 | 5.84 × 10^2 | 2.08 × 10^2 | 1.76 × 10^2 | 1.66 × 10^2 | 2.80 × 10^3 | 4.16 × 10^2 | 4.65 × 10^2 | 6.65 × 10^2 | 3.82 × 10^2 | 8.47 × 10^1
F5 | Std | 8.05 × 10^−4 | 5.24 × 10^2 | 1.66 × 10^2 | 1.62 × 10^2 | 1.23 × 10^2 | 6.39 × 10^2 | 1.65 × 10^2 | 1.36 × 10^2 | 2.06 × 10^2 | 9.42 × 10^1 | 3.26 × 10^1
F6 | Rank | 2 | 10 | 12 | 8 | 1 | 21 | 9 | 3 | 20 | 17 | 15
F6 | Mean | 2.68 × 10^3 | 5.11 × 10^3 | 3.69 × 10^4 | 4.57 × 10^3 | 2.48 × 10^3 | 3.37 × 10^8 | 4.76 × 10^3 | 3.18 × 10^3 | 2.29 × 10^8 | 1.10 × 10^7 | 1.92 × 10^6
F6 | Error | 8.82 × 10^2 | 3.31 × 10^3 | 3.51 × 10^4 | 2.77 × 10^3 | 6.85 × 10^2 | 3.37 × 10^8 | 2.96 × 10^3 | 1.38 × 10^3 | 2.29 × 10^8 | 1.10 × 10^7 | 1.91 × 10^6
F6 | Std | 1.13 × 10^3 | 2.29 × 10^3 | 4.41 × 10^4 | 2.22 × 10^3 | 2.26 × 10^3 | 2.91 × 10^8 | 2.67 × 10^3 | 1.60 × 10^3 | 3.33 × 10^8 | 1.57 × 10^7 | 1.31 × 10^6
F7 | Rank | 1 | 7 | 13 | 3 | 4 | 22 | 14 | 11 | 21 | 16 | 9
F7 | Mean | 2.03 × 10^3 | 2.05 × 10^3 | 2.07 × 10^3 | 2.04 × 10^3 | 2.04 × 10^3 | 2.16 × 10^3 | 2.07 × 10^3 | 2.06 × 10^3 | 2.14 × 10^3 | 2.09 × 10^3 | 2.06 × 10^3
F7 | Error | 2.65 × 10^1 | 5.24 × 10^1 | 7.07 × 10^1 | 3.75 × 10^1 | 4.13 × 10^1 | 1.61 × 10^2 | 7.09 × 10^1 | 6.07 × 10^1 | 1.35 × 10^2 | 8.81 × 10^1 | 5.62 × 10^1
F7 | Std | 7.69 × 10^0 | 2.84 × 10^1 | 2.23 × 10^1 | 1.60 × 10^1 | 1.61 × 10^1 | 4.08 × 10^1 | 3.77 × 10^1 | 2.53 × 10^1 | 2.43 × 10^1 | 1.38 × 10^1 | 6.01 × 10^0
F8 | Rank | 5 | 15 | 11 | 2 | 13 | 21 | 8 | 6 | 20 | 18 | 9
F8 | Mean | 2.23 × 10^3 | 2.25 × 10^3 | 2.24 × 10^3 | 2.23 × 10^3 | 2.24 × 10^3 | 2.36 × 10^3 | 2.23 × 10^3 | 2.23 × 10^3 | 2.35 × 10^3 | 2.27 × 10^3 | 2.23 × 10^3
F8 | Error | 2.76 × 10^1 | 4.79 × 10^1 | 3.72 × 10^1 | 2.61 × 10^1 | 4.50 × 10^1 | 1.64 × 10^2 | 3.19 × 10^1 | 2.96 × 10^1 | 1.53 × 10^2 | 7.13 × 10^1 | 3.24 × 10^1
F8 | Std | 2.35 × 10^0 | 4.39 × 10^1 | 9.08 × 10^0 | 6.89 × 10^0 | 5.75 × 10^1 | 8.51 × 10^1 | 1.04 × 10^1 | 1.05 × 10^1 | 1.02 × 10^2 | 5.33 × 10^1 | 2.90 × 10^0
F9 | Rank | 1 | 14 | 8 | 6 | 4 | 18 | 10 | 2 | 22 | 20 | 5
F9 | Mean | 2.53 × 10^3 | 2.61 × 10^3 | 2.57 × 10^3 | 2.57 × 10^3 | 2.56 × 10^3 | 2.75 × 10^3 | 2.58 × 10^3 | 2.55 × 10^3 | 2.81 × 10^3 | 2.76 × 10^3 | 2.57 × 10^3
F9 | Error | 2.29 × 10^2 | 3.13 × 10^2 | 2.74 × 10^2 | 2.71 × 10^2 | 2.57 × 10^2 | 4.47 × 10^2 | 2.80 × 10^2 | 2.52 × 10^2 | 5.14 × 10^2 | 4.64 × 10^2 | 2.65 × 10^2
F9 | Std | 4.69 × 10^−2 | 5.01 × 10^1 | 3.67 × 10^1 | 5.03 × 10^1 | 5.31 × 10^1 | 8.83 × 10^1 | 4.31 × 10^1 | 3.30 × 10^1 | 5.61 × 10^1 | 6.75 × 10^1 | 1.87 × 10^1
F10 | Rank | 5 | 14 | 6 | 4 | 9 | 12 | 7 | 10 | 21 | 1 | 2
F10 | Mean | 2.55 × 10^3 | 2.60 × 10^3 | 2.55 × 10^3 | 2.54 × 10^3 | 2.56 × 10^3 | 2.59 × 10^3 | 2.56 × 10^3 | 2.57 × 10^3 | 2.81 × 10^3 | 2.50 × 10^3 | 2.51 × 10^3
F10 | Error | 1.47 × 10^2 | 1.98 × 10^2 | 1.54 × 10^2 | 1.39 × 10^2 | 1.64 × 10^2 | 1.87 × 10^2 | 1.57 × 10^2 | 1.68 × 10^2 | 4.09 × 10^2 | 1.03 × 10^2 | 1.09 × 10^2
F10 | Std | 6.39 × 10^1 | 1.16 × 10^2 | 6.68 × 10^1 | 5.96 × 10^1 | 7.02 × 10^1 | 3.78 × 10^1 | 7.15 × 10^1 | 6.96 × 10^1 | 4.27 × 10^2 | 2.50 × 10^0 | 3.17 × 10^1
F11 | Rank | 1 | 13 | 11 | 2 | 5 | 20 | 12 | 3 | 21 | 15 | 6
F11 | Mean | 2.66 × 10^3 | 2.85 × 10^3 | 2.81 × 10^3 | 2.75 × 10^3 | 2.77 × 10^3 | 3.70 × 10^3 | 2.82 × 10^3 | 2.75 × 10^3 | 3.76 × 10^3 | 3.13 × 10^3 | 2.77 × 10^3
F11 | Error | 6.05 × 10^1 | 2.48 × 10^2 | 2.13 × 10^2 | 1.48 × 10^2 | 1.69 × 10^2 | 1.10 × 10^3 | 2.20 × 10^2 | 1.52 × 10^2 | 1.16 × 10^3 | 5.28 × 10^2 | 1.70 × 10^2
F11 | Std | 1.34 × 10^2 | 1.68 × 10^2 | 1.20 × 10^2 | 1.55 × 10^2 | 1.08 × 10^2 | 4.09 × 10^2 | 3.09 × 10^2 | 1.14 × 10^2 | 4.29 × 10^2 | 3.81 × 10^2 | 1.06 × 10^1
F12 | Rank | 2 | 7 | 6 | 1 | 12 | 10 | 15 | 14 | 21 | 16 | 3
F12 | Mean | 2.87 × 10^3 | 2.88 × 10^3 | 2.88 × 10^3 | 2.87 × 10^3 | 2.89 × 10^3 | 2.89 × 10^3 | 2.91 × 10^3 | 2.90 × 10^3 | 3.05 × 10^3 | 2.92 × 10^3 | 2.87 × 10^3
F12 | Error | 1.68 × 10^2 | 1.78 × 10^2 | 1.75 × 10^2 | 1.66 × 10^2 | 1.87 × 10^2 | 1.87 × 10^2 | 2.14 × 10^2 | 1.98 × 10^2 | 3.51 × 10^2 | 2.15 × 10^2 | 1.69 × 10^2
F12 | Std | 2.70 × 10^0 | 1.77 × 10^1 | 1.81 × 10^1 | 2.64 × 10^0 | 3.45 × 10^1 | 8.95 × 10^0 | 3.88 × 10^1 | 3.37 × 10^1 | 6.53 × 10^1 | 2.99 × 10^1 | 1.54 × 10^0
Table 2. Comprehensive quantitative assessment of the CEC2022 benchmark functions, run = 30, iterations = 1000, part 2.

Function | Measure | AOA | GJO | WOA | RSA | SHO | FLO | ROA | Chimp | SHIO | OHO | HGSO
F1 | Rank | 17 | 9 | 18 | 14 | 8 | 16 | 13 | 7 | 10 | 20 | 11
F1 | Mean | 8.83 × 10^3 | 2.73 × 10^3 | 1.27 × 10^4 | 8.24 × 10^3 | 2.19 × 10^3 | 8.61 × 10^3 | 7.96 × 10^3 | 1.92 × 10^3 | 4.24 × 10^3 | 1.90 × 10^4 | 4.27 × 10^3
F1 | Error | 8.53 × 10^3 | 2.43 × 10^3 | 1.24 × 10^4 | 7.94 × 10^3 | 1.89 × 10^3 | 8.31 × 10^3 | 7.66 × 10^3 | 1.62 × 10^3 | 3.94 × 10^3 | 1.87 × 10^4 | 3.97 × 10^3
F1 | Std | 5.28 × 10^3 | 2.28 × 10^3 | 6.40 × 10^3 | 3.14 × 10^3 | 2.26 × 10^3 | 1.43 × 10^3 | 2.23 × 10^3 | 7.61 × 10^2 | 2.36 × 10^3 | 1.26 × 10^4 | 1.25 × 10^3
F2 | Rank | 16 | 4 | 7 | 17 | 5 | 19 | 15 | 14 | 6 | 22 | 13
F2 | Mean | 7.13 × 10^2 | 4.33 × 10^2 | 4.40 × 10^2 | 1.00 × 10^3 | 4.36 × 10^2 | 1.36 × 10^3 | 6.47 × 10^2 | 5.84 × 10^2 | 4.37 × 10^2 | 3.08 × 10^3 | 4.87 × 10^2
F2 | Error | 3.13 × 10^2 | 3.33 × 10^1 | 4.01 × 10^1 | 6.00 × 10^2 | 3.56 × 10^1 | 9.64 × 10^2 | 2.47 × 10^2 | 1.84 × 10^2 | 3.73 × 10^1 | 2.68 × 10^3 | 8.74 × 10^1
F2 | Std | 1.94 × 10^2 | 2.52 × 10^1 | 7.77 × 10^1 | 4.50 × 10^2 | 3.71 × 10^1 | 5.17 × 10^2 | 1.94 × 10^2 | 1.04 × 10^2 | 3.25 × 10^1 | 1.08 × 10^3 | 1.49 × 10^1
F3 | Rank | 16 | 2 | 15 | 19 | 4 | 18 | 17 | 11 | 3 | 21 | 9
F3 | Mean | 6.39 × 10^2 | 6.05 × 10^2 | 6.37 × 10^2 | 6.49 × 10^2 | 6.09 × 10^2 | 6.47 × 10^2 | 6.40 × 10^2 | 6.26 × 10^2 | 6.06 × 10^2 | 6.61 × 10^2 | 6.24 × 10^2
F3 | Error | 3.91 × 10^1 | 5.09 × 10^0 | 3.68 × 10^1 | 4.87 × 10^1 | 9.05 × 10^0 | 4.71 × 10^1 | 4.02 × 10^1 | 2.58 × 10^1 | 5.61 × 10^0 | 6.10 × 10^1 | 2.36 × 10^1
F3 | Std | 6.69 × 10^0 | 3.65 × 10^0 | 1.46 × 10^1 | 9.32 × 10^0 | 4.42 × 10^0 | 7.63 × 10^0 | 1.13 × 10^1 | 8.32 × 10^0 | 7.28 × 10^0 | 3.79 × 10^0 | 7.04 × 10^0
F4 | Rank | 9 | 7 | 14 | 20 | 3 | 17 | 16 | 10 | 2 | 18 | 12
F4 | Mean | 8.31 × 10^2 | 8.27 × 10^2 | 8.38 × 10^2 | 8.51 × 10^2 | 8.21 × 10^2 | 8.45 × 10^2 | 8.39 × 10^2 | 8.34 × 10^2 | 8.18 × 10^2 | 8.45 × 10^2 | 8.35 × 10^2
F4 | Error | 3.11 × 10^1 | 2.74 × 10^1 | 3.80 × 10^1 | 5.11 × 10^1 | 2.13 × 10^1 | 4.48 × 10^1 | 3.92 × 10^1 | 3.42 × 10^1 | 1.76 × 10^1 | 4.48 × 10^1 | 3.50 × 10^1
F4 | Std | 8.66 × 10^0 | 8.25 × 10^0 | 1.53 × 10^1 | 8.50 × 10^0 | 5.49 × 10^0 | 7.26 × 10^0 | 8.02 × 10^0 | 6.26 × 10^0 | 9.72 × 10^0 | 4.91 × 10^0 | 2.87 × 10^0
F5 | Rank | 14 | 3 | 10 | 18 | 8 | 17 | 16 | 12 | 2 | 21 | 5
F5 | Mean | 1.32 × 10^3 | 9.54 × 10^2 | 1.26 × 10^3 | 1.46 × 10^3 | 1.09 × 10^3 | 1.39 × 10^3 | 1.37 × 10^3 | 1.30 × 10^3 | 9.52 × 10^2 | 1.62 × 10^3 | 1.01 × 10^3
F5 | Error | 4.23 × 10^2 | 5.37 × 10^1 | 3.63 × 10^2 | 5.59 × 10^2 | 1.88 × 10^2 | 4.87 × 10^2 | 4.68 × 10^2 | 4.02 × 10^2 | 5.23 × 10^1 | 7.23 × 10^2 | 1.11 × 10^2
F5 | Std | 1.70 × 10^2 | 3.83 × 10^1 | 1.96 × 10^2 | 8.78 × 10^1 | 1.51 × 10^2 | 2.00 × 10^2 | 2.41 × 10^2 | 1.71 × 10^2 | 8.64 × 10^1 | 7.86 × 10^1 | 3.26 × 10^1
F6 | Rank | 4 | 11 | 6 | 19 | 7 | 18 | 13 | 14 | 5 | 22 | 16
F6 | Mean | 3.88 × 10^3 | 6.64 × 10^3 | 4.15 × 10^3 | 6.37 × 10^7 | 4.29 × 10^3 | 3.47 × 10^7 | 1.12 × 10^6 | 1.13 × 10^6 | 3.96 × 10^3 | 7.90 × 10^8 | 2.14 × 10^6
F6 | Error | 2.08 × 10^3 | 4.84 × 10^3 | 2.35 × 10^3 | 6.37 × 10^7 | 2.49 × 10^3 | 3.47 × 10^7 | 1.12 × 10^6 | 1.13 × 10^6 | 2.16 × 10^3 | 7.90 × 10^8 | 2.14 × 10^6
F6 | Std | 1.70 × 10^3 | 2.18 × 10^3 | 2.31 × 10^3 | 3.60 × 10^7 | 1.45 × 10^3 | 4.49 × 10^7 | 2.49 × 10^6 | 9.14 × 10^5 | 2.06 × 10^3 | 8.03 × 10^8 | 1.52 × 10^6
F7 | Rank | 17 | 5 | 10 | 19 | 2 | 18 | 15 | 8 | 6 | 20 | 12
F7 | Mean | 2.10 × 10^3 | 2.04 × 10^3 | 2.06 × 10^3 | 2.12 × 10^3 | 2.03 × 10^3 | 2.10 × 10^3 | 2.09 × 10^3 | 2.05 × 10^3 | 2.05 × 10^3 | 2.12 × 10^3 | 2.07 × 10^3
F7 | Error | 1.00 × 10^2 | 4.46 × 10^1 | 5.99 × 10^1 | 1.19 × 10^2 | 2.72 × 10^1 | 1.03 × 10^2 | 8.72 × 10^1 | 5.36 × 10^1 | 4.60 × 10^1 | 1.19 × 10^2 | 6.97 × 10^1
F7 | Std | 2.74 × 10^1 | 2.11 × 10^1 | 2.52 × 10^1 | 1.98 × 10^1 | 1.15 × 10^1 | 2.45 × 10^1 | 3.92 × 10^1 | 8.98 × 10^0 | 2.29 × 10^1 | 1.15 × 10^1 | 1.02 × 10^1
F8 | Rank | 17 | 3 | 10 | 14 | 1 | 16 | 12 | 19 | 4 | 22 | 7
F8 | Mean | 2.26 × 10^3 | 2.23 × 10^3 | 2.24 × 10^3 | 2.25 × 10^3 | 2.22 × 10^3 | 2.25 × 10^3 | 2.24 × 10^3 | 2.28 × 10^3 | 2.23 × 10^3 | 2.44 × 10^3 | 2.23 × 10^3
F8 | Error | 5.51 × 10^1 | 2.62 × 10^1 | 3.59 × 10^1 | 4.69 × 10^1 | 2.34 × 10^1 | 5.12 × 10^1 | 3.81 × 10^1 | 8.15 × 10^1 | 2.73 × 10^1 | 2.44 × 10^2 | 3.17 × 10^1
F8 | Std | 5.00 × 10^1 | 2.36 × 10^0 | 7.20 × 10^0 | 7.83 × 10^0 | 1.92 × 10^0 | 2.37 × 10^1 | 1.21 × 10^1 | 6.04 × 10^1 | 2.96 × 10^0 | 1.32 × 10^2 | 3.29 × 10^0
F9 | Rank | 15 | 3 | 7 | 17 | 11 | 19 | 16 | 9 | 12 | 21 | 13
F9 | Mean | 2.69 × 10^3 | 2.55 × 10^3 | 2.57 × 10^3 | 2.74 × 10^3 | 2.58 × 10^3 | 2.76 × 10^3 | 2.70 × 10^3 | 2.58 × 10^3 | 2.60 × 10^3 | 2.79 × 10^3 | 2.60 × 10^3
F9 | Error | 3.89 × 10^2 | 2.54 × 10^2 | 2.72 × 10^2 | 4.37 × 10^2 | 2.85 × 10^2 | 4.58 × 10^2 | 3.97 × 10^2 | 2.77 × 10^2 | 3.03 × 10^2 | 4.94 × 10^2 | 3.04 × 10^2
F9 | Std | 3.36 × 10^1 | 2.35 × 10^1 | 4.21 × 10^1 | 6.94 × 10^1 | 2.46 × 10^1 | 4.12 × 10^1 | 3.19 × 10^1 | 2.89 × 10^1 | 3.99 × 10^1 | 5.74 × 10^1 | 3.34 × 10^1
F10 | Rank | 17 | 8 | 13 | 18 | 11 | 19 | 15 | 20 | 16 | 22 | 3
F10 | Mean | 2.64 × 10^3 | 2.56 × 10^3 | 2.59 × 10^3 | 2.65 × 10^3 | 2.57 × 10^3 | 2.65 × 10^3 | 2.61 × 10^3 | 2.73 × 10^3 | 2.63 × 10^3 | 2.98 × 10^3 | 2.52 × 10^3
F10 | Error | 2.43 × 10^2 | 1.58 × 10^2 | 1.91 × 10^2 | 2.49 × 10^2 | 1.72 × 10^2 | 2.55 × 10^2 | 2.12 × 10^2 | 3.34 × 10^2 | 2.26 × 10^2 | 5.77 × 10^2 | 1.19 × 10^2
F10 | Std | 1.75 × 10^2 | 6.55 × 10^1 | 1.48 × 10^2 | 9.88 × 10^1 | 6.02 × 10^1 | 1.16 × 10^2 | 2.18 × 10^2 | 4.95 × 10^2 | 2.49 × 10^2 | 3.04 × 10^2 | 4.34 × 10^1
F11 | Rank | 17 | 9 | 10 | 16 | 4 | 19 | 14 | 18 | 7 | 22 | 8
F11 | Mean | 3.27 × 10^3 | 2.80 × 10^3 | 2.81 × 10^3 | 3.14 × 10^3 | 2.75 × 10^3 | 3.66 × 10^3 | 3.04 × 10^3 | 3.28 × 10^3 | 2.77 × 10^3 | 4.07 × 10^3 | 2.78 × 10^3
F11 | Error | 6.65 × 10^2 | 2.05 × 10^2 | 2.13 × 10^2 | 5.42 × 10^2 | 1.55 × 10^2 | 1.06 × 10^3 | 4.35 × 10^2 | 6.80 × 10^2 | 1.72 × 10^2 | 1.47 × 10^3 | 1.83 × 10^2
F11 | Std | 3.61 × 10^2 | 1.44 × 10^2 | 1.54 × 10^2 | 3.69 × 10^2 | 1.44 × 10^2 | 4.91 × 10^2 | 1.81 × 10^2 | 1.74 × 10^2 | 1.32 × 10^2 | 3.45 × 10^2 | 2.37 × 10^1
F12 | Rank | 19 | 4 | 11 | 18 | 9 | 20 | 17 | 5 | 8 | 22 | 13
F12 | Mean | 3.00 × 10^3 | 2.87 × 10^3 | 2.89 × 10^3 | 2.93 × 10^3 | 2.88 × 10^3 | 3.05 × 10^3 | 2.92 × 10^3 | 2.87 × 10^3 | 2.88 × 10^3 | 3.18 × 10^3 | 2.90 × 10^3
F12 | Error | 3.03 × 10^2 | 1.70 × 10^2 | 1.87 × 10^2 | 2.33 × 10^2 | 1.85 × 10^2 | 3.47 × 10^2 | 2.16 × 10^2 | 1.70 × 10^2 | 1.78 × 10^2 | 4.84 × 10^2 | 1.97 × 10^2
F12 | Std | 6.37 × 10^1 | 8.63 × 10^0 | 2.14 × 10^1 | 6.67 × 10^1 | 1.56 × 10^1 | 7.72 × 10^1 | 4.02 × 10^1 | 1.02 × 10^1 | 1.34 × 10^1 | 1.10 × 10^2 | 6.66 × 10^0
Table 3. Quantitative assessment of the CEC2022 benchmark functions, run = 30, iterations = 1000, part 3.

Function | Statistic | RCCO | PSO | EHLO | ARCGA | GA
F1 | Mean | 3.00 × 10^2 | 3.01 × 10^2 | 3.23 × 10^2 | 2.64 × 10^4 | 3.50 × 10^4
F1 | Error | 1.60 × 10^−2 | 8.53 × 10^−1 | 2.29 × 10^1 | 2.61 × 10^4 | 3.47 × 10^4
F1 | Std | 4.48 × 10^−3 | 8.49 × 10^−1 | 5.11 × 10^1 | 7.47 × 10^3 | 1.14 × 10^4
F1 | Rank | 1 | 2 | 3 | 4 | 5
F2 | Mean | 4.25 × 10^2 | 4.31 × 10^2 | 4.01 × 10^2 | 4.08 × 10^2 | 6.96 × 10^2
F2 | Error | 2.54 × 10^1 | 3.15 × 10^1 | 9.01 × 10^−1 | 8.20 × 10^0 | 2.96 × 10^2
F2 | Std | 3.35 × 10^1 | 2.22 × 10^1 | 1.77 × 10^0 | 4.07 × 10^0 | 1.78 × 10^2
F2 | Rank | 3 | 4 | 1 | 2 | 5
F3 | Mean | 6.05 × 10^2 | 6.06 × 10^2 | 6.51 × 10^2 | 6.05 × 10^2 | 6.54 × 10^2
F3 | Error | 4.90 × 10^0 | 5.58 × 10^0 | 5.07 × 10^1 | 5.41 × 10^0 | 5.43 × 10^1
F3 | Std | 4.60 × 10^0 | 3.04 × 10^0 | 1.41 × 10^1 | 2.25 × 10^0 | 1.47 × 10^1
F3 | Rank | 1 | 3 | 4 | 2 | 5
F4 | Mean | 8.16 × 10^2 | 8.18 × 10^2 | 8.32 × 10^2 | 8.38 × 10^2 | 8.56 × 10^2
F4 | Error | 1.63 × 10^1 | 1.83 × 10^1 | 3.18 × 10^1 | 3.77 × 10^1 | 5.65 × 10^1
F4 | Std | 1.25 × 10^1 | 1.21 × 10^1 | 1.84 × 10^1 | 1.56 × 10^1 | 6.32 × 10^0
F4 | Rank | 1 | 2 | 3 | 4 | 5
F5 | Mean | 9.00 × 10^2 | 9.01 × 10^2 | 1.30 × 10^3 | 1.57 × 10^3 | 1.15 × 10^3
F5 | Error | 1.47 × 10^−3 | 1.25 × 10^0 | 4.04 × 10^2 | 6.73 × 10^2 | 2.47 × 10^2
F5 | Std | 8.05 × 10^−4 | 1.69 × 10^0 | 7.48 × 10^1 | 3.39 × 10^2 | 7.36 × 10^1
F5 | Rank | 1 | 2 | 4 | 5 | 3
F6 | Mean | 2.68 × 10^3 | 7.60 × 10^3 | 3.31 × 10^3 | 4.99 × 10^4 | 1.41 × 10^8
F6 | Error | 8.82 × 10^2 | 5.80 × 10^3 | 1.51 × 10^3 | 4.81 × 10^4 | 1.41 × 10^8
F6 | Std | 1.13 × 10^3 | 1.29 × 10^3 | 2.65 × 10^3 | 5.08 × 10^4 | 1.33 × 10^8
F6 | Rank | 1 | 3 | 2 | 4 | 5
F7 | Mean | 2.03 × 10^3 | 2.03 × 10^3 | 2.07 × 10^3 | 2.04 × 10^3 | 2.11 × 10^3
F7 | Error | 2.65 × 10^1 | 2.86 × 10^1 | 7.50 × 10^1 | 4.19 × 10^1 | 1.11 × 10^2
F7 | Std | 7.69 × 10^0 | 1.67 × 10^0 | 1.73 × 10^1 | 3.93 × 10^1 | 1.59 × 10^1
F7 | Rank | 1 | 2 | 4 | 3 | 5
F8 | Mean | 2.23 × 10^3 | 2.24 × 10^3 | 2.26 × 10^3 | 2.23 × 10^3 | 2.30 × 10^3
F8 | Error | 2.76 × 10^1 | 4.00 × 10^1 | 5.96 × 10^1 | 2.90 × 10^1 | 1.01 × 10^2
F8 | Std | 2.35 × 10^0 | 8.70 × 10^0 | 5.46 × 10^1 | 1.66 × 10^1 | 7.39 × 10^1
F8 | Rank | 1 | 3 | 4 | 2 | 5
F9 | Mean | 2.53 × 10^3 | 2.59 × 10^3 | 2.53 × 10^3 | 2.59 × 10^3 | 2.71 × 10^3
F9 | Error | 2.29 × 10^2 | 2.88 × 10^2 | 2.30 × 10^2 | 2.90 × 10^2 | 4.13 × 10^2
F9 | Std | 4.69 × 10^−2 | 6.17 × 10^1 | 4.45 × 10^−1 | 4.70 × 10^1 | 5.18 × 10^1
F9 | Rank | 1 | 3 | 2 | 4 | 5
F10 | Mean | 2.55 × 10^3 | 2.61 × 10^3 | 2.55 × 10^3 | 2.53 × 10^3 | 2.73 × 10^3
F10 | Error | 1.47 × 10^2 | 2.09 × 10^2 | 1.54 × 10^2 | 1.30 × 10^2 | 3.26 × 10^2
F10 | Std | 6.39 × 10^1 | 1.12 × 10^2 | 7.30 × 10^1 | 6.41 × 10^1 | 1.32 × 10^2
F10 | Rank | 2 | 4 | 3 | 1 | 5
F11 | Mean | 2.66 × 10^3 | 2.83 × 10^3 | 2.75 × 10^3 | 2.79 × 10^3 | 3.57 × 10^3
F11 | Error | 6.05 × 10^1 | 2.33 × 10^2 | 1.50 × 10^2 | 1.89 × 10^2 | 9.75 × 10^2
F11 | Std | 1.34 × 10^2 | 6.18 × 10^1 | 1.06 × 10^2 | 8.47 × 10^1 | 2.83 × 10^2
F11 | Rank | 1 | 4 | 2 | 3 | 5
F12 | Mean | 2.87 × 10^3 | 2.89 × 10^3 | 2.89 × 10^3 | 2.89 × 10^3 | 3.00 × 10^3
F12 | Error | 1.68 × 10^2 | 1.86 × 10^2 | 1.89 × 10^2 | 1.89 × 10^2 | 3.04 × 10^2
F12 | Std | 2.70 × 10^0 | 5.27 × 10^0 | 2.01 × 10^1 | 1.95 × 10^1 | 2.08 × 10^1
F12 | Rank | 1 | 2 | 4 | 3 | 5
Table 4. RCCO and RCCO_Adaptive results, iterations = 500, run = 30.

Function | Statistic | RCCO | RCCO_Adaptive
F1 | Mean | 300.054 | 300.662
F1 | Std | 0.014 | 0.328
F1 | Rank | 1 | 2
F2 | Mean | 414.835 | 402.786
F2 | Std | 32.665 | 3.894
F2 | Rank | 2 | 1
F3 | Mean | 604.179 | 610.566
F3 | Std | 1.965 | 8.407
F3 | Rank | 1 | 2
F4 | Mean | 812.937 | 815.809
F4 | Std | 4.824 | 3.604
F4 | Rank | 1 | 2
F5 | Mean | 900.123 | 901.163
F5 | Std | 0.249 | 2.193
F5 | Rank | 1 | 2
F6 | Mean | 3298.032 | 2225.338
F6 | Std | 1682.015 | 213.640
F6 | Rank | 2 | 1
F7 | Mean | 2032.522 | 2036.044
F7 | Std | 10.268 | 8.496
F7 | Rank | 1 | 2
F8 | Mean | 2227.081 | 2226.961
F8 | Std | 3.945 | 4.177
F8 | Rank | 2 | 1
F9 | Mean | 2529.457 | 2558.999
F9 | Std | 0.078 | 65.533
F9 | Rank | 1 | 2
F10 | Mean | 2500.324 | 2546.972
F10 | Std | 0.069 | 63.855
F10 | Rank | 1 | 2
F11 | Mean | 2720.935 | 2925.226
F11 | Std | 164.449 | 209.373
F11 | Rank | 1 | 2
F12 | Mean | 2867.862 | 2866.919
F12 | Std | 3.500 | 2.097
F12 | Rank | 2 | 1
Table 5. Optimization results for the cantilever stepped beam.

Algorithm | Mean | Std | Sem | Best | x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8 | x9 | x10 | Ranking
RCCO | 63,464.771537 | 589.457411 | 416.809333 | 63,047.962204 | 3.089178 | 2.875807 | 2.527169 | 2.180968 | 1.703133 | 61.766995 | 57.288442 | 50.163199 | 43.534115 | 31.237981 | 1
POA | 64,110.544965 | 105.416855 | 74.540973 | 64,036.003992 | 3.117101 | 2.743197 | 2.622920 | 2.336056 | 1.751480 | 62.227461 | 54.363392 | 51.735220 | 45.791270 | 31.169597 | 2
ChOA | 64,722.451318 | 954.265785 | 674.767808 | 64,047.683510 | 3.083360 | 2.848762 | 2.608330 | 2.292484 | 1.892599 | 61.106124 | 56.030016 | 51.643720 | 42.498245 | 31.870502 | 3
ZOA | 65,964.893123 | 1405.325556 | 993.715230 | 64,971.178893 | 2.997442 | 2.890634 | 2.648667 | 2.345586 | 2.156397 | 59.558552 | 56.806459 | 51.909795 | 42.534393 | 32.332626 | 4
MPA | 71,321.497913 | 1107.395505 | 783.046871 | 70,538.451042 | 2.917264 | 2.764091 | 2.629429 | 2.344187 | 3.214758 | 58.141042 | 55.144504 | 52.533392 | 46.429338 | 42.421683 | 5
MFO | 71,935.208843 | 11,264.736219 | 7965.371369 | 63,969.837474 | 3.007033 | 2.929542 | 2.781670 | 2.191544 | 1.531274 | 59.713695 | 57.930747 | 55.027269 | 41.714986 | 30.000403 | 6
TTHHO | 72,509.998717 | 794.494594 | 561.792515 | 71,948.206202 | 3.749253 | 3.447057 | 2.779718 | 2.645039 | 2.292161 | 55.041914 | 51.385951 | 52.619487 | 42.097304 | 34.190139 | 7
ROA | 78,652.325357 | 2471.802112 | 1747.828035 | 76,904.497322 | 2.912422 | 2.748480 | 2.987397 | 2.985384 | 2.999216 | 56.461120 | 54.689602 | 49.759962 | 52.248058 | 49.899638 | 8
FLO | 80,258.734847 | 2982.471737 | 2108.925990 | 78,149.808857 | 3.306285 | 2.919817 | 3.456638 | 2.854761 | 3.134398 | 54.058243 | 54.210313 | 49.546863 | 46.251159 | 45.042255 | 9
SHO | 82,051.012084 | 16,238.885077 | 11,482.625757 | 70,568.386327 | 3.024409 | 3.353814 | 2.912533 | 2.775072 | 1.992699 | 56.589240 | 52.320975 | 54.534340 | 47.906788 | 30.000000 | 10
SMA | 86,059.987177 | 4788.743669 | 3386.153122 | 82,673.834055 | 3.105536 | 3.608038 | 2.723764 | 4.514755 | 2.921698 | 57.226390 | 50.902375 | 48.571269 | 40.841927 | 50.886173 | 11
SCA | 86,715.612485 | 461.138646 | 326.074264 | 86,389.538221 | 3.399533 | 3.468398 | 3.349840 | 4.557388 | 3.359587 | 57.570930 | 65.000000 | 40.501108 | 41.472397 | 35.140383 | 12
WOA | 87,037.593355 | 6789.280233 | 4800.746092 | 82,236.847263 | 4.114212 | 3.629794 | 3.987246 | 4.014020 | 2.686732 | 55.080497 | 49.376713 | 46.373859 | 35.693547 | 32.883719 | 13
TSO | 92,027.411480 | 6245.260202 | 4416.065839 | 87,611.345642 | 3.776673 | 3.138227 | 5.000000 | 2.578182 | 2.447945 | 55.433051 | 62.689176 | 54.695062 | 30.000000 | 48.696879 | 14
SSOA | 103,403.550447 | 21,230.482760 | 15,012.218328 | 88,391.332119 | 3.795700 | 3.320002 | 4.491128 | 3.205524 | 3.696975 | 65.000000 | 52.733939 | 48.058003 | 42.230908 | 30.000000 | 15
RSA | 121,050.010799 | 18,469.174637 | 13,059.678629 | 107,990.332169 | 4.228455 | 4.934976 | 4.596277 | 4.918278 | 3.247646 | 57.006347 | 48.028600 | 45.906739 | 41.341664 | 57.735556 | 16
BOA | 385,248.270040 | 393,207.474160 | 278,039.671392 | 107,208.598648 | 3.470236 | 2.444395 | 4.793426 | 3.795822 | 4.035238 | 61.505219 | 47.365912 | 54.242748 | 61.043971 | 62.238383 | 17
Table 6. Optimization results for the pressure vessel.

Algorithm | Mean | Std | Sem | Best | x1 | x2 | x3 | x4 | Ranking
RCCO | 6077.264414 | 82.237258 | 58.150523 | 6019.113891 | 0.833922 | 0.414088 | 43.081245 | 165.218532 | 1
ChOA | 6106.461953 | 88.445071 | 62.540109 | 6043.921844 | 0.834645 | 0.421283 | 43.131769 | 164.726924 | 2
TTHHO | 6389.283076 | 80.420979 | 56.866220 | 6332.416857 | 0.937164 | 0.489922 | 48.516113 | 110.571983 | 3
MFO | 6468.768889 | 308.436174 | 218.097310 | 6250.671578 | 0.950479 | 0.469907 | 49.246169 | 104.448802 | 4
MPA | 6728.932414 | 902.838270 | 638.403063 | 6090.529351 | 0.871793 | 0.433362 | 45.133758 | 142.782623 | 5
ZOA | 6850.709547 | 259.463450 | 183.468365 | 6667.241182 | 1.087790 | 0.539701 | 56.337335 | 54.879124 | 6
BOA | 7431.184064 | 318.695092 | 225.351460 | 7205.832603 | 1.123468 | 0.565372 | 55.302129 | 64.371557 | 7
SHO | 7457.610850 | 221.814416 | 156.846478 | 7300.764373 | 1.092471 | 0.644523 | 56.001770 | 56.872672 | 8
SCA | 8563.674494 | 527.814284 | 373.221059 | 8190.453434 | 0.945005 | 0.669178 | 41.815364 | 195.793982 | 9
FLO | 9325.824870 | 253.961125 | 179.577634 | 9146.247236 | 1.199675 | 0.850448 | 52.887786 | 77.334270 | 10
ROA | 10,007.931045 | 392.858262 | 277.792741 | 9730.138304 | 1.090717 | 1.070255 | 55.873096 | 59.249655 | 11
WOA | 10,691.290572 | 4116.734972 | 2910.971215 | 7780.319357 | 1.221813 | 0.491293 | 50.610035 | 93.578702 | 12
TSO | 12,007.785387 | 7287.183034 | 5152.816539 | 6854.968848 | 0.959659 | 0.597251 | 49.208030 | 104.755387 | 13
SMA | 12,727.875419 | 8309.568595 | 5875.752302 | 6852.123117 | 0.900551 | 0.518306 | 45.920084 | 147.316069 | 14
RSA | 17,493.813976 | 6137.341345 | 4339.755684 | 13,154.058293 | 1.387056 | 1.355401 | 61.743476 | 27.099776 | 15
SSOA | 31,657.959395 | 5171.079478 | 3656.505365 | 28,001.454030 | 1.666317 | 3.576956 | 54.349771 | 95.461669 | 16
Table 7. Optimization results for the planetary gear train.

Algorithm | Mean | Std | Sem | Best | x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8 | x9 | Ranking
RCCO | 0.525851 | 0.002775 | 0.001962 | 0.523889 | 23.573610 | 19.373892 | 30.361334 | 26.892279 | 43.661763 | 97.539907 | 1.354024 | 4.052752 | 5.048737 | 1
WOA | 0.527210 | 0.002293 | 0.001621 | 0.525589 | 66.045370 | 50.355908 | 34.338976 | 32.078668 | 35.651543 | 116.059843 | 2.985757 | 5.134733 | 6.490000 | 2
ZOA | 0.527575 | 0.001713 | 0.001211 | 0.526364 | 49.140668 | 42.718881 | 26.635823 | 22.035058 | 36.066068 | 79.818388 | 1.098721 | 2.666542 | 2.337037 | 3
TTHHO | 0.527986 | 0.002294 | 0.001622 | 0.526364 | 49.741893 | 46.657174 | 29.273899 | 21.702623 | 51.490000 | 79.732515 | 3.490000 | 4.327862 | 1.581687 | 4
MFO | 0.529069 | 0.005358 | 0.003788 | 0.525280 | 83.675831 | 36.342511 | 17.835766 | 30.310603 | 51.490000 | 108.641706 | 2.466759 | 2.366258 | 6.490000 | 5
ROA | 0.529246 | 0.001142 | 0.000808 | 0.528438 | 66.363299 | 48.844016 | 27.542715 | 27.192894 | 46.095231 | 97.696099 | 3.149159 | 2.998078 | 5.857582 | 6
MPA | 0.529501 | 0.004747 | 0.003357 | 0.526144 | 31.931000 | 38.752326 | 41.337273 | 23.596853 | 30.326570 | 87.433415 | 1.993326 | 3.595742 | 5.580608 | 7
TSO | 0.530485 | 0.005202 | 0.003679 | 0.526807 | 69.741797 | 52.528483 | 33.549460 | 32.010730 | 25.486341 | 116.156637 | 1.727670 | 2.091683 | 2.422051 | 8
FLO | 0.532937 | 0.005829 | 0.004122 | 0.528816 | 32.189587 | 23.536007 | 22.864896 | 21.806628 | 19.426673 | 79.967600 | 1.143698 | 2.395456 | 2.564788 | 9
ChOA | 0.533316 | 0.002017 | 0.001427 | 0.531890 | 77.878217 | 26.644532 | 14.848151 | 30.820707 | 32.779399 | 111.567933 | 0.937659 | 4.826015 | 2.746283 | 10
SCA | 0.547543 | 0.005228 | 0.003697 | 0.543846 | 39.785123 | 30.893735 | 27.849856 | 25.763709 | 22.994489 | 94.503328 | 0.514929 | 0.510000 | 5.597840 | 11
SHO | 0.554063 | 0.024048 | 0.017004 | 0.537059 | 23.943320 | 13.510000 | 13.510000 | 16.510000 | 13.510000 | 62.248081 | 0.678315 | 0.922525 | 0.510000 | 12
BOA | 0.587660 | 0.071561 | 0.050601 | 0.537059 | 16.510000 | 21.591220 | 30.509157 | 16.510000 | 29.724490 | 62.450689 | 1.814078 | 3.579325 | 2.919226 | 13
SMA | 0.596398 | 0.074320 | 0.052552 | 0.543846 | 41.041463 | 37.981942 | 33.984408 | 26.229074 | 17.583382 | 95.396504 | 1.456613 | 6.214410 | 5.569220 | 14
RSA | 0.655365 | 0.140332 | 0.099230 | 0.556135 | 22.926298 | 13.510000 | 13.510000 | 16.510000 | 13.510000 | 62.329749 | 2.216467 | 1.233304 | 2.783499 | 15
SSOA | 0.698477 | 0.111151 | 0.078596 | 0.619881 | 22.120950 | 21.187182 | 22.676731 | 16.510000 | 27.157698 | 59.995662 | 0.510000 | 0.510000 | 5.155364 | 16
Table 8. Optimization results for the ten-bar planar truss.

Algorithm | Mean | Std | Sem | Best | x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8 | x9 | x10 | Ranking
RCCO | 631.153681 | 4.293537 | 3.035989 | 628.117692 | 0.380864 | 4.065289 | 0.306882 | 4.691380 | 0.283134 | 3.388049 | 0.278063 | 0.250529 | 0.289253 | 0.182605 | 1
SCA | 638.098486 | 28.209666 | 19.947246 | 618.151240 | 0.100000 | 3.670492 | 0.317107 | 4.589050 | 0.173353 | 4.295007 | 0.156725 | 0.100000 | 0.109982 | 0.156581 | 2
POA | 690.143986 | 72.143513 | 51.013167 | 639.130818 | 0.540714 | 3.616175 | 0.100663 | 6.503637 | 0.154401 | 3.179442 | 0.100764 | 0.109244 | 0.100000 | 0.428107 | 3
TTHHO | 772.247004 | 65.387462 | 46.235918 | 726.011086 | 0.841481 | 2.724736 | 1.373085 | 3.784604 | 1.541322 | 2.914146 | 0.834334 | 0.825920 | 1.684458 | 0.100000 | 4
BOA | 796.892345 | 56.834750 | 40.188237 | 756.704108 | 0.843966 | 2.790374 | 1.056250 | 4.909464 | 0.877871 | 2.747565 | 0.837345 | 2.347755 | 1.059721 | 0.454217 | 5
ZOA | 822.195067 | 27.282316 | 19.291510 | 802.903557 | 2.087907 | 2.710718 | 1.065155 | 3.896181 | 1.392919 | 2.526036 | 1.681027 | 1.078816 | 1.421358 | 1.255458 | 6
MFO | 829.542987 | 326.629532 | 230.961957 | 598.581030 | 0.100000 | 3.742670 | 0.113164 | 5.114499 | 0.100000 | 3.830009 | 0.101778 | 0.100129 | 0.100000 | 0.100000 | 7
RSA | 842.872959 | 37.782970 | 26.716595 | 816.156364 | 0.395019 | 5.136465 | 0.209396 | 5.779586 | 0.141758 | 3.345876 | 0.416859 | 1.070785 | 1.138753 | 1.377559 | 8
FLO | 910.248025 | 2.523356 | 1.784282 | 908.463743 | 2.256176 | 1.675814 | 1.674815 | 2.799837 | 2.162225 | 2.560926 | 1.853827 | 2.811291 | 2.720931 | 1.374992 | 9
ROA | 968.643488 | 37.795972 | 26.725788 | 941.917701 | 1.978996 | 1.961693 | 2.627824 | 1.943461 | 1.930998 | 2.597622 | 2.593915 | 1.951602 | 2.666794 | 2.134622 | 10
SHO | 1043.382325 | 188.268059 | 133.125621 | 910.256704 | 0.720599 | 2.562623 | 1.649457 | 2.919543 | 1.811326 | 3.695552 | 0.570997 | 2.716803 | 2.639948 | 1.972336 | 11
WOA | 1059.412985 | 640.303046 | 452.762626 | 606.650359 | 0.100000 | 3.654186 | 0.100000 | 6.210253 | 0.100000 | 3.302325 | 0.100000 | 0.100000 | 0.100000 | 0.120310 | 12
MPA | 1699.172488 | 66.033876 | 46.693001 | 1652.479486 | 2.523184 | 0.650068 | 8.223972 | 3.556634 | 1.327164 | 6.058912 | 0.377439 | 3.076852 | 3.379623 | 9.993199 | 13
TSO | 2092.943732 | 478.167140 | 338.115227 | 1754.828505 | 2.051139 | 0.100000 | 3.544916 | 2.708341 | 0.100000 | 5.880653 | 0.100000 | 18.959619 | 0.100000 | 11.213527 | 14
SSOA | 3022.204053 | 105.415051 | 74.539698 | 2947.664355 | 0.100000 | 13.462404 | 0.100000 | 15.687236 | 0.100000 | 13.981375 | 1.391120 | 5.613202 | 1.566286 | 18.427515 | 15
SMA | 4206.465171 | 438.151966 | 309.820227 | 3896.644944 | 23.523638 | 5.710973 | 5.507248 | 4.325708 | 2.900420 | 25.214787 | 0.153517 | 12.673386 | 0.653890 | 11.284131 | 16
Table 9. Optimization results for the three-bar truss.

Algorithm | Mean | Std | Sem | Best | x1 | x2 | Ranking
RCCO | 263.903611 | 0.001142 | 0.000808 | 263.902803 | 0.791546 | 0.400197 | 1
ZOA | 263.907104 | 0.011883 | 0.008403 | 263.898701 | 0.786846 | 0.413450 | 2
ChOA | 263.971812 | 0.029287 | 0.020709 | 263.951104 | 0.792962 | 0.396675 | 3
MFO | 263.987963 | 0.117595 | 0.083152 | 263.904811 | 0.785203 | 0.418159 | 4
SCA | 264.086753 | 0.217490 | 0.153788 | 263.932964 | 0.788131 | 0.410160 | 5
TTHHO | 264.286294 | 0.436233 | 0.308464 | 263.977830 | 0.778312 | 0.438380 | 6
BOA | 264.608854 | 0.461766 | 0.326518 | 264.282336 | 0.787056 | 0.416693 | 7
FLO | 264.827555 | 0.287783 | 0.203493 | 264.624062 | 0.759733 | 0.497390 | 8
SHO | 265.349786 | 0.263387 | 0.186243 | 265.163543 | 0.833298 | 0.294714 | 9
SSOA | 265.477736 | 0.588168 | 0.415898 | 265.061838 | 0.756620 | 0.510573 | 10
ROA | 265.923104 | 1.123731 | 0.794598 | 265.128507 | 0.758360 | 0.506320 | 11
WOA | 266.118437 | 2.454443 | 1.735553 | 264.382884 | 0.815586 | 0.337002 | 12
RSA | 267.292751 | 0.186617 | 0.131958 | 267.160793 | 0.834436 | 0.311467 | 13
SMA | 268.522842 | 3.046932 | 2.154506 | 266.368336 | 0.844926 | 0.273872 | 14
TSO | 273.500124 | 13.212415 | 9.342588 | 264.157536 | 0.770411 | 0.462525 | 15