1. Introduction
Optimization lies at the core of scientific discovery and engineering design, where the goal is to identify decision variables that minimize or maximize an objective under constraints [1]. Real-world problems often exhibit nonconvex landscapes, high dimensionality, nonlinearity, noise, and expensive evaluations. Classical deterministic methods—while powerful for smooth, convex problems with available gradients—struggle when objectives are black-box, discontinuous, or multi-modal [2]. In such cases, stochastic search methods are preferred because they require minimal assumptions about the objective and can flexibly incorporate constraints, multiple objectives, and domain heuristics.
Metaheuristics are high-level search strategies that orchestrate neighborhood exploration, adaptive sampling, and information sharing to guide a population (or trajectory) toward high-quality solutions [3]. They emphasize two complementary forces—diversification (exploration) and intensification (exploitation)—and use randomness to avoid bias and premature convergence [4]. The No Free Lunch (NFL) results assert that no single optimizer dominates across all problems, motivating a continuing need for methods that offer robust performance across classes of problems and that can be tailored to specific structures (e.g., separability, sparsity, or expensive constraints) [5]. Contemporary metaheuristics further incorporate parameter control, adaptive memory, restart strategies, and surrogate modeling to improve efficiency and reliability.
Within metaheuristics, nature-inspired methods have become a prominent family. These include evolutionary algorithms [6] (e.g., genetic algorithms and evolution strategies), swarm intelligence [7] (e.g., particle swarm optimization and ant colony optimization), physics- and chemistry-inspired approaches [8] (e.g., simulated annealing and electromagnetic-like mechanisms), and ecology-/epidemiology-inspired dynamics. Properly designed, nature-inspired algorithms offer intuitive metaphors for information flow, local and global guidance, and population diversity management. At the same time, rigorous algorithmic design—clear operator definitions, principled parameterization, computational complexity analysis, and transparent benchmarking—remains crucial to ensure that metaphors translate into genuine search advantages.
Despite substantial progress, longstanding challenges persist: balancing exploration and exploitation in rugged landscapes; preventing stagnation and loss of diversity; scaling to higher dimensions; handling constraints efficiently; and maintaining performance in noisy or dynamically changing objectives [9]. Many practical problems also exhibit funneling structures and clustered basins, suggesting potential gains from multi-scale search and state-dependent step adaptation. These observations motivate the development of a new, hydrological-cycle-inspired metaheuristic that operationalizes evaporation, condensation, cloud drift, and precipitation as coordinated search mechanisms.
This work introduces the Rain-Cloud Condensation Optimizer (RCCO) [10], a novel nature-inspired metaheuristic grounded in the microphysics of cloud formation and rainfall. RCCO models candidate solutions as moisture parcels (droplets) that evolve through phases analogous to evaporation, condensation, coalescence, cloud drift, and precipitation. Each phase is mapped to a specific search function: initialization and large-step diversification, elite-guided attraction with adaptive step size, information mixing via pairwise differentials, directional bias at the population level, and restart-like reinjection near promising regions. A state variable—supersaturation—regulates the transition between phases, providing an adaptive, problem-agnostic mechanism to balance exploration and exploitation.
The key contribution of this work is a hydrology-inspired optimizer that was deliberately designed to reduce tuning burden while addressing common pathologies of population search. By tying all search pressures to simple schedules that decay linearly in $t$, RCCO preserves adaptivity with only a minimal hyperparameter set ($N$, $T$, and two operator probabilities), thereby lowering the barrier to practical deployment across heterogeneous tasks where problem-specific calibration is costly or impractical. We demonstrate RCCO's effectiveness on the CEC2022 benchmarks (unimodal, multi-modal, hybrid, and composite), including scalable, high-dimensional cases.
2. Related Work
The past decade has witnessed the introduction of numerous metaheuristic optimizers that draw inspiration from natural, biological, and mathematical processes [11]. Such algorithms have been designed to cope with the nonconvex and multi-modal nature of modern engineering problems, often emulating cooperative behavior in animal groups or abstracting the physics of complex phenomena [12]. In one of the earliest examples from our sample, the Optics-Inspired Optimization algorithm (OIO) treats the search space as a mirror whose peaks and valleys are reflected to construct a concave counterpart, thereby guiding candidate solutions towards promising regions [13]. Later work adapted concepts from bird navigation: the high-level target navigation pigeon-inspired optimization (HTNPIO) uses strategies such as selective mutation and enhanced landmark search to speed convergence and escape local minima [14]. Similar ideas were used in the tree seed optimization technique, where the dispersal behavior of seeds determines how features are selected for support vector machines to detect malicious Android data [15].
Many optimizers explicitly mimic the survival strategies of animals. The gazelle optimization algorithm models the escape and pursuit phases of gazelles confronted with predators; by alternating between exploration and exploitation, it shows competitive performance on benchmark functions [16]. A complementary example is the nutcracker optimizer, which translates the mechanism by which nutcrackers crack shells into a rescheduling method for congestion management in power systems [17]. Another bio-inspired design uses the proliferation of cancer cells: the Liver Cancer Algorithm (LCA) simulates tumor growth and takeover processes to balance local and global searches [18]. Even microorganisms have inspired algorithms: the coronavirus metaheuristic algorithm (CMOA) uses metabolic transformation under various conditions to model candidate interactions and preserves diversity to avoid premature convergence [19].
Predatory behavior continues to motivate new search schemes. The migration search algorithm imitates the leader–follower dynamics within animal groups and divides the population into leaders, followers, and joiners to enhance information dissemination [20]. The bacterial foraging optimization algorithm has been adapted to optimize cantilever beam parameters by emulating the chemotactic search patterns of bacteria [21]. Equally intriguing are the predator–prey models inspired by marine life. The orca predation algorithm assigns different weights to driving, encircling, and attacking phases in order to balance exploration and exploitation [22]. The Humboldt squid optimization algorithm uses attacks and fish-school escape patterns to orchestrate cooperation of subpopulations [23]. The Walrus optimizer adapts social signalling behaviors in walrus colonies to tune the trade-off between intensification and diversification [24].
Other animal-inspired approaches include the boosted African vulture optimization algorithm, which incorporates opposition-based learning and dynamic chaotic scaling [25]. The gooseneck barnacle optimization algorithm abstracts the hermaphroditic mating cycle of barnacles [26]. The Hunter algorithm models cooperative hunting to localize multiple tumours in biomedical imaging [27]. The squirrel search algorithm imitates the gliding behavior of squirrels to perform dynamic foraging [28].
Swarm-intelligence algorithms take inspiration from collective motion. The jellyfish search optimizer uses ocean current patterns and food attraction to steer solutions [29]. The tiki-taka algorithm models passing and positioning in football games to maintain ball possession and explore the search space [30]. Social interaction is also modeled in the membrane-inspired multiverse optimizer, where each membrane evolves in its own subpopulation and shares best solutions with others [31]. The leader–follower particle swarm optimizer (LFPSO) divides particles into leaders and followers to maintain diversity [32]. The modified krill herd algorithm integrates Lévy flight and crossover operators for economic dispatch problems [33].
The breadth of novel optimizers introduced in recent years underscores the vibrant state of metaheuristic research. Designers have drawn inspiration from optics and acoustics, animals and microorganisms, sports and sociocultural processes, epidemiological models, and chaotic maps. The common thread among these methods is the pursuit of a balance between exploration and exploitation through dynamic behavioral rules or hybrid combinations. Many algorithms demonstrate superior performance on benchmark problems and real-world applications, highlighting the potential of nature-inspired computation for tackling complex optimization tasks.
Beyond the single-paradigm metaheuristics surveyed above, a substantial body of work pursues hybridization, either at the algorithm level (coupling two full optimizers) or at the operator level (importing targeted mechanisms). For example, Akopov proposes a matrix-based hybrid genetic algorithm (MBHGA) tailored to agent-based models of controlled trade, demonstrating that an encoding aligned with model structure can materially improve search efficiency [34]. A complementary line is RCGA–PSO hybrids: a real-coded GA interleaved with particle-swarm updates (and, in some instances, surrogate modeling) to combine GA's recombination with PSO's fast social drift [35]. Earlier hybrids also embed local ACO procedures inside GA to intensify search around promising alignments—e.g., the classical GA-ACO for multiple sequence alignment—illustrating the long-standing value of coupling global recombination with pheromone-guided refinement [36].
Recent work continues to explore hybridization at scale. Learning-to-rank-driven automated hybrid design composes behaviors from multiple bases (e.g., WOA, HHO, GA) on the fly [37]; deep-RL-enhanced variants of WOA address resource scheduling in industrial operating systems by guiding WOA's move selection with value estimates [38]; improved WOA variants introduce dynamic gain sharing and other schedules to tighten convergence without sacrificing global reach [39]. Parallel developments in new swarm designs and bio-inspired mechanisms (e.g., horned-lizard optimization) further enrich the design space that hybrids can draw from [40]. To sharpen the comparison, we classify representative algorithms by the mechanism used to balance exploration and exploitation and map each mechanism to its analog in RCCO.
Relative to algorithm-level hybrids such as MBHGA or RCGA–PSO, RCCO is an operator-level hybrid with a hydrology metaphor that fuses leader/core attraction (PSO-like), axis-aligned band sampling (GA/DE), opposition learning, and heavy-tailed jumps under a single, linearly decaying schedule, yielding few hyperparameters while preserving diversification. In line with recent taxonomies emphasizing mechanism-level reporting, principled parameter control, and standardized benchmarks/constraint handling [41,42], RCCO targets five gaps: low-overhead adaptivity (simple time decays), late-stage stability (rank-weighted cloud core), escape from funnels (rare Cauchy gusts with decay), transparent constraint handling (wrap-and-reflect with penalties), and comparability (CEC-style suites and classical constrained designs).
3. Rain-Cloud Condensation Optimizer (RCCO)
Inspiration
RCCO draws an analogy between the optimization process and the life cycle of droplets inside a rain cloud. In the atmosphere, moist air rises and condenses on aerosols, forming droplets that are carried by updrafts toward a dense cloud core; entrainment mixes ambient air, while collisions cause coalescence, and sporadic turbulence injects energetic gusts. The algorithm mirrors these mechanisms in the search space: candidate solutions are droplets, the incumbent best acts as a leader, a rank-weighted average of the top fraction forms the cloud core, and stochastic perturbations act as thermal noise, entrainment, and turbulence.
Figure 1 refines this metaphor with a stylized schematic. Panel (a) shows droplets advected by buoyancy and shear toward the leader and the core within a softly shaded cloud. Panel (b) depicts domain-aware coalescence via axis-aligned band sampling between two parent droplets, an entrainment mirror across the domain center, and rare heavy-tailed gusts that propel droplets across distant regions. This visual framing motivates the mathematical operators introduced in the next section.
RCCO also seeds its initial population using a sine map, a chaotic map related to the tent map. Starting from a random point $x_0 \in (0, 1)$, the sine map iteratively applies $x_{k+1} = \frac{a}{4}\sin(\pi x_k)$; for chaotic parameter values, it yields a sequence that visits the unit interval densely, similar to the tent map, which folds and stretches the interval to produce a chaotic sequence. Figure 2 plots the sine map function and the orbit of one seed point. The sine map ensures that initial droplets cover the domain evenly, improving diversity during early search.
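To make the seeding concrete, a minimal NumPy sketch is given below; the map parameter $a$, the seed $x_0$, and the burn-in length are illustrative assumptions, since the text fixes only the map family.

```python
import numpy as np

def sine_map_init(pop_size, dim, lower, upper, a=4.0, x0=0.7, burn_in=50):
    """Seed the population with a chaotic sine-map orbit rescaled to the box.
    The parameter a, the seed x0, and the burn-in length are illustrative."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    x = x0
    for _ in range(burn_in):                      # discard the orbit's transient
        x = (a / 4.0) * np.sin(np.pi * x)
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        for d in range(dim):
            x = (a / 4.0) * np.sin(np.pi * x)     # next chaotic iterate in (0, 1)
            pop[i, d] = lower[d] + x * (upper[d] - lower[d])
    return pop

droplets = sine_map_init(30, 10, np.full(10, -100.0), np.full(10, 100.0))
```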
Finally, RCCO occasionally adds heavy-tailed turbulence bursts using a Cauchy distribution. The Cauchy distribution has undefined mean and variance and exhibits much heavier tails than a Gaussian distribution [10].
Figure 3 compares the probability density functions of a standard normal and a Cauchy distribution. The heavy tails increase the probability of large jumps, enabling the optimizer to escape local optima.
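The tail difference is easy to verify numerically using the tangent transform employed later in Section 4; the snippet below is a quick check, not part of the optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
cauchy = np.tan(np.pi * (u - 0.5))   # standard Cauchy via the tangent transform
gauss = rng.standard_normal(100_000)

# Large deviations are orders of magnitude more likely under the Cauchy law:
print(np.mean(np.abs(cauchy) > 5.0))  # ~0.126 (exact: 1 - (2/pi) arctan 5)
print(np.mean(np.abs(gauss) > 5.0))   # ~6e-7 for a standard normal
```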
4. Mathematical Model
This section formalizes the update rules underlying RCCO. Let $N$ denote the population size, $T$ the number of iterations, and $n$ the dimensionality of the search space. Each candidate solution is a droplet $\mathbf{x}_i \in [\mathbf{l}, \mathbf{u}] \subset \mathbb{R}^n$ with fitness $f(\mathbf{x}_i)$. Lower fitness values correspond to better solutions.
4.1. Condensation: Updraft to the Leader and Core
At the start of iteration $t$, the droplets are sorted in ascending order of fitness. The best droplet becomes the leader $\mathbf{g}_t$, and the top $k$ droplets (by default, the top fifth of the population) form the cloud core. Weighted summation produces the core point

$$\mathbf{c}_t = \sum_{j=1}^{k} w_j\,\mathbf{x}_{(j)}, \qquad w_j = \frac{2(k - j + 1)}{k(k + 1)},$$

where $\mathbf{x}_{(j)}$ indicates the $j$th best droplet and $w_j$ assigns greater weight to higher-ranked droplets. The core acts as a secondary attractor besides the leader.

Each droplet $\mathbf{x}_i$ is then perturbed toward both the leader and the core. The buoyancy and shear coefficients decay over time to shift the algorithm from exploration to exploitation:

$$b_t = b_0\left(1 - \frac{t}{T}\right) r_1, \qquad s_t = s_0\left(1 - \frac{t}{T}\right) r_2,$$

where $r_1, r_2 \sim \mathcal{U}(0, 1)$ introduce randomness. The thermal variance vector scales with the search range and decays as

$$\boldsymbol{\sigma}_t = \sigma_0\left(1 - \frac{t}{T}\right)\left(\mathbf{u} - \mathbf{l}\right),$$

where $\mathbf{l}$ and $\mathbf{u}$ are lower and upper bounds. For droplet $i$, the condensation update is

$$\mathbf{y}_i = \mathbf{x}_i + b_t\left(\mathbf{g}_t - \mathbf{x}_i\right) + s_t\left(\mathbf{c}_t - \mathbf{x}_i\right) + \boldsymbol{\sigma}_t \odot \boldsymbol{\varepsilon}_i,$$

where $\boldsymbol{\varepsilon}_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ is thermal noise. Before acceptance, the candidate is wrapped within bounds using a wrap-and-reflect operator $\Pi_{[\mathbf{l},\mathbf{u}]}(\cdot)$ to model recirculation at cloud edges. If the new fitness improves upon the droplet's current fitness, the update is accepted.
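A compact sketch of this condensation sweep follows; the base coefficients b0, s0, and sigma0 are assumed placeholder values (the text prescribes the linear decay, so the constants themselves are illustrative).

```python
import numpy as np

def wrap_reflect(x, lo, hi):
    """Fold out-of-bound coordinates back into [lo, hi] (reflecting walls)."""
    span = hi - lo
    y = np.mod(x - lo, 2.0 * span)
    return lo + np.where(y > span, 2.0 * span - y, y)

def condensation_step(X, fit, f, lo, hi, t, T, k=5,
                      b0=1.5, s0=1.0, sigma0=0.1, rng=None):
    """One condensation sweep: a sketch under assumed constants b0, s0, sigma0."""
    rng = rng or np.random.default_rng()
    order = np.argsort(fit)                       # ascending fitness
    leader = X[order[0]]
    w = np.arange(k, 0, -1, dtype=float)          # linear rank weights
    w /= w.sum()
    core = w @ X[order[:k]]                       # rank-weighted cloud core
    decay = 1.0 - t / T
    sigma = sigma0 * decay * (hi - lo)            # annealed thermal scale
    for i in range(len(X)):
        b = b0 * decay * rng.uniform()            # buoyancy toward the leader
        s = s0 * decay * rng.uniform()            # shear toward the core
        cand = X[i] + b * (leader - X[i]) + s * (core - X[i]) \
                    + sigma * rng.standard_normal(X.shape[1])
        cand = wrap_reflect(cand, lo, hi)         # recirculation at cloud edges
        fc = f(cand)
        if fc < fit[i]:                           # greedy acceptance
            X[i], fit[i] = cand, fc
    return X, fit
```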
4.2. Coalescence, Entrainment, and Turbulence
After condensation, each droplet undergoes an exploration phase. Two distinct parents $\mathbf{x}_a$ and $\mathbf{x}_b$ are selected uniformly at random. A trial point is sampled from the coalescence band defined by the component-wise minimum and maximum of the parents:

$$\mathbf{y} = \mathbf{m} + \mathbf{r} \odot (\mathbf{M} - \mathbf{m}),$$

where $\mathbf{m} = \min(\mathbf{x}_a, \mathbf{x}_b)$, $\mathbf{M} = \max(\mathbf{x}_a, \mathbf{x}_b)$, and $\mathbf{r} \sim \mathcal{U}(0, 1)^n$ is a uniform random vector. With probability $p_m$, an entrainment mirror is applied to model the ingestion of external air into the cloud:

$$\mathbf{y}' = \mathbf{l} + \mathbf{u} - \mathbf{y},$$

and the mirrored point is used if its fitness is superior. With probability $p_g$, a turbulence gust injects heavy-tailed noise drawn from a Cauchy distribution:

$$\mathbf{y} \leftarrow \mathbf{y} + \gamma_t\,(\mathbf{u} - \mathbf{l}) \odot \tan\!\left(\pi\left(\mathbf{r}' - \tfrac{1}{2}\right)\right),$$

where $\mathbf{r}' \sim \mathcal{U}(0, 1)^n$ component-wise. The tangent transform generates standard Cauchy variates, and the gust amplitude $\gamma_t$ decays over iterations. The wrap-and-reflect operator ensures the candidate remains in bounds. The update is accepted if it yields a lower fitness.
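The exploration phase admits an equally short sketch; the operator probabilities p_mirror and p_gust and the gust amplitude gamma0 are assumptions, and wrap_reflect is the helper from the condensation sketch above.

```python
import numpy as np

def exploration_step(X, fit, f, lo, hi, t, T, p_mirror=0.3, p_gust=0.1,
                     gamma0=0.1, rng=None):
    """Coalescence band sampling with entrainment mirror and Cauchy gusts."""
    rng = rng or np.random.default_rng()
    N, n = X.shape
    for i in range(N):
        a, b = rng.choice(N, size=2, replace=False)   # two distinct parents
        m, M = np.minimum(X[a], X[b]), np.maximum(X[a], X[b])
        y = m + rng.uniform(size=n) * (M - m)         # coalescence band sample
        if rng.uniform() < p_mirror:
            y_mir = lo + hi - y                       # entrainment mirror
            if f(y_mir) < f(y):
                y = y_mir
        if rng.uniform() < p_gust:
            gamma = gamma0 * (1.0 - t / T)            # annealed gust amplitude
            cauchy = np.tan(np.pi * (rng.uniform(size=n) - 0.5))
            y = y + gamma * (hi - lo) * cauchy        # heavy-tailed turbulence
        y = wrap_reflect(y, lo, hi)
        fy = f(y)
        if fy < fit[i]:                               # greedy acceptance
            X[i], fit[i] = y, fy
    return X, fit
```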
To overcome the limitation of purely time-decaying coefficients in Equations (4)–(27), we optionally adapt the buoyancy and thermal scales using feedback from the search state. Let the normalized population diversity be

$$D_t = \frac{1}{n}\sum_{d=1}^{n} \frac{\operatorname{std}_i\!\left(x_{i,d}\right)}{u_d - l_d},$$

and define a short-horizon improvement rate over a window $h$ as

$$I_t = \frac{f^{\star}_{t-h} - f^{\star}_{t}}{\left|f^{\star}_{t-h}\right| + \epsilon},$$

where $f^{\star}_t$ is the best fitness found up to iteration $t$. We then form adaptive coefficients

$$\tilde{b}_t = \operatorname{clip}\!\big(b_t\,(1 + g_b I_t)\big), \qquad \tilde{\boldsymbol{\sigma}}_t = \operatorname{clip}\!\big(\boldsymbol{\sigma}_t\,(1 + g_\sigma (D^{\star} - D_t))\big),$$

and use $\tilde{b}_t$ and $\tilde{\boldsymbol{\sigma}}_t$ in place of $b_t$ and $\boldsymbol{\sigma}_t$ inside Equation (28). Here, $\operatorname{clip}(\cdot)$ enforces lower and upper bounds on each coefficient; a short window $h$, a moderate diversity target $D^{\star}$, and small gains $g_b, g_\sigma$ work well in practice. Intuitively, when diversity shrinks or progress stalls, the thermal scale increases (more exploration) and the leader pull is restrained; when progress is strong, the leader pull is reinforced. This state-aware control parallels DRL-guided parameter tuning reported in the metaheuristics literature [38].
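One possible realization of this feedback rule is sketched below; the window h, diversity target, stall threshold, and gains are assumptions.

```python
import numpy as np

def adaptive_gains(X, lo, hi, best_hist, h=10, D_target=0.05, I_min=1e-3,
                   g_b=0.5, g_sigma=0.5, eps=1e-12):
    """Multipliers for the buoyancy and thermal scales from search-state
    feedback; all thresholds and gains here are assumed values."""
    D = np.mean(np.std(X, axis=0) / (hi - lo))      # normalized diversity
    if len(best_hist) > h:                           # short-horizon improvement
        I = (best_hist[-h - 1] - best_hist[-1]) / (abs(best_hist[-h - 1]) + eps)
    else:
        I = 1.0                                      # treat early phase as progressing
    stalled = (I < I_min) or (D < D_target)
    if stalled:
        # diversity collapsed or progress stalled: explore more, pull less
        b_mult = 1.0 - g_b
        s_mult = 1.0 + g_sigma
    else:
        # healthy progress: reinforce the pull, keep noise nominal
        b_mult = min(1.0 + g_b * I, 2.0)
        s_mult = 1.0
    return b_mult, s_mult
```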
We also expose the secondary operators—band sampling, mirroring, and gusts—as a portfolio $\mathcal{O} = \{\text{band}, \text{mirror}, \text{gust}\}$ with selection probabilities $\pi_o$ for $o \in \mathcal{O}$. After each iteration, we update a smoothed reward and the selection distribution:

$$R_o \leftarrow (1 - \alpha) R_o + \alpha\, r_o, \qquad \pi_o = \frac{\exp(R_o / \tau)}{\sum_{o' \in \mathcal{O}} \exp(R_{o'} / \tau)},$$

where $r_o$ is the normalized improvement credited to operator $o$ (zero if no improvement), and $\alpha$, $\tau$ are smoothing/temperature hyperparameters. This adaptive operator selection biases RCCO toward operators that are empirically effective on the current landscape, aligning with hybrid scheduling and learning-to-rank composition strategies [37]. Both adaptive modules are optional; disabling them recovers the original fixed linear schedules for full reproducibility.
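The portfolio update amounts to a few lines of bookkeeping; the smoothing factor alpha and temperature tau below are assumed values.

```python
import numpy as np

class OperatorPortfolio:
    """EWMA-reward softmax selection over {band, mirror, gust}: a sketch."""
    def __init__(self, ops=("band", "mirror", "gust"), alpha=0.2, tau=0.5):
        self.ops = list(ops)
        self.alpha, self.tau = alpha, tau
        self.R = np.zeros(len(self.ops))             # smoothed rewards

    def probs(self):
        z = np.exp(self.R / self.tau)                # softmax over rewards
        return z / z.sum()

    def sample(self, rng):
        return rng.choice(len(self.ops), p=self.probs())

    def update(self, op_index, improvement):
        # credit normalized improvement (zero if the move did not improve)
        r = max(0.0, improvement)
        self.R[op_index] = (1 - self.alpha) * self.R[op_index] + self.alpha * r
```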
5. Pseudocode
Algorithm 1 summarizes the RCCO procedure. It uses the equations defined above to update each droplet. Condensation (lines 7–14) uses the leader and core attraction defined in Equation (28). Coalescence and entrainment (lines 18–29) draw new candidates from the band (Equation (6)) and optionally apply mirroring (Equation (7)). Turbulence bursts (line 23) follow Equation (8). After each update, the candidate is wrapped into the domain. The iteration best value is recorded in the curve RainCurve.
Algorithm 1 Rain-Cloud Condensation Optimizer (RCCO)
Require: population size $N$, iterations $T$, bounds $[\mathbf{l}, \mathbf{u}]$, dimension $n$, objective $f$
Ensure: best rain rate $f^{\star}$, best droplet $\mathbf{x}^{\star}$, convergence curve RainCurve
1: Initialize $N$ droplets via sine-map seeding
2: Evaluate fitness and identify $\mathbf{x}^{\star}$
3: for $t \leftarrow 1$ to $T$ do
4:   Sort droplets by $f(\cdot)$ and recompute $\mathbf{g}_t$ ▹ Condensation
5:   Compute core point $\mathbf{c}_t$ using Equation (26)
6:   for $i \leftarrow 1$ to $N$ do
7:     Compute coefficients $b_t$, $s_t$, and $\boldsymbol{\sigma}_t$ from Equations (4)–(27)
8:     Generate thermal noise $\boldsymbol{\varepsilon}_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$
9:     Compute candidate $\mathbf{y}_i$ from Equation (28)
10:    $\mathbf{y}_i \leftarrow \Pi_{[\mathbf{l},\mathbf{u}]}(\mathbf{y}_i)$ ▹ wrap–reflect bounds
11:    Evaluate $f(\mathbf{y}_i)$
12:    if $f(\mathbf{y}_i) < f(\mathbf{x}_i)$ then
13:      $\mathbf{x}_i \leftarrow \mathbf{y}_i$; $f(\mathbf{x}_i) \leftarrow f(\mathbf{y}_i)$; update $\mathbf{x}^{\star}$ if needed
14:    end if
15:  end for
16:  for $i \leftarrow 1$ to $N$ do ▹ Coalescence, entrainment, and turbulence
17:    Select parents $\mathbf{x}_a \ne \mathbf{x}_b$ uniformly at random
18:    Sample $\mathbf{y}_i$ using Equation (6)
19:    if rand $< p_m$ then
20:      Apply mirror as in Equation (7)
21:    end if
22:    if rand $< p_g$ then
23:      Add gust (Equation (8))
24:    end if
25:    $\mathbf{y}_i \leftarrow \Pi_{[\mathbf{l},\mathbf{u}]}(\mathbf{y}_i)$
26:    Evaluate $f(\mathbf{y}_i)$
27:    if $f(\mathbf{y}_i) < f(\mathbf{x}_i)$ then
28:      $\mathbf{x}_i \leftarrow \mathbf{y}_i$; $f(\mathbf{x}_i) \leftarrow f(\mathbf{y}_i)$; update $\mathbf{x}^{\star}$ if needed
29:    end if
30:  end for
31:  RainCurve$(t) \leftarrow f(\mathbf{x}^{\star})$
32: end for
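For orientation, the sketches from Section 4 compose into a minimal driver mirroring Algorithm 1; all constants remain illustrative assumptions.

```python
import numpy as np

def rcco(f, lo, hi, N=30, T=500, k=5, seed=0):
    """Minimal driver reusing sine_map_init, condensation_step, and
    exploration_step from the sketches above."""
    rng = np.random.default_rng(seed)
    X = sine_map_init(N, len(lo), lo, hi)
    fit = np.array([f(x) for x in X])
    rain_curve = []
    for t in range(T):
        X, fit = condensation_step(X, fit, f, lo, hi, t, T, k=k, rng=rng)
        X, fit = exploration_step(X, fit, f, lo, hi, t, T, rng=rng)
        rain_curve.append(float(fit.min()))      # RainCurve(t)
    i = int(np.argmin(fit))
    return X[i], fit[i], rain_curve

# Example run on a 10-D sphere function:
best_x, best_f, curve = rcco(lambda x: float(np.sum(x * x)),
                             lo=np.full(10, -5.0), hi=np.full(10, 5.0))
```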
To strengthen the theoretical basis for the RCCO design given in the algorithm, we note that the condensation update in Equation (28), together with the time-decaying stochastic terms from Equations (4)–(27), can be interpreted as a stochastic contraction toward a moving attractor formed by the current leader and the rank-weighted cloud core. As the buoyancy, shear, and thermal scales decay in time, the effective contraction rate increases while the noise variance decreases, which provides a principled explanation for RCCO's stable late-stage exploitation and smooth convergence profiles.
The rank-weighted cloud core defined in Equation (26) is the minimizer of a weighted least-squares potential over the population. Because it aggregates information from the top-ranked droplets with explicit weights, its sampling variance is smaller than that of any individual point near the same basin. Pulling droplets simultaneously toward the leader and this low-variance core reduces estimator noise in the target direction and damps oscillations, thereby accelerating practical descent and improving robustness on narrow basins.
The sine-map initialization in the algorithm yields an ergodic, low-autocorrelation sequence which, after affine rescaling to $[\mathbf{l}, \mathbf{u}]$, improves space-filling and reduces the initial covering radius of the population. This "chaos-aided" seeding increases the probability that at least one droplet starts close to a high-quality basin, a well-documented benefit in nature-inspired optimization [8].
RCCO's exploration operators supply complementary mechanisms. Band sampling in Equation (6) performs geometry-aware recombination: within locally convex basins, segments between good parents stay inside the basin and advance exploitation; along rugged fronts, the same segments probe informative directions between incumbents. The mirror move in Equation (7) implements opposition-based learning in bounded domains and has been shown to increase improvement probability by testing complementary locations [43]. The Cauchy gusts in Equation (8) introduce heavy-tailed jumps for basin escape, with an amplitude that is explicitly annealed over iterations so that global exploration gradually fades into local refinement.
Taken together with wrap–reflect feasibility handling as specified in the algorithm, these components define a time-inhomogeneous Markov process that is broadly exploratory early and progressively contractive later. This view provides methodological justification—beyond metaphor—for why the weighted cloud core, sine-map seeding, band sampling, opposition-based mirroring, and annealed heavy-tailed gusts contribute to RCCO's strong performance on CEC-style landscapes.
6. Movement Strategy
The movement strategy of RCCO combines attraction to the leader and core with random perturbations.
Figure 4 visualizes how a droplet at position $\mathbf{x}_i$ (blue) is pulled toward the leader $\mathbf{g}_t$ (red) and the core $\mathbf{c}_t$ (orange). The vectors representing buoyancy and shear from Equation (28) are shown by arrows of diminishing length as iterations progress. Thermal noise sampled from a normal distribution adds jitter to prevent premature convergence.
7. Exploration and Exploitation Behavior
RCCO balances exploration and exploitation through its two-stage update. Condensation focuses exploitation: droplets move toward the leader and weighted core, and the magnitude of the buoyancy and shear coefficients decreases over time, concentrating search near promising regions. Coalescence and entrainment drive exploration: sampling along the band (Equation (6)) generates diverse combinations of parent solutions, mirroring across the domain (Equation (7)) introduces opposition-based learning [43], and turbulence bursts (Equation (8)) occasionally produce large leaps due to the heavy-tailed Cauchy distribution.
Figure 5 illustrates the contrast between the exploitation phase (left) and the exploration phase (right). During exploitation, points cluster around the leader and core (dense cloud of green points). In exploration, sampling between parents, mirror operations, and gusts scatter points widely (blue markers), enabling the algorithm to escape local minima. Over iterations, the algorithm gradually reduces exploration intensity by decaying the coefficients in Equations (4)–(27) and the weighting of turbulence bursts.
To justify the main design choices, note that the condensation step (Equation (28)) can be rewritten as $\mathbf{y}_i = (1 - \beta_t)\,\mathbf{x}_i + \beta_t \mathbf{a}_t + \boldsymbol{\sigma}_t \odot \boldsymbol{\varepsilon}_i$, where $\beta_t = b_t + s_t$ and $\mathbf{a}_t = (b_t \mathbf{g}_t + s_t \mathbf{c}_t)/\beta_t$. Under the time-decaying noise of Equations (4)–(27), this induces a stochastic contraction toward the moving attractor $\mathbf{a}_t$ in the sense that $\mathbb{E}\,\lVert \mathbf{y}_i - \mathbf{a}_t \rVert^2 = (1 - \beta_t)^2 \lVert \mathbf{x}_i - \mathbf{a}_t \rVert^2 + \lVert \boldsymbol{\sigma}_t \rVert^2$, yielding stable late-stage exploitation. The weighted cloud core $\mathbf{c}_t$ (Equation (26)) is the minimizer of $\sum_{j=1}^{k} w_j \lVert \mathbf{z} - \mathbf{x}_{(j)} \rVert^2$ with rank-based weights $w_j$, hence a low-variance estimator of the location of the promising region; if the top-$k$ points around $\mathbf{c}_t$ have covariance $\boldsymbol{\Sigma}$, then $\operatorname{Cov}(\mathbf{c}_t) = \big(\sum_{j} w_j^2\big)\boldsymbol{\Sigma}$, which is smaller than the variance of any single droplet. Combining $\mathbf{g}_t$ with $\mathbf{c}_t$ therefore reduces estimator variance for the attractor $\mathbf{a}_t$, dampens oscillations, and accelerates descent. Chaotic sine-map seeding produces an ergodic, low-autocorrelation set after affine rescaling to $[\mathbf{l}, \mathbf{u}]$, improving space filling and reducing the initial covering radius so that at least one droplet is more likely to fall inside a good basin (consistent with chaos-aided metaheuristics [8]). Band sampling (Equation (6)) provides geometry-aware recombination: in locally convex basins, convex combinations of parents remain near the basin; in rugged zones, segments between historically good points probe informative directions. The mirror move (Equation (7)) implements opposition-based learning, which in bounded domains increases the probability of improvement by testing the complementary location of a candidate [43]. Finally, Cauchy gusts (Equation (8)) introduce a heavy-tailed chance of long jumps that helps escape deep funnels, while the amplitude factor $\gamma_t$ anneals exploration so the dynamics become increasingly contractive. Together with wrap–reflect bounds, these components define a feasible, time-inhomogeneous Markov process that is exploratory early and progressively exploitative, providing a principled mechanism—beyond the guiding metaphor—by which RCCO balances global search with stable convergence.
8. Complexity Analysis
The computational complexity of RCCO can be derived by analyzing the cost per iteration. Let $C_f$ denote the cost of evaluating the objective function. During condensation, each of the $N$ droplets computes coefficients and noise (constant cost per dimension) and evaluates the objective once if the update is accepted. The complexity of condensation is therefore $O\!\left(N(n + C_f)\right)$. During coalescence and entrainment, each droplet samples two parents and performs up to two additional objective evaluations (for the mirrored and gusted candidates). Thus, the exploration phase has complexity $O\!\left(N(n + C_f)\right)$ as well. Since both phases run in each iteration, the total time complexity over $T$ iterations is $O\!\left(NT(n + C_f)\right)$. The algorithm stores $N$ droplets of dimension $n$ and their fitness values, leading to a memory complexity of $O(Nn)$. Additional temporary vectors such as $\boldsymbol{\sigma}_t$ and random variates have lower order and can be neglected. Thus, RCCO scales linearly with population size and dimensionality and is suitable for high-dimensional problems when $N$ is kept moderate.
Beyond this motivation, RCCO's strength lies in three complementary design choices that directly target premature convergence, boundary brittleness, and inefficient exploration. First, intensification is guided by a dual pull—toward both the leader and a rank-weighted cloud core of the top quintile—which stabilizes progress and reduces over-commitment to a single anchor. Second, domain-aware coalescence within the coordinate-wise band spanned by two droplets, augmented by a mirror test across the domain center, exploits discovered structure while probing complementary regions at constant cost. Third, rare, time-damped Cauchy gusts provide inexpensive long jumps that improve basin-escape probability without explicit restarts, and wrap-and-reflect boundary handling maintains dynamic feasibility near the walls.
Compared Algorithms
In this paper, we benchmarked a diverse suite of optimization algorithms to evaluate their performance, including the following: the Slime Mould Algorithm (SMA) adapts oscillatory foraging intensity to modulate direction and step size for stochastic search [44]; the Gradient-Based Optimizer (GBO) couples a gradient-inspired update with a local-escaping operator to balance intensification and diversification [45]; Sand Cat Swarm Optimization (SCSO) emulates sand cat hunting (digging/low-frequency sensing) to switch between global exploration and local exploitation [46]; the Whale Optimization Algorithm (WOA) models humpback bubble-net foraging via encircling and logarithmic-spiral moves to intensify around elites [47]; the Jellyfish Search Optimizer (JSO) alternates passive ocean-current drift (global) with active food attraction (local) [48]; the Leader–Follower Particle Swarm Optimizer (LFPSO) partitions particles into leaders and followers to enhance information sharing while preserving diversity [49]; the Artificial Protozoa Optimizer (APO) abstracts protozoa foraging and reproduction behaviors for continuous optimization [50]; Golden Jackal Optimization (GJO) uses pack encircling and attacking to intensify search around promising regions [51]; Moth–Flame Optimization (MFO) guides agents along logarithmic spirals toward "flames" (best solutions) to trade off exploration and exploitation [52]; Artificial Ecosystem-Based Optimization (AEO) coordinates producer/consumer/decomposer interactions as complementary move operators [53]; the Orca Predation Algorithm (OPA) implements cooperative driving, encircling, and attacking phases to balance search [54]; the Walrus Optimizer (WO) leverages colony social signalling to sustain diversity while refining elites [55]; the Sea-Horse Optimizer (SHO) derives movement rules from sea-horse predation and mating patterns to navigate complex landscapes [56]; Particle Swarm Optimization (PSO) simulates the social behavior of bird flocking by adjusting particle velocities toward both personal and global best positions [57]; the Genetic Algorithm (GA) evolves a population of candidate solutions through selection, crossover, and mutation, mimicking natural evolution [58]; the Real-Coded Genetic Algorithm (RCGA) encodes solutions as real-valued vectors and applies crossover and mutation directly on continuous variables [59]; and the Enhanced Horned Lizard Optimization Algorithm (EHLOA) augments the standard HLOA with strategies such as round initialization, escape operators, and burst attacks to escape local optima and handle high-dimensional problems effectively [30].
9. Assessment of the CEC2022 Benchmark Functions
Table 1 and
Table 2 present the comparative evaluation of RCCO against twenty-one state-of-the-art algorithms on the full CEC2022 benchmark suite. The results highlight the consistent superiority of RCCO across unimodal, multimodal, hybrid, and composition functions.
For the unimodal functions F1–F4, RCCO attains first rank in all cases, with the lowest mean values and error measures. On F1, it records the best mean with negligible variance, far outperforming SMA, SPBO, and WOA. Similarly, in F2–F4, RCCO remains the most accurate and stable, showing tight standard deviations compared to alternatives like SSOA and BOA that suffer from instability. These results demonstrate RCCO's decisive exploitation capability on simple landscapes.
On the multimodal and hybrid functions F5–F8, RCCO continues to perform strongly. It ranks first on F5 with the best mean and error, while many algorithms such as SPBO and OHO diverge. For the rugged F6, RCCO ranks second overall, maintaining a competitive mean and a variance far below the range of weaker methods. On F7, it again secures the top rank with stable convergence, and on F8, it remains competitive (rank 5), confirming robustness across challenging hybrid/composition landscapes.
Finally, in the composition functions F9–F12, RCCO consolidates its performance. It ranks first on F9 and F11, achieving the lowest mean values and minimal variance, significantly outperforming less stable algorithms such as ROA and SSOA. On F10, RCCO delivers mid-tier results (rank 5) yet still maintains better stability than high-error competitors. For F12, RCCO secures second place with a low mean and a variance almost identical to the top performer's. These outcomes underline RCCO's adaptability to complex landscapes that blend multiple functional properties.
In summary, RCCO demonstrates state-of-the-art performance across the CEC2022 suite. It consistently achieves first place on unimodal problems, dominates multimodal and hybrid functions with excellent accuracy and stability, and performs strongly on the most challenging composition functions. This confirms its effectiveness as a robust and reliable optimizer capable of balancing exploration and exploitation across diverse optimization landscapes.
As can be seen in Table 3, RCCO attains the best mean on 10 of the 12 problems (F1, F3–F9, F11–F12), finishes runner-up on F10, and places third on F2, yielding the strongest average rank ahead of PSO, EHLOA, RCGA, and GA. On the unimodal F1–F4, RCCO consistently dominates, with the best F1 mean and standard deviation among all compared methods, evidencing fast and stable exploitation. On the rugged F6, RCCO achieves the lowest error, improving over EHLOA and PSO while avoiding the catastrophic errors of RCGA and GA. RCCO is also conspicuously stable on composite cases; for example, on F9 it matches or slightly improves the best mean with a spread orders of magnitude tighter than PSO's, and on F12 it attains the lowest mean with the smallest variance. The only clear exceptions are F2, where EHLOA leads, and F10, where RCGA edges RCCO by a small margin. Overall, RCCO combines top accuracy with consistently low dispersion, outperforming PSO and the genetic baselines (RCGA/GA) and matching or surpassing the enhanced hybrid (EHLOA) on most functions.
10. Qualitative Assessment of the CEC2022 Benchmark Functions
For RCCO with populations $N \in \{20, 50, 100\}$, the search history panels show that, for all three population sizes, RCCO begins with a broad scatter of samples and then collapses rapidly into a compact cluster around the global basin. On the easier, nearly convex landscapes (F1, F3–F5, F7, F10–F12), the cluster forms close to the origin; on the more deceptive, multimodal surfaces (F6, F8–F9), several "islands" appear and RCCO concentrates on the best island. Increasing the population from 20 to 100 mainly densifies this cluster and smooths the spatial coverage, while the overall contraction pattern remains the same, as seen in Figure 6 and Figure 7.
In the trajectory plots, the parameter traces exhibit a short transient during the first few tens of iterations for all three population sizes, after which the motion becomes nearly flat, indicating early exploitation. On narrow or deceptive landscapes (notably F2 and F9), the trajectories display a handful of step-like jumps, reflecting deliberate relocations into better basins rather than random wandering. The length of the transient is similar across the three population sizes, with larger $N$ producing slightly smoother paths.
The average-fitness curves (log scale) fall steeply at the start for all three populations—often by several orders of magnitude—followed by a slower, steady decline. A salient feature across population sizes is that the population mean tracks the best-so-far closely, evidencing population-wide progress rather than improvement confined to a single elite. Larger populations make the mean curves smoother but do not change the overall descent pattern.
The convergence (best-so-far) curves continue to decrease after the initial drop and show additional step reductions when RCCO hops between basins, indicating resilience to premature stagnation. This behavior is consistent across $N \in \{20, 50, 100\}$, with larger $N$ yielding slightly finer late-stage refinements but similar final accuracy.
Problem-wise, RCCO converges quickly and monotonically on F1, F3–F5, F7, and F10–F12 for all three population sizes, with the cloud of samples tightening near the optimum and both average and best fitness decreasing smoothly. On F2, the algorithm undergoes a few pronounced relocations before settling, leading to a stepwise convergence profile that still attains low terminal error. The most challenging multimodal cases (F6, F8, F9) highlight RCCO's ability to escape local minima: the search history shows migration toward the most promising island, and the best-so-far curve keeps descending throughout the run. F11 exhibits a dramatic early reduction—several orders of magnitude—followed by steady refinement across population sizes. For F12, all three populations reach near-zero error almost immediately.
Across populations of 20, 50, and 100, RCCO consistently demonstrates fast basin identification, purposeful exploration when needed, and strong population-level improvement, with low risk of premature convergence and particularly robust performance on the deceptive multimodal functions (F6 and F11).
10.1. Box Plots Across All Benchmark Functions
Figure 8 and Figure 9 illustrate the box-plot analysis of the Rain-Cloud Condensation Optimizer (RCCO) across all twelve CEC2022 benchmark functions, considering population sizes of 20, 50, and 100. For unimodal functions (F1–F4), the distributions show tight clustering, with median values improving significantly as population size increases, confirming the optimizer's ability to exploit the search space efficiently. In contrast, multimodal cases (F5–F8) reveal broader spreads, yet the larger populations maintain superior median performance and reduce variability, which highlights the benefit of population diversity in escaping local optima. Finally, for hybrid and composition functions (F9–F12), RCCO demonstrates robust improvements when population size is increased to 100, with sharper reductions in the final objective values and narrower variability, underscoring the importance of balancing exploration and exploitation. Collectively, the box-plot summary in Figure 8 and Figure 9 validates RCCO's consistent scalability and adaptability across problem categories, with population size acting as a decisive factor for stability and convergence.
10.2. Convergence Bands Across the Benchmark Suite
Figure 10 presents the convergence bands of the RCCO optimizer across all twelve CEC2022 benchmark functions. Each plot illustrates the median best objective over iterations for different population sizes, with the shaded regions denoting the interquartile ranges, thereby capturing the variability of performance across runs.
On the unimodal functions (F1–F4), RCCO exhibits consistent and rapid declines in the best objective values, indicating effective exploitation and strong convergence properties. The narrow bands further emphasize robust stability, especially for larger population sizes. In the multimodal functions (F5–F8), the optimizer maintains competitive trajectories with reduced variability compared to smaller populations, demonstrating its capacity to navigate rugged landscapes while preserving diversity. The hybrid and composition functions (F9–F12) show a broader spread in the bands, reflecting higher problem complexity, yet RCCO consistently outperforms or matches competing strategies by achieving steady improvements and avoiding premature stagnation.
Overall, the convergence band analysis underscores RCCO's robustness in balancing exploration and exploitation. Its ability to sustain stable performance across unimodal, multimodal, hybrid, and compositional functions highlights its adaptability and resilience in tackling diverse optimization challenges.
10.3. Performance Profile for RCCO
Figure 11 presents the performance profile of the Rain-Cloud Condensation Optimizer (RCCO) across different population sizes on the CEC2022 benchmark suite. The horizontal axis $\tau$ denotes the performance factor relative to the best result, while the vertical axis represents the fraction of functions for which the optimizer achieves a solution within this factor. The results clearly highlight the benefits of larger population sizes. With a population of 100, RCCO consistently achieves near-complete coverage at very small $\tau$ values, underscoring its robustness and ability to maintain diversity. The medium population of 50 demonstrates a balanced trade-off between efficiency and accuracy, covering more functions effectively than the smallest population while using fewer resources than 100. In contrast, the population size of 20 lags behind, with significantly fewer functions solved near optimality, reflecting limited exploration capacity. Overall, these findings indicate that RCCO benefits strongly from larger populations, which improve global search capability and mitigate premature convergence.
10.4. Diagnostic Visualizations
Figure 12 presents the empirical cumulative distribution function (ECDF) of the final best objective values for different population sizes. The plot demonstrates that larger populations, in particular 100, consistently achieve lower objective values across a greater fraction of the benchmark functions. This leftward shift of the ECDF highlights improved reliability and robustness when a larger pool of candidate solutions is maintained.
Complementing this,
Figure 13 shows the distribution of final best objective values as a function of the benchmark index. Although population size does not drastically alter outcomes on every function, the scatter lines indicate clear benefits for multimodal and composition functions, where higher diversity prevents premature convergence. Notably, functions such as F5, F9, and F11 exhibit substantial improvements with population 100 compared to smaller populations. Together, these diagnostic views confirm that RCCO gains in both robustness and accuracy when operating with larger population sizes, particularly in more complex search landscapes.
11. Adaptive Strategies for RCCO
The adaptive RCCO monitors population diversity to reshape leadership guidance, tracks acceptance rate to regulate thermal noise, and couples entrainment mirroring and turbulence bursts to the observed effectiveness and stagnation state. A diversity-/stagnation-aware condensation step with buoyancy and shear performs exploitation while keeping time-decayed exploration. When progress stalls and diversity collapses, a cloudburst partially re-seeds poor solutions and briefly boosts exploration knobs. Throughout, we use $\lambda_t = 1 - t/T$ for time decay, $\odot$ for element-wise products, $\Pi_{[\mathbf{l},\mathbf{u}]}(\cdot)$ for wrap/reflect projection to the box $[\mathbf{l},\mathbf{u}]$, $\mathbf{s} = \mathbf{u} - \mathbf{l}$ for the span, and $\operatorname{clip}(\cdot)$ for bound enforcement on scalar controls.
Population diversity (monitors exploration pressure). We quantify the normalized spread across coordinates; Equation (15) drives leadership size/curvature updates and turbulence triggers.
Per-iteration improvement rate (feedback signal). We estimate the fraction of accepted proposals; Equation (16) is the raw acceptance used by the thermal controller.
Smoothed acceptance (robust control state). We maintain an EWMA of acceptance; Equation (17) stabilizes decisions against noise.
One-fifth-style thermal controller (step-size adaptation). When the acceptance rate exceeds the target $a^{\star}$, the thermal scale grows; otherwise, it shrinks (see Equation (18)).
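A minimal sketch of this controller in code form follows; the target acceptance $a^{\star} = 0.2$, growth factor, smoothing factor, and clipping bounds are assumptions standing in for the constants of Equations (16)–(18).

```python
def update_acceptance(acc_ewma, accepted, proposed, beta=0.1):
    """EWMA of the per-iteration acceptance rate (raw rate then smoothing)."""
    raw = accepted / max(proposed, 1)
    return (1.0 - beta) * acc_ewma + beta * raw

def thermal_controller(sigma, acc_ewma, a_target=0.2, eta=1.05,
                       sigma_min=1e-8, sigma_max=1.0):
    """One-fifth-success-style update of the thermal scale."""
    sigma = sigma * eta if acc_ewma > a_target else sigma / eta
    return min(max(sigma, sigma_min), sigma_max)   # clip to bounds
```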
Per-coordinate thermal variance (annealed noise floor). Equation (19) sets the Gaussian jitter used in condensation with a late-stage floor.
Mirror probability tracking (operator self-tuning). Equation (20) nudges the mirror probability toward its empirical acceptance rate (clipped).
Turbulence probability under stagnation (exploration on demand). Equation (21) increases the gust probability during stagnation/low diversity and decays it otherwise.
Gust scale adaptation (strength of heavy-tailed kicks). Equation (22) coordinates the gust magnitude with the same stagnation condition, where the stagnation counter counts consecutive no-improvement iterations.
Turbulence sampling (Cauchy bursts). Equation (23) injects rare, heavy-tailed moves scaled by the gust magnitude and the search span.
Leader fraction adaptation (breadth of guidance). Equation (24) widens the leader set when diversity is low and narrows it when high.
Rank-weight curvature adaptation (strength of elitism). Equation (25) sharpens weights when diversity is scarce and flattens them when abundant.
Cloud core (diversity-aware centroid of leaders). Equation (26) forms the core as a curvature-weighted rank mean over the best $k_t$ droplets, where $\mathbf{x}_{(j)}$ is the $j$-th best droplet.
Buoyancy and shear (time-/state-modulated pulls). Equation (27) strengthens the pulls toward the leader and core during stagnation and when diversity is low, with time decay $\lambda_t$.
Condensation update (guided Gaussian exploitation). Equation (28) moves each droplet toward the leader and core with Gaussian thermal noise and wraps to the box, where $\mathbf{g}_t$ is the current best droplet.
Cloudburst (soft restart under collapse). Equation (29) partially re-seeds the worst droplets with a uniform/local Gaussian mixture when the stagnation counter is large and diversity is small.
Post-burst nudges (temporary exploration boost). Equation (30) briefly increases the turbulence probability and thermal scale (with clipping to bounds).
As can be seen in Table 4, the adaptive strategy delivers selective gains rather than uniform improvements: it outperforms the baseline on 4/12 functions—F2, F6, F8, and F12—most notably on F6 and F2; F8 shows a negligible mean gain with slightly higher dispersion, and F12 yields a small mean gain with a sizable stability benefit. Conversely, the baseline dominates on 8/12 functions, with especially large variance inflation on F9 and F10, consistent with turbulence bursts overshooting on smoother landscapes; degradations on F1–F5 and F7 are modest in mean, with mixed standard deviations. Overall, the baseline retains the better average rank (lower is better), suggesting the adaptive mechanisms chiefly help rugged, high-variance problems (e.g., F6) and may warrant conservative gust caps or higher stagnation thresholds on smooth cases.
12. Engineering Design Problems
The complete mathematical formulations for these engineering design problems are given in Appendix A.
12.1. Cantilever Stepped Beam
The cantilever stepped beam design problem seeks optimal heights and widths for five beam segments such that the total volume of the beam is minimized while satisfying stress and deflection constraints. The beam consists of five segments of equal length and carries a load $P$ at the free end. The optimization variables are bounded between 1 and 5 cm for the heights and between 30 and 65 cm for the widths. The objective and constraints are given below.
Figure 14 depicts a five-segment cantilever with a concentrated load at the free end.
The optimization problem can be stated as follows:
Table 5 summarizes the optimization results for the cantilever stepped beam. The RCCO algorithm achieved the lowest mean volume (63,465) with a relatively small standard deviation (589), demonstrating robust performance. It outperformed the second-best optimizer (POA) by about 646 units (roughly a 1% improvement) and produced a significantly better objective value than the average of the remaining algorithms. The RCCO design variables lie in the middle of their bounds, indicating a balanced beam profile.
12.2. Pressure Vessel
The pressure-vessel design problem requires choosing the shell thickness $T_s$, head thickness $T_h$, inner radius $R$, and shell length $L$ such that the manufacturing cost of a cylindrical vessel with hemispherical heads is minimized. The steel plates available for the shell and heads come in discrete thickness increments. The objective function includes material, forming, and welding costs, subject to stress and manufacturing constraints on thicknesses and volume.
Figure 15 sketches a thin-walled pressure vessel with a cylindrical shell of length $L$, inner radius $R$, shell thickness $T_s$, and head thickness $T_h$.
The cost function and constraints follow the standard statement in [60].
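In the standard form of this classical benchmark (restated here for completeness; the paper's complete statement appears in Appendix A), the design vector is $x = (x_1, x_2, x_3, x_4) = (T_s, T_h, R, L)$ and the problem reads:

$$
\begin{aligned}
\min\;\; & f(x) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^{2} + 3.1661\,x_1^{2} x_4 + 19.84\,x_1^{2} x_3, \\
\text{s.t.}\;\; & g_1 = -x_1 + 0.0193\,x_3 \le 0, \qquad g_2 = -x_2 + 0.00954\,x_3 \le 0, \\
& g_3 = -\pi x_3^{2} x_4 - \tfrac{4}{3}\pi x_3^{3} + 1{,}296{,}000 \le 0, \qquad g_4 = x_4 - 240 \le 0,
\end{aligned}
$$

with $x_1$ and $x_2$ restricted to the available discrete plate thicknesses.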
Table 6 contains the numerical results. The RCCO optimizer delivered the lowest mean cost (about 6077) with the smallest standard deviation (82). Its cost is roughly 29 units cheaper than that of the next best algorithm (ChOA), a relative improvement of nearly 0.5%, and it significantly outperforms the average of the remaining methods. Moreover, the RCCO solution respects the discrete thickness increments and achieves a balanced vessel design in radius and length.
12.3. Planetary Gear Train
A planetary gear train consists of a ring gear with $R$ teeth, a sun gear with $S$ teeth, and $n_p$ identical planet gears with $P$ teeth each. The ring gear meshes internally with the planets, while the planets mesh externally with the sun gear. The design objective considered here is to match a target transmission ratio $i^{\star}$ by choosing integer values for $R$, $S$, $P$, and $n_p$ that satisfy meshing and spacing requirements. The gear ratio for a fixed ring gear is $i = 1 + R/S$, as described in Figure 16.
The meshing condition requires that $R$ equals the sum of the sun teeth and twice the planet teeth, $R = S + 2P$. Fixing the ring gear implies a transmission ratio $i = 1 + R/S$ [61]. To match a target ratio $i^{\star}$, we minimize the squared deviation $\left(i - i^{\star}\right)^{2}$ subject to $R = S + 2P$ and to the planet spacing requirement that $R + S$ is evenly divisible by the number of planets $n_p$ [61]. The variables $R$, $S$, $P$, and $n_p$ are constrained to positive integers within prescribed bounds.
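Collecting these conditions, the problem can be stated compactly as the integer program

$$\min_{R,\,S,\,P,\,n_p \,\in\, \mathbb{Z}_{>0}} \left(1 + \frac{R}{S} - i^{\star}\right)^{2} \quad \text{s.t.} \quad R = S + 2P, \qquad (R + S) \bmod n_p = 0,$$

with each variable restricted to its prescribed integer bounds.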
The optimization results in Table 7 show that the RCCO algorithm attained the smallest mean objective value, surpassing the next best algorithm (WOA) by a small margin. Although the differences between methods are relatively small, RCCO also exhibited a low standard deviation, indicating consistency. Its selected gear tooth counts and planet number yield a ratio closest to the target.
12.4. Ten-Bar Planar Truss
The ten-bar planar truss is a classical benchmark problem for weight minimization. Ten bars of known lengths $L_i$ are connected to form a plane truss subject to two load cases. The design variables are the cross-sectional areas $A_i$ ($i = 1, \dots, 10$) of each bar. The objective is to minimize the total weight $W$ while satisfying stress limits $|\sigma_i| \le \sigma_{\max}$ and nodal displacement limits $|\delta_j| \le \delta_{\max}$, as described in [62]. The Young's modulus $E$ and the material density $\rho$ take the standard values for this benchmark [62].
Figure 17 illustrates a planar truss comprising six nodes and ten bars. This sketch conveys the idea of multiple interconnected bars and is not drawn to scale.
The optimization formula is as follows:
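$$\min_{A_1, \dots, A_{10}} \; W = \rho \sum_{i=1}^{10} A_i L_i \quad \text{s.t.} \quad |\sigma_i| \le \sigma_{\max} \;\; (i = 1, \dots, 10), \qquad |\delta_j| \le \delta_{\max} \;\; \forall j, \qquad A_{\min} \le A_i \le A_{\max}.$$

(This compact restatement follows directly from the description above; the member lengths and the two load cases are specified in [62] and Appendix A.)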
From Table 8, we see that the RCCO algorithm obtains the lowest mean weight. It improves upon the runner-up (SCA) by a clear margin and beats the average performance of the remaining algorithms by an even wider one. The RCCO design exhibits moderate cross-sectional areas across most bars, indicating an efficient weight distribution within the allowable bounds.
12.5. Three-Bar Truss
In the three-bar truss design problem, the cross-sectional areas of two member groups are optimized: $x_1$ controls the area of the two diagonal bars and $x_2$ controls the area of the central vertical bar. The structure supports a vertical load $P$ at its apex. The aim is to minimize the volume while satisfying stress constraints in each member. The bar length $l$ and load $P$ are fixed. Both design variables are bounded between 0 and 1 [60].
A three-bar truss is shown in Figure 18. Two inclined members of area $x_1$ support the top node, and a vertical member of area $x_2$ completes the triangular structure.
The volume to be minimized is
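$$V = \left(2\sqrt{2}\,x_1 + x_2\right) l,$$

subject to the stress constraints in their standard form for this classical benchmark [60]:

$$g_1 = \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^{2} + 2 x_1 x_2}\,P - \sigma \le 0, \qquad g_2 = \frac{x_2}{\sqrt{2}\,x_1^{2} + 2 x_1 x_2}\,P - \sigma \le 0, \qquad g_3 = \frac{1}{x_1 + \sqrt{2}\,x_2}\,P - \sigma \le 0,$$

where $\sigma$ denotes the allowable stress.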
According to Table 9, the RCCO algorithm achieves the smallest mean volume, narrowly beating ZOA. Although the improvement is tiny in absolute terms, RCCO also displays the smallest standard deviation, indicating high consistency. The optimal design variables satisfy all stress constraints while minimizing material usage.
13. Conclusions
This paper presented the Rain-Cloud Condensation Optimizer (RCCO), a hydrology-inspired metaheuristic that operationalizes condensation, coalescence, entrainment, and turbulence as complementary search operators. A dual-attractor mechanism—pulling droplets toward both the global leader and a rank-weighted cloud core—provides stable intensification, while band sampling with mirroring and occasional heavy-tailed gusts sustain diversity and offer inexpensive basin escape. Sine-map seeding strengthens early coverage; a wrap-and-reflect rule maintains feasibility at the boundaries. RCCO exposes a few hyperparameters and exhibits linear time and memory growth with population size and problem dimension.
Extensive tests on the CEC2022 suite show that RCCO achieves competitive-to-superior accuracy with low variance and strong stability across unimodal, multimodal, hybrid, and composition functions, including high-dimensional cases, while five engineering studies (cantilever stepped beam, pressure vessel, planetary gear train, ten-bar planar truss, and three-bar truss) confirm effectiveness under practical constraints. Future work includes formal convergence analysis, adaptive/parameter-free variants, multiobjective and discrete extensions, and scalable parallel implementations.