Article

Enhanced Henry Gas Solubility Optimization for Solving Data and Engineering Design Problems

1 Computer Science Department, Faculty of Information Technology, University of Petra, Amman 11196, Jordan
2 Virtual and Augmented Reality Department, Faculty of Information Technology, University of Petra, Amman 11196, Jordan
3 Networks and Cybersecurity Department, Faculty of Information Technology, Al-Ahliyya Amman University, Amman 19328, Jordan
4 Design and Visual Communication Department, School of Architecture and Built Environment (SABE), German Jordanian University (GJU), Amman 11180, Jordan
5 Faculty of Artificial Intelligence, Al-Balqa Applied University, Al-Salt 19117, Jordan
6 Department of Journalism, Media, and Digital Communication, School of Arts, The University of Jordan, Amman 11196, Jordan
* Author to whom correspondence should be addressed.
Eng 2025, 6(12), 374; https://doi.org/10.3390/eng6120374
Submission received: 4 November 2025 / Revised: 3 December 2025 / Accepted: 7 December 2025 / Published: 18 December 2025
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research)

Abstract

Many engineering design problems are formulated as constrained optimization tasks that are nonlinear and nonconvex, and often treated as black boxes. In such cases, metaheuristic algorithms are attractive because they can search complex design spaces without requiring gradient information. In this work, we propose an Enhanced Henry Gas Solubility Optimization (eHGSO) algorithm, which is an improved version of the physics-inspired HGSO method. The enhanced variant introduces six main contributions: (i) a more diverse, population-wide initialization strategy to cover the design space more thoroughly; (ii) adaptive temperature/pressure control parameters that automatically shift the search from global exploration to local refinement; (iii) an elitist archive with differential perturbation that accelerates exploitation around high-quality candidate designs; (iv) a simple combination of the global HGSO search moves with a lightweight gradient-free local search to refine promising solutions; (v) a constraint-handling mechanism that explicitly prioritizes feasible solutions while still allowing exploration near the constraint boundaries; and (vi) a complexity and ablation analysis that quantifies the impact of each mechanism and confirms that they introduce only modest computational overhead. We evaluate eHGSO on four classical constrained engineering design problems: the stepped cantilever beam, the tension/compression spring, the welded beam, and the three-bar truss. Its performance is compared with seventeen recent metaheuristic optimizers over multiple independent runs. eHGSO achieves the best average objective value on the cantilever, spring, and welded-beam problems and shares the best average result on the three-bar truss. Compared to the second-best method, the mean objective is improved by about 0.84 % for the cantilever beam and 0.35 % for the welded beam, while the spring and truss results are essentially equivalent at four significant figures. 
Convergence and robustness analyses show that eHGSO reaches high-quality solutions quickly and consistently. Overall, the proposed eHGSO algorithm appears to be a competitive and practical tool for constrained engineering design problems.

1. Introduction

Metaheuristics are a family of high-level search strategies designed to efficiently explore large, complex, and often nonconvex search spaces by balancing diversification (global exploration) and intensification (local exploitation). Over the last two decades, scores of population-based and trajectory-based methods have been proposed, with design motifs ranging from explicit neighborhood search to information sharing within a swarm. This ecosystem continues to expand with algorithms that emphasize robust global search, adaptive control of randomness, and problem-agnostic operators that generalize across domains [1,2,3,4,5].
A recurring theme in metaheuristic design is the use of metaphors to encode search dynamics. Bio-/animal-inspired swarms coordinate candidate solutions the way organisms forage or communicate; physics/chemistry-inspired methods abstract energy, fields, or transport processes; and socio-behavioral heuristics borrow from collective decision making. Beyond novelty, recent works increasingly focus on principled operator design (selection pressure, diversity preservation, and learning-guided moves) and on mechanisms that modulate exploration/exploitation as the search progresses [6,7,8,9,10].
The appeal of metaheuristics in engineering is their flexibility: they can accommodate nonlinearity, discreteness, simulator-in-the-loop evaluations, and black-box constraints typical of real designs. Applications range from structural and materials design to power/energy systems, transportation, and process engineering. Recent studies illustrate this breadth, including crashworthiness/structural tailoring, power-loss reduction and microgrid scheduling, and design of flow fields in energy devices—all tackled with nature-inspired search under practical constraints [11,12,13,14,15].
Another trend is coupling metaheuristics with machine learning to speed search, tune model hyperparameters, or co-learn surrogates that guide sampling. Examples include hybrid SVM/optimizer frameworks, learning-augmented criteria for geotechnical risk, and image/vision pipelines where the optimizer selects parameters or structures. Such pairings leverage the optimizer’s global search with a learner’s predictive structure to improve sample efficiency [16,17,18,19,20].
Henry Gas Solubility Optimization (HGSO) belongs to the physics/chemistry-inspired family. Conceptually, it abstracts Henry’s law and related dissolution phenomena to update solution “concentrations” in response to pseudo-pressure/temperature controls that manage exploration and exploitation. As with many physics-inspired schemes, algorithm quality hinges on (i) initialization diversity, (ii) the scheduling of control parameters, (iii) elitism and restart safeguards against stagnation, and (iv) domain-aware constraint handling. The demonstrated effectiveness of other physics/migration/optics-inspired algorithms in constrained engineering contexts motivates our focus on an enhanced HGSO variant [5,21,22].
We propose an Enhanced HGSO (eHGSO) for constrained engineering design. The main contributions are as follows:
  • Diversity-preserving start: a stratified Latin-hypercube initialization combined with quasi-opposition sampling to seed a well-spread population.
  • Nonlinear control schedule: a temperature/pressure schedule that adapts to population entropy, strengthening early exploration and late exploitation.
  • Elitist archive with differential perturbation: a small archive preserves the best solutions; archived elites perturb the dissolution step to accelerate local refinement without premature convergence.
  • Hybrid local search: a lightweight gradient-free local search (pattern search) is triggered adaptively near promising regions to sharpen feasibility and improve precision.
  • Constraint handling: a feasibility-priority rule with adaptive penalties ensures consistent progress on constrained problems common in engineering designs.
  • Complexity and ablation: we analyze time complexity and provide ablations isolating each mechanism’s contribution to solution quality.

2. Related Work

Metaheuristic optimization has grown into a very large and active area, and several recent surveys and reviews provide a high-level map of available algorithms, benchmark functions, and engineering applications [23,24,25]. Building on this global view, we highlight the lines of work that are most relevant for positioning the proposed Enhanced Henry Gas Solubility Optimization (eHGSO).
Bio-inspired and animal-swarm metaheuristics. A large body of work draws on biological foraging, communication, and survival to shape search operators. Representative examples include the Jellyfish optimizer [1], Beluga Whale optimization [2], Hunting Search based on group hunting [3], and Moss Rose-inspired search [26]. Marine and terrestrial analogues such as the Humboldt Squid Optimization Algorithm [6] and disease-inspired designs (e.g., the Liver Cancer Algorithm) [8] further expand the operator toolkit. Beyond generic benchmarks, bio-inspired heuristics appear in prognostics for electromechanical actuators [27], structural health monitoring via antlion-based denoising [28], and domain-specific applications like microgrid cost management inspired by canine behavior [13]. Agricultural and ecological inspirations—Elymus Repens [7] and natural reforestation [29]—show how metaphor-driven dynamics can still be engineered to respect constraints and costs in realistic settings. More recently, novel metaphors such as the horned lizard defense tactics [30] and multi-strategy improved whales that refine the classic WOA spiral-encircling mechanism [31] further exemplify the rapid growth of metaphor-inspired designs that are structurally related to our eHGSO.
Physics/chemistry-inspired, quantum, and migration forms. A parallel line of research abstracts physical and transport processes. Membrane-inspired multiverse search augments exploration via parallel “universe” interactions [9], while quantum-inspired methods incorporate probabilistic representations or entanglement-like moves to improve multimodal performance [4,10,32]. Migration-based heuristics provide alternative move operators emphasizing directed dispersal [5]. Related physics metaphors include optics-inspired formulations for combinatorial scheduling [21]. These mechanisms are attractive for constrained engineering tasks because their control parameters can be interpreted as thermodynamic or transport levers that naturally modulate exploration/exploitation.
Social, sport, and human-activity inspired heuristics. Designers also mine coordinated human activities for algorithmic analogies. Football team training [33], train-heist logistics [34], and reminiscence/Mount Kailash-inspired schemes [35] encode collaboration, role specialization, or planned deception as search operators. Other examples model staged preparation (commercial pilot preparation, Mindarinae and Formica behaviors) for power system objectives [12] or ecological succession (natural reforestation) for progressive intensification [29]. A stadium-spectators heuristic shows similar crowd-dynamics metaphors for global search [36]. While metaphors differ, the underlying primitives—selection pressure, adaptive neighborhoods, and information sharing—tend to converge.
Metaheuristics blended with machine learning. Many recent works embed learners to either evaluate candidates more cheaply or co-optimize model parameters. Hybrid SVM frameworks tuned by nature-inspired search appear in Android malware identification [16] and software-defined networking [18]. Learning-augmented risk criteria for rockburst prediction leverage CatBoost and metaheuristic selection to integrate domain priors with data-driven structure [17]. In imaging, nature-inspired optimizers power software tools for contrast enhancement [19] and denoising [28]. Social media content generation likewise pairs graph-theoretic structure with bio-inspired search to shape multimodal outputs [20]. These hybridizations show that global search complements inductive bias from learners, often improving sample efficiency and robustness. In a related direction, resource scheduling in industrial operating systems has been tackled by coupling deep reinforcement learning with the Whale Optimization Algorithm [37], illustrating how learned policies can guide metaheuristic search in complex, dynamic environments.
Energy and power systems applications. Energy problems offer rich testbeds: microgrid scheduling under uncertainty [13], optimal power flow solved by a logistics-inspired optimizer [34], and wind-power time-series forecasting with bio-inspired parameter search [38]. Broader reviews emphasize nature-inspired solutions for sustainable energy, covering dispatch, sizing, and control [39]. Quantum-inspired policy-value optimization has also been explored for real-time generation control [32]. At the device scale, metaheuristics assist in designing flow fields and materials (e.g., proton exchange membrane fuel cells), where transport-phenomena analogies dovetail with the underlying physics [14]. Collectively, these cases highlight the need for constraint-aware, sample-efficient search—the same design goals at the core of our eHGSO.
Transportation, logistics, and civil/industrial contexts. In transport, route planning in rail systems has been tackled with bee-inspired search [40], while truck platooning coordination has used ant-colony-inspired metaheuristics [15]. Civil/industrial examples include optimizing classic windows using bio-inspired schemes [41] and image restoration via bio-inspired regularized inverse filtering [42]. Safety-critical medical tasks (multi-tumor localization) have also employed hunting-dogs-inspired search [43]. In geomechanics, learning-guided metaheuristics address rockburst risk by blending physical indicators with predictive models [17]. These applications emphasize the versatility of nature-inspired operators across discrete/continuous and deterministic/stochastic settings.
Evolutionary algorithms in oil and gas. Genetic algorithms have been widely applied to oil and gas industry problems, achieving notable success in optimizing complex operations. For example, Akopov introduced a parallel genetic algorithm with a fading-selection mechanism to enhance convergence on large-scale system models [44]. Güyagüler and Gümrah demonstrated that a GA-based approach can effectively maximize gas production rates in underground storage, outperforming manual heuristics in meeting fluctuating energy demands [45]. In recent years, hybrid bio-inspired optimizers have gained prominence by combining evolutionary and swarm intelligence techniques. A representative example is the real-coded genetic algorithm–particle swarm optimization (RCGA-PSO) hybrid, which integrates GA and PSO to improve search efficiency and solution quality over standalone methods [46]. Similarly, a clustering-based hybrid PSO (CBHPSO) algorithm has been proposed to maintain diverse solution clusters and thereby achieve better Pareto-optimal fronts in multi-objective problems [47]. Another advanced hybrid, the multi-objective RCGA–MOPSO (MORCGA-MOPSO-II) algorithm, leverages clustering techniques to approximate Pareto fronts, showing enhanced performance on complex agent-based models [48]. These developments illustrate the growing trend of leveraging hybrid evolutionary algorithms to solve challenging gas and oil optimization problems with improved robustness and efficiency.
Structural design, crashworthiness, and materials. In structural optimization, bio-inspired design principles have been used to create porous structures with improved crashworthiness [11]. Vision-inspired collision detection models contribute related bio-mimetic design thinking to robotics and autonomous systems [49]. Energy-device materials/layout design, such as PEM fuel-cell flow fields, benefit from transport-aware search formulations [14]. These studies motivate physics-consistent operators and constraint handling—both central in our enhanced HGSO for engineering designs.
Positioning eHGSO among recent metaheuristics. The rapid expansion of nature-inspired optimization has motivated several up-to-date surveys, which classify metaheuristics and associated benchmark functions and highlight open challenges in balancing exploration, exploitation, and constraint handling [23,24,25]. Recent high-performance optimizers such as the meta swarm intelligence algorithm MNEARO [50] and the multi-strategy improved manta ray foraging optimizer [51] demonstrate that carefully designed multi-strategy mechanisms can substantially boost robustness and solution quality on CEC-type benchmarks and constrained engineering problems. Our eHGSO follows this line of work by combining a metaphor-inspired spiral mechanism, akin to the encircling behavior in improved WOA variants [31], with adaptive, solubility-driven exploration. In the experimental section, we therefore compare eHGSO against these recent high-performance methods to more rigorously situate its advantages and limitations within the current landscape of metaheuristic optimizers.

3. Enhanced Henry Gas Solubility Optimization

Henry’s law asserts that the dissolved concentration of a gas in a liquid grows in proportion to its partial pressure. This simple physical rule—augmented by temperature effects and multi-gas interactions—provides a rich metaphor for search: agents (molecules) rearrange under pressure–solubility forces, explore via agitation and eddies, and eventually concentrate near low-energy (low-cost) regions. eHGSO abstracts these mechanisms into initialization, move operators, and group dynamics that we formalize later in Equations (2)–(6), (10)–(12), (14), (15), (22) and (23).

From Henry’s Law to Search Mechanics

At fixed temperature, the proportionality C ∝ P guides intuition: raising the gas pressure increases solute concentration at the interface, nudging molecules down into the solution. In a black-box optimizer we lack true thermodynamic potentials, but we can mimic these forces of attraction toward better states by: (i) a current-to-elite pull that encourages drift toward high-quality exemplars (Equations (5) and (6)); (ii) temperature-like scheduling that regulates how strongly the system “prefers” elite regions (Equation (22)); and (iii) occasional agitation (Lévy steps) to escape poor basins (Equation (12)). In short, pressure ↔ selection pressure; solubility ↔ acceptance of promising moves.
Figure 1 depicts gas molecules (circles) above a liquid. Downward arrows symbolize dissolution under higher partial pressure, evoking Henry’s law (more pressure ⇒ more dissolved gas).
Figure 1. Gas dissolving into liquid under pressure (Henry’s law).
To set intuition, we recall the proportionality C = k_H P, shown in Figure 2, which illustrates Henry’s proportionality and visually explains how pressure modulates solubility.
Figure 2. Henry’s proportionality: concentration increases linearly with pressure.
Figure 3 (compression increases solubility)  uses a piston metaphor: compressing the gas raises P and drives more molecules into solution.
Figure 3. Piston compression raises P and increases dissolution (intuition behind HGSO).
HGSO partitions agents into groups with different Henry constants [52]. Figure 4 (multiple gas types/groups) sketches groups drifting toward better regions (circles only).
Figure 4. Agents grouped by gas type (Henry constants); arrows indicate drift toward gbest.

4. Mathematical Model

In this section we formalize the optimization problem solved by eHGSO and give a clean presentation of the main operators used in the algorithm. The emphasis is on clear notation and readable expressions so that each term can be unambiguously interpreted after typesetting.

4.1. Problem Statement and Notation

Let the decision vector be
$\mathbf{x} = (x_1, \ldots, x_D) \in \mathbb{R}^D$.
The search is restricted to a hyper-rectangular domain
$\Omega = \{\, \mathbf{x} \in \mathbb{R}^D : \mathbf{l} \le \mathbf{x} \le \mathbf{u} \,\}$,
where $\mathbf{l} = (l_1, \ldots, l_D)$ and $\mathbf{u} = (u_1, \ldots, u_D)$ are the lower and upper bounds, respectively, and the inequalities are understood componentwise.
The (possibly constrained) optimization problem we consider is
$\min_{\mathbf{x} \in \Omega} f(\mathbf{x})$,  (1)
where $f : \Omega \to \mathbb{R}$ is the objective function. For the engineering design applications, $f$ is subject to a set of inequality constraints
$g_m(\mathbf{x}) \le 0, \quad m = 1, \ldots, M_c$,
which are handled by the constraint-handling mechanism. In the core mathematical model below, we focus on the update rules for the population of candidate solutions.
Throughout, we denote the following:
  • $N$—population size;
  • $\mathbf{x}_i$—position of the $i$-th individual, $i = 1, \ldots, N$;
  • $f(\mathbf{x}_i)$—objective value of individual $i$;
  • $\mathbf{g}_{best}$—best solution found so far (global best);
  • $\mathbf{x}_{pbest}$—a near-elite solution sampled from the top $p$ fraction of the population (the “pbest” pool).

4.2. OBL-LHS Initialization

To obtain a well-spread initial population, eHGSO combines Latin hypercube sampling (LHS) with opposition-based learning (OBL).

4.2.1. Latin Hypercube Sampling

For each dimension $d \in \{1, \ldots, D\}$ and each individual $i \in \{1, \ldots, N\}$, we generate
$X_{i,d} = l_d + \dfrac{\pi_{i,d} - r_{i,d}}{N}\,(u_d - l_d)$,  (2)
where
  • $\pi_{i,d} \in \{1, \ldots, N\}$ are elements of a random permutation of $\{1, \ldots, N\}$, drawn independently for each dimension $d$;
  • $r_{i,d} \sim U(0,1)$ are independent uniform random variables.
This stratifies each coordinate into N equal probability intervals and guarantees that each interval is sampled exactly once per dimension, improving the coverage of Ω compared to simple random sampling.

4.2.2. Opposition-Based Learning

For each LHS point $X_i = (X_{i,1}, \ldots, X_{i,D})$, we define its opposite point $X_i^{opp}$ by
$X_{i,d}^{opp} = l_d + u_d - X_{i,d}, \quad d = 1, \ldots, D$.  (3)
We then evaluate $f(X_i)$ and $f(X_i^{opp})$ and retain the better of the two as the initial position:
$X_i^{(0)} = \begin{cases} X_i^{opp}, & \text{if } f(X_i^{opp}) < f(X_i), \\ X_i, & \text{otherwise}. \end{cases}$  (4)
The set $\{X_i^{(0)}\}_{i=1}^{N}$ forms the initial population.
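As an illustration, the OBL-LHS seeding described above translates directly into a few lines of NumPy. This is a minimal sketch, not the authors' implementation; the helper name `obl_lhs_init` and the fixed random seed are our own illustrative assumptions.

```python
import numpy as np

def obl_lhs_init(f, lower, upper, n):
    """Seed a population via Latin hypercube sampling (LHS) plus
    opposition-based learning (OBL), keeping the better of each pair."""
    rng = np.random.default_rng(0)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    d = lower.size
    # LHS: one stratum per individual in every dimension (pi in {1..N}).
    perms = np.stack([rng.permutation(n) + 1 for _ in range(d)], axis=1)
    r = rng.random((n, d))
    X = lower + (perms - r) / n * (upper - lower)
    # OBL: opposite points mirrored inside the box.
    X_opp = lower + upper - X
    fX = np.apply_along_axis(f, 1, X)
    fO = np.apply_along_axis(f, 1, X_opp)
    take_opp = fO < fX
    X0 = np.where(take_opp[:, None], X_opp, X)
    return X0, np.where(take_opp, fO, fX)
```

Because (π − r)/N lies strictly in (0, 1), every seeded point stays inside the box, and each coordinate stratum is hit exactly once.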

4.3. Phase I: DE/Current-to-pbest/1

During the early iterations, eHGSO uses a JADE-style DE/current-to-pbest/1 operator to quickly improve the population.

4.3.1. Mutation

For each individual $\mathbf{x}_i$, we construct a mutant vector $\mathbf{v}_i$ as
$\mathbf{v}_i = \mathbf{x}_i + F\,(\mathbf{x}_{pbest} - \mathbf{x}_i) + F\,(\mathbf{x}_{r_1} - \mathbf{x}_{r_2})$,  (5)
where:
  • $F \in (0, 1]$ is the differential weight;
  • $r_1$ and $r_2$ are two distinct indices selected uniformly from $\{1, \ldots, N\} \setminus \{i\}$;
  • $\mathbf{x}_{pbest}$ is chosen uniformly from the top $pN$ individuals (the “pbest” pool).

4.3.2. Binomial Crossover

A trial vector $\mathbf{u}_i$ is generated by binomial crossover:
$u_i^{(d)} = \begin{cases} v_i^{(d)}, & \text{if } \mathrm{rand} \le CR \text{ or } d = d_{rand}, \\ x_i^{(d)}, & \text{otherwise}, \end{cases}$  (6)
where:
  • $CR \in [0, 1]$ is the crossover rate;
  • $d_{rand}$ is a randomly chosen dimension that guarantees at least one component is taken from $\mathbf{v}_i$;
  • $\mathrm{rand}$ is a uniform random number in $(0, 1)$.

4.3.3. Selection

Greedy selection replaces the parent if the trial is better:
$\mathbf{x}_i \leftarrow \begin{cases} \mathbf{u}_i, & \text{if } f(\mathbf{u}_i) < f(\mathbf{x}_i), \\ \mathbf{x}_i, & \text{otherwise}. \end{cases}$  (7)
(For constrained problems, $f(\cdot)$ here denotes the penalized or feasibility-aware objective value.)

4.3.4. Parameter Adaptation

The parameters $F$ and $CR$ are updated slightly at each iteration to avoid manual retuning. A simple adaptation rule used in this work is
$F \leftarrow \max\bigl(0.1,\ \min\bigl(1.0,\ 0.9F + 0.1\,(0.5 + 0.5\,\mathrm{rand})\bigr)\bigr)$,  (8)
$CR \leftarrow \max\bigl(0.05,\ \min\bigl(1.0,\ 0.9\,CR + 0.1\,(0.7 + 0.3\,\mathrm{rand})\bigr)\bigr)$,  (9)
with a fresh random number $\mathrm{rand} \sim U(0,1)$ at each update.
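The mutation, crossover, and selection steps of Phase I can be sketched as a single population pass. This is a hedged illustration assuming an unconstrained objective and no archive sampling; the helper name `de_current_to_pbest_pass` is ours, not from the paper.

```python
import numpy as np

def de_current_to_pbest_pass(X, f, F=0.5, CR=0.9, p=0.2, rng=None):
    """One DE/current-to-pbest/1 pass with binomial crossover and
    greedy selection. Mutates X in place and returns (X, fitness)."""
    rng = rng or np.random.default_rng()
    n, d = X.shape
    fit = np.apply_along_axis(f, 1, X)
    top = np.argsort(fit)[:max(1, int(p * n))]   # the "pbest" pool
    for i in range(n):
        pb = X[rng.choice(top)]
        cands = [j for j in range(n) if j != i]
        r1, r2 = rng.choice(cands, 2, replace=False)
        v = X[i] + F * (pb - X[i]) + F * (X[r1] - X[r2])  # mutation
        mask = rng.random(d) <= CR
        mask[rng.integers(d)] = True                       # forced d_rand
        u = np.where(mask, v, X[i])                        # binomial crossover
        if f(u) < fit[i]:                                  # greedy selection
            X[i], fit[i] = u, f(u)
    return X, fit
```

Because replacement is greedy, the best fitness in the population can never worsen over a pass.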

4.4. Phase II: Lévy Drift and Spiral Contraction

After the seeding phase, two complementary motion primitives are used in addition to the HGSO physics core: a Lévy-flight drift (for exploration) and a spiral contraction toward the global best (for exploitation).

4.4.1. Lévy Flight

We employ Mantegna’s algorithm to generate Lévy-distributed step sizes. For a chosen stability parameter $\beta \in (1, 2]$, the scale parameter $\sigma$ is
$\sigma = \left[ \dfrac{\Gamma(1+\beta)\,\sin(\pi\beta/2)}{\Gamma\!\bigl((1+\beta)/2\bigr)\,\beta\,2^{(\beta-1)/2}} \right]^{1/\beta}$,  (10)
and the scalar step is given by
$\mathrm{step} = \dfrac{U}{|V|^{1/\beta}}, \quad U \sim \mathcal{N}(0, \sigma^2), \quad V \sim \mathcal{N}(0, 1)$.  (11)
With a time-dependent scaling factor $\alpha(t)$, the Lévy update for individual $i$ is
$\mathbf{x}_i \leftarrow \mathbf{x}_i + \alpha(t)\,\mathrm{step} \otimes (\mathbf{x}_i - \mathbf{g}_{best})$,  (12)
where $\otimes$ denotes componentwise multiplication, and
$\alpha(t) = 0.01\,(1 - t), \quad t = \mathrm{iter}/\mathrm{MaxIter} \in [0, 1]$.
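Mantegna's recipe translates almost verbatim to code. The sketch below is illustrative; the helper names `levy_step` and `levy_drift` are our assumptions, not identifiers from the paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(d, beta=1.5, rng=None):
    """Mantegna's algorithm for a heavy-tailed Levy step; beta in (1, 2]."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, d)
    v = rng.normal(0.0, 1.0, d)
    return u / np.abs(v) ** (1 / beta)   # componentwise step vector

def levy_drift(x, gbest, t, rng=None):
    """Annealed Levy update with alpha(t) = 0.01 * (1 - t)."""
    alpha = 0.01 * (1.0 - t)
    return x + alpha * levy_step(x.size, rng=rng) * (x - gbest)
```

Note that the annealing factor vanishes at t = 1, so late iterations propose no long-distance Lévy moves at all.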

4.4.2. Spiral Contraction

The spiral step moves individuals smoothly toward $\mathbf{g}_{best}$ with a decaying radius. Let
$t = \mathrm{iter}/\mathrm{MaxIter}$,  (13)
and define the contraction factor
$r(t) = \exp(-b\,t), \quad b > 0$.  (14)
Then the spiral update is
$\mathbf{x}_i \leftarrow \mathbf{g}_{best} + r(t)\,(\mathbf{x}_i - \mathbf{g}_{best})$.  (15)
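The contraction above is a one-liner in code. This sketch assumes the decaying form r(t) = exp(−bt) and a hypothetical default b = 3, which the paper does not prescribe.

```python
import numpy as np

def spiral_step(x, gbest, t, b=3.0):
    """Spiral contraction toward gbest with decaying radius r(t) = exp(-b t)."""
    r = np.exp(-b * t)
    return gbest + r * (x - gbest)
```

At t = 0 the radius is 1 and the point is unchanged; as t approaches 1 the proposal collapses toward the global best, strengthening exploitation.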

4.4.3. Greedy Acceptance and Archive

Both Lévy and spiral proposals are accepted via the same greedy rule: if a candidate $\mathbf{x}_i'$ (obtained from any of the above operators) improves the (possibly penalized) objective, then it replaces the current $\mathbf{x}_i$:
$\mathbf{x}_i \leftarrow \begin{cases} \mathbf{x}_i', & \text{if } f(\mathbf{x}_i') < f(\mathbf{x}_i), \\ \mathbf{x}_i, & \text{otherwise}. \end{cases}$  (16)
Whenever $\mathbf{x}_i$ is replaced, the old solution can be stored in an external archive $\mathcal{A}$:
$\mathcal{A} \leftarrow \mathcal{A} \cup \bigl\{ \mathbf{x}_i^{old} : f(\mathbf{x}_i') < f(\mathbf{x}_i) \bigr\}, \quad |\mathcal{A}| \le 5N$,  (17)
which is sampled to increase diversity when building differential vectors.

4.4.4. Bound Handling

After any update, we enforce the box constraints by projection onto $\Omega$:
$\mathbf{x}_i \leftarrow \min\bigl(\mathbf{u},\ \max(\mathbf{l},\ \mathbf{x}_i)\bigr)$,  (18)
where the min and max are applied componentwise.
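Greedy acceptance, archiving, and projection can be combined into one helper. This is a sketch under the assumption that the archive is a simple list with oldest-first eviction; the paper does not specify the eviction policy, so that choice is ours.

```python
import numpy as np

def accept_and_project(x, x_new, f, archive, lower, upper, max_arch):
    """Project the candidate onto the box, then accept it greedily;
    the displaced incumbent is appended to a capped archive list."""
    x_new = np.minimum(upper, np.maximum(lower, x_new))  # projection
    if f(x_new) < f(x):                                   # greedy rule
        archive.append(x.copy())                          # keep old solution
        if len(archive) > max_arch:
            archive.pop(0)                                # drop oldest entry
        return x_new
    return x
```

Projecting before evaluation guarantees that the objective is only ever queried inside the feasible box.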

4.5. HGSO Physics Core: Group Update

Finally, we summarize the HGSO-specific group update, which supplies the physics-inspired interaction between individuals.

4.5.1. Gain Factor

For each agent $j$ we define a gain term $\gamma_j$ that depends on its fitness relative to the current global best:
$\gamma_j = \beta\,\exp\!\left( -\dfrac{f(\mathbf{g}_{best}) + 0.05}{f(\mathbf{x}_j) + 0.05} \right)$,  (19)
where $\beta > 0$ is a constant.

4.5.2. Group Position Update

Let $k$ index the components of $\mathbf{x}_j$, and let $\mathbf{p}^{best}_{(g)}$ denote the best solution within group $g$ to which $j$ belongs. A simplified groupwise update for component $k$ is
$x_{j,k} \leftarrow x_{j,k} + s_j\,\mathrm{rand}\,\gamma_j\,\bigl( p^{best}_{(g),k} - x_{j,k} \bigr) + \mathrm{rand}\,\alpha\,s_j\,S_{(g)}\,\bigl( g_{best,k} - x_{j,k} \bigr)$,  (20)
where
  • $s_j \in \{+1, -1\}$ controls the direction of change;
  • $\mathrm{rand} \sim U(0, 1)$;
  • $\alpha > 0$ is a scaling parameter;
  • $S_{(g)}$ is the solubility of group $g$ (defined below);
  • $g_{best,k}$ is the $k$-th component of $\mathbf{g}_{best}$.

4.5.3. Temperature and Henry Constants

A decaying temperature schedule is used,
$T(\mathrm{iter}) = \exp\!\bigl( -\mathrm{iter}/\mathrm{MaxIter} \bigr)$,  (21)
with a reference temperature $T_0 = 298.15$ K. For group $g$ with Henry constant $K_{(g)}$ and a problem-specific constant $C_{(g)}$, we update
$K_{(g)} \leftarrow K_{(g)}\,\exp\!\left( -C_{(g)} \left( \dfrac{1}{T} - \dfrac{1}{T_0} \right) \right)$.  (22)
The corresponding solubility $S_i$ for an agent $i$ in group $g$ is
$S_i = P_i\,K_{(g)}$,  (23)
where $P_i$ is a pressure-like parameter associated with agent $i$.
Together, (19)–(23) define the HGSO physics core: agents move under pseudo-pressure and solubility forces toward groupwise and global exemplars, while the temperature schedule gradually reduces exploration and promotes exploitation as the iterations proceed.
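The physics core can be sketched as one pass over the population. The array layout below (per-group K and C, per-agent P, a `groups` index array) is our assumption, and the negative signs in the exponentials follow the standard HGSO formulation; treat this as an illustrative sketch rather than the authors' code.

```python
import numpy as np

def hgso_core_update(X, fit, gbest, f_gbest, groups, K, C, P, t,
                     alpha=1.0, beta=1.0, rng=None, T0=298.15):
    """Simplified HGSO core: temperature decay, Henry-constant update,
    solubility, and the groupwise position drift."""
    rng = rng or np.random.default_rng()
    n, d = X.shape
    T = np.exp(-t)                               # decaying temperature
    K = K * np.exp(-C * (1.0 / T - 1.0 / T0))    # per-group Henry constants
    S = P * K[groups]                            # per-agent solubility
    Xn = X.copy()
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        pbest_g = X[idx[np.argmin(fit[idx])]]    # best agent within group g
        for j in idx:
            gamma = beta * np.exp(-(f_gbest + 0.05) / (fit[j] + 0.05))
            s = rng.choice([-1.0, 1.0])          # direction flag s_j
            Xn[j] = (X[j]
                     + s * rng.random(d) * gamma * (pbest_g - X[j])
                     + s * rng.random(d) * alpha * S[j] * (gbest - X[j]))
    return Xn, K, S
```

Agents are pulled both toward their group's best and toward the global best, with the solubility term acting as the global-attraction gain.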

4.6. eHGSO High-Level Pseudocode

Algorithm 1 presents eHGSO; each major step cites its defining equations. As can be seen, eHGSO solves the generic minimization problem in Equation (1) over the bounded domain $\Omega = \{\mathbf{x} \in \mathbb{R}^D : \mathbf{l} \le \mathbf{x} \le \mathbf{u}\}$. The algorithm maintains a population $X = \{\mathbf{x}_1, \ldots, \mathbf{x}_N\}$ of candidate solutions, each representing a feasible design vector within the bounds. At every iteration, eHGSO evaluates the objective $f$ (and, when present, the constraint functions) and updates the population by composing the operators introduced in the previous subsections.
The search starts with an OBL-LHS initialization (Equations (2)–(4)), which seeds a diverse set of points spread across $\Omega$. The population is then partitioned into HGSO groups, and the thermodynamic quantities $T$, $K_{(g)}$, and $S_i$ are initialized as in Equations (21)–(23). During the early “seeding” iterations, a JADE-style DE/current-to-pbest/1 operator (Equations (5)–(9)) rapidly drives the population toward promising regions while still exploring via differential perturbations and the archive mechanism.
In each subsequent iteration, the physics-inspired HGSO drift (Equations (19) and (20)) is combined with two complementary motion primitives: a Lévy-flight step (Equations (10)–(12)) that occasionally proposes long-distance moves for exploration, and a spiral contraction toward the global best $\mathbf{g}_{best}$ (Equations (13)–(15)) that strengthens exploitation as the search progresses. All proposals are accepted using greedy replacement (Equation (16)), with out-of-bounds points projected back to $\Omega$ (Equation (18)) and discarded incumbents stored in the archive $\mathcal{A}$ (Equation (17)) to preserve diversity.
For constrained engineering problems, f ( x ) is evaluated together with the constraint set, and a feasibility-priority rule with adaptive penalties is used when comparing candidates: feasible solutions are always preferred to infeasible ones, and among infeasible solutions the total constraint violation determines the ordering. The algorithm iterates until a prescribed iteration or evaluation budget is exhausted and returns the best (feasible) design vector g best found so far, together with its convergence history. Algorithm 1 summarizes the resulting procedure in pseudocode form and links each step back to its defining equations.
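The feasibility-priority comparison described above is essentially Deb's rule. A minimal sketch follows; the adaptive-penalty component mentioned in the text is omitted here for brevity, and both helper names are ours.

```python
import numpy as np

def better(f1, viol1, f2, viol2, eps=1e-12):
    """Feasibility-priority comparison: feasible beats infeasible;
    among feasible, lower objective wins; among infeasible, lower
    total constraint violation wins."""
    feas1, feas2 = viol1 <= eps, viol2 <= eps
    if feas1 and feas2:
        return f1 < f2
    if feas1 != feas2:
        return feas1
    return viol1 < viol2

def total_violation(x, constraints):
    """Sum of positive parts of the g_m(x) <= 0 inequality constraints."""
    return float(sum(max(0.0, g(x)) for g in constraints))
```

Plugging `better` into the greedy-acceptance step makes every operator constraint-aware without changing the rest of the algorithm.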
Algorithm 1 eHGSO (Enhanced HGSO with DE/pbest, Lévy and Spiral).
Require: population size N, iterations M, bounds l, u, dimension D, objective f
1:  Initialize X by LHS                                      ▷ see Equation (2)
2:  Form opposite points X^opp                               ▷ see Equation (3)
3:  X ← greedy pick better of X and X^opp                    ▷ see Equation (4)
4:  Partition into HGSO groups; set K, P, C; evaluate f and set g_best   ▷ cf. Equation (1)
5:  for t = 1 to M do
6:      Update T, K, S                                       ▷ see Equations (21)–(23)
7:      if t ≤ seedFrac · M then
8:          for i = 1 to N do
9:              Pick pbest from top-p pool; build V_i        ▷ see Equation (5)
10:             Binomial crossover to get U_i                ▷ see Equation (6)
11:             Greedy selection X_i ← U_i if better         ▷ see Equation (7)
12:         end for
13:         Light adaptation of F and CR                     ▷ see Equations (8) and (9)
14:     end if
15:     Optionally apply a problem-specific base step and greedy-accept   ▷ Equation (16)
16:     Build Lévy candidate via Mantegna                    ▷ see Equations (10)–(12)
17:     Build Spiral candidate via contraction               ▷ see Equations (13)–(15)
18:     Greedy replace with each candidate                   ▷ see Equation (16)
19:     Update archive A and cap size                        ▷ see Equation (17)
20:     Project to bounds                                    ▷ see Equation (18)
21:     Update g_best
22: end for
23: return g_best and convergence curve

4.7. Movement Strategy

eHGSO combines three complementary motion primitives within each iteration $t$: a JADE-style DE/current-to-pbest/1 step (Equations (5) and (6)), a Lévy drift (Equations (10)–(12)), and a contracting spiral toward $\mathbf{g}_{best}$ (Equations (13)–(15)), all accepted greedily (Equation (16)) and projected to the feasible set (Equation (18)). The evolution of groupwise strengths is governed by $T$, $K_{(g)}$, and $S$ in Equations (21)–(23). Figure 5 gives a geometric view of a single agent’s moves; Figure 6 clarifies how operator influence changes over time; and Figure 7 illustrates projection and archiving (Equations (17) and (18)).

4.8. One-Iteration Update Cycle (Operator Composition)

At iteration $t$ with normalized time $u = t/\mathrm{MaxIter}$, the update proceeds as follows. If $t \le \mathrm{seedFrac} \cdot \mathrm{MaxIter}$ (seed phase), then for each $\mathbf{x}_i$ we form the mutant via Equation (5), build the trial vector via Equation (6), and apply greedy selection per Equation (7); the parameters $F$ and $CR$ adapt according to Equations (8) and (9). If a problem-specific base step is provided, it proposes $\mathbf{x}_i'$, and we perform the greedy replacement in Equation (16). Next, we apply Lévy drift: heavy-tailed steps are generated using Equations (10) and (11) and updates follow Equation (12), with annealing $\alpha(u) = 0.01\,(1 - u)$ to taper long relocations as search progresses. We then perform spiral contraction toward $\mathbf{g}_{best}$ with ratio $r(u) = e^{-bu}$ (Equation (14)) and update by Equation (15). Proposals are accepted greedily only if they improve $f$ (Equation (16)); replaced incumbents are appended to the archive $\mathcal{A}$, capped as in Equation (17). Finally, feasible projection is enforced by applying the bound-handling rule in Equation (18).

4.9. Geometric Interpretation of the Composite Move

Figure 5 decomposes the motion of a representative agent. The DE component pulls directionally toward p best while injecting the lateral difference ( x r 1 − x r 2 ) (Equation (5)); crossover mixes the mutant with x i (Equation (6)). The Lévy drift draws a jagged, long-tailed trajectory (Equations (10)–(12)), and the spiral smoothly steers the point into the attraction basin around g best (Equations (14) and (15)). Greedy acceptance (Equation (16)) selects whichever candidate improves f, after which projection (Equation (18)) prevents leaving Ω .
Figure 5. Enhanced geometry of a composite move: DE toward p best plus difference, crossover to U i , Lévy drift, and spiral toward g best ; greedy acceptance keeps the best proposal.

4.10. Operator Scheduling Across Iterations

eHGSO purposefully staggers its operators. During the early seed phase ( t ≤ seedFrac · MaxIter ), the DE/current-to-pbest/1 operator (Equations (5)–(7)) is active to quickly establish high-quality anchors. As time advances, the Lévy amplitude α ( u ) in Equation (12) decays, reducing long jumps, while the spiral contraction ratio r ( u ) in Equation (14) shrinks, intensifying exploitation near g best . The temperature schedule T and group constants K ( g ) (Equations (21) and (22)) further modulate S (Equation (23)), making some groups more exploitative than others. Figure 6 visualizes these trends.
Figure 6. Operator schedule: early DE seeding, Lévy amplitude decays, spiral exploitation grows.
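The two schedules that drive this hand-off can be written down directly. A minimal sketch, assuming the annealed amplitude α(u) = 0.01(1 − u) from Equation (12) and a contracting ratio r(u) = e^(−bu) with decay rate b = 2 (the value reported in Section 6.1.2):

```python
import math

def levy_amplitude(u):
    """Annealed Levy amplitude alpha(u) = 0.01 * (1 - u); tapers long jumps."""
    return 0.01 * (1.0 - u)

def spiral_ratio(u, b=2.0):
    """Spiral contraction ratio r(u) = exp(-b * u); shrinks toward g_best."""
    return math.exp(-b * u)

# Early iterations (u near 0): alpha large, r near 1 -> exploration dominates.
# Late iterations (u near 1): alpha near 0, r near exp(-b) -> exploitation dominates.
```

Both schedules are monotone in u, which is what produces the smooth exploration-to-exploitation hand-off sketched in Figure 6.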

4.11. Projection and Archive Effects

Greedy replacement (Equation (16)) may yield candidates outside Ω ; these are projected by Equation (18). Replaced incumbents populate the archive A , capped as in Equation (17); sampling from A in Equation (5) broadens the difference vector distribution and mitigates stagnation. Figure 7 shows these mechanisms.
Figure 7. Projection to Ω (Equation (18)) and archiving of replaced incumbents (Equation (17)).
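A minimal sketch of both mechanisms, assuming a clamp-to-bounds repair rule for Equation (18) and a drop-oldest eviction policy for the capped archive of Equation (17) (the eviction rule is our assumption; the paper only specifies the cap):

```python
from collections import deque

import numpy as np

def project(x, lb, ub):
    """Bound handling for Equation (18): clamp each coordinate into [lb, ub]."""
    return np.clip(x, lb, ub)

class Archive:
    """Capped archive of replaced incumbents, cf. Equation (17)."""

    def __init__(self, cap):
        self.buf = deque(maxlen=cap)  # oldest entries drop out once the cap is hit

    def add(self, x):
        self.buf.append(np.asarray(x, dtype=float))

    def sample(self, rng):
        """Draw one archived point uniformly, for the DE mutant of Equation (5)."""
        return self.buf[rng.integers(len(self.buf))]
```

Using a `deque` with `maxlen` makes both insertion and eviction O(1) per event, matching the bookkeeping cost claimed in Section 5.2.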

4.12. Exploration and Exploitation Behavior

Exploration seeks new, often distant, regions of the search space; exploitation sharpens solutions near currently promising points. In eHGSO, these behaviors are explicitly realized by the operators already defined: OBL-LHS seeding (Equations (2)–(4)) and Lévy drifts (Equations (10)–(12)) bias exploration, whereas DE/current-to-pbest/1 and spiral contraction (Equations (5), (6), (14) and (15)) drive exploitation. Group heterogeneity through K ( g ) and S i (Equations (22) and (23)) lets different subpopulations emphasize exploration or exploitation simultaneously.

4.12.1. Roles of the Operators (Exploration vs. Exploitation)

OBL-LHS seeding (Equations (2)–(4)) provides stratified coverage with mirrored opposites, yielding high initial dispersion that promotes exploration. Lévy drift (Equations (10)–(12)) introduces heavy-tailed jumps that keep a nonzero probability of long relocations, preventing early confinement and sustaining exploration. DE/current-to-pbest/1 (Equations (5) and (6)) directs movement toward near-elite anchors while preserving lateral variance through the differential term ( x r 1 − x r 2 ) , enabling controlled exploitation without collapsing diversity. Spiral contraction (Equations (14) and (15)) produces a smooth, radius-shrinking motion around g best , favoring late-stage exploitation. Finally, the group constants (Equations (22) and (23)), by tuning K ( g ) and P i , allow some groups to sustain broader motion for exploration while others contract more aggressively for exploitation.
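The heavy-tailed jumps can be generated with Mantegna's algorithm, a standard construction for Lévy-stable steps with index β = 1.5; Equations (10) and (11) appear to follow the same recipe, though the sketch below is our own realization rather than the authors' code:

```python
from math import gamma, pi, sin

import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    """Mantegna's algorithm: a heavy-tailed step with stability index beta.

    sigma_u is chosen so that u / |v|^(1/beta) follows a Levy-stable law."""
    if rng is None:
        rng = np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)   # numerator draw with tuned scale
    v = rng.normal(0.0, 1.0, dim)       # denominator draw with unit scale
    return u / np.abs(v) ** (1 / beta)  # heavy-tailed ratio
```

Because the denominator can come arbitrarily close to zero, occasional very long steps occur, which is exactly the "nonzero probability of long relocations" invoked above.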

4.12.2. A Simple Diversity Indicator

Before visualizing, we define the population diversity proxy of Equation (24). Let $\bar{x}(t) = \frac{1}{N}\sum_{i=1}^{N} x_i(t)$ denote the population centroid. We track the RMS radius
$$R(t) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left\lVert x_i(t) - \bar{x}(t)\right\rVert_2^2}.$$
As expected, exploration enlarges R ( t ) , while exploitation contracts it. Lévy steps (Equation (12)) tend to increase R ( t ) intermittently; the spiral ratio r ( u ) (Equation (14)) reduces it; DE/pbest (Equations (5) and (6)) reduces it directionally.
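The RMS radius of Equation (24) is a one-liner over the population matrix:

```python
import numpy as np

def rms_radius(X):
    """R(t) of Equation (24): RMS distance of the population from its centroid."""
    X = np.asarray(X, dtype=float)
    centroid = X.mean(axis=0)                                   # x-bar(t)
    return float(np.sqrt(np.mean(np.sum((X - centroid) ** 2, axis=1))))
```

A fully collapsed population gives R(t) = 0, and two agents at ±1 on one axis give R(t) = 1, which makes the indicator easy to sanity-check during a run.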

4.12.3. Qualitative Spatial Patterns

Figure 8 contrasts early exploration (broad spread with occasional long Lévy arrows) against late exploitation (agents clustered near g best with short, inward moves). The left panel annotates the exploratory sources (Equations (2), (3) and (12)); the right panel highlights exploitative pulls (Equations (5), (6) and (15)).
Figure 8. Early exploration (left) from OBL-LHS and Lévy drifts; late exploitation (right) from DE-to-pbest and spiral toward g best .

4.12.4. Diversity Contraction over Time

To connect space and time, we sketch how the RMS radius R ( t ) in Equation (24) typically evolves. Early on, R ( t ) is large due to Equations (2) and (3) and occasional Lévy bursts (Equation (12)); later, R ( t ) contracts because of Equations (5), (6) and (15). Figure 9 depicts this qualitative trend and highlights intermittent expansions caused by long Lévy steps.
Figure 9. Qualitative contraction of diversity R ( t ) (Equation (24)) with intermittent expansions due to Lévy steps.

4.12.5. Step-Length Mixture: Who Does What?

Finally, we emphasize that the eHGSO move magnitudes are a mixture: Lévy steps supply the heavy tail (Equations (10) and (11)), whereas DE/pbest and spiral produce many short steps (Equations (5) and (15)). Figure 10 shows a conceptual distribution of step lengths: the left peak corresponds to exploitation, the fat right tail to exploration.
Figure 10. Conceptual step-length mixture: many short steps from DE/pbest and spiral; a fat right tail from Lévy flights.

5. Complexity Analysis

Let c f denote the (average) cost of one objective/constraint evaluation, and let D be the problem dimension. We separate the analysis into (i) function evaluations (FEs), which dominate when f is expensive, and (ii) vector arithmetic, which dominates only when f is cheap.

5.1. Function Evaluations per Phase

5.1.1. Initialization

The OBL-LHS seeding step generates N Latin-hypercube points and their N opposite points; we evaluate all of them and keep the better of each pair. The corresponding FE budget is
FE init = 2 N ( LHS + OBL greedy pick ) .

5.1.2. Per-Iteration Cost with Coexisting Operators

At iteration t the algorithm may apply several operators to each agent:
  • A DE/current-to- p best /1 proposal during the seeding phase;
  • A base HGSO-style update;
  • A Lévy candidate;
  • A spiral candidate;
  • An optional local-search move around selected elites.
Let N DE ( t ) , N base ( t ) , N L é vy ( t ) , N Spiral ( t ) , and  N LS ( t ) be the numbers of new points evaluated by each operator at iteration t. The total FEs at iteration t are then
FE t = N DE ( t ) + N base ( t ) + N L é vy ( t ) + N Spiral ( t ) + N LS ( t ) .
In the current implementation, we have the following:
  • DE/current-to- p best /1 is used only in the early “seeding” iterations, so N DE ( t ) ∈ { 0 , N } and N DE ( t ) = 0 once t > seedFrac · M .
  • The base HGSO, Lévy, and spiral steps produce at most one additional candidate per agent, i.e., N base ( t ) ≤ N , N L é vy ( t ) ≤ N , and  N Spiral ( t ) ≤ N .
  • Local search is triggered adaptively for a small subset S t ⊆ { 1 , … , N } of promising agents (typically the current best few). If each local-search call uses at most L trial points, then N LS ( t ) ≤ L | S t | .
Consequently, even in a worst-case iteration where all global operators are active, the non-local-search part satisfies
N DE ( t ) + N base ( t ) + N L é vy ( t ) + N Spiral ( t ) ≤ c iter N ,
with a small constant c iter ≤ 4 (DE + base + Lévy + spiral). In practice, c iter is closer to 2–3 because DE is disabled outside the seeding phase, and the base step can be problem dependent or omitted.

5.1.3. Local Search Overhead

Let M be the total number of iterations and define
FE LS = t = 1 M N LS ( t ) ,
the total number of evaluations spent in local search. With the bound N LS ( t ) ≤ L | S t | and assuming at most S max agents are refined per iteration, we obtain
FE LS ≤ L S max M .
In our experiments S max is small (top 1–3 elites), and local search is not triggered at every iteration, so FE LS is typically one to two orders of magnitude smaller than the global-search FE budget; however, (27) provides a conservative upper bound that makes the dependence explicit.

5.1.4. Total FE Budget

Summing over iterations and using the bound on the global operators, we obtain
FE global = ∑ t = 1 M ( N DE ( t ) + N base ( t ) + N L é vy ( t ) + N Spiral ( t ) ) ≤ c iter M N ,
with 1 ≤ c iter ≤ 4 , depending on which operators are enabled. Adding initialization and local search yields
FE total ≤ 2 N + c iter M N + FE LS .
The earlier O ( 2 N + 4 M N ) bound corresponds to the conservative choice c iter = 4 and FE LS = 0 ; (29) makes the contributions of coexisting operators and local search explicit.
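Equation (29) translates directly into a small budget calculator. The defaults below use the conservative c_iter = 4 and no local search, matching the earlier O(2N + 4MN) figure:

```python
def fe_total_bound(N, M, c_iter=4, fe_ls=0):
    """Upper bound of Equation (29): 2N initialization evaluations,
    at most c_iter * N global-operator evaluations per iteration,
    plus the local-search budget fe_ls."""
    return 2 * N + c_iter * M * N + fe_ls
```

With the CEC2022 settings N = 50 and M = 1000, the conservative bound is 200,100 evaluations; lowering c_iter to the practical 2–3 range shrinks it proportionally.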

5.2. Archive (“File”) and Per-Agent Candidates

eHGSO maintains an external archive A of replaced solutions (see Equation (17)). Accessing this archive affects memory and arithmetic cost but does not introduce additional function evaluations because archived points are never re-evaluated. Insertions into A and sampling from it are O ( 1 ) per event, so the total bookkeeping work over M iterations is O ( M N ) in the worst case, with memory O ( N D ) (the archive size is capped at a constant multiple of N). This overhead is strictly dominated by the FE term whenever c f is non-negligible.
The evaluation of multiple candidates per agent (DE trial, base, Lévy, spiral, and occasional local-search proposals) is already captured in (26) via the terms N ( t ) . Greedy replacement ensures that, for each candidate, f is computed at most once, and the archive simply stores displaced incumbents without re-evaluation.

5.3. Runtime, Population Size, and Effective Iterations

Arithmetic work per candidate (mutation, crossover, Lévy step, spiral update, projection, and archive update) scales linearly with D, so the total arithmetic cost is O ( M N D ) , plus lower-order terms from archive management and constraint penalty updates. Combining this with (29), the overall runtime can be written as
T ( M , N , D ) = O ( ( 2 N + c iter M N + FE LS ) · c f + M N D ) .
When f is expensive (e.g., simulator-in-the-loop design), the FE term dominates; when f is cheap, the  O ( M N D ) vector arithmetic term becomes visible.
An important practical consideration is the interaction between population size N and the effective number of iterations under a fixed FE budget B. Using (29) and neglecting initialization for simplicity, a budgeted run satisfies approximately
B ≈ c iter M N + FE LS , i.e., M ≈ ( B − FE LS ) / ( c iter N ) .
Thus, larger populations increase the number of evaluations per iteration but reduce the number of generations that can be afforded for a fixed B. Our sensitivity results on CEC2022 (Sections 6.4–6.12) show that, for the budgets considered, larger N yields better reliability and faster convergence per FE despite the smaller M, because the additional per-iteration diversity more than compensates for the reduced iteration count. The complexity expressions above make this trade-off explicit while confirming that the added operators and archive contribute only modest constant factors to the dominant O ( FE total ) cost.
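The trade-off in Equation (31) can be made concrete: under a fixed budget B, doubling N halves the affordable number of generations M. A minimal helper (function name is ours):

```python
def effective_iterations(budget, N, c_iter=2, fe_ls=0):
    """Affordable generations under FE budget B, per Equation (31):
    M ~ (B - FE_LS) / (c_iter * N)."""
    return (budget - fe_ls) // (c_iter * N)
```

For example, with B = 100,000 evaluations and c_iter = 2, a population of 50 affords about 1000 generations while a population of 100 affords only about 500, which is the regime examined in the sensitivity study.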

6. Statistical Comparison Results on CEC2022

6.1. Experimental Setup

This subsection summarizes the configuration used in all numerical experiments, including the CEC2022 benchmark. We report the number of runs per problem, the base parameters of eHGSO, the stopping criteria, and how constraints are treated for eHGSO and the comparison algorithms.

6.1.1. CEC2022 Benchmark (F1–F12)

We consider the first twelve single-objective functions of the CEC2022 suite, denoted F 1 , , F 12 . In our implementation, each problem is treated as a D = 10 dimensional, bound-constrained task with a common search box
Ω = [ − 100 , 100 ] D ,
which corresponds to the settings
lb = − 100 , ub = 100 , dim = 10
in the code. The objective is evaluated via the official cec22_test_func routine:
f ( x ) = cec22_test_func ( x , Fun ) ,
where Fun  { 1 , , 12 } selects the function index.
All algorithms, including the proposed eHGSO (implemented as HGSO), use the same population size and iteration budget:
N = SearchAgents_no = 50 , Max_iter = 1000 .
The stopping criterion on CEC2022 is therefore a fixed maximum of 1000 iterations; eHGSO does not employ any additional early-stopping conditions. For each function F j and each algorithm, we perform
RUN = 5
independent runs with different random initial populations and save the best objective value reached at the end of the run. All results in the CEC2022 tables and plots are based on these 5 runs per function–algorithm pair.
For each function, CEC2022 provides a known target value f j ∗ . In the code, these are stored in the vector optimal_values, and we compute, for each algorithm and function, the following summary statistics over the 5 runs:
  • Mean: mean final best objective, f ¯ = ( 1 / RUN ) ∑ r f ( r ) ;
  • Std: standard deviation of the final best values;
  • ErrorMeasure: mean absolute error to the optimum, ( 1 / RUN ) ∑ r | f ( r ) − f j ∗ | ;
  • SEM: standard error of the mean, Std / √ RUN .
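The four statistics can be computed as follows; we assume the conventional SEM definition Std/√RUN and the sample (n − 1) variance, and the helper name is ours rather than the authors' script:

```python
import math

def run_statistics(final_bests, f_star):
    """Mean, Std, ErrorMeasure, and SEM over independent runs."""
    n = len(final_bests)
    mean = sum(final_bests) / n
    var = sum((v - mean) ** 2 for v in final_bests) / (n - 1)  # sample variance
    std = math.sqrt(var)
    error = sum(abs(v - f_star) for v in final_bests) / n      # mean |f - f*|
    sem = std / math.sqrt(n)                                   # standard error of the mean
    return {"Mean": mean, "Std": std, "ErrorMeasure": error, "SEM": sem}
```

Note that when every run finishes at or above the optimum, ErrorMeasure coincides with Mean − f*, which is a useful consistency check on reported tables.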

6.1.2. Base Parameters of eHGSO on CEC2022

eHGSO uses the following base settings on the CEC2022 benchmark:
  • Population size and budget: N = 50 , M a x _ i t e r = 1000 (as above).
  • DE/current-to-pbest/1 operator:
    - Initial differential weight F 0 = 0.5 ,
    - Initial crossover rate CR 0 = 0.9 ,
    - p-best fraction p = 0.2 ,
    with F and CR then adapted at each iteration according to (8)–(9).
  • Lévy flights: stability index β = 1.5 in (10)–(11) and amplitude α ( u ) = 0.01 ( 1 − u ) as in (12), with u = t / Max_iter .
  • Spiral contraction: decay rate b = 2 in (14).
  • HGSO core: group Henry constants K ( g ) are initialized to K ( g ) = 1 , the constants C ( g ) in (22) are set to C ( g ) = 1 , and the scalar β in (19) is set to β = 1 . The solubilities S i are updated via (23).
  • Archive: external archive size capped at 5 N as in (17).
All baseline algorithms in the CEC2022 comparison use the recommended parameter settings from their original papers.

6.2. Results Analysis over CEC2022

The Enhanced Henry Gas Solubility Optimization (eHGSO) method heads the list. It attains rank 1 on all twelve functions, with shared first places on F 1 and F 9 alongside RTH. Its average rank is exactly 1.00 , i.e., a clean sweep across the suite. The Wilcoxon test results are presented in Appendix A.

Per-Function Summary: eHGSO vs. Next Best

Table 1 catalogs, for each function, the eHGSO mean final value and the gap to the second-best method ( Δ = runner-up − eHGSO). Positive Δ indicates that eHGSO is better; Δ = 0 denotes a tie.
Several functions show very small margins (e.g., F 1 and F 3 ), indicating that multiple algorithms approach near-optimal solutions. Others (notably F 4 , F 6 , F 8 , and especially F 11 ) exhibit clearer gaps of three or more function units, highlighting a more substantial advantage for eHGSO. Standard-deviation statistics further indicate stable performance with relatively low run-to-run variability. Overall, eHGSO is a strong default choice on CEC2022; nevertheless, the fixed-budget setting and the prohibition on per-problem tuning mean that the best algorithm for a real application can still depend on budget and problem characteristics.
Table 1. Summary statistics across all 12 CEC2022 functions. The table reports the average of the mean objective value (Avg. Mean), average error to the optimum (Avg. Error), average standard deviation (Avg. Std), the average rank (Avg. Rank), and the number of functions on which each algorithm obtained the first rank (Wins). The algorithms are sorted by Avg. Rank in ascending order.
Optimizer | Avg. Mean | Avg. Error | Avg. Std | Avg. Rank | Wins
eHGSO | 1631.27 | 14.577 | 47.938 | 1.0 | 12
OMA | 1663.969 | 80.636 | 39.261 | 5.083 | 0
GTO | 1644.383 | 61.05 | 19.874 | 5.083 | 0
SSA | 1811.754 | 228.421 | 179.285 | 7.333 | 0
ALO | 1780.844 | 197.51 | 148.631 | 7.833 | 0
POA | 1726.501 | 143.168 | 183.854 | 9.0 | 0
RTH | 1668.728 | 85.395 | 44.016 | 9.667 | 2
DO | 1922.623 | 339.29 | 227.443 | 10.25 | 0
GWO | 2019.266 | 435.933 | 322.397 | 10.833 | 0
MFO | 2374.019 | 790.685 | 706.334 | 11.167 | 0
AVOA | 1851.26 | 267.926 | 203.981 | 11.667 | 0
AO | 2306.694 | 723.36 | 517.178 | 13.583 | 0
SCSO | 2042.265 | 458.932 | 325.785 | 15.75 | 0
SHIO | 2188.564 | 605.231 | 469.429 | 16.5 | 0
ZOA | 1864.963 | 281.629 | 299.097 | 16.833 | 0
SHO | 2120.521 | 537.188 | 326.229 | 17.0 | 0
SCA | 157046.596 | 155463.262 | 139494.515 | 18.25 | 0
GJO | 2308.488 | 725.154 | 329.895 | 18.583 | 0
HHO | 1857.227 | 273.893 | 185.953 | 18.583 | 0
TTHHO | 1972.742 | 389.409 | 226.726 | 20.75 | 0
DOA | 1786432.204 | 1784848.871 | 7980872.895 | 21.333 | 0
HGSO | 179001.936 | 177418.603 | 98919.271 | 22.667 | 0
GBO | 4565.656 | 2982.323 | 2388.761 | 22.833 | 0
CPO | 1895.83 | 312.497 | 259.799 | 23.5 | 0
HLOA | 1850.198 | 266.865 | 248.357 | 23.75 | 0
TSO | 2238.344 | 655.011 | 459.999 | 23.917 | 0
WOA | 2847.274 | 1263.941 | 689.128 | 24.333 | 0
FOX | 1899.645 | 316.312 | 230.859 | 24.583 | 0
Chimp | 115737.164 | 114153.83 | 69433.002 | 24.833 | 0
SMA | 3822.577 | 2239.243 | 1071.447 | 26.083 | 0
AOA | 2641.281 | 1057.948 | 533.9 | 28.0 | 0
BOA | 2373509.565 | 2371926.231 | 3974044.914 | 28.583 | 0
ROA | 18061.118 | 16477.784 | 32885.68 | 29.833 | 0
RSA | 5032994.001 | 5031410.668 | 2690883.52 | 32.083 | 0
FLO | 2304258.635 | 2302675.302 | 3705222.726 | 32.5 | 0
SPBO | 28815208.478 | 28813625.145 | 17791607.041 | 34.583 | 0
SSOA | 13571197.097 | 13569613.764 | 11699822.575 | 36.0 | 0
OHO | 67231284.059 | 67229700.725 | 50288881.886 | 36.667 | 0
Table 2 summarizes, for each CEC2022 test function, the global rank of eHGSO and the rank of the best competing optimizer among all 38 algorithms. As can be seen, eHGSO attains rank 1 on all twelve functions F1–F12. For ten functions, no other optimizer reaches rank 1, which means eHGSO is strictly superior to all competitors in those cases. Only RTH manages to tie with eHGSO on F1 and F9 (both having rank 1), indicating that although a few algorithms can occasionally match eHGSO on specific problems, none of them can do so consistently across the entire benchmark suite.
Table 2. Summary of global ranks of eHGSO and best-competing optimizers on CEC2022 F1–F12.
Function | eHGSO Rank | Best Competitor(s) | Competitor Rank
F1 | 1 | RTH | 1
F2 | 1 | ALO | 2
F3 | 1 | OMA | 2
F4 | 1 | ZOA | 2
F5 | 1 | OMA | 2
F6 | 1 | RTH | 2
F7 | 1 | OMA | 2
F8 | 1 | GTO | 2
F9 | 1 | RTH | 1
F10 | 1 | OMA | 2
F11 | 1 | GTO | 2
F12 | 1 | MFO | 2
From Table 2, it is also clear which optimizers are most competitive against eHGSO. OMA appears as the best competitor in four functions (F3, F5, F7, and F10), RTH in three functions (F1, F6, and F9), GTO in two functions (F8 and F11), and ALO, ZOA, and MFO each lead once as the best runner-up. This pattern shows that the challenge to eHGSO is spread among several algorithms rather than dominated by a single strong rival, while eHGSO remains consistently at the top on all functions.
To obtain a global view over the entire CEC2022 test set, Table 3 reports the average rank, median rank, and the number of wins (rank = 1), top-3, and top-5 finishes for the ten best algorithms according to their average rank. eHGSO clearly dominates with an average rank of 1.00 and a median rank of 1.0, achieving rank 1 on all functions (12 wins out of 12) and appearing in the top-3 and top-5 on every single test function. The next two algorithms, GTO and OMA, have an average rank of about 5.08, which means that on average they are roughly four rank positions worse than eHGSO. Algorithms such as SSA, ALO, POA, RTH, DO, GWO, and MFO form the next performance tier, with average ranks between roughly 7 and 11, and far fewer top-3 or top-5 finishes.
Table 3. Overall ranking statistics of the 38 optimizers on CEC2022 F1–F12.
Optimizer | Avg. Rank | Median Rank | Wins (Rank = 1) | Top-3 Counts | Top-5 Counts
eHGSO | 1.00 | 1.0 | 12 | 12 | 12
GTO | 5.08 | 5.0 | 0 | 5 | 7
OMA | 5.08 | 4.0 | 0 | 5 | 9
SSA | 7.33 | 6.5 | 0 | 2 | 4
ALO | 7.83 | 6.5 | 0 | 2 | 4
POA | 9.00 | 8.5 | 0 | 1 | 3
RTH | 9.67 | 9.0 | 2 | 4 | 4
DO | 10.25 | 9.0 | 0 | 1 | 3
GWO | 10.83 | 11.0 | 0 | 2 | 4
MFO | 11.17 | 9.0 | 0 | 1 | 2
These global statistics confirm that eHGSO is not only competitive on a subset of functions but is systematically superior across the whole benchmark. While GTO and OMA are clearly strong optimizers and often appear in the top positions, neither of them wins any function when all 38 algorithms are considered, whereas eHGSO wins all functions. RTH is noteworthy because it ties eHGSO on F1 and F9 and accumulates two wins in total, but its average rank is still almost ten, which reveals more irregular performance on the remaining functions.
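The aggregation behind these global statistics reduces to simple counting over a vector of per-function ranks. The helper below is an illustrative reimplementation under that assumption, not the authors' analysis script:

```python
def ranking_summary(ranks):
    """Average rank, median rank, wins, and top-3/top-5 counts
    from one optimizer's per-function ranks."""
    s = sorted(ranks)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return {
        "AvgRank": sum(ranks) / n,
        "MedianRank": median,
        "Wins": sum(r == 1 for r in ranks),
        "Top3": sum(r <= 3 for r in ranks),
        "Top5": sum(r <= 5 for r in ranks),
    }
```

Feeding it a vector of twelve 1s reproduces the eHGSO row of Table 3 (average rank 1.00, 12 wins, 12 top-3 and top-5 finishes).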
Since eHGSO is an enhanced variant of HGSO, it is particularly important to quantify the improvement over the base algorithm and over widely used baselines. Table 4 presents a focused comparison between eHGSO and several popular optimizers: HGSO, HHO, GWO, WOA, and RSA. As shown, eHGSO achieves an average rank of 1.00, whereas the original HGSO has an average rank of 22.67 and never appears in the top-3 on any function. Similarly, HHO, WOA, and RSA obtain average ranks of about 18.58, 24.33, and 32.08, respectively, and do not win any function. Among these baselines, GWO is the strongest, but its average rank (10.83) and a small number of top-3 finishes still place it far behind eHGSO.
Table 4. Overall rank comparison between eHGSO and several widely used optimizers on CEC2022 F1–F12.
Optimizer | Avg. Rank | Median Rank | Wins (Rank = 1) | Top-3 Counts | Top-5 Counts
eHGSO | 1.00 | 1.0 | 12 | 12 | 12
GWO | 10.83 | 11.0 | 0 | 2 | 4
HHO | 18.58 | 19.0 | 0 | 0 | 0
HGSO | 22.67 | 24.5 | 0 | 0 | 0
WOA | 24.33 | 25.0 | 0 | 0 | 0
RSA | 32.08 | 33.0 | 0 | 0 | 0
The raw mean values in the original CEC2022 tables support these ranking-based conclusions. On simple functions such as F1 and F2, eHGSO attains values very close to the optimum (e.g., mean 300 on F1 and 406.97 on F2), while HGSO remains far from the optimum (e.g., 4228.87 on F1 and 492.98 on F2). On more challenging functions, the gap becomes even more pronounced: for example, on F6 the mean objective of eHGSO is around 1.83 × 10^3, whereas HGSO produces values on the order of 10^6, and several other algorithms in the benchmark suffer from extremely large errors on this function. Across F5–F12 (multimodal, hybrid, and composition functions), eHGSO continues to achieve rank 1, while traditional methods such as HHO, GWO, WOA, and RSA fluctuate more strongly in their performance.

6.3. Qualitative Search Dynamics (F1–F6)

As can be seen in Figure 11, across F1–F6 the algorithm follows a common pattern: a broad initial exploration that rapidly contracts towards a small region, followed by a phase of fine-grained exploitation. On the smoother problems (e.g., sphere- or ellipsoid-like landscapes), the search history exhibits a star-shaped contraction toward the center, and the convergence curves drop by several orders of magnitude within the first few tens of iterations before settling into a slower refinement regime. Step-like improvements are often visible in the best-so-far traces, which is typical of population-based heuristics as promising regions are discovered and exploited. On functions with plateaus or pronounced conditioning (e.g., F2 and F6), the average fitness decreases more gradually and the trajectory plots reveal brief oscillations before stabilization, consistent with the need for additional sampling to locate the attraction basin. Taken together, these panels highlight stable early contraction and consistent late-stage refinement under this budget, with no evidence of erratic divergence.
Figure 11. Effect of population size on search dynamics for F1–F6. Overlays compare populations of 20, 50, and 100 for search history, a representative trajectory, average fitness, and best-so-far. Larger populations contract faster and more smoothly while reaching equal or lower final errors.

6.4. Sensitivity of the Dynamics to Population Size (F1–F6)

To examine how the population size shapes the search dynamics, Figure 11 overlays the trajectories and objective traces for populations of 20, 50, and 100 on F1–F6. On the simpler landscapes the three settings converge to similar final accuracies, but the larger populations progress faster and with fewer oscillations. The average-fitness curves confirm that 100 individuals achieve the most rapid contraction of the swarm and the lowest final central tendency, while the 20-individual setting tends to lag and displays larger transients before stabilization. The best-so-far curves differ primarily during the early iterations: larger populations achieve early breakthroughs more frequently and then settle into comparable late-stage slopes, suggesting that diversity mainly accelerates the discovery of the global basin rather than the local refinement once there. These observations are consistent with the distributional and ECDF analyses and point to a favorable time-to-quality trade-off for larger populations under the same iteration budget.

6.5. Qualitative Search Dynamics (F7–F12)

The harder problems (F7–F12) in Figure 12 reveal a more demanding exploration–exploitation balance. The search histories are more diffuse in the early iterations and display multiple clusters, indicating that the population explores several basins in parallel. The best-so-far curves typically exhibit a sequence of long plateaus interleaved with sharp drops, reflecting the traversal of local optima and occasional discovery of markedly better basins. Nonetheless, the average-fitness plots maintain a consistent downward trend, underscoring a reliable reduction in the population’s central tendency even when the elite solution improves in bursts. For the composition-type functions at the end of the suite, improvements occur over many orders of magnitude but still converge to stable levels within the allotted budget. Overall, these qualitative behaviors suggest that eHGSO maintains diversity long enough to escape shallow traps while preserving a steady pressure toward exploitation once promising regions are identified.

6.6. Sensitivity of the Dynamics to Population Size (F7–F12)

Figure 12 extends the overlay analysis to F7–F12, where the benefits of larger populations are even more pronounced. The search histories show that 100 individuals map the landscape more broadly before contracting, leading to the earlier identification of high-quality basins. In the trajectory and convergence plots, the 20-individual configuration exhibits prolonged plateaus and occasional regressions, indicative of premature convergence or insufficient coverage of the state space; by contrast, the 100-individual configuration typically secures the largest early drops and the lowest final error levels. Differences between 50 and 100 individuals are smaller but visible on several functions, where the additional diversity of the larger population reduces the number and length of stagnation phases. These overlays therefore reinforce a practical recommendation: for challenging, highly multimodal CEC2022 functions, a population of 100 offers a robust default, with 50 being a viable compromise when computational resources are constrained and 20 mainly suitable for benign cases.
Figure 12. Effect of population size on search dynamics for F7–F12. Overlays compare populations of 20, 50, and 100 across the same four diagnostic views. Larger populations more reliably escape local traps and attain lower final errors.

6.7. Convergence Behavior Across Iterations

The convergence profiles in Figure 13 provide a temporal view of the optimizer’s progress. Across most functions, the median trajectory displays a rapid drop in the early iterations followed by a gradual refinement phase, a hallmark of effective exploration followed by exploitation. The shaded bands, which reflect the dispersion among runs, are typically widest at the beginning and narrow substantially as the search proceeds, indicating that eHGSO tends to “lock in” competitive solutions early and then refine them consistently. On several functions the curves flatten after the mid-iterations, suggesting that the search has reached a structurally difficult basin where improvements are incremental; in contrast, a subset of functions exhibits steady, incremental reductions throughout the budget, indicating persistent room for improvement in regions with many local optima. Overall, the profiles indicate stable convergence with limited late-stage divergence, and they corroborate the robustness trends seen in the distributional analysis.
Figure 13. Convergence trajectories (median with variability bands) of the best objective value per iteration for all CEC2022 functions. Axes are logarithmic in the objective; lower is better.

6.8. Aggregate Performance via ECDF

To compare the overall quality of the final solutions across the benchmark suite, we report the empirical cumulative distribution function (ECDF) of the final objective values in Figure 14. For any fixed error threshold, the curve corresponding to the larger population lies above the others, which means that eHGSO with more individuals reaches a desirable accuracy on a larger fraction of the functions. The vertical separation between the curves is most pronounced in the low-to-moderate error region, indicating that larger populations do not merely avoid failures but also convert many borderline cases into convincingly accurate solutions. This aggregate perspective complements the boxplot analysis: both highlight that increasing the population from 20 to 50 yields a strong gain in reliability, and moving to 100 provides a further improvement and the most favorable trade-off between accuracy and robustness.
Figure 14. ECDF of the final best objective values over the CEC2022 suite for three population sizes. Curves that are higher indicate that a larger fraction of problems reach a given accuracy.
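The ECDF value at a given error threshold is simply the fraction of final values at or below that threshold; a minimal helper (name is ours):

```python
def ecdf(values, threshold):
    """Fraction of final values at or below the error threshold."""
    return sum(v <= threshold for v in values) / len(values)
```

Evaluating this function on a grid of thresholds for each population size yields exactly the curves compared in Figure 14, where a higher curve at a fixed threshold means more problems reach that accuracy.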

6.9. Sensitivity to Population Size Across Functions

Figure 15 explores how the final accuracy varies with the function index. While the absolute difficulty differs substantially from one problem to another (note the log scale), a consistent pattern emerges: in the majority of cases the final objective decreases as the population size increases from 20 to 50 and to 100. The improvement is sometimes marginal on the easier instances that are already nearly solved with small populations, but it becomes dramatic on several of the harder problems where the error drops by orders of magnitude when using larger populations. A few functions show near-equal outcomes for medium and large populations, suggesting diminishing returns once a sufficient level of diversity is reached under the fixed computational budget. Overall, these results indicate that the algorithm scales favorably with population size, with the largest benefits appearing on complex landscapes where additional exploratory capacity mitigates premature convergence.
Figure 15. Final best objective value per function index for three population sizes. Lines are drawn on a logarithmic scale to emphasize relative improvements; lower is better.

6.10. Effect of Population Size by Function Groups (F1–F6)

The first group of functions (F1–F6), shown in Figure 16, exhibits a clear and largely monotonic dependence on the population size. For these problems, larger populations lead to consistently lower final errors, with the step from 20 to 50 providing a notable improvement and the step from 50 to 100 further consolidating accuracy. The narrow spread of the markers at larger populations suggests that the search dynamics become more stable, with reduced sensitivity to initial seeds. These trends imply that on landscapes that are either convex-like or only mildly multimodal, the additional sampling diversity in E-HGSO primarily accelerates the discovery of the global basin and reduces the chance of getting trapped in shallow local optima.
Figure 16. Population-size sensitivity for the first six CEC2022 functions (F1–F6). Each panel reports the final best objective value achieved by E-HGSO at three population sizes (log scale).

6.11. Effect of Population Size by Function Groups (F7–F12)

The second group (F7–F12) in Figure 17 contains the more irregular and composition-type landscapes where the optimizer faces a stronger exploration–exploitation dilemma. The general picture remains favorable to larger populations: in most cases, increasing the number of individuals yields lower final errors and reduces variability. At the same time, a few instances display small non-monotonicities between 50 and 100 individuals, reflecting subtle budget trade-offs where additional exploration has to be balanced against the number of iterations available for refinement. Such cases are exceptions rather than the rule, and the performance differences are typically modest. Taken together with the ECDF and the across-suite boxplots, these panels suggest that a population of 100 offers a robust default choice for E-HGSO on the CEC2022 suite, while smaller populations may be considered when computational resources are constrained and the problem is known to be relatively benign.
Figure 17. Population-size sensitivity for the last six CEC2022 functions (F7–F12). Each panel reports the final best objective value achieved by E-HGSO at three population sizes (log scale).

6.12. Performance Profile Across Population Sizes

Figure 18 reports the Dolan–Moré performance profile of the final objective values obtained by E-HGSO when using populations of 20, 50, and 100 individuals. For each function, the performance ratio τ is computed with respect to the best final value achieved by any of the three population sizes; hence, curves that lie higher indicate configurations that are closer to the per-function best more frequently. At the strictest tolerance (τ = 1), the configuration with 50 individuals achieves the largest share of best-of-three outcomes, reflecting that it can occasionally be the most aggressive finisher under the fixed budget. However, as soon as a small relaxation is allowed (1.1 ≤ τ ≤ 2), the 100-individual configuration overtakes and dominates: it reaches over 90% of the problems at τ = 2 and attains full coverage with moderately small ratios, indicating near-best performance on virtually all functions. In contrast, the curve for 20 individuals rises slowly and only approaches unity for very large ratios, signaling that this setting is sometimes orders of magnitude less accurate than the per-function best. Overall, the profile corroborates the earlier distributional and ECDF analyses: larger populations deliver the most robust outcomes, while the medium population can win a subset of problems but with a longer tail of difficult cases than the largest setting.
Figure 18. Dolan–Moré performance profile of the final best objective values over the CEC2022 suite. The x-axis reports the performance ratio τ relative to the per-function best across population sizes; higher curves are better.
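A Dolan–Moré profile is easy to rebuild from a matrix of final values. The sketch below uses toy numbers (not the paper's results) and assumes the values have been shifted to be strictly positive, e.g., best-minus-optimum errors plus a small epsilon.

```python
import numpy as np

def performance_profile(values, taus):
    """Dolan-More profile from a (n_problems, n_solvers) array of final
    values; values must be positive and lower is better."""
    values = np.asarray(values, dtype=float)
    best = values.min(axis=1, keepdims=True)   # per-problem best value
    ratios = values / best                     # r[p, s] >= 1 by construction
    # rho[s, k] = fraction of problems solver s solves within ratio taus[k]
    return np.array([[np.mean(ratios[:, s] <= t) for t in taus]
                     for s in range(values.shape[1])])

# Toy data (not the paper's): 4 problems x 3 population sizes (20, 50, 100).
vals = np.array([[9.0, 1.0, 1.1],
                 [5.0, 2.0, 1.0],
                 [8.0, 1.0, 1.2],
                 [6.0, 3.0, 1.0]])
taus = np.array([1.0, 1.5, 2.0, 10.0])
rho = performance_profile(vals, taus)
# At tau = 1, each solver's value is the share of problems it wins outright.
```

Plotting each row of `rho` against `taus` (log scale on the x-axis) yields the profile; a curve that reaches 1.0 at a small τ is near-best on virtually every problem, as observed for the 100-individual configuration.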

6.13. Operator Modules, Ablation Switches, and Parameter Choices

In the implementation we expose each enhancement as a separate module that can be activated or deactivated via Boolean flags:
  • Core HGSO: Henry constant/pressure update and group-based position drift (Equations (19)–(23)) form the baseline search engine. This module is always active.
  • Initialization (OBL-LHS): The opposition-based Latin hypercube seeding (Equations (2)–(4)) can be turned on or off. When disabled, the algorithm falls back to uniform random initialization.
  • DE/current-to-pbest/1: The JADE-style seeding step (Equations (5)–(9)) is controlled by a flag. When active, it is used only during the first seedFrac · MaxIter iterations; when deactivated, the method reduces to HGSO+Lévy+spiral.
  • Lévy flights and spiral contraction: The heavy-tailed Lévy drift and WOA-like spiral updates (Equations (10)–(15)) each have their own on/off switches. The former mainly contributes exploration; the latter reinforces late-stage exploitation.
  • Elite archive: The external archive A in Equation (17) stores replaced incumbents. A flag enables or disables its use in forming differential perturbations; when disabled, perturbations depend only on the live population.
  • Local search: A lightweight pattern-search module is triggered adaptively when the relative improvement in the global best over a sliding window of w iterations drops below a threshold ε_LS. At most N_LS elites are refined per call, and each local-search invocation is capped at L_max function evaluations. This module can be turned off entirely to isolate its effect.
These switches are used in an internal ablation study to quantify the contribution of each module. Concretely, we compare the following configurations on the CEC2022 suite and on the four engineering designs: (i) core HGSO only; (ii) HGSO + OBL-LHS; (iii) HGSO + OBL-LHS + DE/current-to-pbest; (iv) HGSO + OBL-LHS + DE + Lévy + spiral (no local search, no archive); and (v) full eHGSO. The reported “eHGSO” results correspond to configuration (v); the performance gaps between (i)–(iv) and (v) constitute the ablation evidence referenced in the text.
All hyperparameters are fixed across problems. The Henry-law parameters (α, β, l1, l2, l3, M1, M2) follow the original HGSO settings, except that β is set to 1 to keep the gain γ_j in Equation (19) numerically stable on ill-conditioned test functions. The spiral-shrink parameter in Equation (14) is set to b = 1.5; smaller values slow exploitation, while larger values can cause premature convergence. The seed fraction is set to seedFrac = 0.2, so that DE/current-to-pbest occupies the first 20% of the iterations as a dedicated seeding phase. The archive size is capped at 5N (as in Equation (17)); larger caps do not change results but increase memory usage. For the local search, we set w = 20, ε_LS = 10⁻⁴, N_LS = 3, and L_max = 10, which limits its overhead to at most 30 additional evaluations per iteration and keeps its contribution to the total FE budget well below 10% in practice.
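For illustration, the module switches and the stagnation-based local-search trigger described above could be organized as a small configuration object plus a window test. The class and function names below are hypothetical, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class EHGSOSwitches:
    """Boolean flags for the optional modules (core HGSO is always active)."""
    use_obl_lhs: bool = True       # opposition-based LHS initialization
    use_de_seed: bool = True       # DE/current-to-pbest/1 seeding phase
    use_levy: bool = True          # heavy-tailed Levy drift
    use_spiral: bool = True        # WOA-like spiral contraction
    use_archive: bool = True      # elite archive for differential perturbations
    use_local_search: bool = True  # adaptive pattern-search refinement

def local_search_due(best_history, w=20, eps_ls=1e-4):
    """True when the relative improvement of the global best over the
    last w iterations drops below eps_ls (stagnation detected)."""
    if len(best_history) <= w:
        return False                       # not enough history yet
    old, new = best_history[-w - 1], best_history[-1]
    denom = max(abs(old), 1e-12)           # guard against division by zero
    return (old - new) / denom < eps_ls

# Example: a run that drops quickly and then stalls triggers the local search.
history = [100.0, 10.0, 5.0] + [4.9999] * 25
assert local_search_due(history, w=20, eps_ls=1e-4)
```

With `N_LS = 3` elites per call and `L_max = 10` evaluations each, one positive trigger costs at most 30 extra evaluations, matching the overhead bound stated above.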

7. eHGSO Ablation Study

This section presents a quantitative ablation study of the Enhanced Henry Gas Solubility Optimization (eHGSO) algorithm on a subset of the CEC2022 benchmark functions. We evaluate a suite of variants designed to isolate the contribution of individual modules, including the differential evolution (DE) phase, Lévy flights, spiral drift, the archive/elite mechanism, adaptation of the control parameters F and CR, and opposition-based Latin hypercube sampling (OBL-LHS). Additional variants modify the default hyperparameters seedFrac, pbest, F0, and CR0.
Each variant was executed twice on each of three benchmark functions in 10 dimensions, using a population size of 10 and 50 iterations per run. Random seeds were set deterministically to ensure reproducibility and fair comparisons across settings.
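One simple deterministic-seeding scheme consistent with this setup derives a reproducible random stream per (function, run) pair, so that every ablation variant starts from the same random state. The exact scheme below is illustrative, not necessarily the authors' implementation.

```python
import numpy as np

def run_rng(func_id: int, run_id: int, base: int = 20220) -> np.random.Generator:
    """One reproducible stream per (function, run). Sharing the stream across
    ablation variants gives every variant the same initial population, which
    keeps comparisons fair. (Illustrative scheme with an assumed base seed.)"""
    return np.random.default_rng(base + 1000 * func_id + run_id)

# Same (function, run) pair -> identical draws on every invocation.
a = run_rng(3, 0).uniform(size=4)
b = run_rng(3, 0).uniform(size=4)
assert np.allclose(a, b)
```

Because `run_id` stays far below 1000, distinct (function, run) pairs map to distinct seeds, while re-running any experiment reproduces it bit-for-bit.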
Table 5 summarizes performance by reporting the mean and standard deviation of the final best objective values aggregated over all runs and functions for each variant (lower is better).
Table 5. Mean and standard deviation of final best values for each eHGSO variant on the CEC2022 benchmark subset.
Setting | Mean Final Best | Std. Dev.
baseline | 6.65 × 10³ | 1.05 × 10⁴
high_pbest | 7.39 × 10³ | 1.13 × 10⁴
low_CR0 | 5.66 × 10³ | 8.58 × 10³
low_F0 | 4.37 × 10³ | 6.85 × 10³
no_DE | 5.22 × 10³ | 6.72 × 10³
no_Levy | 6.58 × 10³ | 9.28 × 10³
no_OBL | 4.43 × 10³ | 5.79 × 10³
no_adapt | 4.20 × 10³ | 5.58 × 10³
no_archive | 4.33 × 10³ | 6.67 × 10³
no_spiral | 8.17 × 10³ | 1.15 × 10⁴
uniform_init | 6.28 × 10³ | 9.10 × 10³
Figure 19 illustrates the average convergence trajectories of all variants, averaged over runs and functions. The y-axis is shown on a logarithmic scale to better expose differences during both early exploration and late-stage refinement.
The ablation results clarify the role of each eHGSO component across heterogeneous CEC2022 landscapes. Disabling spiral drift or Lévy flights generally degraded performance, highlighting their importance in balancing exploration and exploitation. Removing the DE phase (i.e., setting seedFrac to zero) or switching to uniform initialization occasionally improved outcomes on simpler (more unimodal) cases, but more often harmed performance on multimodal instances. Increasing pbest reduced selective pressure and yielded a mild performance decline. Modifying F0 and CR0 produced mixed effects: a smaller F0 often accelerated convergence, whereas a smaller CR0 tended to impede progress. The impact of OBL-LHS was problem dependent; in some cases, disabling it improved results, suggesting that the extra diversity it introduces can be unnecessary for certain functions. Finally, removing the archive or disabling (F, CR) adaptation led to comparatively modest changes, indicating incremental rather than dominant benefits. Taken together, these findings support the robustness of the default eHGSO configuration while also indicating which modules are most influential and which may be tuned or simplified for specific problem classes.
Figure 19. Average convergence curves for the eHGSO ablation variants on the CEC2022 benchmark subset.

8. Application of eHGSO Optimizer in Solving Engineering Design Problems

8.1. Cantilever Stepped Beam

The stepped cantilever beam design problem seeks to minimize the mass of a cantilever composed of multiple rectangular segments while satisfying stress and deflection constraints. Each segment has a prescribed length and an optimized width x_i; the objective function aggregates the segment weights subject to a set of 10 inequality constraints on stresses, displacements, and geometric relationships [53]. The formulation summarized in (31)–(41) serves as the basis for the optimization.
Figure 20 illustrates the stepped cantilever beam composed of three groups of segments with varying cross-sectional dimensions. The left end is fixed, while the external load P is applied at the free end, creating bending and shear effects along the structure. The design variables x₁–x₁₀ correspond to segment dimensions that must balance material usage with stress and deflection constraints during optimization.
Figure 20. Cantilever Stepped Beam.
Table 6 lists the mean objective values, standard deviations, and best objective values and rankings obtained by seventeen optimization algorithms. The eHGSO algorithm achieves the lowest mean objective value of 63574.359 with a standard deviation of only 206.248, yielding a best objective of 63428.519. The next-best algorithm (POA) produces a mean of 64110.545, which is more than 536 units higher than eHGSO, and a comparable standard deviation (105.417). Although several algorithms (e.g., ChOA and ZOA) approach the performance of POA, their larger standard deviations (954.266 and 1405.326, respectively) indicate less consistent convergence. Overall, eHGSO outperforms the other optimizers by delivering the lowest objective value and high consistency as reflected in its first-place ranking.
Table 6. Optimization results for the stepped cantilever beam. Each row reports the mean objective value, standard deviation, and best objective value and ranking achieved by a given algorithm.
Algorithm | Mean Objective | Std. Dev. | Best Objective | Rank
eHGSO | 63574.359 | 206.248 | 63428.519 | 1
POA | 64110.545 | 105.417 | 64036.004 | 2
ChOA | 64722.451 | 954.266 | 64047.684 | 3
ZOA | 65964.893 | 1405.326 | 64971.178 | 4
MPA | 71321.498 | 1107.396 | 70538.451 | 5
MFO | 71935.209 | 11264.736 | 63969.837 | 6
TTHHO | 72509.999 | 794.495 | 71948.206 | 7
ROA | 78652.325 | 2471.802 | 76904.497 | 8
FLO | 80258.735 | 2982.472 | 78149.809 | 9
SHO | 82051.012 | 16238.885 | 70568.386 | 10
SMA | 86059.987 | 4788.744 | 82673.834 | 11
SCA | 86715.612 | 461.139 | 86389.538 | 12
WOA | 87037.593 | 6789.280 | 82236.847 | 13
TSO | 92027.411 | 6245.260 | 87611.346 | 14
SSOA | 103403.550 | 21230.483 | 88391.332 | 15
RSA | 121050.011 | 18469.175 | 107990.332 | 16
BOA | 385248.270 | 393207.474 | 107208.599 | 17

8.2. Spring Design

The tension/compression spring design problem aims to minimize the material volume of a helical spring subject to constraints on shear stress, surge frequency, and geometric limits. The decision variables are the wire diameter d, mean coil diameter D, and number of active coils N. The objective function (N + 2) d² D measures the spring volume, while the constraints ensure that the spring does not yield, buckle, or exceed geometric limits. The formulation is described in (42)–(46).
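Under the standard textbook form of this problem (the formulation given in Section 10.2), the objective and feasibility test can be evaluated directly. The design point used below is an arbitrary comfortably feasible design chosen for illustration, not a reported optimum.

```python
import numpy as np

def spring_objective(x):
    """Spring volume (N + 2) d^2 D; lower is better."""
    d, D, N = x
    return (N + 2.0) * d**2 * D

def spring_constraints(x):
    """Standard tension/compression-spring constraints; g_i(x) <= 0 is feasible."""
    d, D, N = x
    return np.array([
        1.0 - (D**3 * N) / (71785.0 * d**4),                     # shear stress
        (4.0 * D**2 - d * D) / (12566.0 * (D * d**3 - d**4))
            + 1.0 / (5108.0 * d**2) - 1.0,                       # stress limit
        1.0 - 140.45 * d / (D**2 * N),                           # surge frequency
        (d + D) / 1.5 - 1.0,                                     # geometry
    ])

def is_feasible(x):
    return bool(np.all(spring_constraints(x) <= 0.0))

# An arbitrary feasible design: d = 0.06, D = 0.5, N = 10.
x = np.array([0.06, 0.5, 10.0])
assert is_feasible(x)
```

A constraint-handling rule such as the feasibility-priority mechanism described earlier would compare candidates first on `is_feasible`, then on `spring_constraints` violation magnitude, and only then on `spring_objective`.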
Figure 21 illustrates the geometric layout of the spring design problem, including the zig-zag coil profile, the overall vertical displacement span D, and the applied load direction P. The spacing parameter d highlights a local geometric constraint influencing stiffness and manufacturing feasibility. These annotated dimensions guide the optimization of spring performance under loading and deformation requirements.
Figure 21. Schematic of the spring design problem showing the zig-zag spring, overall vertical span D, horizontal direction cue P, and a local spacing indicator d.
In the spring design results (Table 7), the eHGSO algorithm once again delivers the best performance. It achieves a mean objective value of 180805.969 with an exceptionally small standard deviation (0.204), yielding a best objective of 180805.825. The difference between eHGSO and the second-ranked POA algorithm is only 0.53, yet POA exhibits a larger standard deviation (0.960). Many other algorithms, such as WOA and MFO, have substantially larger standard deviations (864.784 and 2402.460, respectively) indicating unstable convergence and higher objective values. These results highlight the superior consistency and performance of eHGSO for the spring design problem.
Table 7. Optimization results for the spring design problem.
Algorithm | Mean Objective | Std. Dev. | Best Objective | Rank
eHGSO | 180805.969 | 0.204 | 180805.825 | 1
POA | 180806.495 | 0.960 | 180805.817 | 2
ChOA | 181056.866 | 157.649 | 180945.392 | 3
WOA | 182102.674 | 864.784 | 181491.179 | 4
MFO | 182563.679 | 2402.460 | 180864.884 | 5
ZOA | 183546.792 | 104.832 | 183472.664 | 6
TSO | 198340.893 | 24798.233 | 180805.894 | 7
SCA | 198388.001 | 1129.085 | 197589.618 | 8
TTHHO | 211572.892 | 6586.397 | 206915.606 | 9
MPA | 236304.490 | 54.056 | 236266.267 | 10
SHO | 276016.283 | 10279.389 | 268747.657 | 11
ROA | 333832.573 | 158931.092 | 221451.320 | 12
SMA | 428569.524 | 203612.392 | 284593.821 | 13
RSA | 431877.595 | 229584.918 | 269536.543 | 14
BOA | 439555.868 | 15533.179 | 428572.252 | 15
FLO | 505207.084 | 30933.176 | 483334.026 | 16
SSOA | 583727.154 | 132897.315 | 489754.562 | 17

8.3. Welded Beam

The welded beam design problem focuses on sizing a welded joint so that material cost is minimized while satisfying shear stress, normal stress, buckling and geometry constraints. The decision variables are the weld thickness h, weld length l, beam thickness t and height b, and the objective function combines material and welding costs. The governing equations are given in (47)–(54).
Figure 22 shows a welded connection where a horizontal member is joined to a vertical support through a defined weld region. The applied load P acts near the free end, producing bending and shear stresses transmitted through the weld. The design variables x₁, x₂, x₃, x₄ and length L specify the member geometry and weld dimensions that govern structural capacity and optimization performance.
Figure 22. Welded connection with design parameters and load application.
As shown in Table 8, the eHGSO algorithm attains the lowest mean objective value of 1.725 with an extremely small standard deviation (0.000). The next-best algorithm (POA) produces a mean of 1.731, approximately 0.0059 higher, and exhibits a higher standard deviation (0.002). The remaining algorithms yield mean objective values ranging between 1.74 and 188975, with standard deviations that can be orders of magnitude larger (e.g., SSOA shows a standard deviation of 108021.763). These results underscore the remarkable performance and consistency of eHGSO for the welded beam design.
Table 8. Optimization results for the welded beam design problem.
Algorithm | Mean Objective | Std. Dev. | Best Objective | Rank
eHGSO | 1.725 | 0.000 | 1.725 | 1
POA | 1.731 | 0.002 | 1.730 | 2
ChOA | 1.741 | 0.006 | 1.736 | 3
MFO | 1.744 | 0.019 | 1.730 | 4
TTHHO | 2.186 | 0.229 | 2.024 | 5
ZOA | 2.313 | 0.586 | 1.898 | 6
SCA | 2.313 | 0.343 | 2.070 | 7
TSO | 2.427 | 0.815 | 1.850 | 8
RSA | 2.964 | 0.010 | 2.957 | 9
SHO | 3.031 | 0.785 | 2.477 | 10
SMA | 3.088 | 0.013 | 3.079 | 11
FLO | 3.228 | 0.453 | 2.908 | 12
MPA | 3.332 | 0.010 | 3.325 | 13
BOA | 3.490 | 0.928 | 2.835 | 14
WOA | 4.818 | 1.383 | 3.840 | 15
ROA | 5.408 | 4.238 | 2.411 | 16
SSOA | 188975.089 | 108021.763 | 112592.168 | 17

8.4. Three-Bar Truss

The three-bar truss problem involves determining the cross-sectional areas A₁ (= A₃) and A₂ of a simple triangular truss to minimize its weight (proportional to (2√2 A₁ + A₂) H) while ensuring that member stresses do not exceed allowable limits. Only two design variables (A₁ and A₂) are optimized; the constraints in (56)–(58) ensure that the truss remains within allowable stress limits when subject to a static load P. This problem is widely used as a benchmark for structural optimization algorithms.
Figure 23 illustrates the classic three-bar truss configuration, consisting of two symmetric diagonal members of cross-sectional area A₁ = A₃ and a central vertical member of area A₂. The upper nodes are supported, and a vertical load P is applied at the lower joint, inducing axial forces in all members. The geometric parameters L (horizontal span) and D (vertical drop) define the truss proportions and directly influence stress distribution and weight minimization in the optimization problem.
Figure 23. Illustration of three-bar truss with pinned supports at the top nodes, symmetric diagonals of area A₁ = A₃, a central vertical member of area A₂, overall span L, and vertical drop D to the loaded node 4.
Table 9 summarizes results for the three-bar truss. The eHGSO and POA algorithms obtain virtually identical mean and best objective values (263.896), reflecting the flat optimal landscape of this problem. The standard deviation for both algorithms is extremely small (approximately zero), indicating that the optimum is found reliably in every run. However, eHGSO attains a marginally smaller mean value, thus securing the top ranking. Other algorithms exhibit increasing mean objective values and larger standard deviations, with TSO having a notably large standard deviation of 13.212. These observations confirm that eHGSO is the most effective optimizer for this relatively simple structural problem.
Table 9. Optimization results for the three-bar truss problem.
Algorithm | Mean Objective | Std. Dev. | Best Objective | Rank
eHGSO | 263.896 | 0.000 | 263.896 | 1
POA | 263.896 | 0.000 | 263.896 | 2
MPA | 263.901 | 0.003 | 263.898 | 3
ZOA | 263.907 | 0.012 | 263.899 | 4
ChOA | 263.972 | 0.029 | 263.951 | 5
MFO | 263.988 | 0.118 | 263.905 | 6
SCA | 264.087 | 0.217 | 263.933 | 7
TTHHO | 264.286 | 0.436 | 263.978 | 8
BOA | 264.609 | 0.462 | 264.282 | 9
FLO | 264.828 | 0.288 | 264.624 | 10
SHO | 265.350 | 0.263 | 265.164 | 11
SSOA | 265.478 | 0.588 | 265.062 | 12
ROA | 265.923 | 1.124 | 265.129 | 13
WOA | 266.118 | 2.454 | 264.383 | 14
RSA | 267.293 | 0.187 | 267.161 | 15
SMA | 268.523 | 3.047 | 266.368 | 16
TSO | 273.500 | 13.212 | 264.158 | 17

9. Conclusions

This work introduced eHGSO, an enhanced variant of Henry Gas Solubility Optimization for constrained engineering design. The method blends four complementary mechanisms around the HGSO physics core: (i) OBL-LHS seeding for a well-spread start, (ii) an entropy-aware nonlinear temperature/pressure schedule that modulates selection pressure over time, (iii) an elitist archive with differential perturbation and a lightweight, on-demand local search to sharpen feasibility and precision, and (iv) a feasibility-priority rule with adaptive penalties for consistent progress under constraints. The iteration-level move set composes JADE-style DE/current-to-pbest, heavy-tailed Lévy exploration, and a late-stage spiral contraction toward the global best, with bound enforcement by projection.
Extensive experiments on four canonical constrained designs—stepped cantilever beam, tension/compression spring, welded beam, and three-bar truss—demonstrate that eHGSO achieves the lowest mean objective on three problems and a co-best mean on the fourth, while exhibiting markedly low variability. Relative to the next-best competitor, mean objective values are reduced by about 0.84 % (cantilever) and 0.35 % (welded beam), and are essentially indistinguishable on the spring and truss benchmarks. Aggregate diagnostics (distributional summaries, convergence bands, ECDFs, and performance profiles) corroborate rapid and stable convergence, with clear benefits from larger populations on the more multimodal landscapes. A cost analysis indicates that the additional operators add only linear arithmetic overhead; in practice, runtime remains dominated by black-box evaluations.
Regarding limitations and future work, while eHGSO is competitive across diverse designs, several extensions are promising: (i) an ε -constrained or lexicographic feasibility ranking to complement penalty adaptation; (ii) multi-objective formulations to trade off weight, cost, and safety factors; (iii) surrogate-assisted or trust-region variants for expensive simulations; (iv) asynchronous parallelization and GPU acceleration for large populations; and (v) parameter self-adaptation driven by online diversity metrics. Formal convergence analysis (e.g., using stochastic drift or Markov models) is also an interesting direction. We expect these avenues to further strengthen the applicability of eHGSO to large-scale, simulator-in-the-loop engineering design.

10. Mathematical Formulations of Engineering Problems

10.1. Formulation of Cantilever Stepped Beam

The decision variables for this problem are x₁, …, x₁₀ (segment widths and lengths). The objective is to minimize the function

min_x f(x)

subject to the inequality constraints

g_i(x) ≤ 0,  i = 1, …, 10.

10.2. Formulation of Spring Design

The decision variables for this problem are d, D, N (wire diameter, mean coil diameter, and number of active coils). The objective is to minimize the function

min_x f(x) = (N + 2) d² D

subject to the inequality constraints

g₁(x) = 1 - D³N / (71785 d⁴) ≤ 0
g₂(x) = (4D² - dD) / (12566 (Dd³ - d⁴)) + 1 / (5108 d²) - 1 ≤ 0
g₃(x) = 1 - 140.45 d / (D² N) ≤ 0
g₄(x) = (d + D) / 1.5 - 1 ≤ 0

10.3. Formulation of Welded Beam

The decision variables for this problem are h, l, t, b (all dimensions in inches). The objective is to minimize the function

min_x f(x) = 1.10471 h² l + 0.04811 t b (14 + l)

subject to the following inequality constraints:

g₁(x) = τ(x) - 13600 ≤ 0
g₂(x) = σ(x) - 30000 ≤ 0
g₃(x) = h - b ≤ 0
g₄(x) = 0.10471 h² + 0.04811 t b (14 + l) - 5 ≤ 0
g₅(x) = 0.125 - h ≤ 0
g₆(x) = δ(x) - 0.25 ≤ 0
g₇(x) = 6000 - P_c(x) ≤ 0

where τ, σ, δ, and P_c denote the weld shear stress, beam bending stress, end deflection, and buckling load, respectively.
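The stress quantities τ, σ, δ, and P_c admit closed-form expressions in the standard welded-beam model. The sketch below assumes the conventional constants from the literature (P = 6000 lb, L = 14 in, E = 30×10⁶ psi, G = 12×10⁶ psi); the paper's exact constants may differ.

```python
import numpy as np

P, L, E, G = 6000.0, 14.0, 30e6, 12e6  # conventional welded-beam constants

def welded_objective(x):
    """Material plus welding cost; lower is better."""
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def welded_constraints(x):
    """Standard welded-beam constraints; g_i(x) <= 0 is feasible."""
    h, l, t, b = x
    # Weld shear stress: primary plus torsional component.
    tau_p = P / (np.sqrt(2.0) * h * l)
    M = P * (L + l / 2.0)
    R = np.sqrt(l**2 / 4.0 + ((h + t) / 2.0)**2)
    J = 2.0 * np.sqrt(2.0) * h * l * (l**2 / 12.0 + ((h + t) / 2.0)**2)
    tau_pp = M * R / J
    tau = np.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * l / (2.0 * R) + tau_pp**2)
    sigma = 6.0 * P * L / (b * t**2)          # bending stress in the beam
    delta = 4.0 * P * L**3 / (E * t**3 * b)   # end deflection
    Pc = (4.013 * E * np.sqrt(t**2 * b**6 / 36.0) / L**2
          * (1.0 - (t / (2.0 * L)) * np.sqrt(E / (4.0 * G))))  # buckling load
    return np.array([
        tau - 13600.0,
        sigma - 30000.0,
        h - b,
        0.10471 * h**2 + 0.04811 * t * b * (14.0 + l) - 5.0,
        0.125 - h,
        delta - 0.25,
        P - Pc,
    ])

# An arbitrary feasible (non-optimal) design: h, l, t, b.
x = np.array([0.3, 3.0, 9.0, 0.3])
assert np.all(welded_constraints(x) <= 0.0)
```

Evaluating `welded_objective` on this feasible point gives a cost of about 2.51, comfortably above the best values near 1.725 reported in Table 8, which illustrates how much room the constraints leave for optimization.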

10.4. Three-Bar Truss

The decision variables for this problem are A₁ (equal to A₃) and A₂ (cross-sectional areas). The objective is to minimize the function

min f(A₁, A₂) = (2√2 A₁ + A₂) H,  H = 100 cm

subject to the inequality constraints

g₁ = (√2 A₁ + A₂) / (√2 A₁² + 2 A₁ A₂) · P - σ ≤ 0
g₂ = A₂ / (√2 A₁² + 2 A₁ A₂) · P - σ ≤ 0
g₃ = P / (A₁ + √2 A₂) - σ ≤ 0
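These three constraints are cheap to evaluate, which is one reason the truss is such a popular benchmark. The sketch below assumes the values customarily used for this problem (P = 2 kN, σ = 2 kN/cm², H = 100 cm); the paper's exact constants may differ.

```python
import numpy as np

# Customary benchmark values (assumed here): load, stress limit, bar length.
P_LOAD, SIGMA_MAX, H = 2.0, 2.0, 100.0   # kN, kN/cm^2, cm

def truss_weight(a1, a2):
    """Weight-proportional objective (2*sqrt(2)*A1 + A2) * H."""
    return (2.0 * np.sqrt(2.0) * a1 + a2) * H

def truss_constraints(a1, a2, p=P_LOAD, s=SIGMA_MAX):
    """Member-stress constraints; each entry must be <= 0 for feasibility."""
    denom = np.sqrt(2.0) * a1**2 + 2.0 * a1 * a2
    return np.array([
        (np.sqrt(2.0) * a1 + a2) / denom * p - s,   # diagonal members
        a2 / denom * p - s,                         # vertical member
        p / (a1 + np.sqrt(2.0) * a2) - s,           # loaded-joint member
    ])

# Near-optimal areas widely reported in the literature: A1 ~ 0.7887, A2 ~ 0.4082.
a1, a2 = 0.7887, 0.4082
assert np.all(truss_constraints(a1, a2) <= 1e-3)   # feasible up to rounding
```

The weight at this point is about 263.9, matching the best objective values in Table 9 and showing that the first constraint is essentially active at the optimum, which explains the flat landscape noted above.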

Author Contributions

Conceptualization, J.Z., A.A. and N.H.; Methodology, J.Z., A.A., H.N.F. and N.H.; Software, A.A., R.A. and H.N.F.; Validation, J.Z.; Formal analysis, A.A. and R.A.; Investigation, J.Z., A.A., H.N.F. and N.H.; Resources, R.A.; Data curation, J.Z., H.N.F. and N.H.; Writing—original draft, J.Z., R.A., H.N.F. and N.H.; Writing—review & editing, J.Z., A.A., H.N.F. and N.H.; Visualization, A.A., R.A. and H.N.F.; Supervision, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Wilcoxon Results of eHGSO in Comparison with Other Optimizers

Table A1. CEC2022 results for optimizer group 1 (columns eHGSO, HGSO, SMA, GBO, RTH, GTO, CPO, SCSO, DOA, ZOA, SPBO, TSO, and AO).
Function | Measure | eHGSO | HGSO | SMA | GBO | RTH | GTO | CPO | SCSO | DOA | ZOA | SPBO | TSO | AO
F1Mean3004228.8720911.47996.74300300497.2611247.712949.43832.65930506.83458.21726.878
Error 4.01944 × 10 14 3928.8720611.47696.74 2.84217 × 10 14 4.66116 × 10 13 197.261947.7142649.43532.65930206.83158.21426.878
Std 2.27374 × 10 14 853.47196813295.66 4.12386 × 10 14 9.53137 × 10 13 416.861422.383504.981343.838534.582290.04335.842
Rank12637311314182216382415
F2Mean406.967492.98458.727441.451408.454409.195421.711429.149479.437435.331160.9423.702413.053
Error2.6696292.979558.726641.45148.453659.1952821.711329.149179.436735.3297760.90323.701813.0535
Std6.9666423.112482.74527.67115.009114.884629.387834.1302115.29329.135230.73530.908414.0004
Rank1292624351418282134177
F3Mean600.001627.021620.621619.314608.794605.156652.084612.71620.123617.073670.296639.658611.792
Error0.0014841627.020720.621119.31388.793775.1559852.083712.709620.123417.072870.296539.657811.7925
Std0.001048095.0349913.47797.5865910.43263.508329.285867.23389.122786.597578.6051912.85996.49206
Rank12321199634152017382913
F4Mean809.505833.688833.111836.57822.486822.337832.137828.354827.425812.752899.839839.93822.123
Error1.979433.688433.111136.570122.48622.336832.137128.353827.424712.751999.839239.930422.1232
Std9.504794.4761210.92248.997598.789576.627171.334896.808088.962955.3354412.64419.932877.30161
Rank12423261110221716238308
F5Mean900.036990.3751480.611063.731046.38949.7981536.991084.521093.75988.5163677.531843.641029.66
Error0.049036690.3755580.607163.731146.37649.7982636.99184.52193.74988.51652777.53943.64129.665
Std0.035811327.3778410.89594.4705129.6450.7174173.135134.433103.47363.818736.406691.729127.981
Rank113311816634192011383715
F6Mean1831.852125740.06278.4228663.21835.131875.963530.44530621416100.03477.9345729000.04450.999148.86
Error37.15592123940.04478.4226863.235.127575.96251730.44350621414300.01677.9345727000.02650.997348.86
Std31.84981185930.01975.5424891.657.461168.29612021.072075.0895766300.01756.94213489000.02002.515531.15
Rank1312427231022329371726
F7Mean2017.982066.992063.472068.992032.112030.492133.072039.882037.112041.712167.312065.342035.19
Error5.938566.990663.466668.988232.11230.4877133.07339.879837.107841.7084167.31465.343835.1947
Std17.98439.1752943.855320.911814.019610.830249.148717.970218.460212.83642.414124.943410.6656
Rank12522269837151416382412
F8Mean2216.852232.682236.652237.812229.632220.52292.282225.192243.32225.162319.852237.942228.02
Error5.0668732.676236.651337.812129.630920.496992.277725.185143.299425.1641119.84537.941428.0208
Std16.84881.941328.5556.0435225.14124.8946872.73023.9875667.34552.7882696.68848.948512.97153
Rank121242518232112710352616
F9Mean2529.282593.922602.642575.532529.282529.312558.022553.732571.122602.572761.512534.212560.54
Error0293.92302.642275.528229.284229.312258.02253.726271.116302.568461.513234.206260.535
Std229.28428.357648.388637.675100.12432647.74827.082363.883847.822663.56914.49218.0628
Rank12730211516151929351118
F10Mean2500.342515.062628.782549.352564.222518.312635.072550.442539.152530.782581.962644.542572.86
Error0.0385987115.059228.785149.346164.225118.312235.073150.441139.145130.778181.963244.54172.859
Std100.33532.7208339.07168.638365.552543.718497.804562.648758.543953.860952.791228.60560.3695
Rank16281419829161311243021
F11Mean26002810.42878.372861.682781.682607.522743.332762.252836.672895.563767.862844.512664.91
Error 3.32619 × 10 12 210.398278.372261.681181.6767.5232143.327162.254236.674295.5581167.86244.50964.9148
Std 2.27374 × 10 12 102.65202.627185.973196.30633.6448147.838112.301238.162233.583391.755183.189.2564
Rank12228261821013232937244
F12Mean2862.442892.272878.082873.512866.582864.022917.572867.252884.182919.552886.12877.462866.43
Error2.02768192.271178.075173.511166.576164.021217.569167.246184.179219.545186.102177.464166.427
Std162.4429.3999520.282419.93375.836791.2471651.24865.3651927.026932.62158.6506121.91292.05063
Rank1251917933010223123188
Table A2. CEC 2022 results for optimizer group 2 (columns TTHHO, HHO, OMA, SSOA, MFO, BOA, GWO, SCA, SSA, AOA, GJO, HLOA, and WOA).
Function, Measure, TTHHO, HHO, OMA, SSOA, MFO, BOA, GWO, SCA, SSA, AOA, GJO, HLOA, WOA
F1Mean308.863301.815300.001128795427.597857.011287.191247.283008218.462425.72300.02811618
Error8.862871.815240.000696412125795127.597557.01987.188947.28 1.19189 × 10 9 7918.462125.720.027507711318
Std9.340170.6435320.001083477290.826003.82822.121032.66643.145 4.65134 × 10 10 3813.121643.930.05108025193.76
Rank121183527291917432211034
F2Mean448.266423.591408.4921349.95417.782133.14417.975459.298410.745996.227436.608423.055436.413
Error48.266423.59068.492949.95417.77981733.1417.975259.298310.7453596.22736.607723.05536.4127
Std40.662228.573721.5483404.26722.7816794.72718.932513.404716.8763622.37219.25630.330533.9742
Rank251643611371227633231522
F3Mean629.203629.745600.073658.374601.1637.507600.345617.69610.081635.316608.858648.732630.19
Error29.203129.74530.072700258.37361.0998537.5070.34465917.6910.080635.31598.8582548.731830.1904
Std9.601912.87220.188648.456762.279777.802890.4187692.632477.353697.343227.023814.890310.7557
Rank24252364283181227103326
F4Mean825.172823.848814.174857.204828.439846.683814.11838.229822.993830.801829.378843.058835.131
Error25.17223.848214.173857.204228.438946.682614.110438.229122.993230.801329.377543.057735.1306
Std7.635677.335443.8482811.114714.09958.338844.659076.6512510.72596.6432311.863417.802213.4967
Rank141343718343271221193125
F5Mean1340.671353.68900.5491589.64971.2271259.08914.153984.984907.6731234.78969.7091462.461350.15
Error440.667453.6830.549163689.64371.2273359.07714.153384.98457.673334.77969.7087562.46450.147
Std219.816146.1881.88104115.856126.173128.67848.593932.17222.5559132.03962.191225.652317.396
Rank262823592341032283027
F6Mean5002.833674.142125.36162820000.05334.4928453800.05193.851865420.03850.063944.117276.463134.654091.11
Error3202.831874.14325.365162818000.03534.4928452000.03393.851863620.02050.062144.115476.461334.652291.11
Std2093.771699.65340.074140389000.02118.0147684400.02520.941673170.01902.091194.141874.161941.492154.19
Rank191143623342130121425615
F7Mean2064.542062.152019.672128.992028.742080.52032.622054.012034.712115.992046.212123.412080.98
Error64.544162.146219.6735128.9928.744480.501532.620154.008634.7065115.9946.2068123.41480.9798
Std28.922623.08896.7409224.755913.142515.434818.72539.7852213.779348.668515.254547.058633.829
Rank232123662810191131183429
F8Mean2235.772228.112226.092381.462223.172275.532223.122231.792223.462275.62226.212319.842232.32
Error35.774328.113526.0879181.46423.168775.52823.122931.794223.461775.597826.208119.84132.3184
Std14.62296.565392.46649106.5565.9346136.79366.577722.342723.8991783.23472.530595.24975.66021
Rank22171336730619831143420
F9Mean2593.092582.932529.282786.32531.032776.072551.732558.392530.062701.0925882530.372590.5
Error293.085282.934229.284486.297231.031476.07251.727258.389230.059401.091288.001230.374290.5
Std38.678154.1137 7.63743 × 10 7 50.43864.4998160.17331.794414.18182.4285733.012744.2824.4842657.2759
Rank2622337936141773224825
F10Mean2549.552576.442500.462700.512515.552502.332563.052508.532500.512611.712567.472718.582622.33
Error149.548176.436100.465300.513115.549102.328163.046108.529100.511211.708167.469318.584222.335
Std68.3701102.1430.097789147.01638.62420.99952257.440530.02130.15127997.494462.495350.407228.176
Rank152323574185326203627
F11Mean2775.322725.042676.053648.542745.673084.112768.542769.152686.713141.542854.972791.292796.61
Error175.322125.03676.04841048.54145.669484.114168.544169.15286.7063541.539254.973191.293196.606
Std158.372106.76392.1241452.914125.844259.989126.9226.73558170.592329.319198.525198.549197.955
Rank17953611311516632252021
F12Mean2899.632905.232867.423094.242863.442908.772864.512869.022864.052989.752872.262906.892883.52
Error199.633205.229167.419394.241163.438208.767164.51169.016164.049289.754172.263206.89183.523
Std30.916243.50072.16039103.2130.82058631.01951.096291.91880.96460939.405717.224254.320223.0683
Rank26271237229514435162821
Table A3. CEC 2022 results for optimizer group 3 (columns RSA, SHO, FLO, DO, FOX, ROA, ALO, AVOA, Chimp, SHIO, OHO, and POA).
Function, Measure, RSA, SHO, FLO, DO, FOX, ROA, ALO, AVOA, Chimp, SHIO, OHO, POA
F1Mean7886.043035.968473.76300.0033007338.773003002360.333654.8817551.1436.278
Error7586.042735.968173.760.0027544 2.88028 × 10 5 7038.77 1.3387 × 10 8 2.94322 × 10 8 2060.333354.8817251.1136.278
Std1932.442018.031344.20.00270677 1.41881 × 10 5 2545.23 8.15949 × 10 9 1.1227 × 10 7 1198.932915.36466.5236.642
Rank30233397285620253613
F2Mean946.028433.5241269.24416.898417.668814.814407.361416.771589.062432.2892998.57419.448
Error546.02833.5238869.24416.89817.6684414.8147.3611316.7711189.06232.28922598.5719.4479
Std558.80135.5674504.70325.703727.7618628.84215.415425.4754172.8126.4451990.05826.0858
Rank322035910312830193813
F3Mean645.593612.256645.875607.01657.821646.188608.598609.633626.036604.613659.759616.884
Error45.593412.256145.87457.0157.821246.18798.59769.633226.03574.6132959.75916.8837
Std6.464416.367719.647787.6979810.692314.76727.661316.309675.81284.167444.6646111.7627
Rank301431735328112253716
F4Mean849.591822.287849.442825.5839.397843.274821.491830.498838.875816.571844.64817.846
Error49.591422.287349.442325.499839.397243.274121.491130.49838.874616.571144.640217.8456
Std7.070244.788276.467488.4148117.361610.76129.4907311.45348.636746.50552.585534.20989
Rank36935152932720285336
F5Mean1409.551050.291334.92988.8811484.321494.81917.2331203.571321.98951.6141606.5998.711
Error509.555150.289434.91588.8813584.32594.80917.2328303.566421.97851.6142706.50398.7113
Std133.811125.745197.486133.76115.815215.95230.7581174.094180.09877.856191.185989.1826
Rank2917251232335212473614
F6Mean60368200.04329.6727622200.05043.83370.821899843477.63921.381367390.04643.92806734000.02556.79
Error60366400.02529.6727620400.03243.81570.821881841677.62121.381365590.02843.92806732000.0756.793
Std32287300.01314.8744460000.02303.71691.183904901528.231993.938308982105.69603458000.01633.5
Rank351633207288132918385
F7Mean2122.932024.612104.182026.452128.792072.542042.32029.322057.082035.652120.192024.31
Error122.92724.6116104.17926.4477128.79472.542542.29829.323457.084335.6538120.18824.3119
Std17.15947.8137820.83827.6584959.385719.987319.19379.036565.9308115.528910.44529.96517
Rank33430535271772013323
F8Mean2244.952222.462246.922221.672383.762235.952226.922224.342316.592225.682486.852221.78
Error44.946222.462346.922421.6723183.76235.952326.924224.3375116.58925.6776286.85121.7799
Std10.17763.7864520.13926.60184121.32413.17524.003073.486555.30175.97557266.3565.8036
Rank28529337231593312384
F9Mean2705.442582.982744.12529.282547.492686.182529.532536.632571.372596.382855.482533.03
Error405.442282.982444.1229.284247.488386.177229.525236.631271.371296.376555.478233.028
Std42.256420.877539.517 8.4496 × 10 5 35.228947.33370.27414432.85551.363237.3302100.1339.585
Rank3323344133161220283810
F10Mean2675.062573.012697.552553.432891.392655.632530.912524.762583.432657.052991.152520.06
Error275.061173.015297.547153.432491.387255.627130.913124.756183.427257.051591.146120.063
Std136.8560.3159128.50760.3448454.081342.10554.708449.6543342.516274.109294.74145.4858
Rank33223417373112102532389
F11Mean3291.582870.823490.972689.252789.043034.542642.522750.943321.862763.514066.582707.18
Error691.577270.822890.97189.2533189.035434.53942.5217150.942721.862163.5111466.58107.179
Std430.341301.148347.701168.856159.863245.845111.56135.503269.892143.012408.09130.894
Rank332735719303123414388
F12Mean2941.542888.373037.382869.292985.252926.72865.662867.282867.622880.63183.312865.69
Error241.538188.371337.381169.293285.251226.7165.663167.279167.621180.604483.308165.69
Std63.074515.42574.58926.5806477.611353.81962.284655.976416.3360321.2336124.4973.13222
Rank3324361534326111320387
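Each function in Tables A1–A3 is summarized by the mean objective over the independent runs, the error (distance of that mean from the known CEC 2022 optimum of the function, e.g., 300 for F1), the sample standard deviation, and the rank of the mean among the compared optimizers (1 = best). The following is a minimal sketch of this bookkeeping; the helper names and all run values are illustrative, not taken from the paper.

```python
# Sketch of the per-function summary statistics used in Tables A1-A3.
# "optimum" stands for the known CEC 2022 optimum of the function.
from statistics import mean, stdev

def summarize(runs, optimum):
    """Mean, error (distance of the mean from the optimum), and sample std."""
    m = mean(runs)
    return {"mean": m, "error": abs(m - optimum), "std": stdev(runs)}

def rank_by_mean(results):
    """Rank optimizers by mean objective value (1 = best, i.e., lowest mean)."""
    order = sorted(results, key=lambda name: results[name]["mean"])
    return {name: i + 1 for i, name in enumerate(order)}

# Made-up run data for three optimizers on a function with optimum 300.
runs = {
    "eHGSO": [300.0, 300.1, 300.0, 300.2],
    "A":     [310.5, 305.2, 320.8, 309.9],
    "B":     [450.0, 380.3, 410.7, 395.5],
}
stats = {name: summarize(r, 300.0) for name, r in runs.items()}
ranks = rank_by_mean(stats)
```

With these made-up runs, eHGSO ranks first with an error of 0.075, mirroring how the Rank rows in the tables are read.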
Table A4. Wilcoxon signed-rank test results of eHGSO in comparison with the other optimizers (group 1).
FunctionHGSOSMAGBORTHGTOCPOSCSODOAZOASPBOTSOAO
F1 8.85746 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
F20.1671840410.2322260.1258590.8812930.1913340.0438040.1560040.6274460.0004490.0585480.092963 8.86 × 10 5
T+: 68, T−: 142T+: 137, T−: 73T+: 64, T−: 146T+: 101, T−: 109T+: 140, T−: 70T+: 159, T−: 51T+: 143, T−: 67T+: 118, T−: 92T+: 199, T−: 11T+: 157, T−: 53T+: 150, T−: 60T+: 210, T−: 0
F30.000516715 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5
T+: 198, T−: 12T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
F40.0007795930.0051110.0017130.0001030.0002190.0001030.00012 8.86 × 10 5 0.0019440.0001890.0011620.000189
T+: 195, T−: 15T+: 180, T−: 30T+: 189, T−: 21T+: 209, T−: 1T+: 204, T−: 6T+: 209, T−: 1T+: 208, T−: 2T+: 210, T−: 0T+: 188, T−: 22T+: 205, T−: 5T+: 192, T−: 18T+: 205, T−: 5
F50.0001401340.000219 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 0.000338 8.86 × 10 5
T+: 207, T−: 3T+: 204, T−: 6T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 201, T−: 9T+: 210, T−: 0
F6 8.85746 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
F70.002820860.0276210.005734 8.86 × 10 5 0.0001030.14540.000140.0365610.000103 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5
T+: 185, T−: 25T+: 164, T−: 46T+: 179, T−: 31T+: 210, T−: 0T+: 209, T−: 1T+: 144, T−: 66T+: 207, T−: 3T+: 161, T−: 49T+: 209, T−: 1T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
F80.0003384550.0057340.0002540.0001890.0006810.0035920.0003380.000390.0022040.0001630.000163 8.86 × 10 5
T+: 201, T−: 9T+: 179, T−: 31T+: 203, T−: 7T+: 205, T−: 5T+: 196, T−: 14T+: 183, T−: 27T+: 201, T−: 9T+: 200, T−: 10T+: 187, T−: 23T+: 206, T−: 4T+: 206, T−: 4T+: 210, T−: 0
F9 8.85746 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 0.000132 8.86 × 10 5 8.86 × 10 5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
F100.0111290140.0024950.0186750.0005930.0006810.0010190.0005930.0013250.0005938.86 × 10 5 8.86 × 10 5 8.86 × 10 5
T+: 173, T−: 37T+: 186, T−: 24T+: 168, T−: 42T+: 197, T−: 13T+: 196, T−: 14T+: 193, T−: 17T+: 197, T−: 13T+: 191, T−: 19T+: 197, T−: 13T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
F11 8.85746 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5 8.86 × 10 5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
F12 8.85746 × 10 5 0.00014 8.86 × 10 5 0.0002930.000120.004550.000293 8.86 × 10 5 0.000390.000254 8.86 × 10 5 0.000103
T+: 210, T−: 0T+: 207, T−: 3T+: 210, T−: 0T+: 202, T−: 8T+: 208, T−: 2T+: 181, T−: 29T+: 202, T−: 8T+: 210, T−: 0T+: 200, T−: 10T+: 203, T−: 7T+: 210, T−: 0T+: 209, T−: 1
Total | +:11, −:0, =:1 | +:11, −:0, =:1 | +:11, −:0, =:1 | +:11, −:0, =:1 | +:11, −:0, =:1 | +:11, −:0, =:1 | +:11, −:0, =:1 | +:11, −:0, =:1 | +:12, −:0, =:0 | +:11, −:0, =:1 | +:11, −:0, =:1 | +:12, −:0, =:0
Table A5. Wilcoxon signed-rank test results of eHGSO in comparison with the other optimizers (group 2).
TTHHO, HHO, OMA, SSOA, MFO, BOA, GWO, SCA, SSA, AOA, GJO, HLOA, WOA
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
0.000140.0005170.0011620.001019 8.86 × 10^−5 8.86 × 10^−5 0.0024950.0008920.000103 8.86 × 10^−5 8.86 × 10^−5 0.0019440.108427
T+: 207, T−: 3T+: 198, T−: 12T+: 192, T−: 18T+: 193, T−: 17T+: 210, T−: 0T+: 210, T−: 0T+: 186, T−: 24T+: 194, T−: 16T+: 209, T−: 1T+: 210, T−: 0T+: 210, T−: 0T+: 188, T−: 22T+: 148, T−: 62
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
0.000681 8.86 × 10^−5 0.0011620.000120.000449 8.86 × 10^−5 0.000449 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 196, T−: 14T+: 210, T−: 0T+: 192, T−: 18T+: 208, T−: 2T+: 199, T−: 11T+: 210, T−: 0T+: 199, T−: 11T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 0.000103 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 209, T−: 1T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
0.0001030.0002930.00014 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 0.000103 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 0.000140.0001030.00012
T+: 209, T−: 1T+: 202, T−: 8T+: 207, T−: 3T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 209, T−: 1T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 207, T−: 3T+: 209, T−: 1T+: 208, T−: 2
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
0.0013250.0002930.001713 8.86 × 10^−5 0.0003380.011129 8.86 × 10^−5 0.0002930.0022040.004550.000390.000517 8.86 × 10^−5
T+: 191, T−: 19T+: 202, T−: 8T+: 189, T−: 21T+: 210, T−: 0T+: 201, T−: 9T+: 173, T−: 37T+: 210, T−: 0T+: 202, T−: 8T+: 187, T−: 23T+: 181, T−: 29T+: 200, T−: 10T+: 198, T−: 12T+: 210, T−: 0
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 0.000163 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 0.00012 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 206, T−: 4T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 208, T−: 2T+: 210, T−: 0
+:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:11, −:0, =:1
Table A6. Wilcoxon signed-rank test results of eHGSO in comparison with the other optimizers (group 3).
RSA, SHO, FLO, DO, FOX, ROA, ALO, AVOA, Chimp, SHIO, OHO, POA
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
0.0186750.204330.178956 8.86 × 10^−5 0.00014 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 168, T−: 42T+: 139, T−: 71T+: 141, T−: 69T+: 210, T−: 0T+: 207, T−: 3T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
0.00012 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 0.0001030.000103 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 0.00012 8.86 × 10^−5 8.86 × 10^−5
T+: 208, T−: 2T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 209, T−: 1T+: 209, T−: 1T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 208, T−: 2T+: 210, T−: 0T+: 210, T−: 0
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
8.86 × 10^−5 8.86 × 10^−5 0.0002930.0005170.0004490.0002190.0008920.0089680.000254 8.86 × 10^−5 0.0001890.000103
T+: 210, T−: 0T+: 210, T−: 0T+: 202, T−: 8T+: 198, T−: 12T+: 199, T−: 11T+: 204, T−: 6T+: 194, T−: 16T+: 175, T−: 35T+: 203, T−: 7T+: 210, T−: 0T+: 205, T−: 5T+: 209, T−: 1
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5 8.86 × 10^−5
T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0T+: 210, T−: 0
+:12, −:0, =:0 | +:11, −:0, =:1 | +:11, −:0, =:1 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0 | +:12, −:0, =:0
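Tables A4–A6 report, per function, the two-sided p-value of the Wilcoxon signed-rank test together with the rank sums T+ and T− of the paired differences between each competitor and eHGSO. With n runs, T+ + T− = n(n+1)/2; the recurring T+ = 210 is consistent with n = 20 runs, and a clean sweep (T+: 210, T−: 0) gives the repeatedly appearing p ≈ 8.86 × 10^−5 under the normal approximation. A self-contained sketch of these statistics follows; the helper names and run values are illustrative assumptions, not taken from the paper.

```python
# Sketch of the Wilcoxon signed-rank statistics behind Tables A4-A6:
# T+ and T- are rank sums of positive/negative paired differences,
# and the p-value uses the standard normal approximation.
import math

def signed_rank_sums(a, b):
    """Rank sums T+ and T- for paired samples; zero diffs dropped, ties averaged."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    k = 0
    while k < len(order):
        j = k
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[k]]):
            j += 1
        avg = (k + j) / 2.0 + 1.0  # average 1-based rank for a tie group
        for m in range(k, j + 1):
            ranks[order[m]] = avg
        k = j + 1
    t_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    t_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return t_plus, t_minus, len(diffs)

def normal_approx_p(t, n):
    """Two-sided p-value for the Wilcoxon statistic via the normal approximation."""
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    return math.erfc(abs((t - mu) / sigma) / math.sqrt(2.0))

# Made-up objective values for 20 runs: eHGSO better in every run.
ehgso = [300.0 + 0.01 * i for i in range(20)]
rival = [310.0 + 0.50 * i for i in range(20)]
tp, tm, n = signed_rank_sums(rival, ehgso)  # competitor minus eHGSO
p = normal_approx_p(min(tp, tm), n)
```

Here the differences are all positive, so T+ = 210 and T− = 0, and the approximate two-sided p-value lands near the 8.86 × 10^−5 seen throughout the tables.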

References

  1. Chou, J.S.; Truong, D.N. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl. Math. Comput. 2021, 389, 125535. [Google Scholar] [CrossRef]
  2. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar]
  3. Oftadeh, R.; Mahjoob, M.J.; Shariatpanahi, M. A novel meta-heuristic optimization algorithm inspired by group hunting of animals: Hunting search. Comput. Math. Appl. 2010, 60, 2087–2098. [Google Scholar] [CrossRef]
  4. Huang, S.; Zhao, G.; Chen, M. Novel adaptive quantum-inspired bacterial foraging algorithms for global optimization. Int. J. Innov. Comput. Inf. Control 2017, 13, 1649–1664. [Google Scholar]
  5. Zhou, X.; Guo, Y.; Yan, Y.; Huang, Y.; Xue, Q. Migration Search Algorithm: A Novel Nature-Inspired Metaheuristic Optimization Algorithm. J. Netw. Intell. 2023, 8, 869–882. [Google Scholar]
  6. Anaraki, M.V.; Farzin, S. Humboldt Squid Optimization Algorithm (HSOA): A Novel Nature-Inspired Technique for Solving Optimization Problems. IEEE Access 2023, 12, 122069. [Google Scholar] [CrossRef]
  7. Tourani, M. Elymus Repens Optimization (ERO); A Novel Agricultural-Inspired Algorithm. J. Inf. Syst. Telecommun. (JIST) 2024, 3, 170. [Google Scholar]
  8. Houssein, E.H.; Oliva, D.; Samee, N.A.; Mahmoud, N.F.; Emam, M.M. Liver Cancer Algorithm: A novel bio-inspired optimizer. Comput. Biol. Med. 2023, 165, 107389. [Google Scholar] [CrossRef] [PubMed]
  9. Maroosi, A.; Muniyandi, R.C. A novel membrane-inspired multiverse optimizer algorithm for quality of service-aware cloud web service composition with service level agreements. Int. J. Commun. Syst. 2023, 36, e5483. [Google Scholar] [CrossRef]
  10. Shijie, Z.; Shilin, M.; Leifu, G.; Dongmei, Y. A Novel Quantum Entanglement-Inspired Meta-heuristic Framework for Solving Multimodal Optimization Problems. Chin. J. Electron. 2021, 30, 145–152. [Google Scholar] [CrossRef]
  11. Yin, H.; Zheng, X.; Wen, G.; Zhang, C.; Wu, Z. Design optimization of a novel bio-inspired 3D porous structure for crashworthiness. Compos. Struct. 2021, 255, 112897. [Google Scholar] [CrossRef]
  12. Kanagasabai, L. Novel Commercial Pilot Preparation, Mindarinae and Formica Fusca Rapport Inspired, Red-footed Booby Optimization Algorithms for Real Power Loss Reduction and Voltage Stability Expansion. J. Eng. Sci. Technol. Rev. 2023, 16, 138. [Google Scholar] [CrossRef]
  13. Diab, H.Y.; Abdelsalam, M. A Novel Technique for the Optimization of Energy Cost Management and Operation of Microgrids Inspired from the Behavior of Egyptian Stray Dogs. Inventions 2024, 9, 88. [Google Scholar] [CrossRef]
  14. Ghanbari, S.; Ghasabehi, M.; Asadi, M.R.; Shams, M. An inquiry into transport phenomena and artificial intelligence-based optimization of a novel bio-inspired flow field for proton exchange membrane fuel cells. Appl. Energy 2024, 376, 124260. [Google Scholar] [CrossRef]
  15. Nourmohammadzadeh, A.; Hartmann, S. Fuel-efficient truck platooning by a novel meta-heuristic inspired from ant colony optimisation. Soft Comput. 2019, 23, 1439–1452. [Google Scholar] [CrossRef]
  16. Panigrahi, B.S.; Nagarajan, N.; Prasad, K.D.; Sathya; Salunkhe, S.S.; Kumar, P.D.; Kumar, M.A. Novel nature-inspired optimization approach-based svm for identifying the android malicious data. Multimed. Tools Appl. 2024, 83, 71579–71597. [Google Scholar] [CrossRef]
  17. Qiu, Y.; Zhou, J. Novel rockburst prediction criterion with enhanced explainability employing CatBoost and nature-inspired metaheuristic technique. Undergr. Space 2024, 19, 101–118. [Google Scholar] [CrossRef]
  18. Chiang, H.S.; Sangaiah, A.K.; Chen, M.Y.; Liu, J.Y. A Novel Artificial Bee Colony Optimization Algorithm with SVM for Bio-inspired Software-Defined Networking. Int. J. Parallel Program. 2020, 48, 310–328. [Google Scholar] [CrossRef]
  19. İlhan, H.; Elbir, A.; Serbes, G.; Aydin, N. The Evaluation of Nature-Inspired Optimization Techniques for Contrast Enhancement in Images: A Novel Software Tool. Trait. Signal 2023, 40, 1305–1318. [Google Scholar] [CrossRef]
  20. Arya, P.; Pandey, A.K.; Gopal, S.; Patro, K.; Tiwari, K.; Panigrahi, N.; Naveed, Q.; Lasisi, A.; Khan, W.A. MSCMGTB: A Novel Approach for Multimodal Social Media Content Moderation Using Hybrid Graph Theory and Bio-Inspired Optimization. IEEE Access 2024, 12, 73700–73718. [Google Scholar] [CrossRef]
  21. Alatas, B.; Bozkurt, O. A physics-based novel approach for travelling tournament problem: Optics inspired optimization. Inf. Technol. Control 2019, 9, 373–388. [Google Scholar] [CrossRef]
  22. Tian, A.Q.; Liu, F.F.; Lv, H.X. Snow Geese Algorithm: A novel migration-inspired meta-heuristic algorithm for constrained engineering optimization problems. Appl. Math. Model. 2024, 126, 327–347. [Google Scholar] [CrossRef]
  23. Sharma, P.; Raju, S. Metaheuristic optimization algorithms: A comprehensive overview and classification of benchmark test functions. Soft Comput. 2024, 28, 3123–3186. [Google Scholar] [CrossRef]
  24. Tomar, V.; Bansal, M.; Singh, P. Metaheuristic Algorithms for Optimization: A Brief Review. Eng. Proc. 2023, 59, 238. [Google Scholar] [CrossRef]
  25. Pan, J.; Hu, P.; Snášel, V.; Chu, S. A survey on binary metaheuristic algorithms and their engineering applications. Artif. Intell. Rev. 2023, 56, 6101–6167. [Google Scholar] [CrossRef]
  26. Hathal, H.M.; Ali, R.S.; Abdullah, A.S. A novel metaheuristic moss-rose-inspired algorithm with engineering applications. Electronics 2021, 10, 1877. [Google Scholar] [CrossRef]
  27. Dalla Vedova, M.; Berri, P.C.; Re, S. Novel metaheuristic bio-inspired algorithms for prognostics of onboard electromechanical actuators. Int. J. Mech. Control 2018, 19, 95–101. [Google Scholar]
  28. Prajna, K.; Mukhopadhyay, C.K. Acoustic Emission Denoising Based on Bio-inspired Antlion Optimization: A Novel Technique for Structural Health Monitoring. J. Vib. Eng. Technol. 2024, 12, 7671–7687. [Google Scholar] [CrossRef]
  29. Rodríguez-Gallegos, F.L.; Rodríguez-Gallegos, C.A.; Rodríguez-Gallegos, A.A.; Rodríguez-Gallegos, C.D. Natural reforestation optimization (NRO): A novel optimization algorithm inspired by the reforestation process. J. Comput. Sci. 2020, 16, 1172–1184. [Google Scholar] [CrossRef]
  30. Peraza-Vázquez, H.; Merino, M.A.; Peña-Delgado, A.F. A novel metaheuristic inspired by horned lizard defense tactics. Artif. Intell. Rev. 2024, 57, 59. [Google Scholar] [CrossRef]
  31. Liang, Z.; Shu, T.; Ding, Z. A Novel Improved Whale Optimization Algorithm for Global Optimization and Engineering Applications. Mathematics 2024, 12, 636. [Google Scholar] [CrossRef]
  32. Yin, L.; Cao, X. Quantum-inspired distributed policy-value optimization learning with advanced environmental forecasting for real-time generation control in novel power systems. Eng. Appl. Artif. Intell. 2024, 129, 107640. [Google Scholar] [CrossRef]
  33. Tian, Z.; Gai, M. Football team training algorithm: A novel sport-inspired meta-heuristic optimization algorithm for global optimization. Expert Syst. Appl. 2024, 245, 123088. [Google Scholar] [CrossRef]
  34. Kanagasabai, L. Novel train heist optimization and logistic chaotic map based freshman learning process inspired algorithm for solving the optimal power flow problem. Suranaree J. Sci. Technol. 2024, 31, 010321. [Google Scholar] [CrossRef]
  35. Kanagasabai, L. Novel reminiscence inspired and approximation based measurement of mount kailash optimization algorithms. Suranaree J. Sci. Technol. 2024, 31, 1. [Google Scholar] [CrossRef]
  36. Nemati, M.; Zandi, Y.; Agdas, A.S. Application of a novel metaheuristic algorithm inspired by stadium spectators in global optimization problems. Sci. Rep. 2024, 14, 3078. [Google Scholar] [CrossRef]
  37. Shu, T.; Pan, Z.; Ding, Z. Resource scheduling optimization for industrial operating system using deep reinforcement learning and WOA algorithm. Expert Syst. Appl. 2024, 255, 124765. [Google Scholar] [CrossRef]
  38. Karim, F.K.; Khafaga, D.S.; Eid, M.M.; Towfek, S.K.; Alkahtani, H.K. A Novel Bio-Inspired Optimization Algorithm Design for Wind Power Engineering Applications Time-Series Forecasting. Biomimetics 2023, 8, 321. [Google Scholar] [CrossRef] [PubMed]
  39. Almazroi, A.A.; ul Hassan, C.A. Nature-inspired solutions for energy sustainability using novel optimization methods. PLoS ONE 2023, 18, e0288490. [Google Scholar] [CrossRef]
  40. Leong, K.H.; Abdul-Rahman, H.; Wang, C.; Onn, C.C.; Loo, S.C. Bee inspired novel optimization algorithm and mathematical model for effective and efficient route planning in railway system. PLoS ONE 2016, 11, e0166064. [Google Scholar] [CrossRef] [PubMed]
  41. Xu, Z.H.; Deng, Y.K.; Wang, Y. A Novel Optimization Framework for Classic Windows Using Bio-Inspired Methodology. Circuits Syst. Signal Process. 2016, 35, 693–703. [Google Scholar] [CrossRef]
  42. Haddadi, Y.R.; Mansouri, B.; Khodja, F.Z. A novel bio-inspired optimization algorithm for medical image restoration using Enhanced Regularized Inverse Filtering. Res. Biomed. Eng. 2023, 39, 233–244. [Google Scholar] [CrossRef]
  43. Lari, S.M.; Mojra, A.; Rokni, M. Simultaneous localization of multiple tumors from thermogram of tissue phantom by using a novel optimization algorithm inspired by hunting dogs. Comput. Biol. Med. 2019, 112, 103377. [Google Scholar] [CrossRef]
  44. Akopov, A.S. Parallel genetic algorithm with fading selection. Int. J. Comput. Appl. Technol. 2014, 49, 325–331. [Google Scholar] [CrossRef]
  45. Güyagüler, B.; Gümrah, F. Gas Production Rate Optimization by Genetic Algorithm. Energy Sources 2001, 23, 295–304. [Google Scholar] [CrossRef]
  46. Akopov, A.S.; Beklaryan, A.L.; Zhukova, A.A. Optimization of Characteristics for a Stochastic Agent-Based Model of Goods Exchange with the Use of Parallel Hybrid Genetic Algorithm. Cybern. Inf. Technol. 2023, 23, 87–104. [Google Scholar] [CrossRef]
  47. Akopov, A.S. A Clustering-Based Hybrid Particle Swarm Optimization Algorithm for Solving a Multisectoral Agent-Based Model. Stud. Informatics Control 2024, 33, 83–95. [Google Scholar] [CrossRef]
  48. Akopov, A.S. An Improved Parallel Biobjective Hybrid Real-Coded Genetic Algorithm with Clustering-Based Selection. Cybern. Inf. Technol. 2024, 24, 32–49. [Google Scholar] [CrossRef]
  49. Yue, S.; Rind, F.C.; Keil, M.S.; Cuadri, J.; Stafford, R. A bio-inspired visual collision detection mechanism for cars: Optimisation of a model of a locust neuron to a novel environment. Neurocomputing 2006, 69, 1591–1598. [Google Scholar] [CrossRef]
  50. Hu, G.; Huang, F.; Chen, K.; Wei, G. MNEARO: A meta swarm intelligence optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2024, 419, 116664. [Google Scholar] [CrossRef]
  51. Wang, K.; Shu, T.; Yin, X.; Xia, J. A multi-strategy improved manta ray foraging optimization for engineering applications. Clust. Comput. 2025, 28, 472. [Google Scholar] [CrossRef]
  52. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  53. Latha, V.; Karthikeyan, P. Optimization of stepped cantilever beam design using genetic algorithms. Int. J. Mech. Sci. 2015, 105, 60–69. [Google Scholar]

Share and Cite

MDPI and ACS Style

Zraqou, J.; Alnsour, A.; Alrousan, R.; Fakhouri, H.N.; Halalsheh, N. Enhanced Henry Gas Solubility Optimization for Solving Data and Engineering Design Problems. Eng 2025, 6, 374. https://doi.org/10.3390/eng6120374

