Article

A Novel Hybrid Metaheuristic Algorithm for Real-World Mechanical Engineering Optimization Problems

by
Chiara Furio
1,*,
Luciano Lamberti
1 and
Catalin I. Pruncu
2,*
1
Department of Mechanics, Mathematics and Management, Polytechnic University of Bari, Via Edoardo Orabona, 4, 70125 Bari, Italy
2
College of Engineering, Design and Physical Sciences, Mechanical and Aerospace Engineering, Brunel University London, Kingston Ln, Uxbridge UB83PH, UK
*
Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(23), 12580; https://doi.org/10.3390/app152312580
Submission received: 27 September 2025 / Revised: 14 November 2025 / Accepted: 20 November 2025 / Published: 27 November 2025
(This article belongs to the Section Mechanical Engineering)

Abstract

Real-world constrained optimization problems are often highly nonlinear and present non-convex design spaces. Metaheuristic algorithms (MHOAs) are naturally suited to solving real-world optimization problems in view of their global optimization capability, but may require too many analyses to complete the optimization process. Hybrid methods enhance the search by combining two or more algorithms to better balance exploration and exploitation. Elitist strategies may be utilized to generate high-quality trial designs, yet with no guarantee that each new design always improves the current best record. In order to solve these issues and minimize the number of analyses, this study presents the novel HALSGWJA (Hybrid Approximate Line Search Grey Wolf JAYA) algorithm. HALSGWJA combines the grey wolf optimizer (GWO) and JAYA (two powerful MHOAs still attracting optimization experts), enhanced by approximate line search. HALSGWJA utilizes approximate gradient information to perform line searches, providing descent directions with respect to the current best record. This results in a complete renewal of the current population and a much higher probability of improving all individuals with respect to the previous iteration. The proposed HALSGWJA algorithm was successfully tested on 20 real-world mechanical engineering problems: (i) the CEC2020 test suite of 19 real-world mechanical engineering examples with up to 30 optimization variables and 86 nonlinear constraints and (ii) the optimal crashworthiness design of a vehicle subject to side impact with 11 optimization variables and 10 highly nonlinear constraints. Sizing and topology optimization problems, as well as problems with discrete variables, were considered. Remarkably, HALSGWJA outperformed 18 other state-of-the-art metaheuristic algorithms in the CEC2020 problems and 25 other algorithms in the crashworthiness design problem. HALSGWJA practically converged to the target optima in all test cases (the largest penalty on the target optimized cost was only 0.0263% in problem 13 of the CEC2020 library). Furthermore, in many cases it obtained a zero or nearly zero standard deviation on the optimized cost. Lastly, HALSGWJA always ranked first in terms of computational speed, requiring fewer analyses than its competitors and exhibiting, in most cases, a moderate dispersion on the number of analyses entailed by the optimization process.

1. Introduction

Real-world optimization problems are formulated in terms of (i) practical design features representing the tasks to be accomplished and (ii) design variables that define the configuration of the particular scenario to be optimized. The goal is to minimize or maximize a cost function W(X) of NDV variables, subject to a set of NCON inequality/equality constraint functions G(X) ≤ 0 or H(X) = 0. Real-world optimization problems are usually nonlinear with respect to the cost function and constraints and present non-convex design spaces.
Gradient-based optimization algorithms (GBOAs) or metaheuristic optimization algorithms (MHOAs) may be used for solving real-world optimization problems. GBOAs formulate and solve a sequence of approximate sub-problems using gradient information and perform line searches to improve the design in the current iteration. MHOAs replace the local information provided by gradients at some specific point of the design space with global information relative to a population of candidate designs distributed over the search space. MHOAs randomly perturb the design mimicking natural phenomena, physics/chemistry/mathematics laws, evolution, animal behavior, human behavior and activities, social sciences, etc. The quality of the candidate designs included in the population is progressively improved as the search process converges to the optimal solution.
Metaheuristic algorithms are better suited for real-world optimization problems than gradient-based algorithms in view of their ability to globally explore the search space, avoiding local optima towards which the search may be directed by some particular gradient configurations. Furthermore, metaheuristics can efficiently deal with discrete variables and non-convex search spaces that often characterize real-world design problems. MHOAs combine exploration and exploitation. The former governs early optimization stages; variables are largely perturbed to quickly approach the best regions of the search space. The latter governs the final stages of the optimization process, carrying out a local search in the neighborhood of the most promising solutions. Exploration and exploitation must be properly balanced to find the global optimum with low computational effort.
MHOAs broadly include four types of algorithms: (i) evolutionary, (ii) science-based, (iii) human-based, (iv) swarm intelligence-based. Evolutionary algorithms rely on evolution theory and mimic evolutionary mechanisms. Science-based MHOAs are inspired by physics, chemistry, astronomy and astrophysics, and mathematics laws. Human-based algorithms mimic human behaviors, including learning/education, social interactions, international relationships, and political activity. Swarm intelligence-based algorithms simulate the social/individual behavior of insects, terrestrial animals, birds and other flying animals, and fishes and other aquatic animals; their optimization search is inspired by mechanisms driving reproduction, food search, hunting, migration, etc. Hybrid/improved/enhanced methods attempt to minimize computation cost, improve robustness, and limit the number of internal parameters of metaheuristic formulations. Hybrid MHOAs may present parallel (component algorithms are independently run on parallel computers) or serial (algorithms are sequentially executed on the same machine) architectures. Comprehensive reviews of MHOAs have been presented in Refs. [1,2,3,4,5,6,7].
MHOAs are largely used in real-world problems, such as in the static/dynamic optimization of civil, aerospace, and mechanical systems and structures [8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34]; characterization of materials, and systems/structural identification [35,36,37,38,39,40]; and damage identification [41,42,43,44,45,46]. However, important issues still must be considered in metaheuristic optimization: (i) no algorithm is superior to its competitors in all problems; (ii) computational cost is, in most cases, significantly higher than for gradient-based optimizers; (iii) the setting of internal parameters and computational complexity of sophisticated optimizers are often difficult to handle.
While increasing computational power enabled the development of new MHOAs, it should be underlined that most of the new algorithms had a very limited impact in the optimization community. For this reason, optimization experts very often preferred to hybridize or modify the most efficient available MHOAs rather than developing new methods from scratch. In general, the development of hybrid formulations should rely on important principles such as (i) to complement the exploration and exploitation characteristics of each component algorithm, (ii) to simplify the algorithm, and (iii) to achieve good versatility. Following this rationale, Furio et al. [6,7] developed two efficient hybrid metaheuristic algorithms combining the grey wolf optimizer (GWO) and the JAYA method. GWO was developed by Mirjalili et al. [47] in 2014; GWO mimics the leadership hierarchy and hunting mechanism of grey wolves. GWO is the second most cited algorithm in the metaheuristic optimization literature after particle swarm optimization (PSO). Its simple formulation (much simpler than PSO and without internal parameters) and high computational efficiency explain the huge number of engineering applications documented in the literature for this method (e.g., [48,49,50]). JAYA, developed by Rao [51] in 2016, utilizes the most straightforward search scheme ever proposed in metaheuristic optimization; variables are perturbed using just one equation to approach the population’s best solution and move away from the worst solution. JAYA’s efficiency and versatility are confirmed by the many studies presented in the literature (e.g., [52,53]). The inherently hybrid nature of JAYA, combining the survival of the fittest individual and the leader’s ability to guide the rest of the population to the optimal solution, makes JAYA an excellent component algorithm for the hybridization of metaheuristic methods.
The term “hybrid MHOAs” also covers the integration of gradient information into the metaheuristic search. In this regard, Ficarella et al. [26] developed efficient hybrid formulations of three widely used metaheuristic optimization algorithms, such as simulated annealing (SA), harmony search (HS), and big bang-big crunch (BBBC). In [26], low-cost gradient evaluation and approximate line search were added to each metaheuristic search engine to generate very high-quality trial designs, always belonging to descent directions. This allowed the continuous improvement of the design by quickly approaching the global optimum. The population was dynamically renewed by simultaneously improving as many candidate designs as possible. The hybridization scheme of [26] may be applied, in principle, to any metaheuristic optimizers.
Based on the previous arguments, the novel HALSGWJA (Hybrid Approximate Line Search Grey Wolf JAYA) algorithm was developed in the present study. HALSGWJA combines the basic formulations of GWO and JAYA, but it utilizes approximate gradient information and line search to modify trial designs, always making them very likely to improve the current best record. Since this strategy is common to exploration and exploitation, these two stages are well balanced over the whole optimization process, thus maximizing search efficiency. HALSGWJA definitely improves the hybrid GWO-JAYA formulations previously developed in [6,7], which already included elitist and repair strategies to generate high-quality candidate solutions but did not directly use line search in the initial generation of trial solutions, as this is instead performed by the present algorithm. This allows HALSGWJA to avoid the heuristic criteria for the acceptance/rejection of trial designs that were necessary in [6,7], thus achieving faster and more robust convergence behavior.
The HALSGWJA algorithm was tested on 20 real-world mechanical engineering optimization problems including (i) the 19 examples of the CEC2020 library on real-world mechanical engineering and (ii) a highly nonlinear optimal crashworthiness design problem. These 20 test problems are very significant as they include up to 30 optimization variables and 86 nonlinear constraints, as well as sizing/topology, static/dynamic optimization problems, and discrete variable problems. HALSGWJA was extensively compared with more than 40 other metaheuristics. Remarkably, HALSGWJA always competed well with CEC2020’s best performing algorithms and other advanced state-of-the-art metaheuristic optimizers, requiring fewer analyses to complete the optimization process.
The remainder of this article is organized as follows. Section 2 describes the HALSGWJA algorithm. Optimization results are discussed in Section 3. Section 4 summarizes this study and outlines future investigations.

2. The HALSGWJA Algorithm

The HALSGWJA algorithm developed in this study has six steps: (i) initialization; (ii) GWO phase, including approximate gradient information; (iii) JAYA phase, including evaluation and improvement of trial designs with elitist strategies; (iv) population resorting, definition of the new best/worst designs, and updating of α-β-δ wolves; (v) convergence check; (vi) end of search process and output of optimum design.
  • Step 1. Initialization
HALSGWJA randomly generates the population of NPOP candidate solutions (i.e., wolves) as
$$x_j^i = x_j^L + \rho_j^i \left( x_j^U - x_j^L \right) \qquad j = 1, \ldots, N_{DV}; \quad i = 1, \ldots, N_{POP}$$
where NDV is the number of optimization variables, $x_j^L$ and $x_j^U$ are the lower and upper bounds of the jth variable, respectively, and $\rho_j^i$ is a random number in the interval (0,1).
The cost function W( X ) (depending on the NDV variables stored in the solution vector X ) is collapsed with the NCON inequality constraint functions Gk( X ) ≤ 0 into the penalized cost function Wp( X ):
$$W_p(\vec{X}) = W(\vec{X}) + p \cdot \psi$$
where p is the penalty coefficient that may be constant or adaptively change in the optimization process. The penalty function ψ is
$$\psi = \sum_{k=1}^{N_{CON}} \left[ \max\left(0, g_k\right) \right]^2$$
The penalized cost Wp(X) and the cost function W(X) coincide if the trial solution X satisfies the optimization constraints. Equality constraints H(X) = 0 are transformed into pairs of inequality constraints of the Gk(X) ≤ 0 and −Gk(X) ≤ 0 types. Candidate solutions are sorted with respect to penalized cost function values; the current best solution $\vec{X}_{best}$ and the worst solution $\vec{X}_{worst}$, respectively, correspond to the lowest (highest) and highest (lowest) values of Wp for minimization (maximization) problems.
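To make Step 1 concrete, the following minimal Python sketch shows how a population could be initialized with Equation (1) and ranked by the penalized cost of Equations (2)–(3); the toy cost function, constraint, bounds, and penalty coefficient p are illustrative assumptions, not the paper's test problems.

```python
import numpy as np

def initialize_population(n_pop, lower, upper, rng):
    """Equation (1): x_j^i = x_j^L + rho_j^i * (x_j^U - x_j^L), rho in (0,1)."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    rho = rng.random((n_pop, lower.size))
    return lower + rho * (upper - lower)

def penalized_cost(x, cost_fn, constraint_fn, p=1.0e6):
    """Equations (2)-(3): Wp = W + p * sum_k max(0, g_k)^2, with g_k(x) <= 0 feasible."""
    g = np.asarray(constraint_fn(x), dtype=float)
    psi = np.sum(np.maximum(0.0, g) ** 2)
    return cost_fn(x) + p * psi

# Toy usage: minimize x1^2 + x2^2 subject to x1 + x2 >= 1 (i.e., g = 1 - x1 - x2 <= 0)
rng = np.random.default_rng(0)
pop = initialize_population(20, [-5.0, -5.0], [5.0, 5.0], rng)
wp = np.array([penalized_cost(x,
                              cost_fn=lambda v: float(np.sum(v ** 2)),
                              constraint_fn=lambda v: [1.0 - v[0] - v[1]])
               for x in pop])
order = np.argsort(wp)                      # best (lowest Wp) first
x_best, x_worst = pop[order[0]], pop[order[-1]]
```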
  • Step 2. GWO phase: Generation of new trial designs with approximate gradient information
GWO simulates the leadership hierarchy and hunting mechanism of grey wolves [47]. GWO defines four hierarchical types of grey wolves: α, β, δ, and ω. The hunting phase is simulated by mathematical operators depicting searching, encircling, and attacking prey. The group is led by the α wolf. The β wolf helps the α wolf in decision-making or other pack activities. The δ wolves rank below the α and β wolves but above the lowest-ranked ω wolves. While α, β, and δ wolves estimate the prey’s position, other wolves randomly update their positions around the prey.
The proposed HALSGWJA algorithm defines new trial designs by enhancing classical GWO with gradient information. The α, β, and δ wolves are the three best individuals ranked with respect to penalized cost function. All other designs are classified as wolves ω. Wolves encircle the prey during the hunt. This behavior can be modeled as
$$\vec{D} = \left| \vec{C} \cdot \vec{X}_p - \vec{X}(it) \right|$$
$$\vec{X}(it+1) = \vec{X}_p - \vec{A} \cdot \vec{D}$$
where it is the current iteration, $\vec{A}$ and $\vec{C}$ are coefficient vectors, $\vec{X}_p$ is the prey's position vector, $\vec{X}(it)$ is the position of the generic search agent (i.e., grey wolf) of the population, and $\vec{X}(it+1)$ is its updated position. The "$\cdot$" symbol denotes the term-by-term vector multiplication, which yields another vector.
Vectors A and C are defined as follows:
$$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}$$
$$\vec{C} = 2\vec{r}_2$$
where the components of vector $\vec{a}$ decrease linearly from 2 to 0 over the optimization process; $\vec{r}_1$ and $\vec{r}_2$ are random vectors extracted from [0,1]. In the exploration phase, search agents tend to search for another prey (i.e., another optimum solution) when |A| > 1. Conversely, search agents converge towards the prey (i.e., the optimum solution) in the exploitation phase when |A| < 1.
To simulate hunting, GWO assumes that the three best individuals (i.e., wolves α, β, and δ) can estimate the prey’s location (i.e., optimal solution) better than the rest of the population. All other designs are hence updated using the design vectors X α , X β , and X δ of the best individuals. The generic design X i of population is updated to X i , t r as follows:
$$\vec{D}_\alpha = \left| \vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}_i \right|, \quad \vec{D}_\beta = \left| \vec{C}_2 \cdot \vec{X}_\beta - \vec{X}_i \right|, \quad \vec{D}_\delta = \left| \vec{C}_3 \cdot \vec{X}_\delta - \vec{X}_i \right|$$
$$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta$$
$$\vec{X}_{i,tr}^{prel} = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}$$
Vectors $\vec{X}_1$, $\vec{X}_2$, and $\vec{X}_3$ vary for each candidate design $\vec{X}_i$. Equations (8)–(10) update the positions of all individuals.
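A compact sketch of how the preliminary trial designs of Equations (4)–(10) could be generated is given below; the linear decrease of vector a from 2 to 0 follows the text, while the function and argument names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def gwo_preliminary_trial(x_i, x_alpha, x_beta, x_delta, it, it_max, rng):
    """Classical GWO update (Equations (4)-(10)) producing X_i,tr^prel."""
    a = 2.0 * (1.0 - it / it_max)            # components of a decrease from 2 to 0
    parts = []
    for x_lead in (x_alpha, x_beta, x_delta):
        r1, r2 = rng.random(x_i.size), rng.random(x_i.size)
        A = 2.0 * a * r1 - a                 # Eq. (6): |A|>1 exploration, |A|<1 exploitation
        C = 2.0 * r2                         # Eq. (7)
        D = np.abs(C * x_lead - x_i)         # Eq. (8): encircling distance
        parts.append(x_lead - A * D)         # Eq. (9): X1, X2, X3
    return np.mean(parts, axis=0)            # Eq. (10): average of the three moves

# Example call on illustrative data
rng = np.random.default_rng(1)
x_i = rng.random(5)
x_a, x_b, x_d = rng.random(5), rng.random(5), rng.random(5)
x_tr_prel = gwo_preliminary_trial(x_i, x_a, x_b, x_d, it=3, it_max=100, rng=rng)
```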
Classical GWO sorts the new population of trial solutions $\vec{X}_{i,tr}^{prel}$ to update the positions $\vec{X}_\alpha$, $\vec{X}_\beta$, and $\vec{X}_\delta$ in each iteration. This process ends when the optimizer completes the limit number of iterations or function evaluations specified by the user. However, classical GWO has two inherent limitations: (i) the search is biased toward the α wolf, with the risk of the search stagnating in the neighborhood of the current best record $\vec{X}_{best}$, and (ii) each new trial solution $\vec{X}_{i,tr}^{prel}$ does not necessarily improve $\vec{X}_{best}$ or at least the corresponding search agent $\vec{X}_i$ of the previous population. The present authors attempted to overcome these problems by introducing elitist strategies that refine the $\vec{X}_{i,tr}^{prel}$ trial designs [6,7]. In particular, only the trial solutions that are very likely to improve the design are retained in the updated population and are then further refined based on their rank with respect to $\vec{X}_{best}$. However, the perturbation step given to $\vec{X}_{i,tr}^{prel}$ could be too small to effectively improve $\vec{X}_{best}$ or at least $\vec{X}_i$.
In order to solve these issues, HALSGWJA utilizes gradient information to refine the trial designs $\vec{X}_{i,tr}^{prel}$. Ideally, gradients of the cost function and constraint functions should be computed with respect to each trial design $\vec{X}_i$ currently perturbed. However, this is not affordable in terms of computational cost. Hence, HALSGWJA evaluates the rate of variation in the penalized cost function with respect to $\vec{X}_i$. The largest improvement would obviously be obtained by perturbing the design from $\vec{X}_i$ to $\vec{X}_{best}$. However, this may not be the fastest improvement if one considers the ratio $\Delta W_p(i,k)/\Delta S(i,k)$ between the variation in penalized cost function $\Delta W_p(i,k) = W_p(\vec{X}_i) - W_p(\vec{X}_k)$ and the corresponding perturbation size $\Delta S(i,k) = \left\| \vec{X}_i - \vec{X}_k \right\|$ from $\vec{X}_i$ to another search agent $\vec{X}_k$ better ranked in the population than $\vec{X}_i$. If $\vec{X}_k$ ranks better than $\vec{X}_i$, the corresponding direction $\vec{S}_{i,k} = \vec{X}_k - \vec{X}_i$ is a descent direction with respect to $\vec{X}_i$. Conversely, if $\vec{X}_k$ ranks below $\vec{X}_i$, perturbing the design along the $(\vec{X}_k - \vec{X}_i)$ direction would not improve the design with respect to $\vec{X}_i$: hence, it is better to perturb the design in the opposite direction $-\vec{S}_{i,k} = -(\vec{X}_k - \vec{X}_i)$ or, equivalently, along the direction $\vec{S}_{i,k} = \vec{X}_i - \vec{X}_k$.
In view of this, HALSGWJA computes the average gradient of the penalized cost function with respect to $\vec{X}_i$ as $\gamma(i,k) = \Delta W_p(i,k)/\Delta S(i,k)$ for each design $\vec{X}_k$ stored in the population. The preliminary trial design $\vec{X}_{i,tr}^{prel}$, generated by the GWO scheme, is hence perturbed as follows:
$$\vec{X}_{i,tr} = \vec{X}_{i,tr}^{prel} + \tau_{best}\left(\vec{X}_{best} - \vec{X}_i\right) + \tau_{FST,btt}\left(\vec{X}_{FST,better} - \vec{X}_i\right) - \tau_{FST,wrs}\left(\vec{X}_{FST,worse} - \vec{X}_i\right)$$
In the above equation, $\vec{X}_{FST,better}$ is the candidate design of the population ranking above $\vec{X}_i$, yet different from $\vec{X}_{best}$, that achieves the largest value of the average gradient $\gamma(i,k)$; $\vec{X}_{FST,worse}$ is the candidate design of the population ranking below $\vec{X}_i$ that achieves the largest value of the average gradient $\gamma(i,k)$; and $\tau_{best}$, $\tau_{FST,btt}$, and $\tau_{FST,wrs}$ are three random numbers in the interval [0,1]. It can be seen that Equation (11) refines the position of $\vec{X}_{i,tr}^{prel}$ by combining three descent directions that may potentially improve the design with respect to $\vec{X}_i$. The $(\vec{X}_{best} - \vec{X}_i)$ direction would lead to the largest improvement for $\vec{X}_i$, as the search is directed towards the current best record stored in the population. The $(\vec{X}_{FST,better} - \vec{X}_i)$ direction is the one that would improve $\vec{X}_i$ as quickly as possible, moving towards designs that rank above $\vec{X}_i$. The $-(\vec{X}_{FST,worse} - \vec{X}_i)$ direction is the one that allows the search to escape most quickly from any design of the population ranking below $\vec{X}_i$. The perturbation steps taken along these three directions are weighted by the random numbers $\tau_{best}$, $\tau_{FST,btt}$, and $\tau_{FST,wrs}$, respectively.
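The gradient-informed refinement of Equation (11) can be sketched as follows; the population is assumed to be stored row-wise with its penalized costs in wp, and all names are illustrative rather than the authors' code.

```python
import numpy as np

def refine_trial(i, pop, wp, x_tr_prel, rng):
    """Equation (11): combine three descent directions weighted by random tau values."""
    order = np.argsort(wp)                       # best (lowest Wp) first
    rank = {int(idx): r for r, idx in enumerate(order)}
    x_i, x_best = pop[i], pop[order[0]]
    better, worse = None, None
    g_better, g_worse = -np.inf, -np.inf
    for k in range(len(pop)):
        if k == i or k == order[0]:
            continue
        step = np.linalg.norm(pop[k] - x_i)
        if step == 0.0:
            continue
        gamma = abs(wp[i] - wp[k]) / step        # average gradient gamma(i,k)
        if rank[k] < rank[i] and gamma > g_better:
            better, g_better = pop[k], gamma     # X_FST,better: fastest improvement
        elif rank[k] > rank[i] and gamma > g_worse:
            worse, g_worse = pop[k], gamma       # X_FST,worse: fastest escape
    t_best, t_btt, t_wrs = rng.random(3)         # tau weights in [0,1]
    x_tr = x_tr_prel + t_best * (x_best - x_i)
    if better is not None:
        x_tr = x_tr + t_btt * (better - x_i)
    if worse is not None:
        x_tr = x_tr - t_wrs * (worse - x_i)
    return x_tr
```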
  • Step 3. Evaluation and correction of new trial designs: JAYA phase and elitist strategies
The trial designs X i , t r must be evaluated to check whether they should be included in the new population. Classical GWO replaces the old design X i with the new design X i , t r if Wp X i , t r < Wp X i but this task entails a new constraint evaluation. The present authors introduced in [6,7] an elitist strategy where the cost function W( X i , t r ) of the new trial design X i , t r was compared with the penalized cost function Wp  X b e s t of the current best record X b e s t . If W( X i , t r ) > Wp X b e s t , the new trial design X i , t r was directly rejected. Conversely, if W( X i , t r ) < Wp X b e s t and W( X i , t r ) ≤ 1.1 W( X b e s t ), the new trial solution X i , t r was provisionally included in the new population. This step is no longer necessary in HALSGWJA because Equation (11) refines trial designs X i , t r , combining three significant directions that lead to a decrease in the cost function value at least with respect to the old design X i . Hence, HALSGWJA includes in the new population Πtr all X i , t r trial designs defined with Equation (11). Unlike classical GWO and the improved GWO formulations developed in [6,7], HALSGWJA now defines the new trial solutions X i , t r of the new population not only considering the positions of wolves α, β, and δ but also the effect of the rest of the population. This strategy reduces, by a great deal, the risk of stagnation for the new trial solutions.
HALSGWJA adopts a JAYA-based scheme to further improve trial designs  X i , t r . The hybridization of GWO and JAYA was considered also in [6,7], using JAYA operators to exploit the trial designs X i , t r or to globally explore the design space returning to perturb the old design X i originally included in the population. The former was performed if W( X i , t r ) ≤ 1.1 W( X b e s t ), while the latter was performed if W( X i , t r ) > 1.1 W( X b e s t ). The selection of the 1.1 threshold made in [6,7] resembles the probabilistic acceptance/rejection criterion of simulated annealing that allows local minima to be bypassed and becomes very useful in the exploitation phase. In the present study, HALSGWJA modified the above-mentioned criterion to be W( X i , t r ) ≤ ρwW( X b e s t ) or W( X i , t r ) > ρwW( X b e s t ). The threshold value ρw, which replaces the 1.1 constant threshold value, is adaptively changed by HALSGWJA in each iteration based on the characteristics of the original population Πiter of candidate designs X i at the beginning of current iteration, the population of trial designs X i , t r defined in the GWO phase with Equations (4)–(11), and the approximate gradients evaluated at the best designs of these two populations. This allows search agents to be adaptively perturbed in a significantly more dynamic way than in [6,7]. The new threshold value ρw defined by HALSGWJA is
$$\rho_w = \min\left[ \frac{W_p(\vec{X}_\delta)}{W_p(\vec{X}_{best})};\; \frac{W_{\delta,tr}}{W_{best,tr}};\; \frac{\overline{\nabla} W_p(\vec{X}_{best})}{\overline{\nabla} W(\vec{X}_{best,tr})};\; \frac{\overline{\nabla} W(\vec{X}_{best,tr})}{\overline{\nabla} W_p(\vec{X}_{best})} \right]$$
In order to use Equation (12), HALSGWJA sorts the trial designs X i , t r in terms of cost function. Let Wbest,tr and Wδ,tr, respectively, be the best value and the third best value of cost function evaluated for the Πtr population of trial designs X i , t r ; the corresponding trial designs X b e s t , t r   and X δ , t r practically represent the new wolves αtr and δtr of Πtr estimated on the basis of cost function values without evaluating constraints.
The approximate gradient of penalized cost function ¯ W p X b e s t is evaluated at X b e s t for the original population Πiter-initial of candidate designs X i at the beginning of the current iteration, considering wolves α (that coincides with X b e s t ), β, and δ, and the X F A S T design yielding the fastest variation in penalized cost function toward the current best record. For the Πtr population of trial designs, the average gradient of cost function ¯ W X b e s t , t r is evaluated at X b e s t , t r , considering wolves αtr (that coincides with X b e s t , t r ), βtr, and δtr and the X F A S T , t r trial design for which cost function decreases most quickly, moving toward X b e s t , t r .
$$\overline{\nabla} W_p(\vec{X}_{best}) = \mathrm{Mean}\left[ \frac{W_p(\vec{X}_{best}) - W_p(\vec{X}_\beta)}{\left\|\vec{X}_{best} - \vec{X}_\beta\right\|};\; \frac{W_p(\vec{X}_{best}) - W_p(\vec{X}_\delta)}{\left\|\vec{X}_{best} - \vec{X}_\delta\right\|};\; \frac{W_p(\vec{X}_{best}) - W_p(\vec{X}_{FAST})}{\left\|\vec{X}_{best} - \vec{X}_{FAST}\right\|} \right]$$
$$\overline{\nabla} W(\vec{X}_{best,tr}) = \mathrm{Mean}\left[ \frac{W(\vec{X}_{best,tr}) - W(\vec{X}_{\beta,tr})}{\left\|\vec{X}_{best,tr} - \vec{X}_{\beta,tr}\right\|};\; \frac{W(\vec{X}_{best,tr}) - W(\vec{X}_{\delta,tr})}{\left\|\vec{X}_{best,tr} - \vec{X}_{\delta,tr}\right\|};\; \frac{W(\vec{X}_{best,tr}) - W(\vec{X}_{FAST,tr})}{\left\|\vec{X}_{best,tr} - \vec{X}_{FAST,tr}\right\|} \right]$$
It can be seen from Equation (13) that HALSGWJA computes approximate gradients by averaging the relative variations in the penalized cost function Wp and the cost function W with respect to design perturbations taken along three representative descent directions. The computational cost of these operations is very low because penalty function values are available for all search agents at the beginning of each iteration while the evaluation of cost function for trial designs is usually much cheaper than evaluating constraints, especially for the CEC 2020 mechanical engineering test suite solved in this study, which includes explicitly stated test problems.
The rationale behind Equation (12) is as follows. Basically, HALSGWJA checks whether the Πtr population of trial designs has a higher quality than the original population Πiter-initial that should be updated in the current iteration. Such a scenario is likely to occur considering that the trial designs were defined in the GWO phase by perturbing the optimization variables along descent directions. However, HALSGWJA adopts a conservative approach in order to minimize the probability of exploiting, in the JAYA phase, some design that is significantly worse than the three best search agents of Πtr and Πiter-initial and that hence cannot effectively contribute to the search for the optimum. For that purpose, the ρw threshold takes the minimum ratio between the responses of the δ and α wolves (i.e., the third best and best records) of the two populations and further penalizes this ratio by considering the minimum ratio between the approximate gradients. If the Πtr population of trial designs tends to the target optimum, the gradient of the cost function should decrease and finally approach zero in passing from Πiter-initial to Πtr.
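A sketch of how the adaptive threshold of Equations (12)–(13) could be computed is given below; the inputs are assumed to be the three best penalized costs/designs of Πiter-initial, the three best plain costs/designs of Πtr, and the respective "fastest-variation" designs. Absolute rates of variation are used here for simplicity, and all names are illustrative assumptions.

```python
import numpy as np

def mean_gradient(f_best, x_best, references):
    """Equation (13): mean rate of variation of the (penalized) cost toward x_best."""
    rates = []
    for f_ref, x_ref in references:
        step = np.linalg.norm(x_best - x_ref)
        if step > 0.0:
            rates.append(abs(f_best - f_ref) / step)
    return np.mean(rates) if rates else 0.0

def adaptive_threshold(wp3, x3, wp_fast, x_fast,
                       w3_tr, x3_tr, w_tr_fast, x_tr_fast, eps=1e-12):
    """Equation (12): wp3/x3 hold alpha, beta, delta of the original population;
    w3_tr/x3_tr the same for the trial population (cost only, no constraints)."""
    grad_wp = mean_gradient(wp3[0], x3[0],
                            [(wp3[1], x3[1]), (wp3[2], x3[2]), (wp_fast, x_fast)])
    grad_w_tr = mean_gradient(w3_tr[0], x3_tr[0],
                              [(w3_tr[1], x3_tr[1]), (w3_tr[2], x3_tr[2]),
                               (w_tr_fast, x_tr_fast)])
    return min(wp3[2] / (wp3[0] + eps),          # Wp(X_delta) / Wp(X_best)
               w3_tr[2] / (w3_tr[0] + eps),      # W_delta,tr  / W_best,tr
               grad_wp / (grad_w_tr + eps),      # gradient ratios, both ways
               grad_w_tr / (grad_wp + eps))
```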
In the JAYA phase, HALSGWJA exploits the trial design X i , t r if it occurs that W( X i , t r ) ≤ ρwW( X b e s t ). Since the JAYA search scheme consists of approaching the best design and escaping from the worst design of the population, it is necessary to define these two individuals. For that purpose, HALSGWJA evaluates the value of the penalized cost function W p X b e s t , t r at X b e s t , t r and compares it with its counterpart W p X b e s t , evaluated at the best record X b e s t obtained so far. The X b e s t , J A Y A design is hence defined as follows:
$$\begin{cases} W_p(\vec{X}_{best,tr}) \le W_p(\vec{X}_{best}) \;\Rightarrow\; \vec{X}_{best,JAYA} = \vec{X}_{best,tr} \\ W_p(\vec{X}_{best,tr}) > W_p(\vec{X}_{best}) \;\Rightarrow\; \vec{X}_{best,JAYA} = \vec{X}_{best} \end{cases}$$
The X w o r s t , J A Y A design is instead defined as follows:
$$\begin{cases} W(\vec{X}_{\delta,tr}) \le W(\vec{X}_\delta) \;\Rightarrow\; \vec{X}_{worst,JAYA} = \vec{X}_{\delta,tr} \\ W(\vec{X}_{\delta,tr}) > W(\vec{X}_\delta) \;\Rightarrow\; \vec{X}_{worst,JAYA} = \vec{X}_\delta \end{cases}$$
The rationale of Equations (14) and (15) is to combine the best characteristics of the original population Πiter-initial and the trial population Πtr that should replace it. This maximizes the efficiency of the exploitation phase because HALSGWJA is forced to search locally in a region of the design space containing only high-quality solutions, such as the best α and δ wolves of the two populations. Remarkably, HALSGWJA may select both $\vec{X}_{best,JAYA}$ and $\vec{X}_{worst,JAYA}$ from Πtr, both from Πiter-initial, or one from each population. The new trial solution $(\vec{X}_{i,tr})'$ is hence defined as:
$$(\vec{X}_{i,tr})' = \vec{X}_{i,tr} + \vec{\lambda}_1 \cdot \left(\vec{X}_{best,JAYA} - \vec{X}_{i,tr}\right) - \vec{\lambda}_2 \cdot \left(\vec{X}_{worst,JAYA} - \vec{X}_{i,tr}\right)$$
where λ 1 and λ 2 are two vectors of NDV random numbers in the [0,1] interval.
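The exploitation step of Equations (14)–(16) can be sketched as follows; the argument names are illustrative assumptions, with wp_* denoting penalized costs and w_* plain costs.

```python
import numpy as np

def jaya_exploit(x_tr, x_best, wp_best, x_best_tr, wp_best_tr,
                 x_delta, w_delta, x_delta_tr, w_delta_tr, rng):
    # Equation (14): the better of the two "alpha" wolves becomes X_best,JAYA
    x_best_jaya = x_best_tr if wp_best_tr <= wp_best else x_best
    # Equation (15): the better of the two "delta" wolves becomes X_worst,JAYA
    x_worst_jaya = x_delta_tr if w_delta_tr <= w_delta else x_delta
    lam1, lam2 = rng.random(x_tr.size), rng.random(x_tr.size)
    # Equation (16): approach X_best,JAYA and move away from X_worst,JAYA
    return x_tr + lam1 * (x_best_jaya - x_tr) - lam2 * (x_worst_jaya - x_tr)
```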
The cost function W(($\vec{X}_{i,tr}$)′) is evaluated for the new trial solution $(\vec{X}_{i,tr})'$. HALSGWJA then includes the $(\vec{X}_{i,tr})'$ design in the final population Πiter-final, defined at the end of the current iteration, if W(($\vec{X}_{i,tr}$)′) < W($\vec{X}_{i,tr}$); in this case, the exploitation strategy set by HALSGWJA for $\vec{X}_{i,tr}$ is rated successful. Conversely, if W(($\vec{X}_{i,tr}$)′) > W($\vec{X}_{i,tr}$), $(\vec{X}_{i,tr})'$ is rejected and $\vec{X}_{i,tr}$ is finally retained in the updated population Πiter-final; in this case, HALSGWJA rates only the improvement in design achieved by $\vec{X}_{i,tr}$ as satisfactory. HALSGWJA performs an exploration in the JAYA phase if the trial design $\vec{X}_{i,tr}$ yields W($\vec{X}_{i,tr}$) > ρwW($\vec{X}_{best}$). While $\vec{X}_{i,tr}$ is not good enough for exploitation, it may still be better than most of the designs included in the original population Πiter-initial, as well as being highly ranked in the trial population Πtr. Hence, it is necessary to explore over all of the 2 × NPOP search agents included in the Πiter-initial and Πtr populations before deciding whether to dismiss the trial design $\vec{X}_{i,tr}$. Again, HALSGWJA defines the $\vec{X}_{best,JAYA}$ and $\vec{X}_{worst,JAYA}$ designs required in the JAYA phase by combining the properties of Πiter-initial and Πtr. Similarly to the exploitation phase, HALSGWJA compares the value of the penalized cost function $W_p(\vec{X}_{best,tr})$ with its counterpart $W_p(\vec{X}_{best})$, and the $\vec{X}_{best,JAYA}$ design is defined by Equation (14). The $\vec{X}_{worst,JAYA}$ design is instead defined by comparing the worst designs of the two populations:
$$\begin{cases} W(\vec{X}_{worst,tr}) \le W(\vec{X}_{worst}) \;\Rightarrow\; \vec{X}_{worst,JAYA} = \vec{X}_{worst,tr} \\ W(\vec{X}_{worst,tr}) > W(\vec{X}_{worst}) \;\Rightarrow\; \vec{X}_{worst,JAYA} = \vec{X}_{worst} \end{cases}$$
The new trial solution X i , t r defined by HALSGWJA in the JAYA-based exploration phase is
$$(\vec{X}_{i,tr})' = \vec{X}_{i,JAYA} + \vec{\lambda}_1 \cdot \left(\vec{X}_{best,JAYA} - \vec{X}_{i,JAYA}\right) - \vec{\lambda}_2 \cdot \left(\vec{X}_{worst,JAYA} - \vec{X}_{i,JAYA}\right)$$
where the random vectors $\vec{\lambda}_1$ and $\vec{\lambda}_2$ are similar to those used in Equation (16). The absolute values of the components of the $\vec{X}_i$ or $\vec{X}_{i,tr}$ vectors included in Equations (16) and (18) are considered. The $\vec{X}_{i,JAYA}$ design included in Equation (18) is defined as follows:
$$\begin{cases} W(\vec{X}_{i,tr}) \le W(\vec{X}_i) \;\Rightarrow\; \vec{X}_{i,JAYA} = \vec{X}_{i,tr} \\ W(\vec{X}_{i,tr}) > W(\vec{X}_i) \;\Rightarrow\; \vec{X}_{i,JAYA} = \vec{X}_i \end{cases}$$
Similarly to $\vec{X}_{best,JAYA}$ and $\vec{X}_{worst,JAYA}$, the $\vec{X}_{i,JAYA}$ design can belong either to Πiter-initial or to Πtr. The rationale of Equations (18) and (19) is as follows. Ideally, each individual included in the new population Πiter-final updated in the current iteration should be better than its counterpart included in the original population Πiter-initial. While the trial design $\vec{X}_{i,tr}$ may not be good for exploitation because it does not have enough quality to improve the current best record, HALSGWJA attempts to explore the search space so as to improve at least the i-th individual of the population. For this reason, HALSGWJA perturbs the better design $\vec{X}_{i,JAYA}$ amongst $\vec{X}_i$ and $\vec{X}_{i,tr}$ with the JAYA-based scheme of Equation (18); the algorithm searches along the $(\vec{X}_{best,JAYA} - \vec{X}_{i,JAYA})$ descent direction, which leads to a better design than $\vec{X}_{i,JAYA}$, and escapes from the worst design $\vec{X}_{worst,JAYA}$, which certainly cannot improve (or is very unlikely to improve) $\vec{X}_{i,JAYA}$.
Again, the new trial solution X i , t r ′ is included in the final population Πiter-final when W(( X i , t r )′) < ρwW( X b e s t ), which obviously yields W(( X i , t r )′) < ρwW( X b e s t ) < W( X i , t r ) as the last inequality leads to the exploration phase being carried out.
When the JAYA-based exploration phase fails, still yielding W(($\vec{X}_{i,tr}$)′) > ρwW($\vec{X}_{best}$) besides W($\vec{X}_{i,tr}$) > ρwW($\vec{X}_{best}$), HALSGWJA utilizes a mirroring strategy for $\vec{X}_{i,tr}$ and $(\vec{X}_{i,tr})'$. The new trial solution $\vec{X}_{i,tr}^{(mirr)}$ is defined as follows:
$$\begin{cases} W(\vec{X}_{i,tr}) \le W\!\left((\vec{X}_{i,tr})'\right) \;\Rightarrow\; \vec{X}_{i,tr}^{(mirr)} = \vec{X}_{best} + \eta_{mirr}\left(\vec{X}_{best} - \vec{X}_{i,tr}\right) \\ W(\vec{X}_{i,tr}) > W\!\left((\vec{X}_{i,tr})'\right) \;\Rightarrow\; \vec{X}_{i,tr}^{(mirr)} = \vec{X}_{best} + \eta_{mirr}\left(\vec{X}_{best} - (\vec{X}_{i,tr})'\right) \end{cases}$$
where ηmirr is a random number in the [0,1] interval. Basically, HALSGWJA perturbs the design by moving from the current best record along a descent direction with respect to the better design amongst $\vec{X}_{i,tr}$ and $(\vec{X}_{i,tr})'$. Hence, the search direction is mirrored with respect to the non-descent directions $(\vec{X}_{i,tr} - \vec{X}_{best})$ and $((\vec{X}_{i,tr})' - \vec{X}_{best})$ that did not improve the design. In this way, the trial solution $\vec{X}_{i,tr}^{(mirr)}$ is very likely to improve the design. The random step size set by ηmirr serves to limit the risk of generating infeasible designs. The new solution $\vec{X}_{i,tr}^{(mirr)}$ is evaluated following the same steps described above for all other trial solutions generated by HALSGWJA.
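When both JAYA-based attempts fail, the mirroring strategy of Equation (20) can be sketched as below; w_tr and w_tr_prime are assumed to hold the cost values of X_i,tr and (X_i,tr)′, and the function name is illustrative.

```python
import numpy as np

def mirror_trial(x_best, x_tr, w_tr, x_tr_prime, w_tr_prime, rng):
    """Equation (20): mirror about X_best the better of the two rejected trials."""
    eta = rng.random()                                 # random step in [0,1]
    x_ref = x_tr if w_tr <= w_tr_prime else x_tr_prime
    return x_best + eta * (x_best - x_ref)
```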
  • Step 4. Resort population, update α, β, and δ wolves,  X b e s t , and  X w o r s t
The NPOP updated individuals are sorted in terms of the penalized cost function. The best three individuals of the new population are taken as the α, β, and δ wolves, with their corresponding vectors of optimization variables $\vec{X}_\alpha$, $\vec{X}_\beta$, and $\vec{X}_\delta$. The best and worst search agents are taken as $\vec{X}_{best}$ and $\vec{X}_{worst}$, respectively.
To avoid search stagnation, HALSGWJA checks if wolves α, β, and δ were updated in the current iteration. This task is accomplished by an elitist criterion based on the concept of descent direction. The $\vec{S}_\beta = \vec{X}_{best} - \vec{X}_\beta$ and $\vec{S}_\delta = \vec{X}_{best} - \vec{X}_\delta$ directions are descent directions with respect to the positions $\vec{X}_\beta$ and $\vec{X}_\delta$ of the β and δ wolves, as they lead to an improvement of the design moving towards the current best record, represented by the α wolf. Similarly to [7], HALSGWJA perturbs the β and δ wolves by moving along $\vec{S}_\beta$ and $\vec{S}_\delta$. The positions of wolves β and δ are displaced from the center of mass of the three best individuals of the population and then "mirrored" with respect to $\vec{X}_{best}$ as follows:
$$\begin{cases} \vec{X}_\beta^{mirr} = \left(\dfrac{\vec{X}_{best} + \vec{X}_\beta + \vec{X}_\delta}{3} - \vec{X}_\delta\right) + \left(1 + \eta_{mirr,\beta}\right)\vec{X}_{best} - \eta_{mirr,\beta}\,\vec{X}_\beta \\ \vec{X}_\delta^{mirr} = \left(\dfrac{\vec{X}_{best} + \vec{X}_\beta + \vec{X}_\delta}{3} - \vec{X}_\delta\right) + \left(1 + \eta_{mirr,\delta}\right)\vec{X}_{best} - \eta_{mirr,\delta}\,\vec{X}_\delta \end{cases}$$
where the random numbers ηmirr,β and ηmirr,δ are extracted from the interval (0,1); they limit the step sizes along $\vec{S}_\beta$ and $\vec{S}_\delta$ so as to minimize the risk of generating infeasible solutions. Interestingly, since the original perturbation scheme of Ref. [7] did not account for the relative position of the center of mass of the three best individuals and the δ wolf, the following "limit" cases could occur in [7]: (i) for ηmirr,β = ηmirr,δ = 0, $\vec{X}_\beta^{mirr}$ and $\vec{X}_\delta^{mirr}$ coincided with $\vec{X}_{best}$; (ii) for ηmirr,β = ηmirr,δ = 1, $\vec{X}_\beta^{mirr}$ and $\vec{X}_\delta^{mirr}$, respectively, were the mirror images of $\vec{X}_\beta$ and $\vec{X}_\delta$ about $\vec{X}_{best}$. Conversely, HALSGWJA includes in Equation (21) the $\left(\frac{\vec{X}_{best} + \vec{X}_\beta + \vec{X}_\delta}{3} - \vec{X}_\delta\right)$ term, which forces the optimizer to define wolves that are always at least better than the δ wolf.
The best three search agents amongst X α , X β , X δ , X β m i r r and X δ m i r r are stored by HALSGWJA as the new α, β, and δ-wolves for the next iteration. The two worst search agents are compared by HALSGWJA with the rest of the population and may replace X w o r s t and the second-worst design of the population. This strategy also accounts for X β m i r r and X δ m i r r not improving any of the α, β, and δ wolves. By replacing the two worst designs of population, it is possible to improve the average quality of search agents and finally achieve a higher probability of generating high-quality trial designs in the subsequent iteration.
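The anti-stagnation mirror of Equation (21) could be implemented as in the short sketch below; the centre-of-mass offset is the term that distinguishes HALSGWJA from the scheme of Ref. [7], and the function name is an illustrative assumption.

```python
import numpy as np

def mirror_beta_delta(x_best, x_beta, x_delta, rng):
    """Equation (21): displace beta and delta wolves and mirror them about X_best."""
    offset = (x_best + x_beta + x_delta) / 3.0 - x_delta   # centre-of-mass term
    eta_b, eta_d = rng.random(), rng.random()               # random numbers in (0,1)
    x_beta_mirr = offset + (1.0 + eta_b) * x_best - eta_b * x_beta
    x_delta_mirr = offset + (1.0 + eta_d) * x_best - eta_d * x_delta
    return x_beta_mirr, x_delta_mirr
```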
  • Step 5. Convergence check
Standard deviations of the optimization variables and of the penalty function values of the candidate designs have to decrease as the search process progresses towards the global optimum. For this reason, HALSGWJA normalizes the standard deviation of the design vectors with respect to the average design vector $\vec{X}_{aver} = \sum_{i=1}^{N_{POP}} \vec{X}_i / N_{POP}$ of the population and the standard deviation of the penalty function values with respect to the average penalty function value $W_{p,aver} = \sum_{i=1}^{N_{POP}} W_p(\vec{X}_i) / N_{POP}$. The termination criterion used by HALSGWJA is
$$\max\left[ \frac{\mathrm{STD}\left(\left\|\vec{X}_1 - \vec{X}_{aver}\right\|, \left\|\vec{X}_2 - \vec{X}_{aver}\right\|, \ldots, \left\|\vec{X}_{N_{POP}} - \vec{X}_{aver}\right\|\right)}{\left\|\vec{X}_{aver}\right\|};\; \frac{\mathrm{STD}\left(W_{p,1} - W_{p,aver}, W_{p,2} - W_{p,aver}, \ldots, W_{p,N_{POP}} - W_{p,aver}\right)}{W_{p,aver}} \right] \le \varepsilon_{conv}$$
The convergence limit ε c o n v is 10−7. Penalty function and cost function values coincide in feasible solutions or in the case of unconstrained optimization problems.
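The termination criterion of Equation (22) can be sketched as below; pop is assumed to be the NPOP × NDV design matrix and wp the corresponding penalized costs, both illustrative names.

```python
import numpy as np

def has_converged(pop, wp, eps_conv=1.0e-7):
    """Equation (22): normalized dispersion of designs and of penalized costs."""
    x_aver = pop.mean(axis=0)
    wp_aver = wp.mean()
    std_x = np.std(np.linalg.norm(pop - x_aver, axis=1)) / np.linalg.norm(x_aver)
    std_wp = np.std(wp - wp_aver) / abs(wp_aver)
    return max(std_x, std_wp) <= eps_conv
```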
Steps 2 through 5 are repeated until HALSGWJA converges to the global optimum.
It should be noted that the convergence criterion of Equation (22) utilized by HALSGWJA is different from the typical stopping condition usually adopted by metaheuristic optimizers that terminate search when the limit number of iterations specified by the user is completed. HALSGWJA directly drives all candidate designs towards the optimum solution by performing multiple line searches (inherently based on elitist strategies) that generate high-quality designs very likely to improve the current best record or at least a very large fraction of the current population. This makes it no longer necessary to perform many optimization iterations, and the computational cost of optimization process is significantly reduced.
  • Step 6. Terminate optimization process
HALSGWJA terminates the optimization process and writes output data (i.e., optimum design and optimized cost function value) in the results file.
Algorithm 1 presents the pseudo-code of the HALSGWJA algorithm developed in this study, while Figure 1 shows the algorithm flow chart.
Algorithm 1 Pseudo-code of the HALSGWJA algorithm
START HALSGWJA
  • Step 1. Initialization
  • Set population size NPOP and limit number of iterations Nitermax.
  • Set iteration number as ITER = 1.
  • Randomly   generate   the   population   of   N POP   candidate   designs   X i (i = 1,…,NPOP) with Equation (1).
  • Denominate the population as Πiter-initial that should be updated in the current iteration.
  • For i = 1, …, NPOP
  • Evaluate   cost   function   W X i   and   constraint   functions   G X i ,   H X i   for   each   X i .
  • Evaluate   penalized   cost   W p X i   for   each   candidate   design   X i with Equations (2) and (3).
  • end for
  • Rank   candidate   designs   X i   included   in   the   population   by   sorting   W p X i   values   from   lowest   to   highest .   Define   X o p t   and   X w o r s t , respectively, as the best and worst candidate designs of population yielding the minimum and maximum penalized cost values.
  • Set   the   best   three   individuals   with   the   lowest   values   of   W p   as   α ,   β   and   δ   wolves :   let   X α ,   X β   and   X δ   be   the   corresponding   design   vectors .   The   α   wolf   is   the   current   best   record   X o p t     X α   with   the   corresponding   penalized   cost   W p , opt   = W p   ( X α ) .
  • For i = 1, …, NPOP
  • Step 2-A.   Update individuals   X i   to   preliminary   trial   designs   X i , t r p r e l with classical GWO Equations (4)–(10)
  • Step 2-B. Refine preliminary trial designs  X i , t r p r e l   as   X i , t r  using approximate gradients
  • end for
  • Store   trial   designs   X i , t r  in the Πtr population that should include better designs than the original population.
  • For i = 1, …, NPOP
  • Compute   the   variation   of   penalized   cost   function   Δ W p ( i , k ) =   W p X i     W p X k   and   the   corresponding   step   size   Δ S ( i , k ) = X i X k   if   design   was   perturbed   from   X i   to   X k .
  • Determine   average   gradient   of   penalized   cost   function   γ ( i , k ) = Δ W p ( i , k ) / Δ S ( i , k )   for   each   X i .
  • Find   candidate   designs   X F S T , b e t t e r   and   X F S T , w o r s e ,   respectively ,   ranking   above   and   below X i , and yielding the largest values of γ(i,k).
  • Define   trial   designs   X i , t r with Equation (11).
  • end for
  • Define threshold value ρw with Equations (12) and (13).
  • Step 3. Evaluate trial designs  X i , t r  and exploit them or re-explore search space with JAYA schemes
  • For i = 1, …, NPOP
  • If   W ( X i , t r )  ≤  ρ w W ( X o p t )
  • Define   X b e s t , J A Y A   and   X w o r s t , J A Y A   for   X i , t r with Equation (14) and Equation (15), respectively.
  • Exploit   trial   design   X i , t r   and   update   it   into   X i , t r with the JAYA scheme of Equation (16).
  • If   W ( ( X i , t r ) )   <   W ( X i , t r )
  • Drop   X i , t r   and   include   ( X i , t r ) in the updated population Πiter-final for current iteration.
  • else
  • Drop   ( X i , t r )   and   include   X i , t r in the updated population Πiter-final for current iteration.
  • end if
  • end if
  • If   W ( X i , t r )   >   ρ w W ( X o p t )
  • Define   X b e s t , J A Y A   and   X w o r s t , J A Y A   for   X i , t r with Equation (14) and Equation (17), respectively.
  • Explore   search   space   combining   populations   Π iter-initial   and   Π tr .   Define   X i , J A Y A   with   Equation   ( 19 ) .   Update   X i , t r   into   ( X i , t r )′ with the JAYA scheme of Equation (18).
  • If   W ( ( X i , t r ) )  ≤  ρ w W ( X o p t )
  • Drop   X i , t r   and   include   ( X i , t r ) in the updated population Πiter-final for current iteration.
  • end if
  • If   W ( ( X i , t r ) )   >   ρ w W ( X o p t )
  • Define   the   new   trial   design   X i , t r ( m i r r ) with the mirroring strategy of Equation (20).
  • Evaluate   X i , t r ( m i r r ) following the same steps described for the other trial designs to finally update population Πiter-final.
  • end if
  • end if
  • end for
  • Step 4. Resort population, define new wolves α, β and δ,  update   X o p t   and   X w o r s t
  • Sort   the   updated   population   Π iter-final   with   respect   to   penalized   cost   function   values W p .
  • Update   positions   X α ,   X β   and   X δ of wolves α, β and δ, respectively.
  • Use elitist mirror strategy of Equation (21) to avoid stagnation of wolves α, β and δ. The optimizer is forced to define wolves that always are at least better than the δ−wolf.
  • Update ,   if   necessary ,   X α ,   X β   and   X δ or at least the worst two designs of population Πiter-final.
  • Set   the   α-wolf   as   the   current   best   record   X o p t     X α   with   penalized   cost   W p , opt = W p ( X α ) .
  • Step 5. Check for convergence
  • If the convergence criterion stated by Equation (22) is satisfied
  • Go to line 61 to terminate optimization process.
  • else
  • Continue the optimization process.
  • Update iteration number as ITER = ITER + 1.
  • Go to line 5.
  • end if
  • Step 6. Terminate optimization process and output optimal solution
  • Terminate optimization process.
  • Output   optimal   design   X o p t   and   optimal   cost   value   W ( X o p t ) .
END HALSGWJA.
HALSGWJA actually represents an advanced grey wolf optimization algorithm. In particular, HALSGWJA updates the population trying to improve, in each iteration, the current best record or at least as large a part of the current population as possible. Adding approximate gradient information to the HALSGWJA formulation facilitates the optimization search process to a great extent. The JAYA schemes and the other elitist strategies included in HALSGWJA optimize the sequence of exploration and exploitation phases. Consequently, the proposed algorithm may increase diversity and always selects high-quality trial designs while performing a limited number of analyses. HALSGWJA presents a master–slave architecture where the slave algorithm JAYA refines/corrects the trial designs generated by the master algorithm GWO.
Similarly to the simple hybrid GWO/JAYA algorithms developed by the present authors in [6,7], HALSGWJA does not need any new internal parameters with respect to its component algorithms GWO and JAYA. Hence, population size NPOP and limit number of iterations Nitermax remain the only hyper-parameters to be defined by the user. Interestingly, the 1.1 heuristic threshold used in [6,7] to accept/reject trial designs following the W( X i , t r ) ≤ 1.1 W( X b e s t ) criterion is replaced in HALSGWJA by the more robust, flexible, and adaptive criterion W( X i , t r ) ≤ ρwW( X b e s t ). The inherent ability of HALSGWJA to generate high-quality trial designs in each phase of the optimization process makes it possible to converge to the optimum design well before completing the Nitermax iterations.
While most GWO/JAYA implementations require NPOP × Nitermax function/constraint evaluations (i.e., structural analyses for structural optimization problems), HALSGWJA defines all descent directions utilized in the search process without any new constraint evaluations. The elitist strategies and the approximate gradient information implemented in HALSGWJA allow the reliable evaluation of the quality of all trial designs based only on cost function values. Interestingly, approximate gradient information is obtained at the beginning of each iteration by evaluating the rate of change in the penalized cost function with respect to the distance of each search agent from the current best record. Since penalized cost function values are available for all candidate designs once the new population has been formed, the gradient evaluation performed by HALSGWJA does not entail any additional analysis.
It will be shown in Section 3 that HALSGWJA's capacity to generate high-quality trial designs over the whole search process significantly reduces the computational cost of the optimization. The present algorithm does not need to work with many search agents because it can always generate high-quality designs that improve the current best record or at least many individuals of the population. In nature, grey wolves hunt in groups of 10–20 individuals (a typical family pack includes 5–11 animals, but packs may also be formed by 2–3 families). For this reason, HALSGWJA utilized a population of 20 individuals. The validity of this setting was confirmed by sensitivity analysis.
The computational (i.e., time) complexity of the proposed HALSGWJA algorithm results from the superposition of the complexities of the different algorithmic phases: (i) initialization of candidate designs and population sorting; (ii) generation of preliminary trial designs $\vec{X}_{i,tr}^{prel}$ with classical GWO; (iii) refinement of preliminary trial designs $\vec{X}_{i,tr}^{prel}$ into $\vec{X}_{i,tr}$ using approximate gradient information; (iv) evaluation of trial designs $\vec{X}_{i,tr}$ and their modification into $(\vec{X}_{i,tr})'$ or $\vec{X}_{i,tr}^{(mirr)}$ with JAYA-based exploitation/exploration schemes or the mirroring strategy; (v) preparation for the new iteration. The time complexity of each phase can be determined as follows:
(i)
The generation and sorting of initial population entail (NPOP × NDV) and (NPOP × logNPOP) operations, respectively; hence, the computational complexity of initialization phase is O(NPOP × (NDV + logNPOP)).
(ii)
The generation of preliminary trial designs X i , t r p r e l with classical GWO entails O(NPOP × NDV) operations in each optimization iteration.
(iii)
The refinement of preliminary trial designs $\vec{X}_{i,tr}^{prel}$ into trial designs $\vec{X}_{i,tr}$ involves the computation of the approximate gradients γ(i,k); hence, 2 × NPOP² operations to determine ΔWp(i,k) and ΔS(i,k), and NPOP operations to compute ΔWp(i,k)/ΔS(i,k). The computational complexity of this phase for each optimization iteration is hence O(NPOP + 2 × NPOP²).
(iv)
The evaluation of trial designs $\vec{X}_{i,tr}$ entails the verification of the elitist criterion W($\vec{X}_{i,tr}$) ≤ ρwW($\vec{X}_{opt}$) (which also defines the threshold value ρw), the modification of $\vec{X}_{i,tr}$ into $(\vec{X}_{i,tr})'$ with the JAYA-based exploitation/exploration schemes of Equations (16) and (18), and the mirroring strategy to define $\vec{X}_{i,tr}^{(mirr)}$ if necessary. Hence, there are NPOP new operations for the elitist criterion (together with NPOP operations for defining ρw), NPOP × NDV new operations for the JAYA-based strategies (Equations (16) and (18) are used alternatively, either to perturb the trial designs $\vec{X}_{i,tr}$ that will be exploited or those for which re-exploration is necessary), and 2 × NDV new operations for the mirroring strategy of Equation (20). The computational complexity of this phase for each optimization iteration is hence O(2 × (NPOP + NDV) + NPOP + NDV).
(v)
The preparation of the new iteration includes the re-sorting of the population and another mirroring strategy (Equation (21)) to avoid the stagnation of wolves α, β, and δ. The computational complexity of this phase for each iteration is hence O(2 × NDV + NPOP × logNPOP).
By summing the computational complexities of steps (i) through (v), the resulting total computational complexity of HALSGWJA over the Niter iterations performed in the optimization process is O(NPOP × (NDV + logNPOP) + Niter × (NPOP + 2 × NPOP² + 3 × NPOP + 2 × NPOP × NDV + 4 × NDV + NPOP × logNPOP)). For example, for NDV = 30 (i.e., the size of the largest test problem solved in this study) and NPOP = 10 (i.e., the population size used in the present optimization runs), the computational complexity of HALSGWJA would be O(325 + Niter × 875) vs. at most O(Niter × 400) recorded for typical GWO/JAYA formulations (see, for example, the discussion in Ref. [6]). However, the inherent ability of HALSGWJA to generate very high-quality trial designs in each iteration allowed the present algorithm to efficiently explore/exploit the search space. Consequently, HALSGWJA converged to the optimum in a few iterations, thus reducing the total number of operations entailed by the search process.

3. Test Problems and Optimization Results

The HALSGWJA algorithm developed in this study was tested on 20 real-world mechanical engineering problems. The first 19 problems belong to the CEC2020 suite of mechanical engineering problems (also indicated as RC15–RC33 in the literature). Table 1 presents the main features of the CEC2020 test problems, which include up to 30 design variables and 86 inequality constraints. The table lists the goal pursued in each test problem; all test cases are minimization problems, except problem 14, where the objective is to maximize the capacity of a rolling element bearing. Problems 4, 8, 12, 16, and 17 include either discrete or integer variables. Problems 8, 9, and 17 also include equality constraints. The full description of the CEC2020 test problems (including the number of design variables NDV, the number of inequality constraints G, and the number of equality constraints H for each problem) is given in Ref. [52] and recalled to different extents in Refs. [53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69] for the whole library, as well as in Refs. [6,14,32,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87] for specific problems. Hence, for the sake of brevity, this paper presents in detail only the formulations of problems 2, 12, and 19, which include the largest numbers of optimization variables and constraints (i.e., 14, 22, and 30 optimization variables and 15, 86, and 30 constraints, respectively). The formulation of problem 14 is also recalled because this test case is the only maximization problem in the test suite and it may have different optima. The target values of the objective function to be reached in the nineteen CEC2020 mechanical engineering problems are listed in Table 1, considering 11 significant digits.
Problem 2 aims to minimize the fabrication and operation costs of industrial refrigeration systems with constraints on geometry and heat transfer characteristics. The design variables of this highly nonlinear mechanical engineering optimization problem are as follows: evaporator shell diameter (x1), condenser shell diameter (x2), evaporator shell thickness (x3), condenser shell thickness (x4), evaporator tubesheet thickness (x5), condenser tubesheet thickness (x6), evaporator fluid velocity (x7), condenser fluid velocity (x8), evaporator tube passes (x9), condenser tube passes (x10), evaporator tube length (x11), condenser tube length (x12), condenser effectiveness (x13), and inlet/outlet fluid temperature difference (x14). All design variables vary between 0.001 and 5. The optimization problem is stated as follows:
$$\begin{aligned}
\text{Minimize } & W(x_1, \ldots, x_{14}) = 63098.88\,x_2 x_4 x_{12} + 5441.5\,x_2^2 x_{12} + 115055.5\,x_2^{1.664} x_6 + 6172.27\,x_2^2 x_6 \\
& \quad + 63098.88\,x_1 x_3 x_{11} + 5441.5\,x_1^2 x_{11} + 115055.5\,x_1^{1.664} x_5 + 6172.27\,x_1^2 x_5 + 140.53\,x_1 x_{11} \\
& \quad + 281.29\,x_1 x_3 + 70.26\,x_1^2 + 281.29\,x_3 x_{11} + 281.29\,x_3^2 + 14437\,x_1^2 x_8^{1.8812} x_{12}^{0.3424} x_7^{-1} x_9^{-1} x_{10} x_{14}^{-1} \\
& \quad + 20470.2\,x_1^2 x_7^{2.893} x_{11}^{0.316} \\
\text{Subject to: } & g_1(\vec{X}) = 1.524\,x_7^{-1} - 1 \le 0 \\
& g_2(\vec{X}) = 1.524\,x_8^{-1} - 1 \le 0 \\
& g_3(\vec{X}) = 0.07789\,x_1 - 2\,x_7^{-1} x_9 - 1 \le 0 \\
& g_4(\vec{X}) = 7.05305\,x_1^2 x_2^{-1} x_8^{-1} x_9^{-1} x_{10} x_{14}^{-1} - 1 \le 0 \\
& g_5(\vec{X}) = 0.0833\,x_{13}^{-1} x_{14} - 1 \le 0 \\
& g_6(\vec{X}) = 47.136\,x_2^{0.333} x_{10}^{-1} x_{12} - 1.333\,x_8 x_{13}^{2.1195} + 62.08\,x_8^{0.2} x_{10}^{-1} x_{12}^{-1} x_{13}^{2.1195} - 1 \le 0 \\
& g_7(\vec{X}) = 0.04771\,x_8^{1.8812} x_{10} x_{12}^{0.3424} - 1 \le 0 \\
& g_8(\vec{X}) = 0.0488\,x_7^{1.893} x_9 x_{11}^{0.316} - 1 \le 0 \\
& g_9(\vec{X}) = 0.0099\,x_1 x_3^{-1} - 1 \le 0 \\
& g_{10}(\vec{X}) = 0.0193\,x_2 x_4^{-1} - 1 \le 0 \\
& g_{11}(\vec{X}) = 0.0298\,x_1 x_5^{-1} - 1 \le 0 \\
& g_{12}(\vec{X}) = 0.056\,x_2 x_6^{-1} - 1 \le 0 \\
& g_{13}(\vec{X}) = 2\,x_9^{-1} - 1 \le 0 \\
& g_{14}(\vec{X}) = 2\,x_{10}^{-1} - 1 \le 0 \\
& g_{15}(\vec{X}) = x_{12} x_{11}^{-1} - 1 \le 0
\end{aligned}$$
Problem 12 aims to minimize the structural weight of a four-stage gearbox. The design variables correspond to positions of the gears (xgi, ygi) and pinions (xp1, yp1), blank thickness (bi), and number of teeth (Npi, Ngi), with i = 1,2,3,4. The variables can take the following discrete values: xp1, yp1, xgi, ygi ∈ {12.7, 25.4, 38.1, 50.8, 63.5, 88.9, 101.6, 114.3}; bi ∈ {3.175, 5.715, 8.255, 12.7}; 7 ≤ Npi, Ngi ≤ 76 (integers). Design constraints refer to the pitch, kinematics, contact ratio, gear strengths, assembly, and size. The feasible region of search-space is only 0.01% of the total search space, thus leading to the presence of many local solutions. The optimization problem is stated as follows:
M i n i m i z e   W x 1 , x 2 , x 3 , x 4 x 19 , x 20 , x 21 , x 22 =   π 1000 i = 1 4 b i c i 2 N p i 2 + N g i 2 N p i + N g i 2 Subject   to : g 1 X ¯ = 366000 π ω 1 + 2 c 1 N p 1 N p 1 + N g 1 N p 1 + N g 1 2 4 b 1 c 1 2 N p 1 σ N J R 0.0167 W K o K M 0 g 2 X ¯ = 366000 N g 1 π ω 1 N p 1 + 2 c 2 N p 2 N p 2 + N g 2 N p 2 + N g 2 2 4 b 2 c 2 2 N p 2 σ N J R 0.0167 W K o K M 0 g 3 X ¯ = 366000 N g 1 N g 2 π ω 1 N p 1 N p 2 + 2 c 3 N p 3 N p 3 + N g 3 N p 3 + N g 3 2 4 b 3 c 3 2 N p 3 σ N J R 0.0167 W K o K M 0 g 4 X ¯ = 366000 N g 1 N g 2 N g 3 π ω 1 N p 1 N p 2 N p 3 + 2 c 4 N p 4 N p 4 + N g 4 N p 4 + N g 4 2 4 b 4 c 4 2 N p 4 σ N J R 0.0167 W K o K M 0 g 5 X ¯ = 366000 π ω 1 + 2 c 1 N p 1 N p 1 + N g 1 N p 1 + N g 1 3 4 b 1 c 1 2 N g 1 N p 1 2 σ H c p 2 s i n ϕ c o s ϕ 0.0334 W K o K M 0 g 6 X ¯ = 366000 N g 1 π ω 1 N p 1 + 2 c 2 N p 2 N p 2 + N g 2 N p 2 + N g 2 3 4 b 2 c 2 2 N g 2 N p 2 2 σ H c p 2 s i n ϕ c o s ϕ 0.0334 W K o K M 0 g 7 X ¯ = 366000 N g 1 N g 2 π ω 1 N p 1 N p 2 + 2 c 3 N p 3 N p 3 + N g 3 N p 3 + N g 3 3 4 b 3 c 3 2 N g 3 N p 3 2 σ H c p 2 s i n ϕ c o s ϕ 0.0334 W K o K M 0 g 8 X ¯ = 366000 N g 1 N g 2 N g 3 π ω 1 N p 1 N p 2 N p 3 + 2 c 4 N p 4 N p 4 + N g 4 N p 4 + N g 4 3 4 b 4 c 4 2 N g 4 N p 4 2 σ H c p 2 s i n ϕ c o s ϕ 0.0334 W K o K M 0 g 9 12 X ¯ = N p i s i n 2 ϕ 4 1 N p i + 1 N p i 2 + N g i s i n 2 ϕ 4 + 1 N g i 1 N g i 2 + s i n ϕ N p i + N g i 2 + C R m i n π cos ϕ 0 g 13 16 X ¯ = d m i n 2 c i N p i N p i + N g i 0   g 17 20 X ¯ = d m i n 2 c i N g i N p i + N g i 0 g 21 X ¯ = x p 1 + N p 1 + 2 c 1 N p 1 + N g 1 L m a x 0 g 22 24 X ¯ = L m a x + N p i + 2 c i N p i + N g i i = 2,3.4 + x g i 1 0 g 25 X ¯ = x p 1 + N p 1 + 2 c 1 N p 1 + N g 1 0   g 26 28 X ¯ = N p i + 2 c i N p i + N g i x g i 1 i = 2,3.4 0 g 29 X ¯ = y p 1 + N p 1 + 2 c 1 N p 1 + N g 1 L m a x 0 g 30 32 X ¯ = L m a x + N p i + 2 c i N p i + N g i + y g i 1 i = 2,3.4 0 g 33 X ¯ = y p 1 + N p 1 + 2 c 1 N p 1 + N g 1 0 g 34 36 X ¯ = N p i + 2 c i N p i + N g i y g i 1 i = 2,3.4 0 g 37 40 X ¯ = L m a x + N g i + 2 c i N p i + N g i + x g i 0 g 41 44 X ¯ = x g i + N g i + 2 c i N p i + N g i 0     g 45 48 X ¯ = y g i + N g i + 2 c i N p i + N g i L m a x 0     g 49 52 X ¯ = y g i + N g i + 2 c i N p i + N g i 0     g 53 56 X ¯ = b i 8.255 b i 5.715 b i 12.70 0.945 c i N p i N g i 1 0 g 57 60 X ¯ = b i 8.255 b i 3.175 b i 12.70 0.646 c i N p i N g i 0 g 61 64 X ¯ = b i 5.715 b i 3.175 b i 12.70 0.504 c i N p i N g i 0 g 65 68 X ¯ = b i 5.715 b i 3.175 b i 8.255 0 c i N p i N g i 0     g 69 72 X ¯ = b i 8.255 b i 5.715 b i 12.70 N p i + N g i 1.812 c i 1 0 g 73 76 X ¯ = b i 8.255 b i 3.175 b i 12.70 N p i + N g i 0.945 c i 0 g 77 80 X ¯ = b i 5.715 b i 3.175 b i 12.70 N p i + N g i 0.646 c i 1 0 g 81 84 X ¯ = b i 5.715 b i 3.175 b i 8.255 N p i + N g i 0.504 c i 0   g 85 X ¯ = ω M I N ω 1 N p 1 N p 2 N p 3 N p 4 N g 1 N g 2 N g 3 N g 4 0   g 86 X ¯ = ω 1 N p 1 N p 2 N p 3 N p 4 N g 1 N g 2 N g 3 N g 4 ω M A X 0  
where $c_i=\sqrt{(y_{gi}-y_{pi})^{2}+(x_{gi}-x_{pi})^{2}}$, dmin = 25, Lmax = 127, Ko = 1.5, KM = 1.6, JR = 0.2, ϕ = 120°, W = 55.9, CRmin = 1.4, Cp = 464, σN = 2090, σH = 3290, ω1 = 5000, ωmin = 245, ωmax = 255.
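For reference, the weight objective above can be evaluated with a few lines of code. The following Python sketch is only an illustration: it assumes that, for stages 2–4, the pinion centre coincides with the gear centre of the previous stage (the problem variables only include (xp1, yp1) among the pinion positions), and it does not evaluate the 86 constraints.

```python
import math

def gearbox_weight(b, Np, Ng, xp1, yp1, xg, yg):
    """Weight objective of problem 12 (four-stage gearbox).

    b, Np, Ng, xg, yg are length-4 sequences (one entry per stage); (xp1, yp1)
    is the first-stage pinion centre. For stages 2-4 the pinion is assumed to be
    mounted at the previous stage's gear centre. Constraints are not checked here.
    """
    # pinion centres: stage 1 uses (xp1, yp1); stages 2-4 reuse the previous gear centre
    xp = [xp1, xg[0], xg[1], xg[2]]
    yp = [yp1, yg[0], yg[1], yg[2]]
    total = 0.0
    for i in range(4):
        c_i = math.hypot(xg[i] - xp[i], yg[i] - yp[i])   # centre distance c_i
        total += b[i] * c_i**2 * (Np[i]**2 + Ng[i]**2) / (Np[i] + Ng[i])**2
    return math.pi / 1000.0 * total
```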
The goal of test problem 14 was to maximize the dynamic load capacity of a rolling element bearing. The optimization variables are as follows: pitch diameter Dm (x1), balls diameter Db (x2), number of balls Z (x3, integer variable), inner raceway curvature coefficient fi (x4), outer raceway curvature coefficient fo (x5), minimum ball diameter limiter KDmin (x6), maximum ball diameter limiter KDmax (x7), outer ring strength parameter ε (x8), mobility condition parameter e (x9), and bearing width limiter ζ (x10). The design variables can vary as follows: 125 ≤ Dm ≤ 150; 10.5 ≤ Db ≤ 31.5; 4 ≤ Z ≤ 50; 0.515 ≤ fi,fo ≤ 0.6; 0.4 ≤ KDmin ≤ 0.5; 0.6 ≤ KDmax ≤ 0.7; 0.3 ≤ ε ≤ 0.4; 0.02 ≤ e ≤ 0.1; 0.6 ≤ ζ ≤ 0.85. The optimization problem is stated as
Maximize
$$W(x_1,\dots,x_5)=\begin{cases} f_c\, Z^{2/3} D_b^{1.8} & \text{if } D_b\le 25.4\ \mathrm{mm}\\ 3.647\, f_c\, Z^{2/3} D_b^{1.4} & \text{if } D_b> 25.4\ \mathrm{mm}\end{cases}$$
Subject to:
$$g_1(\bar{X})=\frac{\phi_o}{2\arcsin(D_b/D_m)}-Z+1\ge 0 \qquad g_2(\bar{X})=2D_b-K_{Dmin}(D-d)\ge 0$$
$$g_3(\bar{X})=K_{Dmax}(D-d)-2D_b\ge 0 \qquad g_4(\bar{X})=\zeta B_w-D_b\le 0$$
$$g_5(\bar{X})=D_m-0.5(D+d)\ge 0 \qquad g_6(\bar{X})=(0.5+e)(D+d)-D_m\ge 0$$
$$g_7(\bar{X})=0.5(D-D_m-D_b)-\varepsilon D_b\ge 0 \qquad g_8(\bar{X})=f_i-0.515\ge 0 \qquad g_9(\bar{X})=f_o-0.515\ge 0$$
The cost function W depends on the first five design variables only. The coefficient fc included in the cost function W expression depends on the γ = Db/Dm ratio and the inner/outer raceway curvature coefficients fi and fo as
$$f_c = 37.91\left[1+\left\{1.04\left(\frac{1-\gamma}{1+\gamma}\right)^{1.72}\left(\frac{f_i\,(2f_o-1)}{f_o\,(2f_i-1)}\right)^{0.41}\right\}^{10/3}\right]^{-0.3}\left[\frac{\gamma^{0.3}\,(1-\gamma)^{1.39}}{(1+\gamma)^{1/3}}\right]\left[\frac{2f_i}{2f_i-1}\right]^{0.41}$$
The following parameters/expressions are utilized to define the constraint functions: D = 160, d = 90, Bw = 30, T = D − d − 2Db, and
$$\phi_o=2\pi-2\cos^{-1}\left[\frac{\left\{(D-d)/2-3(T/4)\right\}^{2}+\left\{D/2-T/4-D_b\right\}^{2}-\left\{d/2+T/4\right\}^{2}}{2\left\{(D-d)/2-3(T/4)\right\}\left\{D/2-T/4-D_b\right\}}\right]$$
The objective of problem 19 is to optimize the material layout for a given set of loads, given the design search space and constraints related to system performance. A simply supported bar is discretized into 30 elements and loaded by a vertical force F at mid-bay. The optimizer minimizes the compliance of the structure (equivalent to maximizing its stiffness) by iteratively adjusting the material distribution within the design space. For that purpose, a “density” variable xe is assigned to each element to quantify the contribution of that element to the global compliance/stiffness of the structure. This contribution is expressed through a power law. The optimization problem is stated as follows:
Minimize
$$W(x_1, x_2,\dots, x_{29}, x_{30})=\{u\}^{T}[K]\{u\}=\sum_{e=1}^{30}x_e^{p}\,\{u_e\}^{T}[k_e]\{u_e\}$$
Subject to:
$$H_1(\bar{X})=\frac{V(\bar{X})}{V_o}-1=0 \qquad H_2(\bar{X})=[K]\{u\}-\{F\}=0$$
where the “density” variables must vary as 0 < xmin ≤ xe ≤ 1. The p exponent of the density power law is set equal to 3. [K] and {u}, respectively, are the global stiffness matrix and the nodal displacement vector, with the corresponding element displacement vector ue and stiffness matrix ke of the e-th element. The two constraints in Equation (25) state that the total volume of the structure must be equal to some target value Vo and that the nodal forces must match the applied loads.
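For illustration, the compliance objective can be evaluated for a given density vector by assembling the penalized stiffness matrix and solving the equilibrium equations. The sketch below assumes a simply supported Euler–Bernoulli beam discretized into 30 two-node elements with a unit point load at mid-span; the element properties, units, and the fact that the volume constraint is not enforced are illustrative assumptions and not the exact finite element model of the benchmark.

```python
import numpy as np

def beam_compliance(x, p=3, E=1.0, I=1.0, span=1.0, F=1.0):
    """SIMP-penalized compliance u^T K u of a simply supported beam
    carrying a point load F at mid-span (illustrative sketch only)."""
    n_el = len(x)                       # 30 elements in the benchmark
    Le = span / n_el
    ndof = 2 * (n_el + 1)               # deflection + rotation at each node
    ke = (E * I / Le**3) * np.array([   # Euler-Bernoulli beam element stiffness
        [ 12.0,    6*Le,   -12.0,    6*Le  ],
        [ 6*Le, 4*Le**2,  -6*Le, 2*Le**2],
        [-12.0,   -6*Le,    12.0,   -6*Le  ],
        [ 6*Le, 2*Le**2,  -6*Le, 4*Le**2]])
    K = np.zeros((ndof, ndof))
    for e in range(n_el):               # power-law (SIMP) density penalization
        dofs = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]
        K[np.ix_(dofs, dofs)] += (x[e] ** p) * ke
    f = np.zeros(ndof)
    f[2 * (n_el // 2)] = -F             # vertical load at the mid-span node
    fixed = [0, 2 * n_el]               # zero deflection at the two supports
    free = [d for d in range(ndof) if d not in fixed]
    u = np.zeros(ndof)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
    return float(f @ u)                 # compliance = u^T K u = f^T u

# Example: uniform half-dense design (volume constraint not enforced here)
# print(beam_compliance(np.full(30, 0.5)))
```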
The last mechanical engineering optimization problem solved in this study regarded the optimal crashworthiness design of a vehicle; the structural weight must be minimized so that the vehicle can absorb the side impact from a moving barrier. The vehicle’s structure must be optimized to safely protect passengers, minimizing weight while preserving the vehicle’s performance. Figure 2 (adapted from Ref. [7]) illustrates the vehicle frame parts to be optimized; the moving barrier and the direction of the lateral impact are also shown in the figure.
The test problem has 11 design variables: inner thickness of B-pillar (x1), reinforcement thickness of B-pillar (x2), inner thickness of floor side (x3), thickness of cross members (x4), thickness of door beam (x5), reinforcement of door beltline (x6), thickness of roof rail (x7); shape factor for the B-pillar inner (x8) and shape factor for the floor side inner (x9); height of the barrier (x10), and the hitting position of the barrier with respect to the center of mass of the vehicle (x11). All variables are continuous except x8 and x9. The optimal crashworthiness design problem is formulated as
Minimize
$$W(\bar{X})=1.98+4.90x_1+6.67x_2+6.98x_3+4.01x_4+1.78x_5+2.73x_7$$
Subject to:
$$G_1(\bar{X})=1.16-0.3717x_2x_4-0.00931x_2x_{10}-0.484x_3x_9+0.01343x_6x_{10}\le 1$$
$$G_2(\bar{X})=0.261-0.0159x_1x_2-0.188x_1x_8-0.019x_2x_7+0.0144x_3x_5+0.0008757x_5x_{10}+0.08045x_6x_9+0.00139x_8x_{11}+0.00001575x_{10}x_{11}\le 0.32$$
$$G_3(\bar{X})=0.214+0.00817x_5-0.131x_1x_8-0.0704x_1x_9+0.03099x_2x_6-0.018x_2x_7+0.0208x_3x_8+0.121x_3x_9-0.00364x_5x_6+0.0007715x_5x_{10}-0.0005354x_6x_{10}+0.00121x_8x_{11}+0.00184x_9x_{10}-0.02x_2^{2}\le 0.32$$
$$G_4(\bar{X})=0.074-0.61x_2-0.163x_3x_8+0.001232x_3x_{10}-0.166x_7x_9+0.227x_2^{2}\le 0.32$$
$$G_5(\bar{X})=28.98+3.818x_3-4.2x_1x_2+0.0207x_5x_{10}+6.63x_6x_9-7.7x_7x_8+0.32x_9x_{10}\le 32$$
$$G_6(\bar{X})=33.86+2.95x_3+0.1792x_{10}-5.057x_1x_2-11.0x_2x_8-0.0215x_5x_{10}-9.98x_7x_8+22.0x_8x_9\le 32$$
$$G_7(\bar{X})=46.36-9.9x_2-12.9x_1x_8+0.1107x_3x_{10}\le 32$$
$$G_8(\bar{X})=4.72-0.5x_4-0.19x_2x_3-0.0122x_4x_{10}+0.009325x_6x_{10}+0.000191x_{11}^{2}\le 4$$
$$G_9(\bar{X})=10.58-0.674x_1x_2-1.95x_2x_8+0.02054x_3x_{10}-0.0198x_4x_{10}+0.028x_6x_{10}\le 9.9$$
$$G_{10}(\bar{X})=16.45-0.489x_3x_7-0.843x_5x_6+0.0432x_9x_{10}-0.0556x_9x_{11}-0.000786x_{11}^{2}\le 15.7$$
The objective function W(X) (structural weight of the vehicle parts to be optimized) and the constraint functions G(X) were fitted in [88] using response surface models to deal with the high nonlinearity of the problem. More details on this test problem are available in Refs. [7,70,72,73,76,81,83,84,85,87,88,89,90]. The best solution ever reported in the optimization literature, {0.5;1.21204;0.5;0.77908;0.5;1.49004;0.5;0.345;0.345;−28.9781;0.0001}, achieved a minimum structural weight of 21.38340 kg [7].
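As a quick check, the weight objective above can be evaluated directly at the best-known design; the following Python snippet is only a minimal sketch (the ten constraints G1–G10 would be coded analogously) and should return a weight of about 21.3834 kg, consistent with the value quoted above (small differences come from rounding of the reported design variables).

```python
def crash_weight(x):
    """Structural weight (kg) of the car side-impact problem (response surface model)."""
    return (1.98 + 4.90*x[0] + 6.67*x[1] + 6.98*x[2]
            + 4.01*x[3] + 1.78*x[4] + 2.73*x[6])

# Best solution reported in the literature (x6 and x8..x11 do not enter the weight)
x_best = [0.5, 1.21204, 0.5, 0.77908, 0.5, 1.49004, 0.5,
          0.345, 0.345, -28.9781, 0.0001]
print(round(crash_weight(x_best), 4))   # ~21.3834
```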
HALSGWJA was implemented in MATLAB Version 2025b as a standalone optimization code. Optimization runs were executed on a standard portable computer with 16 GB of RAM and a 3.1 GHz AMD processor. To gather statistically significant information on HALSGWJA’s computational efficiency, each test problem was solved by executing 25 independent optimization runs from different initial populations. The population size and the limit number of iterations of HALSGWJA were always set to 10 and 5000, respectively, except for CEC2020 problems 2 and 11, where the limit number of iterations was increased to 10,000. Similarly to [6,7], the initial populations always included candidate solutions with up to 1000% constraint violation to check the ability of the present algorithm to quickly direct the search towards the region of the optimum solution. It should be noted that HALSGWJA always required far fewer analyses than the theoretical computational budget of NPOP × Nitermax analyses. The constraint evaluations performed for the updated population in each iteration accounted for more than 90% of the total computational cost of the optimization process. This confirms that the fast line search approach implemented by the present algorithm is computationally inexpensive.
HALSGWJA utilized the static penalty function (SPF) approach stated by Equations (2) and (3) to handle optimization constraints. In SPF, penalty factors are independent of the iteration number and remain constant over the entire search process (see the classical literature surveys [91,92]). Here, the penalty factor p included in Equation (2) was set equal to 2. Several constraint handling techniques were used by HALSGWJA’s competitors besides SPF: (i) the dynamic penalty function (DPF) method, where the penalty term increases over the optimization iterations; (ii) a DPF variant where the penalty assigned to infeasible solutions increases as trial solutions move away from the feasible region and gradually decreases as the search approaches it; (iii) a hybrid augmented Lagrangian method with interior point penalty (the penalty term is small at points far from constraint boundaries and tends to infinity at points near constraint boundaries); (iv) a linear combination of the cost function and the optimization constraints, with or without variation in penalty factors; (v) an adaptive penalty function (APF) approach, where an individual is reinitialized if it violates constraints; (vi) death penalty or death penalty-like approaches, where an infeasible trial solution is either eliminated or assigned an objective value high (low) enough for it to be discarded throughout the optimization process when minimization (maximization) is the goal; (vii) hierarchical sorting (HS), which prioritizes trial points with lower constraint violations regardless of their W values (HS differentiates between feasible and infeasible points, always setting the lowest-cost feasible solution as the current best record; among infeasible points, lower constraint violation is preferred; between infeasible and feasible points, feasible points are preferred); (viii) dynamic constraint tolerance, where the tolerance limit ε decreases through the optimization process and constraints are more relaxed in the early optimization cycles to maintain population diversity and then gradually tightened to find truly feasible solutions (for example, inequality constraints to be satisfied become G( X ) ≤ ε with ε ranging from 0.01 in the initial iteration to 0.001 in the final iteration). As mentioned above, HALSGWJA adopted the static penalty function approach with p = 2. However, similarly to [6], a sensitivity analysis was carried out for HALSGWJA by varying the penalty factor p in Equation (2) from 1 to 1,000,000 as 10⁰, 10¹, 10², 10³, 10⁴, 10⁵, and 10⁶. This simulated a dynamic scenario where the penalty depends on the distance from constraint boundaries. Furthermore, setting large values for the penalty factor amplifies constraint violation and forces the optimizer to generate very high-quality trial designs that actually satisfy design constraints. Remarkably, the largest deviation from the best optimized cost obtained by HALSGWJA for varying penalty factors always remained below 0.001%, thus confirming the insensitivity of HALSGWJA to constraint handling options.
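To illustrate the static penalty approach, the sketch below shows one common way of building a penalized cost from constraints written as G(X) ≤ 0 and H(X) = 0. The exact functional form of Equations (2) and (3) is not reproduced in this section, so the squared-violation aggregation used here is an illustrative assumption rather than the paper’s exact formula.

```python
def penalized_cost(W, G_list, H_list, x, p=2.0):
    """Static penalty function: the penalty factor p is kept constant over the search.

    W       : cost function, W(x)
    G_list  : inequality constraints written as G(x) <= 0
    H_list  : equality constraints written as H(x) = 0
    p       : constant penalty factor (p = 2 is the setting quoted above for HALSGWJA)

    The squared-violation form below is an illustrative choice; the paper's
    Equations (2) and (3) define the exact expression adopted by the algorithm.
    """
    violation = sum(max(0.0, g(x)) ** 2 for g in G_list)
    violation += sum(h(x) ** 2 for h in H_list)
    return W(x) + p * violation
```

With this form, raising p from 10⁰ to 10⁶, as in the sensitivity analysis described above, simply amplifies the weight given to constraint violations relative to the cost function.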

3.1. Results of CEC 2020 Problems

The HALSGWJA algorithm developed in this study was compared with 18 other state-of-the-art metaheuristic optimizers. HALSGWJA’s competitors can be grouped into three categories: (a) CEC2020 winners, (b) advanced variants of GWO and JAYA already proven to be very competitive with the CEC2020 winners, and (c) physics-based and animal behavior-inspired methods released between 2023 and 2025 that have proven superior to highly performing and/or well-established metaheuristic optimizers. In particular, the following algorithms were compared with HALSGWJA:
(i)
EnMODE (Enhanced Multi-Operator Differential Evolution) [54].
(ii)
COLSHADE (Linearly reduced population size Success History-based Adaptive Differential Evolution for Constrained Optimization) [55].
(iii)
Success History-based Adaptive Differential Evolution with a gradient-based repair strategy (En(L)SHADE) [56].
(iv)
Improved Multi-Operator Differential Evolution with a knowledge-guided information sharing strategy (IMODE-KG) [57].
(v)
Differential Evolution with self-adaptive Spherical Search (DESS) [58].
(vi)
Self-Adaptive Spherical Search (SASS) [53].
(vii)
SDDS-SABC algorithm [59], combining Split-Detect-Discard-Shrink (which uses partitions to identify promising regions of the search space) and Sophisticated Artificial Bee Colony (with modifications to the initialization process and to the search strategies of employed and scout bees).
(viii)
Improved Young’s Double-Slit Experiment (IYDSE) [60].
(ix)
Kepler Optimization Algorithm (KOA) [74].
(x)
Improved Kepler Optimization algorithm (CGKOA) [61], including the adaptive function strategy, the sinusoidal chaotic gravity strategy, the lateral crossover strategy, and the elite gold rush strategy.
(xi)
Multi-Strategy Fusion Enhanced Particle Swarm Optimization (MSFPSO) [62].
(xii)
Enhanced Snow Ablation Optimizer (ESAO) [63].
(xiii)
Improved Dwarf Mongoose Optimization (IDMO) [64].
(xiv)
The hybrid LMWOAGWO algorithm combining Lévy flight with modified Whale Optimization Algorithm (WOA) and GWO [65].
(xv)
Contact List Subpopulation Mixed Evolution Grey Wolf Optimizer (CSELGWO) [66]. This multi-strategy enhanced GWO variant obtains high-quality local information on search space, then generates a subpopulation that is updated with the main population through subpopulation mixed evolution, thus significantly improving population diversity and convergence accuracy. Levy Flight with archives and activation mechanisms serves to escape from local optima.
(xvi)
Marine Predators Social Group Optimization (MPSGO) [67].
(xvii)
Modified Artificial Hummingbird Algorithm (MAHA) [68].
(xviii)
Enhanced JAYA (EJAYA) [70]. This powerful JAYA variant updates designs based on current best and worst solutions (similar to classical JAYA), average solution, and historical solutions (these form an auxiliary population initially generated besides standard population and probabilistically permuted in the search process). Local exploitation utilizes upper/lower local attractors (weighted averages of best/worst and average solutions). Historical population guides global exploration.
The selected competitor algorithms make the evaluation of HALSGWJA’s performance highly significant. In fact, EnMODE [54], COLSHADE [55], and SASS [53] were CEC2020 winners, being the top-ranked challengers of the CEC competition on real-world engineering problems, as reported in [56,58,59,65,66,67,68]. As reported in [56], En(L)SHADE achieved a better normalized score (i.e., 0.2849 vs. 0.3371 and 0.7162, respectively) and a higher feasibility rate (98.25% vs. 97.89% and 74.39%, respectively) than SASS and COLSHADE over all the independent optimization runs performed for the whole CEC2020 library of real-world engineering optimization problems. The normalized score was computed by weighting problem dimensionality with the normalized adjusted objective function values of the best, mean, and median solutions. SDDS-SABC [59] achieved a better Friedman’s test score than COLSHADE, specifically in the nineteen CEC2020 mechanical engineering problems considered in this study (2.32 vs. 2.37, see Ref. [59]).
IMODE-KG [57] outperformed 12 other metaheuristic optimizers, including EnMODE and SASS, two winner algorithms of the CEC2020 real-world mechanical engineering test suite. In fact, the Friedman’s test score of IMODE-KG was 2.2 vs. 3.2 and 4.3667 achieved by EnMODE and SASS, respectively. Interestingly, these results were obtained with a computational budget of only 10,000 analyses, which is much smaller than the typical computational budget adopted in the CEC2020 competition, where the limit number of function evaluations is 100,000 analyses for NDV ≤ 10 and 200,000 for 10 < NDV ≤ 30. For the same limited computational budget of 10,000 analyses, the advanced JAYA variant EJAYA developed in [70] was reported in [57] to be very competitive with EnMODE, SASS, and IMODE-KG; in fact, it achieved a Friedman’s test score of 3.7, which lies almost at the center of the 2.2 to 4.3667 range found for those three algorithms.
It should be noted that, in 2024, Ref. [69] presented some numerical experiments on the CEC2020 real-world mechanical engineering problems, also reporting the precise number of analyses required by COLSHADE, SASS, and EnMODE in comparison with the Piranha Predation Optimization Algorithm (PPOA) developed in [69]. The theoretical computational budget was set to the very large value of 1,000,000 analyses. Interestingly, COLSHADE, SASS, and EnMODE could complete their optimizations within an average number of analyses lower than 10,000 in some optimization problems: from 220 to 8999 analyses in problems 6, 7, 15, 17, 18, and 19 for COLSHADE; from 360 to 7605 analyses in problems 9, 17, and 19 for SASS; and from 215 to 9948 analyses in problems 1, 2, 6, and 17 for EnMODE. However, the success rate of COLSHADE, SASS, and EnMODE in finding target solutions was found to be 0 in five optimization problems (i.e., 4, 8, 10, 14, and 16). This finding, which is in open contrast with those of the other studies that re-ran COLSHADE, SASS, and EnMODE optimizations for benchmarking purposes, might have been caused by the way in which the three CEC2020 winners were implemented in Ref. [69].
DESS [58] outperformed 18 other metaheuristic optimizers, including the top-ranked CEC2020 real-world engineering problem solvers EnMODE, SASS, and COLSHADE. In particular, DESS achieved a normalized score of only 0.1230 vs. 0.1378, 0.2427, and 0.4514, respectively, achieved by the three above-mentioned algorithms. MPSGO [67] and MAHA [68] (two nature-inspired algorithms released in 2024 that reproduce animal behaviors, possibly integrated with human social behavior, to solve complex problems) were also highly competitive against the CEC2020 top solvers. In particular, MPSGO obtained the same best design as EnMODE and COLSHADE in 17 of the 19 CEC2020 mechanical engineering problems, while MAHA ranked second overall after SASS and above EnMODE and COLSHADE over the whole set of CEC2020 real-world engineering problems.
The hybrid LMWOAGWO algorithm [65] combining GWO and WOA, and the advanced GWO variant developed in [66]—CSELGWO—were also highly competitive against the CEC2020 winners EnMODE and COLSHADE, ranking immediately after these two algorithms in 17 of the 19 CEC2020 mechanical engineering problems. In [65], a score of 3.23 is reported for LMWOAGWO vs. 2.14 to 2.45, while in [66], the reported score for CSELGWO was 2.54 vs. 2.41 to 2.5.
IYDSE [60], KOA [74], CGKOA [61], and ESAO [63] are four relevant physics-based metaheuristic algorithms released between 2023 and 2025 that ranked first amongst, respectively, 8, 9, 9, and 10 other metaheuristic optimizers, including, among others, PSO and improved PSO variants, GWO and improved GWO variants, WOA and enhanced WOA, the starling murmuration optimizer (SMO), the artificial hummingbird algorithm (AHA), and dwarf mongoose optimization (DMO). IDMO [64] is another algorithm reproducing the dwarf mongoose behavior that was released in 2023, developed to improve the capability of its parent algorithm DMO to solve complex problems; interestingly, IDMO outperformed 12 other metaheuristic algorithms, including highly cited methods such as GWO and WOA. MSFPSO [62], released in 2025, outperformed 10 other metaheuristic algorithms, including 5 other PSO variants.
Table 2 compares the optimization results obtained by HALSGWJA in the CEC2020 mechanical engineering problems with those reported in the literature for the selected competitors. The data reported in the table for HALSGWJA’s competitors are relative to the best parameter settings indicated in the literature for each algorithm. For each test problem, the table reports the best (B), average (A), and worst (W) values of optimized cost and required number of analyses along with the corresponding values of standard deviation (STD) over the 25 independent runs. Statistical data on the computational cost of HALSGWJA’s competitors (i.e., the number of analyses and corresponding standard deviations) are reported when available. For the sake of brevity, the average number of analyses and corresponding standard deviations are reported in the same row and indicated as (A/D); furthermore, the A/B/W/STD and A/D nomenclature is indicated only for HALSGWJA and the first test problem. The data relating to EnMODE, COLSHADE, SASS, IYDSE, KOA, and EJAYA were merged from the original studies presenting these algorithms for the first time and subsequent studies reporting further investigations of those methods; the best solutions for each test case are then reported in Table 2. The results obtained by HALSGWJA in the pressure vessel design problem 4 variant with continuous variables are indicated in italics in the table.
It can be seen from Table 2 that HALSGWJA was the best optimizer overall in test problems 1 (speed reducer design), 2 (industrial refrigeration system), 3 (tension-compression spring design case 1), 4 (pressure vessel design), 5 (welded beam design), 6 (three-bar truss design), 7 (multiple disk clutch brake design), 8 (planetary gear train design), 9 (step-cone pulley design), and 11 (hydrostatic thrust bearing design), converging to the target optima of these problems and requiring fewer analyses than its 18 competitors to complete the optimization process. The target optimum costs indicated in Table 1 for the CEC2020 real-world mechanical engineering problems were also either reached or slightly improved by HALSGWJA in test cases 12 (four-stage gearbox design), 14 (rolling element bearing design), 15 (gas transmission compressor design), and 19 (topology optimization), again requiring fewer analyses than its 18 competitors to complete the optimizations.
It should be noted that the pressure vessel problem 4 actually presents two variants with either continuous or mixed optimization variables. Table 2 reports the corresponding optimization results; the (italics) notation is used for HALSGWJA’s data relative to the continuous problem variant. The vessel comprises a cylindrical segment closed at its ends by two spherical caps. The four design variables are the thickness of the cylindrical segment (x1), the thickness of the spherical caps (x2), the radius of curvature of the spherical caps (x3), and the length of the cylindrical segment (x4). The 5885.333 optimum cost indicated in Table 1 refers to the problem variant with continuous variables. In the problem variant with mixed variables, the thickness variables x1 and x2 are integer multiples of the available rolled steel plate thickness (see Ref. [6]); the corresponding optimum cost is 6059.714. The mixed variable version of the problem was solved by all of the algorithms compared in Table 2, except for CGKOA [61], MAHA [68], and EJAYA [57,70], which instead solved the continuous variables problem variant. Regardless of the problem variant, HALSGWJA always converged to the target solution of this problem, requiring only between 3510 and 6193 analyses vs. at least 10,000 analyses required by its competitors.
Another interesting issue regards the rolling element bearing design problem 14, where the objective is to maximize the load bearing capacity. The target optimum cost of 14,614.136, given in Table 1, is obtained for the optimum design vector X o p t , 1 ¯ = {131.2;18;4;0.6;0.6;0.4;0.6;0.3;0.025;0.6}. The third design variable, the number of balls Z, must take an integer value and it is hence rounded in the optimization process; in this case, it was reduced from 4.476 to 4. Another solution reported in the literature for two nature-inspired metaheuristic algorithms, such as the multi-strategy enhanced northern goshawk optimization algorithm (MENGO) [77] and the chaos-based mayfly algorithm with opposition-based learning and Levy flight (COLMA) [78], had an optimized cost of 16,958.2; the number of balls was rounded from 4.51 to 5 and the final design was X o p t , 2 ¯ = {131.2;18;5;0.6;0.6;0.4071;0.6437;0.3;0.0632;0.6}. While designs X o p t , 1 ¯ and X o p t , 2 ¯ shared the same values of design variables x1, x2, x4, and x5, the corresponding optimized costs were different because the problem cost function W stated by Equation (25) also depends on the rounded value of x3 (Z), which changed from 4 to 5, passing from X o p t , 1 ¯ to X o p t , 2 ¯ . The differences in variables x6 to x10 observed between solutions X o p t , 1 ¯ and X o p t , 2 ¯ obviously did not affect the cost function values. Both X o p t , 1 ¯ and X o p t , 2 ¯ can be considered as target solutions for this problem; the 14,614.136 optimized cost was reached by SASS [53,58,59], DESS [58], SDDS-SABC [59], and MAHA [68], while the 16,958.2 optimized cost was reached by HALSGWJA and the remaining competitors. The high computational efficiency of the present algorithm was confirmed by its ability to converge to the 16,958.2 cost in all independent optimization runs.
Indeed, there are yet other solutions indicated in the literature for problem 14. For example, Ref. [75] reports, for the energy valley optimizer (a physics-based method), the optimized design X o p t , 3 ¯ = {125.7191;21.4256;11;0.515;0.515;0.4632;0.6999;0.3;0.06343;0.6042}, yielding a cost of 81,859.8. Teaching–learning-based optimization (TLBO) [14] also converged practically to the same cost, 81,859.74, with the corresponding optimum design X o p t , 4 ¯ = {125.7191;21.42559;11;0.515;0.515;0.424266;0.633948;0.3;0.068858;0.799498}. In Ref. [84], it is reported that for the artificial lemming algorithm (a nature-inspired method), the optimal design is X o p t , 5 ¯ = {125.7191;21.4256;11;0.515;0.515;0.4014;0.6138;0.3;0.0307;0.6001}, yielding a cost of 85,549.2. The same optimized cost value of 85,549.2 was also reported in Ref. [70] for EJAYA, with the corresponding optimal design X o p t , 6 ¯ = {125.7191;21.4256;11;0.515;0.515;0.4009;0.6228;0.3;0.1;0.6023}. However, since X o p t , 3 ¯ , X o p t , 4 ¯ , X o p t , 5 ¯ , and X o p t , 6 ¯ present the same values for x1, x2, x3, x4, and x5, the corresponding optimized values of W must coincide; it can be proven that the correct value of the optimized cost is 81,859.8, not 85,549.2. Interestingly, HALSGWJA always converged to the designs X o p t , 3 ¯ and X o p t , 4 ¯ , and the corresponding optimized cost of 81,859.8, with 0 standard deviation, by simply restarting the search process from the initial populations obtained by perturbing X o p t , 2 ¯ with the side constraint x3 = Z > 5. The cascade optimization process comprising the initial optimization towards X o p t , 2 ¯ and the subsequent perturbation towards X o p t , 3 ¯ , X o p t , 4 ¯ was always completed by HALSGWJA within 8000 analyses, much less than the 10,000 to 20,000 analyses required by the algorithms of Refs. [14,70,75,84].
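The equal-cost argument above can also be verified numerically: since W depends only on Dm, Db, Z, fi, and fo, evaluating the cost expression at designs that share these five values necessarily yields one and the same number. A minimal Python sketch of this check is given below; evaluated at the common values of X_opt,3 to X_opt,6, it should return approximately 81,860, supporting the 81,859.8 figure rather than 85,549.2.

```python
def bearing_capacity(Dm, Db, Z, fi, fo):
    """Dynamic load capacity of problem 14 (depends only on the first five variables)."""
    gamma = Db / Dm
    fc = (37.91
          * (1 + (1.04 * ((1 - gamma) / (1 + gamma)) ** 1.72
                  * (fi * (2*fo - 1) / (fo * (2*fi - 1))) ** 0.41) ** (10/3)) ** -0.3
          * (gamma ** 0.3 * (1 - gamma) ** 1.39 / (1 + gamma) ** (1/3))
          * (2 * fi / (2*fi - 1)) ** 0.41)
    if Db <= 25.4:
        return fc * Z ** (2/3) * Db ** 1.8
    return 3.647 * fc * Z ** (2/3) * Db ** 1.4

# X_opt,3 to X_opt,6 all share Dm, Db, Z, fi, fo, hence the same capacity
print(bearing_capacity(125.7191, 21.4256, 11, 0.515, 0.515))   # ~81,860
```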
In the robot gripper design problem 10, HALSGWJA converged to the best optimized cost of 2.5437852, which is larger than the 2.528792 target value listed in Table 1, but perfectly consistent with the best solution of 2.5438 reported in the literature for the CEC2020 winners. Similarly, the best solution obtained by HALSGWJA for problem 16 (tension/compression spring design case 2), corresponding to a minimum steel wire volume of 2.658722, was different from the 2.61388 target volume of Table 1 but totally consistent with all other solutions reported in the literature for this test case that always ranged from 2.658558 to 2.684243 (see Table 2). Remarkably, HALSGWJA was also the fastest optimizer in these two test cases.
In test problems 13 (ten-bar truss design) and 18 (Himmelblau’s function), HALSGWJA’s best optimized cost was, respectively, 524.5885 vs. the 524.4508 target (only 0.0263% cost penalty) and −30,664.873 vs. the −30,665.539 target (only 0.00217% cost penalty). Again, HALSGWJA ranked first for computational speed in both of these problems.
The optimized designs found by HALSGWJA in the CEC2020 problems are reported in Appendix A. In summary, the present algorithm practically never missed the target optimal design in any test problem, as the largest deviation from the target optimum was only 0.0263% in the ten-bar truss design problem 13. Such excellent performance was achieved only by EnMODE [54,65,68] and COLSHADE [55,65,68], remarkably the two winners of the CEC2020 real-world mechanical engineering competition. Conversely, En(L)SHADE [56], IMODE-KG [57], DESS [58], SASS [53,59,68], and MSFPSO [62] missed the target optimum only in problem 12 on the weight minimization of a four-stage gearbox; their best optimized weights ranged between 36.2504 and 98.69 vs. the 35.359 target optimum (the cost penalty hence ranged between 2.52% and 179%). MPSGO [67], MAHA [68], and EJAYA [57,70] also missed the target optimum in only one test problem: (i) MPSGO in problem 11 (optimized cost of 1643.1 instead of the 1616.120 target, 1.7% cost penalty); (ii) MAHA and EJAYA in problem 10, where they obtained optimized costs of 2.638958 and 2.601873, respectively, vs. the actual best cost of 2.5438 reported in the CEC2020 literature (the cost penalty hence ranged between 2.28% and 3.74%). SDDS-SABC [59] missed the target optimum in two test problems: specifically, 4 (5979.985 vs. the 5885.333 target, 1.61% cost penalty) and 12 (43.14295 vs. the 35.359 target, 22.02% cost penalty), with a maximum penalty on optimized cost of 22.02%.
KOA/CGKOA [57,61] missed the target optimum in three test problems: specifically, 11 (2403.717 vs. the 1616.120 target, 48.73% cost penalty), 12 (36.6012 vs. the 35.359 target, 3.51% cost penalty), and 13 (529.4834 vs. the 524.45 target, 0.96% cost penalty), with a maximum penalty on optimized cost of 48.73%. IDMO [64] also missed the target design in three test problems: specifically, 2 (0.033960 vs. the 0.032213 target, 5.42% cost penalty), 11 (1640.9 vs. the 1616.120 target, 1.53% cost penalty), and 12 (37.461 vs. the 35.359 target, 5.945% cost penalty), with a maximum penalty on optimized cost of 5.945%. IYDSE [57,60] missed the target design in four test problems: specifically, 2 (3.29 × 10⁻² vs. the 0.032213 target, 2.13% cost penalty), 10 (2.70 vs. the 2.5438 target, 6.14% cost penalty), 11 (2483.159 vs. the 1616.120 target, 53.65% cost penalty), and 13 (535.9481 vs. the 524.45 target, 2.19% cost penalty), with a maximum penalty on optimized cost of 53.65%. ESAO [63] also missed the target design in four test problems: specifically, 8 (0.53571 vs. the 0.52577 target, 1.89% cost penalty), 10 (2.5929 vs. the 2.5438 target, 1.93% cost penalty), 11 (1918.4 vs. the 1616.120 target, 18.7% cost penalty), and 12 (41.697 vs. the 35.359 target, 17.9% cost penalty), with a maximum penalty on optimized cost of 18.7%.
CSELGWO [66] missed the target average solution in test problems 2 and 8, with a maximum penalty on the average optimized cost of 1.077%, while LMWOAGWO [65] obtained worse average costs than the corresponding target values in test problems 2, 4, 9, 11, 13, and 16, with a maximum penalty on the average optimized cost of 24.5%. Data on best and worst optimized solutions were not given in Refs. [65,66].
It should be noted that the best optimization runs of SDDS-SABC were reported in [59] to obtain, in several test problems, significantly better solutions than the target ones listed in Table 1. The most relevant improvements would have occurred for (i) test problem 9, the step-cone pulley design (12.30213 vs. 16.07, that is, a 25.139% reduction); (ii) test problem 5, the welded beam design (1.452753 vs. 1.670218, that is, a 13.02% reduction); (iii) test problem 3, the tension–compression spring design, case 1 (0.0112423 vs. 0.012665, that is, an 11.233% reduction); (iv) test problem 7, the multiple disk clutch brake design (0.21881 vs. 0.23524, that is, a 6.954% reduction); (v) test problem 15, the gas transmission compressor design (2,851,884.4 vs. 2,964,895.4, that is, a 3.812% reduction); and (vi) test problem 1, the speed reducer design (2963.911 vs. 2994.425, that is, a 1.019% reduction). However, these designs were not provided, and their feasibility was not verified in Ref. [59]. Furthermore, the worst optimized designs of the SDDS-SABC independent runs for test problems 3, 5, 7, 9, and 15 were worse than those obtained by HALSGWJA. The average optimized designs of SDDS-SABC in test problems 7 and 15 were also worse than those found by the present algorithm. Hence, SDDS-SABC was significantly less robust than HALSGWJA.
IYDSE [57,60] was reported to obtain a significantly better solution than the target one in the four-stage gearbox design problem 12: only 14.9 vs. 35.359. However, the corresponding design and constraint margins (to check for any violations) were not specified in the above-mentioned references. Furthermore, the largest optimized cost and standard deviation on optimized cost recorded in the independent optimization runs were significantly worse than those found by HALSGWJA. IYDSE also found a slightly better design than the target solution in the rolling element bearing problem 14: 16,981.126 vs. 16,958.2 (also obtained by HALSGWJA). While the optimized cost of 16,981.126 reported in Ref. [57] for IYDSE improved the suboptimal cost of 5.60 × 10³ originally reported in Ref. [60] to a great extent, no evidence of the feasibility of the 16,981.126 solution was given in [57]. Furthermore, the reported average and worst optimized costs of 17,045.036 and 17,203.713 were not consistent with the fact that problem 14 is a maximization problem.
CGKOA [61] was reported to obtain a much lower structural weight than the target solution in the step-cone pulley design problem 9: only 8.5633 vs. 16.0699. The corresponding optimal design of CGKOA was not given in [61]. A deeper investigation into the optimized solutions of problem 9 reveals that the best feasible design ever reported in the literature has a minimum weight of 8.18 and was obtained in 2024 by the Secretary Bird Optimization Algorithm (SBOA) developed in Ref. [82]; the design vector for this solution was d1 = 17 mm, d2 = 28.3 mm, d3 = 50.8 mm, d4 = 84.5 mm, and w = 90 mm, where d1–2–3–4 and w, respectively, denote the pulley diameters and width. The best design obtained by HALSGWJA was d1 = 38.477 mm, d2 = 53.101 mm, d3 = 95.346 mm, d4 = 158.332 mm, and w = 47.913 mm, with a weight of 16.036, which is much closer to the classical solution of d1 = 38.4 mm, d2 = 52.9 mm, d3 = 70.5 mm, d4 = 84.5 mm, and w = 90 mm, yielding the target pulley structural weight of 16.097. In fact, since the cost function of problem 9 is expressed as W = 7200 (π w/4) {d1²[1 + (750/350)²] + d2²[1 + (450/350)²] + d3²[1 + (250/350)²] + d4²[1 + (150/350)²]}, that is, W = 7200 (π w/4) {5.5918 d1² + 2.6531 d2² + 1.5102 d3² + 1.1837 d4²}, the cost function is most sensitive to variations in d1 and d2. Interestingly, by removing the d1–2–3–4 > 35 mm side constraints originally included in the HALSGWJA optimizations, the present algorithm was able to converge to the 8.18 minimum weight design quoted in Ref. [82]. This was accomplished within about 10,000 analyses, hence in a much faster way than the computational budgets of 30,000 to 50,000 analyses indicated in [59,61,82] for the structural weights of 8.18, 8.5633, and 12.30213, respectively, obtained by SBOA, CGKOA, and SDDS-SABC.
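A quick numerical check of this argument can be carried out by evaluating the cost function at HALSGWJA’s best design. The snippet below assumes that the diameters and width are converted from millimetres to metres and uses the π/4 factor of the classical pulley weight expression; under these assumptions it should return a weight of about 16.03, consistent with the reported 16.036.

```python
import math

def pulley_weight(d, w):
    """Weight of the step-cone pulley (problem 9); d (four pulley diameters) and w (width) in metres."""
    coef = [1 + (750/350)**2, 1 + (450/350)**2, 1 + (250/350)**2, 1 + (150/350)**2]
    return 7200 * (math.pi / 4) * w * sum(c * di**2 for c, di in zip(coef, d))

# HALSGWJA's best design (converted from mm to m)
print(pulley_weight([0.038477, 0.053101, 0.095346, 0.158332], 0.047913))  # ~16.03
```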
The best optimized costs found by MSFPSO [62] in test problems 11 (hydrostatic thrust bearing design) and 15 (gas compressor transmission design) appear significantly lower than the corresponding target values reported in Table 1: only 137.08396 vs. 1616.120 and only 1,227,929.15 vs. 2,964,895.4, respectively. MSFPSO also obtained a slightly better optimized cost than the target value (i.e., 6032.55 vs. 6059.714) in the pressure vessel design problem 4 variant solved with mixed variables. However, the corresponding optimized designs and constraint margins of problems 11, 15, and 4 were not indicated in [62] and hence it is not possible to verify the feasibility of these designs. Furthermore, in problem 4, MSFPSO obtained a higher cost in its worst optimization run and a larger standard deviation on optimized cost than the present algorithm. MAHA [68] slightly improved the target optimum cost of the four-stage gearbox problem, achieving a minimum weight of 34.63987 vs. the 35.359 target, but the corresponding optimal design and information on constraint margins were not provided in Ref. [68]. Furthermore, the average and worst optimized weights found by MAHA were, respectively, about 10.7% and 36.9% higher than those of HALSGWJA, thus indicating a significantly lower robustness of MAHA with respect to the present algorithm.
The relevance of the present results is highlighted by the fact that the 18 selected competitors of HALSGWJA were very efficient metaheuristic algorithms that either (i) won the CEC2020 competition for real-world mechanical engineering problems (i.e., EnMODE, COLSHADE, SASS); (ii) implemented improved or hybrid variants of the HALSGWJA’s component algorithms GWO and JAYA (i.e., LMWOAGWO, CSELGWO, EJAYA) that effectively competed with the CEC2020 winners; (iii) were not based on GWO/JAYA but could, however, compete with the CEC2020 winners (i.e., En(L)SHADE, IMODE-KG, DESS, SDDS-SABC, MPSGO, MAHA); or (iv) outperformed highly cited methods like PSO, GWO, and WOA (i.e., IYDSE, KOA, CGKOA, MSFPSO, ESAO, IDMO). Each selected competitor of HALSGWJA was, in turn, proven to outperform from 4 to 21 other metaheuristic algorithms.
For the sake of brevity, detailed comparisons between HALSGWJA and other GWO/JAYA variants and hybrid GWO-JAYA formulations that solved only some of the problems included in the CEC2020 library will not be made in this study. However, it should be noted that HALSGWJA significantly outperformed the Simple Hybrid Grey Wolf JAYA (SHGWJA) algorithm originally developed in 2024 by the present authors by hybridizing GWO and JAYA [6]. SHGWJA is the starting base of the HALSGWJA formulation, and it was proven in Ref. [6] to be very competitive with (or even superior to) state-of-the-art metaheuristic algorithms, including the CEC2020 winners, as well as with advanced variants of GWO and JAYA, the component algorithms selected in [6] and here for developing the hybrid metaheuristic formulations. SHGWJA was tested in seven engineering problems, also including four indicative CEC2020 test problems: (i) the industrial refrigeration system design problem 2; (ii) the tension-compression spring design (case 1) problem 3; (iii) the pressure vessel design problem 4 (both variants with continuous and mixed variables); and (iv) the welded beam design problem 5. The following facts should be highlighted.
  • In problem 2, both HALSGWJA and SHGWJA converged to the target optimum cost of 0.032213, but the present algorithm always completed the 20 independent optimization runs within only 4695 to 6258 analyses (the fastest optimization run of SHGWJA was completed within 5165 analyses), requiring, on average, only 5478 analyses (with a standard deviation of 1003 analyses) vs. the 8517 analyses required, on average, by SHGWJA (with a standard deviation of 2370 analyses). Furthermore, the average and worst optimized costs and the corresponding standard deviation were better for HALSGWJA, being, respectively, 0.0322132 vs. 0.032215, 0.032216 vs. 0.032219, and 6.7082 × 10⁻⁷ vs. 3.671 × 10⁻⁶.
  • In problem 3, HALSGWJA converged to the target optimum cost of 0.012665, while SHGWJA converged to a slightly higher optimized cost, 0.0126665. The average and worst optimized costs and the corresponding standard deviation recorded for HALSGWJA again were better than those of SHGWJA, being, respectively, 0.012668 vs. 0.012682, 0.012670 vs. 0.012692, and 2.1152 × 10⁻⁶ vs. 1.11 × 10⁻⁵. Furthermore, HALSGWJA required fewer analyses than SHGWJA: respectively, only 2103 vs. 2247 for the fastest optimization run and only 3316 vs. 6347 on average (with a standard deviation of only 1137 vs. 2246 analyses).
  • In both variants of problem 4, HALSGWJA and SHGWJA converged to the target optimum costs of 5885.331 (continuous problem) and 6059.714 (mixed problem), always reaching 0 standard deviation. However, HALSGWJA required fewer analyses than SHGWJA: (i) for the continuous problem, only 3510 vs. 6732 in the fastest optimization run, only 4672 vs. 9773 on average (with a standard deviation of only 1203 analyses vs. 2561 analyses); (ii) for the mixed problem, only 3852 vs. 7604 in the fastest optimization run, only 5104 vs. 9712 on average (with a standard deviation of only 1374 analyses vs. 2730 analyses). The number of analyses recorded for the slowest optimization runs of HALSGWJA was even lower than its counterpart recorded for the fastest optimization runs of SHGWJA: only 5713 vs. 6732 analyses for the continuous variables problem variant; only 6193 vs. 7604 analyses for the mixed variables problem variant.
  • In problem 5, HALSGWJA and SHGWJA converged to the target optimum cost of 1.670218 with 0 standard deviation. HALSGWJA again required fewer analyses than SHGWJA: only 2975 vs. 3691 for the fastest optimization run and only 3224 vs. 4438 on average (with a standard deviation of only 285 analyses vs. 424 analyses). Remarkably, the slowest optimization run of HALSGWJA required fewer analyses than the fastest optimization run of SHGWJA: only 3691 vs. 3791 analyses.
Although the data presented in Table 2 are rather heterogeneous, as they follow from the various settings chosen for the optimization runs of the 18 metaheuristic methods compared with HALSGWJA, the proposed algorithm is clearly superior to its competitors in terms of its capability to converge to the optimal solutions of the CEC2020 real-world mechanical engineering problems. Such superiority becomes even more evident if one considers the very small number of analyses required by HALSGWJA in the optimization process. In fact, Table 2 also shows the number of analyses required by the fastest optimization run of HALSGWJA, converging to the best optimized cost of each test problem. The present algorithm always completed the optimization process within only 591 (four-stage gearbox design problem 12) to 7400 (hydrostatic thrust bearing design problem 11) analyses vs. at least 10,000 analyses required by its fastest competitors, such as IMODE-KG [57], IYDSE [57,60], KOA [57,74], and EJAYA [57,70].
Interestingly, IMODE-KG converged in 10,000 analyses to worse designs than HALSGWJA in (i) problem 3 (tension/compression spring design case 1), with an optimized cost of 0.0126742 vs. the 0.012665 target optimum reached by HALSGWJA in only 2013 analyses; (ii) problem 8 (planetary gear-train design), with an optimized cost of 0.526581 vs. the 0.523250 optimum cost reached by HALSGWJA in only 2095 analyses; and (iii) problem 11 (hydrostatic thrust bearing design), with an optimized cost of 1783.954 vs. the 1616.120 optimum cost reached by HALSGWJA within only 7400 analyses.
IYDSE also converged within 10,000 analyses to worse designs than HALSGWJA in (i) problem 4 (pressure vessel design variant with mixed variables), with an optimized cost of 6103.368 vs. the 6059.714 target optimum reached by HALSGWJA in only 3852 analyses; (ii) problem 11, with an optimized cost of 2483.159 vs. the 1616.120 optimum cost reached by HALSGWJA in only 7400 analyses; (iii) problem 13, with an optimized cost of 535.5885 vs. the 524.5888 to 524.6111 optimized costs obtained by HALSGWJA within 1911 to 4801 analyses only; and (iv) problem 18, with an optimized cost of −30,660.325 vs. the −30,664.873 optimum cost reached by HALSGWJA within only 4694 analyses.
KOA also converged within 10,000 analyses to worse designs than HALSGWJA in (i) problem 1 (speed reducer design), with an optimized cost of 2995.351 vs. the 2994.425 target optimum reached by HALSGWJA within only 2326 analyses; (ii) problem 11, with an optimized cost of 2403.717 vs. the 1616.120 optimum cost reached by HALSGWJA within only 7400 analyses; (iii) problem 13, with an optimized cost of 529.4834 vs. the 524.5888 to 524.6111 optimized costs obtained by HALSGWJA within 1911 to 4801 analyses only; (iv) problem 15, with an optimized cost of 2,965,375.3 vs. the 2,964,893.937 optimum cost reached by HALSGWJA within only 4377 analyses; and (v) problem 18, with an optimized cost of −30,663.305 vs. the −30,664.873 optimum cost reached by HALSGWJA within only 4694 analyses.
EJAYA (a very advanced JAYA variant) also converged within 10,000 analyses to worse designs than HALSGWJA in (i) problem 8, with an optimized cost of 0.527347 vs. the 0.523250 optimum cost reached by HALSGWJA within only 2095 analyses; (ii) problem 10, with an optimized cost of 2.601873 vs. the 2.5437852 optimum cost reached by HALSGWJA within only 2095 analyses; and (iii) problem 13, with an optimized cost of 525.0392 vs. the 524.5888 to 524.6111 optimized costs obtained by HALSGWJA within 1911 to 4801 analyses only.
Another proof of the very high computational efficiency of HALSGWJA is that the optimization data listed in Table 2 for the CEC2020 top algorithms EnMODE [54,65,68], COLSHADE [55,65,68], and SASS [53,59,68] were relative to the classical computational budgets of (i) 100,000 analyses in test problems 1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, and 18, or (ii) 200,000 analyses in test problems 2, 12, and 19. In summary, the CEC2020 winners required at least one order of magnitude more analyses than the present algorithm to complete their optimizations.
Whilst Table 2 reports the computational cost of the best and worst optimization runs as well as the average number of analyses and the corresponding standard deviation recorded for HALSGWJA, similar data were not available for its competitors, except for the average number of analyses given in Ref. [69] for the CEC2020 winners EnMODE, COLSHADE, and SASS. The data provided in [69] actually served to provide a comparison basis for the Piranha Predation Optimization Algorithm (PPOA) developed in that study. PPOA could converge to feasible designs in only 11 of the 19 test problems in spite of the theoretical computational budget of 1,000,000 analyses, requiring, on average, at least 26,900 analyses except for problems 1, 7, 17, and 19; however, PPOA was on average faster than HALSGWJA only in problem 17. According to the data reported in [69], EnMODE was on average faster than HALSGWJA in problems 1 (only 2277 analyses vs. 2538 analyses required by the present algorithm) and 17 (only 215 analyses vs. 1170 analyses required by the present algorithm). COLSHADE was on average faster than HALSGWJA in problems 17 and 18 (respectively, only 220 analyses vs. 1170 analyses, and only 2267 analyses vs. 4694 analyses). SASS was on average faster than HALSGWJA only in problem 17 (only 360 analyses vs. 1170 analyses). Since the number of analyses required by EnMODE, COLSHADE, and SASS in their best and worst optimization runs and the corresponding standard deviations were not specified in Ref. [69], it is not possible to directly compare the computational costs of EnMODE/COLSHADE/SASS evaluated in that study with those of HALSGWJA. In summary, the following aspects should be underlined for the results of Ref. [69]: (i) the CEC2020 winners converged in [69] to the target solutions of only 14–15 test problems, in open contrast with the documented results for their original implementations; (ii) the gap between HALSGWJA and each CEC2020 winner occurred only in one or two of the 14–15 test problems successfully solved in [69]; (iii) the gap was just 11% in problem 1; (iv) the order of magnitude of the average number of analyses recorded in [69] for the CEC2020 winners was always comparable with its counterpart recorded for HALSGWJA.
HALSGWJA was also very robust in terms of optimized cost values and the required number of analyses in the independent optimization runs. In fact, HALSGWJA reached 0 standard deviation on optimized cost in test problems 1, 4, 5, 6, 7, 11, 14, 15, and 19. For all other test problems, the ratio between the standard deviation on optimized cost and the average optimized cost never exceeded 0.476%. Interestingly, HALSGWJA also achieved the lowest standard deviation overall in test problem 12, which yielded the above-mentioned largest standard-deviation-to-average-optimized-cost ratio of 0.476%. In fact, HALSGWJA obtained a standard deviation of only 0.1687 (optimized cost values from 35.29269 to 35.65141; all optimization runs required between 531 and 1591 analyses) vs. 0.49725 for SDDS-SABC [59] (optimized cost values from 43.14925 to 45.36397, within 20,000 analyses) and 0.56923 for MPSGO [67] (optimized cost values from 35.359 to 37.258, within 200,000 analyses). The CEC2020 winners EnMODE [54,65,68], COLSHADE [55,65,68], and SASS [53,59,68], respectively, obtained standard deviations on optimized cost equal to 0.59911 (i.e., 1.677% of the average optimized cost of 35.728), 1.3677 (i.e., 3.736% of the average optimized cost of 36.611), and 2.11484 (i.e., 5.491% of the average optimized cost of 38.5141). MSFPSO [62], ESAO [63], and IDMO [64] exhibited the largest values of standard deviation on optimized cost for test problem 12, being, respectively, 126,054.2, 9.372 × 10¹⁴ (within 15,000 analyses), and 6.3419 × 10¹⁵ (within 50,000 analyses). The data reported for problem 12 confirm the difficulty of many metaheuristic algorithms in converging to the target solution of this highly nonlinear problem, regardless of the initial population selected in each optimization run. Conversely, HALSGWJA also obtained the best standard deviation amongst all the algorithms compared in this study in this difficult problem, despite disposing of a much smaller computational budget.
As far as HALSGWJA’s robustness in terms of computational cost is concerned, the ratio between the standard deviation on the required number of analyses and the average number of analyses required by the present algorithm never exceeded 38.97%, with an average of 18.07% and a median of 16.7%. The robust convergence behavior of HALSGWJA is confirmed by Figure 3, which shows the optimization histories recorded in all CEC2020 problems for the present algorithm and its competitors. It can be seen that the convergence curves of the best and average optimization runs of HALSGWJA practically coincided, on average, after about 49.2% of the optimization history of the best run. The overlap of HALSGWJA’s best and average convergence curves occurred after completing more than 75% of the best run’s optimization history in only four cases: the planetary gear train design problem 8 (i.e., 76.4%); the step-cone pulley problem 9 (i.e., 81.8%); the ten-bar truss design problem 13 (i.e., 78.5%); and the gear train design problem 17 (i.e., 87.2%, needed to reach cost function values below the 10⁻¹⁵ double precision limit). However, this was well balanced by four other cases where the overlap occurred before completing 25% of the best run’s optimization history: the industrial refrigeration system problem 2 (i.e., 23.4%); the three-bar truss design problem 6 (i.e., 8.51%); the four-stage gearbox design problem 12 (i.e., 16.9%); and the gas transmission compressor problem 15 (i.e., 11.4%).
Figure 3 clearly shows the superior convergence behavior of HALSGWJA with respect to its competitors. It should be noted that convergence curves for the whole CEC2020 real-world mechanical engineering problems library were available only for the DESS [58] and MPSGO [67] algorithms. However, since some convergence curves presented in Ref. [58] for the DESS algorithm either were out of scale or did not have enough resolution, they are not included in Figure 3 for the sake of clarity. Since the optimization histories of DESS and MPSGO were given in Refs. [58,67] with respect to the number of generations (also indicated as optimization iterations), while Figure 3 referred the curves to the number of analyses, the data of Refs. [58,67] were converted as follows.
DESS is a differential evolution-based algorithm that linearly reduces the population size from the initial value of 6 × NDV² as generations progress and distributes search agents over four sub-populations. Hence, in each generation, DESS would perform at least four new analyses even if each sub-population included only one search agent. Interestingly, in test problems 1, 6, 8, 12, and 15 (see Figure 3a,g,i,m,p), the HALSGWJA algorithm developed in this research directly converged to the target optima within fewer analyses than the number of generations required by DESS; hence, even in the ideal scenario where DESS performed only one new analysis per generation, it would still require more analyses than HALSGWJA. The number of generations of the DESS convergence curves shown in Figure 3l,n,q,s for test problems 11, 13, 16, and 18 was instead amplified by a factor of 4; the corresponding DESS curves are denoted as “amplified 4X” in the figure. This arrangement made the comparison of convergence curves more realistic. In Figure 3, the horizontal axis of the convergence curves is hence labeled as “Number of analyses/generations” if the actual number of DESS generations is considered, or as “Number of analyses” if the number of DESS generations is amplified.
It should be noted that the selected amplification factor of 4 for DESS generations was, however, much smaller than the initial population of 6 × NDV² search agents used by DESS in its first generation, being, respectively, 96 (i.e., 6 × 4²), 600 (i.e., 6 × 10²), 54 (i.e., 6 × 3²), and 150 (i.e., 6 × 5²) for test problems 11, 13, 16, and 18. It can be seen from Figure 3l,n,q,s that DESS practically converged to the optimal solutions of test problems 11, 13, 16, and 18 within 1100, 555, 1000, and 800 generations, respectively; hence, by linearly decreasing the population size from the corresponding initial values of 96, 600, 54, and 150, the computational cost of DESS would be about 63,850, 75,270, 59,230, and 60,050 analyses, respectively, vs. only 7400, 1911, 4223, and 4494 analyses required by HALSGWJA.
Unlike HALSGWJA, which monotonically converged to the optimum solution in all test problems, DESS always exhibited a significantly oscillatory behavior often characterized by the presence of infeasible intermediate designs. This occurred because the spherical search operator implemented in DESS explores a spherical region around each population member using gradient repair to guide the solutions towards the feasible region, while HALSGWJA always relates approximate gradient information to the best candidate solutions currently stored in the population. The JAYA-based perturbation schemes and mirror strategies always force the present algorithm to generate high-quality trial designs and move in the opposite direction to any perturbation that leads to an increasing cost function.
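For readers unfamiliar with gradient-assisted metaheuristics, the fragment below sketches, in a generic way, how a finite-difference gradient estimated at the current best design can be combined with a simple backtracking step and a “mirror” move (reversing a perturbation that worsens the cost). It is only a conceptual illustration of the ideas discussed above, not the actual line-search or mirror operator implemented in HALSGWJA.

```python
import numpy as np

def descent_step(f, x_best, step=0.1, h=1e-6, shrink=0.5, max_tries=5):
    """One approximate line-search move around the current best design x_best.

    f is the (penalized) cost to be minimized. A forward-difference gradient gives
    a descent direction; the step is shrunk, and mirrored, until the cost improves.
    Conceptual sketch only - not the operator used by HALSGWJA.
    """
    x_best = np.asarray(x_best, dtype=float)
    f0 = f(x_best)
    grad = np.array([(f(x_best + h * e) - f0) / h for e in np.eye(len(x_best))])
    direction = -grad / (np.linalg.norm(grad) + 1e-12)   # normalized descent direction
    for _ in range(max_tries):
        trial = x_best + step * direction
        if f(trial) < f0:                                 # accept the first improving step
            return trial
        mirror = x_best - step * direction                # try the opposite (mirror) move
        if f(mirror) < f0:
            return mirror
        step *= shrink                                    # backtrack and retry
    return x_best                                         # no improvement found
```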
The number of analyses relative to the MPSGO algorithm [67] was determined by multiplying the population size NPOP = 50 indicated in Ref. [67] by the corresponding number of optimization iterations. This was done because MPSGO hybridizes the Marine Predators Algorithm (MPA) and the Social Group Optimization (SGO) algorithm, and both component algorithms perform NPOP analyses in each optimization iteration, evaluating the cost function and constraints of all search agents. Interestingly, MPSGO was reported in [67] to always practically converge to the optimal solutions of CEC2020 real-world mechanical engineering problems within 100−200 optimization iterations. Hence, the number of analyses covered in Figure 3 for MPSGO’s convergence curves ranged between 5000 and 10,000. To make convergence curves more readable, the number of analyses of MPSGO was scaled by (i) 0.25 in the case of test problem 2 (see Figure 3b); (ii) 0.5 in the case of test problems 4 (see Figure 3e for the mixed variable problem variant), 5 (see Figure 3f), 6 (see Figure 3g), 7 (see Figure 3h), and 9 (see Figure 3j); and (iii) 0.75 in the case of test problem 10 (see Figure 3k). The scaled optimization histories of MPSGO are, respectively, denoted as “scaled 0.25×”, “scaled 0.5×”, and “scaled 0.75×” in Figure 3.
It can be seen that MPSGO was much slower than HALSGWJA in all test problems. This occurred even when MPSGO’s convergence curves were scaled. Furthermore, MPSGO’s optimization histories always presented several steps yielding marginal improvements in the cost function. This occurred over the whole optimization process. Such behavior may be explained as follows. MPSGO divides the optimization search process into five phases: an inspiration phase dominated by SGO, where search agents try to imitate the best candidate design of the population; Brownian exploration dominated by MPA; a self-awareness phase (typical of modified SGO), where the search process shifts from exploration to exploitation; combined exploration and exploitation, characterized by Lévy and Brownian random walks; and a high exploitation phase, where the optimizer focuses on local search to converge to the global optimum. However, two facts should be considered. Firstly, the five phases of MPSGO mentioned above all cover the same fraction of the search process, 1/5. Secondly, MPSGO adopts the classical approach of metaheuristic optimization: exploring first and exploiting later. Conversely, HALSGWJA dynamically mixes exploration and exploitation based on the current trend of the optimization search. This allows HALSGWJA to monotonically converge to the optimum. Furthermore, the line search strategies implemented in HALSGWJA allow the regions of design space hosting the best solutions, and in all likelihood the global optimum, to be approached quickly.
Although Figure 3l might indicate that MPSGO converged to the optimal solution of the hydrostatic thrust bearing problem 11 before the present algorithm, it should be noted that MPSGO obtained a suboptimal solution with a higher cost than the target optimum cost achieved by HALSGWJA: 1643.1 vs. only 1616.120. HALSGWJA recovered the initial gap in cost function with respect to MPSGO (i.e., about 6520 vs. about 3251) within the first 1100 analyses and continued to smoothly reduce the cost function until reaching the target optimum cost at about 3000 analyses. Conversely, MPSGO did not change the cost function significantly over the first 1815 analyses, then drastically reduced W between 1816 and 1836 analyses, and finally remained trapped at the local minimum, 1643.1. In the rolling element bearing problem 14 (see Figure 3o), MPSGO converged to the target optimum cost of 16,958, always reducing the cost function, while HALSGWJA progressively increased it until reaching the target solution. The latter behavior was certainly more consistent with the nature of problem 14, which aimed to maximize the cost function. Interestingly, MPSGO was not even able to generate any intermediate design better than those elaborated in the worst optimization run of HALSGWJA.
Some representative convergence curves plotted in Figure 3 were also available for (i) COLSHADE [55,65,68], En(L)SHADE [56], and SASS [53,59,68] in the case of the four-stage gearbox problem 12 (see Ref. [56]); (ii) KOA [57,74] and CGKOA [61] in the case of test problems 3 (tension/compression spring design case 1), 4 (pressure vessel design with continuous variables), 6 (three-bar truss design), and 17 (gear train design); (iii) SHGWJA [6] in the case of test problems 2 (industrial refrigeration system), 3 (tension/compression spring case 1), and 4 (pressure vessel design with continuous or mixed variables); and (iv) the Attributes-based Information-learning Search Optimization (ISO) algorithm, which mimics people’s psychological assessment and learning behaviors for information resources (see Ref. [79]), in the case of the gear train design problem 17.
Figure 3m compares the convergence curves of the four-stage gearbox problem 12, the test case of the CEC2020 real-world mechanical engineering library with the largest number of optimization variables (22) and nonlinear constraints (86). The optimization histories reported in the literature for COLSHADE [55,65,68], En(L)SHADE [56], and SASS [53,59,68] were scaled in this study by a factor of 0.025 to fit their computational budget of 200,000 analyses into the plot of Figure 3m. Remarkably, the CEC2020 winner COLSHADE [55,65,68] converged to the target optimum cost of 35.359 within 3990 scaled analyses, while the best optimization run of the HALSGWJA algorithm developed here converged to the best cost overall of 35.294 within only 591 analyses. Furthermore, the worst optimization run of HALSGWJA required only about 470 analyses to find a cost function value of 36.265, similar to (i) the 36.2 optimized cost value finally reached by DESS within about 2265 generations (DESS actually obtained the 36.2 cost function value already after about 200, 245, 275, 1495, 1540, 1610, and 1630 generations; however, it always exhibited oscillatory behavior and shifted between feasible and infeasible intermediate designs in all these phases of the optimization history); (ii) the 36.2 optimized cost value finally reached by En(L)SHADE within about 3990 scaled analyses; and (iii) the 36.25 optimized cost value finally reached by SASS within about 3575 scaled analyses. The superior convergence behavior of HALSGWJA is highlighted by the fact that COLSHADE, En(L)SHADE, and SASS could never find better intermediate designs than the present algorithm over the whole optimization history.
The best optimization runs of KOA [57,74] and CGKOA [61], shown in Figure 3c,d,g for problems 3, 4 (solved with continuous variables), and 6, were even outperformed by the average optimization runs of HALSGWJA when the present algorithm started its search from better designs than KOA/CGKOA. In the gear train design problem 17 (see Figure 3r), all optimization runs of HALSGWJA started from initial designs that were significantly worse than their counterparts generated for the KOA/CGKOA optimizations (i.e., the initial cost function values of HALSGWJA ranged from 0.0464 to 2.046 vs. the initial cost of only 1.4 × 10−8 recorded for KOA/CGKOA); the ISO algorithm [79] also started from a very low cost value, only 10−6. However, the present algorithm required only between 435 and 900 analyses to recover such significantly large initial gaps in cost function and generate better intermediate designs than those of KOA/CGKOA and ISO. The multiple strategies implemented in KOA and CGKOA to balance exploration and exploitation, increase population diversity, eliminate poor-quality individuals, accelerate the generation of high-quality populations, and explore elite groups from multiple perspectives were nevertheless outperformed by the inherent capability of HALSGWJA to generate new trial designs that are always likely to improve the current best record.
It should be noted that, in test problem 17, ISO [79] outperformed the other 12 state-of-the-art optimizers, including one of HALSGWJA’s components (i.e., GWO), components of CEC2020 winners (i.e., differential evolution), highly cited methods (i.e., particle swarm optimization, whale optimization algorithm, genetic algorithms, harmony search optimization), and a variety of other techniques (i.e., bee swarm algorithm, equilibrium optimizer, etc.). ISO was selected for comparison because its algorithmic structure is somewhat similar to GWO: three leading individuals hierarchically share information with the rest of the population, as occurs in GWO, where wolves α, β, and δ guide the other wolves towards the prey. However, gradient information and line search made HALSGWJA definitely more effective than ISO in avoiding stagnation (and the presence of large steps in the optimization history) and in repairing trial solutions that may fail to improve the current best record or improve Xbest only marginally.
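For reference, the sketch below reproduces the textbook GWO position update [47], in which the three leading wolves α, β, and δ hierarchically guide the remaining population members; it is reported only to clarify the structural analogy with ISO mentioned above and is not the modified update embedded in HALSGWJA.

```python
import numpy as np

def gwo_update(x, alpha, beta, delta, a, rng):
    """Textbook GWO position update: each wolf is pulled towards the alpha, beta and
    delta leaders; the coefficient `a` decreases linearly from 2 to 0 over the search."""
    def pull(leader):
        A = 2.0 * a * rng.random(x.size) - a    # exploration/exploitation coefficient
        C = 2.0 * rng.random(x.size)            # random emphasis on the leader position
        return leader - A * np.abs(C * leader - x)
    return (pull(alpha) + pull(beta) + pull(delta)) / 3.0
```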
Interestingly, Figure 3b–f shows that SHGWJA [6] was rather competitive with HALSGWJA in test problems 2, 3, 4, and 5. In fact, its convergence curves were much closer than those of MPSGO and KOA/CGKOA to the corresponding optimization histories recorded for the present algorithm. Such a behavior occurred because, similar to HALSGWJA, the SHGWJA algorithm hybridized GWO and JAYA, utilizing elitist strategies to accept/reject trial designs. This confirms the inherent efficiency of generating new trial designs that maximize the probability of improving the current best record.

3.2. Performance Ranking of HALSGWJA and Its Competitors in the CEC2020 Problems

HALSGWJA and its 18 competitors were also ranked with respect to five performance indicators: the best, average, and worst optimized cost values obtained in the independent optimization runs; the corresponding standard deviation on optimized cost (STD); and the number of analyses (No. analyses) required in the optimization process. Table 3 lists the score obtained by HALSGWJA for each performance category in the 19 CEC2020 real-world mechanical engineering problems solved in this study; “1” and “19”, respectively, denote that HALSGWJA was the best or the worst algorithm in a specific category. The table also reports how far HALSGWJA was from its best competitors when it could not rank amongst the top four optimizers for some of the performance indicators listed above (i.e., when HALSGWJA’s performance did not fall within the first quartile, defined in the present case by 4/19 ≈ 0.21, i.e., the top 21% of the compared algorithms).
It can be seen from Table 3 that, in the nineteen CEC2020 real-world mechanical engineering problems, HALSGWJA ranked first overall (i) 11 times for the best optimized cost, (ii) 10 times for the average optimized cost, (iii) 12 times for the worst optimized cost, (iv) 10 times for the standard deviation on optimized cost, and (v) 19 times for the required number of analyses. Furthermore, HALSGWJA ranked second overall four times for the best optimized cost, three times for the average optimized cost, and once for the worst optimized cost. Lastly, HALSGWJA ranked fourth overall once for the standard deviation on optimized cost.
In summary, HALSGWJA performed very well in the CEC2020 real-world mechanical engineering test suite: it ranked in the first quartile 15, 13, 13, and 11 times for the best, average, and worst solutions and the standard deviation on optimized cost, respectively, and it was always the fastest optimizer in all test problems. It should be noted that even when HALSGWJA ranked below the fourth position in some category, its performance should be considered satisfactory. In fact, while HALSGWJA ranked 12th in test problems 13 (10 bar truss design) and 16 (tension/compression spring design case 2), 13th in test problem 17 (gear train design), and 14th in test problem 18 (Himmelblau’s function) in terms of best cost function values, the best solutions found by the present algorithm were only 0.311% (problem 13), 0.00617% (problem 16), and 0.00218% (problem 18) worse than those found by the top algorithms for this item, and the best cost of 2.3078 × 10−21 found by HALSGWJA in problem 17 was far below the double precision limit of 10−15. Furthermore, while HALSGWJA ranked between the 6th and 15th position in problems 2 (industrial refrigeration system), 3 (tension/compression spring design case 1), 13, 16, and 18 in terms of average optimized cost, the gap with respect to the best average solutions obtained by the top algorithms for this item only ranged between 6.209 × 10−4% and 0.292%; the average cost of 3.5635 × 10−21 found by HALSGWJA in problem 17 again was far below the 10−15 double precision limit. Moreover, HALSGWJA ranked between the 5th and the 10th position in problems 2, 3, 13, 16, 17, and 18 in terms of worst optimized cost, but the gap with respect to the top competitors for this item only ranged between 2.270 × 10−3% and 0.0918%; the worst cost of 4.9608 × 10−21 found by HALSGWJA in problem 17 once again was far below the 10−15 double precision limit. Finally, while HALSGWJA ranked between the 6th and the 13th position in problems 2, 3, 9 (step-cone pulley), 10 (robot gripper), 13, 16, 17, and 18 in terms of standard deviation on optimized cost, the standard deviation recorded for HALSGWJA in problem 17 was lower than the 10−15 double precision limit, while the ratio between the standard deviation on optimized cost and the target optimum cost ranged from only 1.11 × 10−5% to 0.0167% for problems 2, 3, 9, 10, 13, 16, and 18.
The very good performance of HALSGWJA is confirmed by Table 4, which shows the number of times that the present algorithm and its 18 competitors ranked first with respect to the selected performance indicators for each CEC2020 test problem. The table column labeled “B+A+W” reports the sum of the scores relative to the best, average, and worst optimized cost categories. The “B+A+W+STD” column reports the sum of the scores relative to the best/average/worst optimized costs and the standard deviation on optimized cost categories. The “TOTAL” column reports the sum of the scores recorded for all performance indicators, also including the number of analyses required in the optimization process. Remarkably, HALSGWJA achieved a total score of 62, ranking first overall ahead of En(L)SHADE [56], which totaled 60 points in this study and had been proven in [56] to be very competitive against CEC2020 winners. HALSGWJA also outperformed CEC2020 winners such as EnMODE [54,65,68], COLSHADE [55,65,68], and SASS [53,59,68] in view of its significantly higher computational speed, which always allowed the present algorithm to converge to the target optima or at least to obtain solutions very close to the target optima. Even considering Ref. [69], which reported lower average numbers of analyses for EnMODE, COLSHADE, and SASS than for HALSGWJA in problem 17 (i.e., 215, 220, and 360 vs. 1170) and for COLSHADE than for HALSGWJA in problem 18 (i.e., 2267 vs. 4694), the total score of HALSGWJA would be 60 points, remaining the best overall together with En(L)SHADE.
Without considering standard deviation and computational speed, HALSGWJA ranked seventh overall after En(L)SHADE, EnMODE, COLSHADE, MPSGO, SASS, and MAHA with respect to the “B+A+W” score based on the cost function values obtained in the independent optimization runs. In particular, HALSGWJA achieved a score of 33 points vs. the 36 to 46 points obtained by the above-listed algorithms. This occurred because HALSGWJA could neither rank first nor second overall in at least two of the best/average/worst optimized cost categories for test problems 2, 3, 13, 16, 17, and 18 (see also Table 3). In order to analyze this issue, Wilcoxon signed-rank tests with a significance level of 0.05 were executed for problems 2, 3, 13, 16, 17, and 18, comparing pairs of datasets containing the best/average/worst optimized costs recorded for HALSGWJA and for one competitor. As expected, HALSGWJA always achieved (i) positive signed-rank values with respect to En(L)SHADE, EnMODE, COLSHADE, MPSGO, SASS, and MAHA and (ii) negative signed-rank values with respect to the other competitors. However, since this Wilcoxon test operated on data pairs, each of which included only three values, another Wilcoxon signed-rank test was executed, taking the ensemble of best/average/worst optimized costs obtained in all nineteen CEC2020 problems. Hence, the new datasets included at most 19 × 3 = 57 values per pair of compared algorithms (i.e., the pairs formed by HALSGWJA and one competitor at a time). Since target optima significantly varied in magnitude, ranging from −3.07 × 104 to 2.965 × 106, the best/average/worst optimized costs recorded for each test problem were normalized with respect to the corresponding target solutions listed in Table 1. Again, HALSGWJA achieved positive signed-ranks only with respect to En(L)SHADE, EnMODE, COLSHADE, MPSGO, SASS, and MAHA. These results were fully consistent with the “B+A+W” ranking stated in Table 4. The detailed results of the Wilcoxon signed-rank tests carried out in this study are not shown in the article for the sake of brevity.
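The second Wilcoxon test described above can be sketched as follows, assuming the normalized best/average/worst costs of the two compared algorithms are collected into two arrays; the numerical values below are placeholders and do not reproduce the data measured in this study.

```python
import numpy as np
from scipy.stats import wilcoxon

# Normalized best/average/worst optimized costs (cost divided by the target optimum)
# collected over the CEC2020 problems for HALSGWJA and for one competitor.
# The numbers below are placeholders, not the values measured in this study.
halsgwja   = np.array([1.0000, 1.0001, 1.0003, 1.0000, 1.0020, 1.0000])
competitor = np.array([1.0000, 1.0004, 1.0010, 1.0010, 1.0040, 1.0000])

# Zero differences are discarded ("wilcox" rule); significance level alpha = 0.05.
stat, p_value = wilcoxon(halsgwja, competitor, zero_method="wilcox")
print(f"W = {stat:.3f}, p = {p_value:.4f}")
```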
HALSGWJA increased its rank to the fourth position in the “B+A+W+STD” score ranking, after En(L)SHADE, COLSHADE, and EnMODE, with 43 points vs. 44 to 60 points. However, while the data related to EnMODE and COLSHADE were obtained by combining optimization results reported in multiple studies, HALSGWJA’s performance evaluation relies only on the data gathered in the present study. Furthermore, the better performance of HALSGWJA’s competitors with respect to the “B+A+W” and “B+A+W+STD” scores was a direct consequence of their significantly higher computational budget. For example, in problem 2, En(L)SHADE, EnMODE, COLSHADE, MPSGO, SASS, and MAHA could complete their optimization runs within 200,000 analyses, while HALSGWJA completed all optimizations with only between 4695 and 6258 analyses. In order to gather further evidence of HALSGWJA’s computational efficiency, another cycle of numerical trials was performed by restarting the optimization runs that previously missed the 0.0322130 target solution from initial populations generated in the neighborhood of the suboptimal solutions. Remarkably, HALSGWJA always converged to the target optimum cost of 0.0322130 (thus also achieving zero standard deviation on optimized cost) within a total of 9000 analyses for all repaired cases, thus remaining well below the number of analyses required by En(L)SHADE, EnMODE (even considering the average number of 9948 analyses indicated in [69] for this algorithm), COLSHADE, MPSGO, SASS, and MAHA in their optimization runs. Hence, HALSGWJA should be considered as also ranking first for the average/worst costs and standard deviation on optimized cost; this would increase its score by 2 and 3 points, respectively, in the “B+A+W” and “B+A+W+STD” categories. Similarly, in problem 3, HALSGWJA completed all primary optimization runs with between 2103 and 4097 analyses, and it then repaired all suboptimal runs, converging to the target optimum cost of 0.012665 within a total of 6000 analyses, well below the computational cost of its competitors. Hence, HALSGWJA would again increase its “B+A+W” and “B+A+W+STD” scores by 2 and 3 points, respectively. By summing the above-mentioned improvements that could be achieved in problems 2 and 3, the “B+A+W” and “B+A+W+STD” scores of HALSGWJA would rise to 37 and 49, respectively. This would put the present algorithm in fifth place (after En(L)SHADE, EnMODE, COLSHADE, and MPSGO, but at the same level as SASS) and in third place (after En(L)SHADE and COLSHADE) overall in the “B+A+W” and “B+A+W+STD” categories, respectively. The total score, including the computational cost, would instead rise to 68 (or 66 considering Ref. [69]), thus outperforming En(L)SHADE by 8 (or 6) points.
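One possible way to implement the restart strategy described above is sketched below: the population is re-seeded in a small neighborhood of the suboptimal design and clipped to the side bounds. The relative neighborhood size and the function name are illustrative assumptions, not the settings actually used in this study.

```python
import numpy as np

def restart_population(x_subopt, lower, upper, n_pop, rel_radius=0.05, seed=None):
    """Re-seed a population in the neighborhood of a suboptimal design.

    rel_radius is the neighborhood half-width expressed as a fraction of each
    variable range; its value here is illustrative, not the setting of the study."""
    rng = np.random.default_rng(seed)
    lower, upper, x_subopt = map(np.asarray, (lower, upper, x_subopt))
    radius = rel_radius * (upper - lower)
    pop = x_subopt + rng.uniform(-radius, radius, size=(n_pop, x_subopt.size))
    return np.clip(pop, lower, upper)   # keep all individuals inside the side bounds
```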
Besides the ranking analysis described above, the classical ranking-based Friedman test was executed for each performance indicator considered in this study: best/average/worst optimized costs, standard deviation on optimized cost, and number of analyses. The results of the Friedman test are presented in Table 5, which shows the average ranks achieved by HALSGWJA and its competitors over the 19 CEC2020 problems for each performance indicator, as well as the corresponding cumulated average rank ΣAR (defined as the sum of average ranks over all performance indicators) and the mean cumulated average rank (defined as ΣAR/5). The table also shows the partially cumulated average rank for the best/average/worst optimized cost categories (indicated as “Partial Aver A+B+W” in the table). For the sake of brevity, Table 5 shows only the data relative to the seven best algorithms—HALSGWJA, EnMODE [54,65,68], COLSHADE [55,65,68], En(L)SHADE [56], SASS [53,59,68], MPSGO [67], and MAHA [68]—that emerged from Table 4. Ranks were obviously determined considering HALSGWJA and all of its 18 competitors, and increase from “1” (best algorithm) to “19” (worst algorithm). Average ranks were obtained by dividing the sum of ranks obtained in the test problems by 19. It should be noted that the ranking-based Friedman test is usually carried out to rank algorithms with respect to their best or average optimized solutions. However, since HALSGWJA and its competitors always converged to target optima or to solutions very close to target optima, it was decided to exhaustively compare the algorithms by investigating in detail all aspects related to their robustness and computational cost.
The results of the ranking-based Friedman test carried out in this study confirmed the mutual ranking of the algorithms stated in Table 4. As far as the best/average/worst optimized cost categories are concerned, the partially cumulated average rank increased from 6.526 for EnMODE and 6.788 for En(L)SHADE to 10.263 for HALSGWJA. However, the present algorithm obtained the second best average rank with respect to the standard deviation on optimized cost (i.e., 4.105 vs. 3.053 for En(L)SHADE, the best algorithm for this category) and was the very best optimizer in terms of computational speed, achieving an average rank of 1 (i.e., the fastest algorithm in all test problems) vs. 5.474 obtained by COLSHADE (the second best algorithm for this category). Consequently, HALSGWJA obtained the lowest value of the cumulated average rank ΣAR: only 15.368 vs. 18.158 for COLSHADE, 18.579 for EnMODE, 19.157 for En(L)SHADE, 21.894 for SASS, 23.508 for MPSGO, and 24.789 for MAHA. In summary, HALSGWJA was confirmed to be the best optimizer overall and outperformed CEC2020 winners like COLSHADE, EnMODE, and En(L)SHADE; its mean cumulated average rank was only 3.074 vs. the 3.632 to 3.831 mean cumulated ranks obtained by the other three algorithms. These four algorithms were superior to SASS, MPSGO, and MAHA, which ranked on average between 4.379 and 4.958.
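The ranking computations underlying Table 5 can be reproduced with a short script of the following kind, which evaluates per-problem ranks (with average resolution of ties, as in the classical Friedman test), the average rank of each algorithm, and the cumulated average rank ΣAR divided by the number of indicators. The score matrix shown is a placeholder, not the data of this study.

```python
import numpy as np
from scipy.stats import rankdata

def average_ranks(scores):
    """scores: (n_problems x n_algorithms) matrix of one performance indicator
    (lower is better). Returns the average rank of each algorithm over the problems,
    with ties resolved by average ranks as in the classical Friedman test."""
    ranks = np.vstack([rankdata(row, method="average") for row in scores])
    return ranks.mean(axis=0)

# Placeholder data: one indicator, 3 problems x 4 algorithms (not the study data).
best_costs = np.array([[1.00, 1.01, 1.02, 1.00],
                       [2.00, 2.00, 2.05, 2.10],
                       [0.50, 0.52, 0.50, 0.55]])
ar_best = average_ranks(best_costs)

# Cumulated average rank: sum the average ranks over the five indicators
# (best/average/worst cost, standard deviation, number of analyses) and divide by 5.
# The same matrix is reused for all indicators here purely for illustration.
sigma_ar = sum(average_ranks(best_costs) for _ in range(5))
mean_sigma_ar = sigma_ar / 5.0
```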

3.3. Results of the Car Side Impact Problem

The optimization results obtained in the car side impact problem by the HALSGWJA algorithm developed in this study were compared with those obtained by 25 other state-of-the-art metaheuristic optimizers. HALSGWJA’s competitors can be grouped in eight categories: (a) hybrid GWO-JAYA algorithms including elitist strategies such as the FHGWJA developed in [7]; (b) advanced GWO variants such as multi-strategy fusion improved gray wolf optimization (IGWO) [72], chaotic GWO, and adaptive dynamic self-learning grey wolf optimization algorithm (ASGWO) [73]; (c) advanced JAYA variants such as EJAYA [70]; (d) particle swarm optimization (PSO) [93] and comprehensive learning particle swarm optimizer (CLPSO) [94]; (e) efficient animal behavior-inspired methods such as cuckoo search optimization (CSO) [95], tunicate swarm algorithm (TSA) [96], starling murmuration optimizer (SMO) [76], improved continuous ant colony optimization algorithm (LIACOR) [97], krill herd algorithm (KH) [98], best variant of artificial bee colony (ABC) [99], whale optimization algorithm (WOA) [100], horned lizard optimizer algorithm (HLOA) [101], artificial lemming algorithm (ALA) [84], Harris hawks optimizer (HHO) [102], dwarf mongoose optimization (DMO) [103], starfish optimization algorithm (SFOA) [85], and firefly algorithm (FA) [87]; (f) efficient human behavior-inspired methods such as the attributes-based information-learning search optimization (ISO) algorithm [79] and the dream optimization algorithm (DOA) [83]; (g) efficient mathematics-inspired methods such as the exponential-trigonometric optimization algorithm (ETO) [81] and the arithmetic optimization algorithm (AOA) [104]; (h) high-performance optimizers such as the hybrid LSHADE-SPACMA algorithm [105], which combines history-based adaptive differential evolution with linear population size reduction and covariance matrix adaptation evolution strategy.
Table 6 reports the optimized designs and the corresponding structural weights obtained by HALSGWJA and its competitors. The solutions reported in the table were clustered according to the comparisons reported in the literature. In particular, IGWO, chaotic GWO, CSO, and TSA were compared in Ref. [72]; SMO, LIACOR, KH, Best ABC, CLPSO, and WOA were compared in Ref. [76]; DOA and HLOA were compared in Ref. [83]; ALA, HHO, AOA, DMO, and LSHADE-SPACMA were compared in Ref. [84]; and FA and PSO were compared in Ref. [87]. The (X,Y) notation used in the table indicates the study “X” that proposed a main competitor of HALSGWJA and the study “Y” from which the algorithm compared with the main algorithm developed in “X” was originally taken. Similarly to the presentation of the CEC2020 results, the data reported in Table 6 for HALSGWJA’s competitors are relative to the best parameter settings indicated in the literature for each algorithm.
It can be seen from Table 6 that HALSGWJA converged to the best solution overall, corresponding to the lowest weight of 21.37798 kg. This design was slightly better than the 21.38340 kg obtained by FHGWJA [7] and the 21.39473 kg obtained by IGWO [72]. It should be noted that while the optimized designs of HALSGWJA and FHGWJA actually converged to the discrete value of 0.345 indicated in the problem statement for variables x8 and x9, the corresponding optimal values found by IGWO and chaotic GWO did not coincide with the discrete values {0.192; 0.345} set for this test problem. Instead, the IGWO- and GWO-optimized designs became infeasible when x8 was set equal to 0.192 or 0.345, violating constraints G6, G7, G8, and G9 by 13.9 to 23.7%.
Chaotic GWO, CSO, and TSA also converged to very good structural weights, ranging between 21.46164 and 22.70296 kg, although their optimized solutions did not include the prescribed discrete values of variables x8 and x9. Most of HALSGWJA’s competitors obtained optimized weights between 22.84238 and 22.88605 kg, ranking as follows: ALA, HHO, EJAYA, SMO, FA, LIACOR, ISO, PSO, AOA, DMO, LSHADE-SPACMA, ASGWO, KH, and Best ABC. Again, ALA, HHO, EJAYA, AOA, DMO, LSHADE-SPACMA, and ASGWO could not converge to the prescribed discrete values of x8 and x9. The last six competitors obtained optimized weights between 23.06244 and 23.9682 kg, ranking as follows: CLPSO, WOA, ETO, SFOA, HLOA and DOA; ETO and DOA missed the target discrete values of x8 and x9.
The optimized weights obtained by the 26 optimizers compared in Table 6 varied by about 12%, from the lightest design overall (21.37798 kg, obtained by HALSGWJA) to the heaviest design overall (23.9682 kg, obtained by DOA). The high nonlinearity of this test problem and its particular formulation explain why the different algorithms found so many local optima. It should be noted that the cost function W( X ) stated by Equation (14) depends only on six optimization variables (i.e., x1, x2, x3, x4, x5, and x7), while only x10 appears in all constraint functions (see Equation (15)). The search space of the car side impact problem therefore includes many competitive solutions whose optimized values of the design variables may be significantly different. This is confirmed by the fact that, in spite of the only 12% variation in optimized cost amongst the various optimized solutions, the first nine variables listed in Table 6 for these solutions varied by up to 30.4%, while x10 and x11, respectively, varied by 1446.1% and 762.9% to satisfy all constraints. However, the optimization variables x1, x2, x3, x4, x5, and x7, included in the cost function W( X ), changed by 8.71%, 19.61%, 6.03%, 15.93%, 13.05%, and 5.88%, respectively, with an average dispersion of 11.54%. This was consistent with the 12% variation in optimized cost, considering that the cost function of the car side impact problem was fitted by a linear response surface. The inherent ability of HALSGWJA to generate high-quality trial designs that are always likely to improve the current best record allowed the present algorithm to avoid local minima.
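For illustration, the handling of the discrete values {0.192; 0.345} prescribed for x8 and x9 can be sketched as follows: a continuous trial value is projected onto the nearest admissible value before the constraints are checked. The constraint list, the tolerance, and the function names are placeholders; the sketch does not reproduce the constraint-handling scheme actually adopted by the compared algorithms.

```python
import numpy as np

DISCRETE_SET = np.array([0.192, 0.345])   # admissible values of x8 and x9

def snap_to_discrete(value):
    """Project a continuous trial value of x8 or x9 onto the nearest admissible value."""
    return DISCRETE_SET[np.argmin(np.abs(DISCRETE_SET - value))]

def is_feasible(x, constraints, tol=0.0):
    """Check inequality constraints G_i(x) <= 0 after snapping x8 and x9.

    `constraints` is a list of callables returning G_i(x); both the list and the
    tolerance are placeholders for the actual problem functions."""
    x = np.array(x, dtype=float)
    x[7], x[8] = snap_to_discrete(x[7]), snap_to_discrete(x[8])   # x8, x9 (0-based)
    return all(g(x) <= tol for g in constraints), x
```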
Table 7 compares the best, average, and worst optimized weights (indicated as “B–W”, “A–W”, and “W–W” in the table, respectively) and the standard deviation on optimized weight obtained by HALSGWJA and its competitors in the different runs, as well as the corresponding information on the number of structural analyses required in the optimization process (best, average, and worst numbers of analyses, respectively indicated as “B–NSA”, “A–NSA”, and “W–NSA” in the table, together with the associated standard deviation). Statistical data on optimized weight were available only for FHGWJA [7], EJAYA [70], ISO [79], DOA and HLOA, compared in Ref. [83], and SFOA [85], FA, and PSO, compared in [87]. No statistical data were available for the computational cost except for FHGWJA [7].
It can be seen from Table 7 that HALSGWJA always converged to the global optimum weight of 21.37798 kg in all optimization runs and completed the search process within the lowest number of structural analyses. The present algorithm required from 1240 to 1387 structural analyses to converge to the global optimum design, with an average computational cost of only 1291 analyses. FHGWJA [7] was the second fastest optimizer, completing its best optimization run within 1367 structural analyses, about 10% more than the 1240 analyses required by the present algorithm. Interestingly, HALSGWJA was on average slightly faster than FHGWJA (1291 vs. 1309 analyses), but while the present algorithm always converged to the global optimum structural weight of 21.37798 kg in all runs, the latter algorithm converged twice to its worst optimized structural weight of 21.4145 kg, completing these “worst” runs within 1110 and 1606 structural analyses, respectively. Such a large variation also explains why the average computational cost of FHGWJA (i.e., 1309 analyses) was lower than the computational cost recorded for the best optimization run of FHGWJA (i.e., 1367 analyses). As mentioned before, FHGWJA used elitist strategies and refined or repaired trial solutions, while HALSGWJA directly used line search augmented by gradient information to generate trial designs. The average gradients defined by HALSGWJA as ΔWp(i,k)/S(i,k) in perturbing each design X k stored in the population coincide with the actual gradients of a linear cost function if X k is feasible. This property became very useful in the case of the car side impact problem, where the objective function was linear.
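The idea of a line search driven by approximate gradient information can be illustrated by the following sketch, in which a forward finite difference along the direction towards the best record provides the slope used to select the descent side, and the step is then shortened until the best record improves. The probe step, backtracking rule, and parameter values are illustrative assumptions and do not correspond to the exact HALSGWJA operators.

```python
import numpy as np

def approx_line_search(x_k, x_best, cost, step0=1.0, shrink=0.5, max_backtracks=8):
    """Approximate-gradient line search along the direction joining x_k to the best record.

    A forward finite difference along the unit direction d gives the slope
    dW/dS ~ (W(x_k + S*d) - W(x_k)) / S; for a linear cost function this slope equals
    the exact directional derivative. The step is then shortened until the trial
    design improves on the best record (or the backtracking budget is exhausted)."""
    d = x_best - x_k
    norm = np.linalg.norm(d)
    if norm == 0.0:
        return x_best
    d = d / norm
    s = 1e-3 * max(norm, 1.0)                       # small probe step (illustrative)
    slope = (cost(x_k + s * d) - cost(x_k)) / s     # approximate directional slope
    direction = d if slope < 0.0 else -d            # move along the descent side
    step, w_best = step0, cost(x_best)
    for _ in range(max_backtracks):
        trial = x_k + step * direction
        if cost(trial) < w_best:
            return trial
        step *= shrink                               # backtrack and retry
    return x_best
```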
DOA and HLOA ranked third in computational speed, completing their optimizations within 2000 structural analyses, but obtained the worst designs overall, with structural weights between 23.6956 and 23.9682 kg (see Ref. [83] for details). All other algorithms required between 15,000 and 50,000 analyses to complete the optimization process.
Besides achieving 0 standard deviation on optimized cost, HALSGWJA was also very robust in terms of computational cost. In fact, the ratio between the standard deviation on the number of structural analyses and the corresponding average value required in the optimization runs was only 6.58%, in line with the best values obtained for the CEC2020 real-world mechanical engineering problems. The excellent convergence behavior exhibited by HALSGWJA is confirmed by Figure 4, which compares the optimization histories recorded for the proposed algorithm in the best, average, and worst runs with those of its competitors FHGWJA [7], ISO [79], and ETO [81]. Since ISO and ETO, respectively, required 15,000 (i.e., the product of NPOP = 30 by 500 iterations) and 30,000 (i.e., the product of NPOP = 30 by 1000 iterations) structural analyses to complete their optimizations, the corresponding numbers of analyses were scaled by 0.1 and 0.05 to fit in the plot of Figure 4; as in Figure 3, the notations “(scaled 0.1×)” and “(scaled 0.05×)” denote the scaled optimization histories of ISO and ETO.
The superiority of HALSGWJA over its competitors was evident in spite of the fact that the present algorithm started its optimizations from rather conservative designs. For example, the structural weight of the initial designs of HALSGWJA was about 1.5 times larger than its counterpart for ISO (i.e., about 40 kg vs. about 24 kg), but the present algorithm always recovered this gap within only 20 to 35 structural analyses, regardless of whether the best or worst optimization runs are considered. ETO started its optimization search from a best search agent weighing about 24 kg but then reduced the structural weight very slowly. Furthermore, HALSGWJA required, respectively, (i) less than 110 structural analyses to find a feasible intermediate design weighing about 21.6 kg, just 1% heavier than the global optimum weight of 21.37798 kg; (ii) only 17 structural analyses to find a feasible intermediate design weighing less than 23.9682 kg, the optimized weight of DOA; and (iii) only 18 structural analyses to find a feasible intermediate design weighing less than 23.6956 kg, the optimized weight of HLOA. The convergence curves of the best and average optimization runs of HALSGWJA practically coincided after only 1000 structural analyses, about 80% of the best run’s optimization history. Remarkably, the worst optimization run of HALSGWJA always generated better intermediate designs than their counterparts generated in the best optimization runs of FHGWJA, the only algorithm close enough to HALSGWJA in terms of convergence speed.
Similarly to what was observed in the case of CEC2020 real-world mechanical engineering problems, HALSGWJA outperformed its competitors in terms of its ability to converge to the global optimum in a very fast and robust way.

4. Conclusions

This study presented the novel hybrid metaheuristic algorithm HALSGWJA, which combines the grey wolf optimizer (GWO) and JAYA methods, enhanced by approximate line search. HALSGWJA utilizes line search augmented by approximate gradient information to generate trial designs that are always likely to be better than the current best record. This goal is reached at low computational cost because the generation and selection of trial designs rely on cost function evaluations rather than on the evaluation of optimization constraints.
HALSGWJA was successfully tested on 20 real-world mechanical engineering optimization problems: (i) 19 test cases included in the CEC2020 library of real-world mechanical engineering problems and (ii) an example of optimal crashworthiness design. The extensive comparisons with 18 other state-of-the-art metaheuristic optimizers in the CEC2020 problems and 25 other optimizers in the car side impact problem clearly demonstrated the superiority of HALSGWJA over its competitors. In fact, the present algorithm practically converged to target optima in all test problems (the largest deviation from target cost was only 0.0263% for problem 13 of the CEC2020 library) and very often achieved zero or nearly zero standard deviation on optimized cost. Furthermore, HALSGWJA always ranked first in terms of computational speed, requiring fewer analyses than its competitors and very often exhibiting a moderate dispersion on the required number of analyses. Statistical analysis based on the Wilcoxon signed-rank and ranking-based Friedman tests confirmed the efficiency of the proposed algorithm.
The results presented in this article support the conclusion that HALSGWJA is a powerful tool for real-world mechanical engineering optimization. Remarkably, this goal was achieved with a rather simple optimization formulation. The proposed algorithm does not present any theoretical limitation that may affect its search capability beyond the inherent limitation imposed by the No Free Lunch Theorem, according to which no metaheuristic algorithm can always outperform all other metaheuristic algorithms in all optimization problems. New investigations are currently evaluating the behavior of HALSGWJA in (i) large-scale problems with hundreds of variables and thousands of nonlinear constraints, (ii) discrete structural optimization problems with frequency constraints, and (iii) inverse problems such as the mechanical characterization of anisotropic hyperelastic materials. The preliminary results obtained in problem types (i), (ii), and (iii) also seem to confirm the computational efficiency of HALSGWJA in optimization problems characterized by very high levels of nonlinearity and non-convexity.

Author Contributions

Conceptualization, methodology, validation, C.F. and L.L.; formal analysis, C.F. and C.I.P.; software, investigation, data curation, C.F.; writing—original draft preparation, C.F.; writing—review and editing, L.L. and C.I.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. HALSGWJA’s Optimized Designs for CEC2020 Real-World Mechanical Engineering Problems

PROBLEM 1—Speed reducer design (7 optimization variables)
X o p t = {3.50000;0.70000;17;7.30000;7.71533;3.35054;5.28665}
PROBLEM 2—Industrial refrigeration system (14 optimization variables)
X o p t = {0.001;0.001;0.001;0.001;0.001;0.001;1.524;1.524;5;2;0.001;0.001;0.0072934;0.087556}
PROBLEM 3—Tension/compression spring design (Case 1) (3 optimization variables)
X o p t = {0.051689;0.35673;11.2883}
PROBLEM 4—Pressure vessel design (4 optimization variables)
X o p t = {0.81205;0.4375;42.0984;176.6372} Mixed variables problem variant;
X o p t = {0.77817;0.38465;40.31962;200} Continuous variables problem variant.
PROBLEM 5—Welded beam design (4 optimization variables)
X o p t = {0.19883;3.33736;9.19202;0.19883}
PROBLEM 6—Three-bar truss design (2 optimization variables)
X o p t = {0.78865;0.40824}
PROBLEM 7—Multiple disk clutch brake design (5 optimization variables)
X o p t = {70;90;1;995.4261;2}
PROBLEM 8—Planetary gear train design (9 optimization variables)
X o p t = {40;21;14;19;14;69;5;2.25;2.5}
PROBLEM 9—Step-cone pulley (5 optimization variables)
X o p t = {38.47631;53.10083;95.34589;158.33150;47.91321}
PROBLEM 10—Robot gripper (7 optimization variables)
X o p t = {150;149.88285;200;1.26622 × 10−13;150;100.94276;2.29741}
PROBLEM 11—Hydrostatic thrust bearing design (4 optimization variables)
X o p t = {5.95551;5.38872;5.3587 × 10−6;2.25664}
PROBLEM 12—Four-stage gearbox design (22 optimization variables)
X o p t = {7;7;7;26;9;7;9;31;8.255;3.175;5.715;3.175;50.8;50.8;63.5;101.6;25.4;88.9;101.6;76.2;101.6;76.2}
PROBLEM 13—Ten-bar truss design (10 optimization variables)
X o p t = {0.0035200;0.0014242;0.0035191;0.0015087;6.4500 × 10−5;0.00045689;0.0022838;0.0024685;0.0012649;0.0012079}
PROBLEM 14—Rolling element bearing (10 optimization variables)
X o p t = {131.2;18;5;0.6;0.6;0.4071;0.6427;0.3;0.0632;0.6}
PROBLEM 15—Gas transmission compressor design (4 optimization variables)
X o p t = {50;1.17828;24.59258;0.38835}
PROBLEM 16—Tension/compression spring design (Case 2) (3 optimization variables)
X o p t = {9;1.22311;0.283}
PROBLEM 17—Gear train design (4 optimization variables)
X o p t = {13;30;53;51}
PROBLEM 18—Himmelblau’s function (5 optimization variables)
X o p t = {78;33;29.99701;44.98974;36.77737}
PROBLEM 19—Topology optimization (30 optimization variables). The solution is reported with all significant digits allowed by the double precision arithmetic adopted in the computations, in order to demonstrate the convergence of the optimal topology towards the uniform density of 1.
X o p t = {0.999999999999973;0.999999999999979;0.999999999999999;0.999999999999957;1;
0.999999999999996;1;0.999999999999996;1;1;0.99999999999951;0.999999999999976;
0.99999999999991;0.999999999999986;1;0.999999999999998;0.999999999999999;1;1;1;
0.999999999999846;0.999999999999931;0.999999999999918;0.999999999999744;
0.999999999999993;1;1;1;1;1}

References

  1. Mendes Platt, G.; Yang, X.-S.; Silva Neto, A.J. Computational Intelligence, Optimization and Inverse Problems with Applications in Engineering; Springer: Cham, Switzerland, 2019. [Google Scholar]
  2. Elaziz, M.A.; Elsheikh, A.H.; Oliva, D.; Abualigah, L.; Lu, S.; Ewees, A. Advanced metaheuristic techniques for mechanical design problems: Review. Arch. Comput. Methods Eng. 2022, 29, 695–716. [Google Scholar] [CrossRef]
  3. Turgut, O.E.; Turgut, M.S.; Kırtepe, E. A systematic review of the emerging metaheuristic algorithms on solving complex optimization problems. Neural Comput. Appl. 2023, 35, 14275–14378. [Google Scholar] [CrossRef]
  4. Abualigah, L. Metaheuristic Optimization Algorithms—Optimizers, Analysis, and Applications, 1st ed.; Elsevier: Amsterdam, The Netherlands, 2024. [Google Scholar]
  5. Cuevas, E.; Chavarin-Fajardo, A.; Ascencio-Piña, C.; Garcia-De-Lira, S. Optimization Strategies: A Decade of Metaheuristic Algorithm Development; Springer: Berlin/Heidelberg, Germany, 2025. [Google Scholar]
  6. Furio, C.; Lamberti, L.; Pruncu, C. Mechanical and civil engineering optimization with a very simple hybrid GreyWolf—JAYA metaheuristic optimizer. Mathematics 2024, 12, 3464. [Google Scholar] [CrossRef]
  7. Furio, C.; Lamberti, L.; Pruncu, C. An efficient and fast hybrid GWO-JAYA algorithm for design optimization. Appl. Sci. 2024, 14, 9610. [Google Scholar] [CrossRef]
  8. Lee, K.S.; Geem, Z.W. A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Comput. Methods Appl. Mech. Eng. 2005, 194, 3902–3933. [Google Scholar] [CrossRef]
  9. Hwang, S.F.; He, R.S. Improving real-parameter genetic algorithm with simulated annealing for engineering problems. Adv. Eng. Softw. 2006, 37, 406–418. [Google Scholar] [CrossRef]
  10. Lamberti, L.; Pappalettere, C. Weight optimization of skeletal structures with multipoint simulated annealing. Comput. Model. Eng. Sci. 2007, 18, 183–221. [Google Scholar]
  11. Degertekin, S.O. Optimum design of steel frames using harmony search algorithm. Struct. Multidiscip. Optim. 2008, 36, 393–401. [Google Scholar] [CrossRef]
  12. Kaveh, A.; Talatahari, S. Particle swarm optimizer, ant colony strategy and harmony search scheme hybridized for optimization of truss structures. Comput. Struct. 2009, 87, 267–283. [Google Scholar] [CrossRef]
  13. Hasancebi, O.; Erdal, F.; Saka, M.P. Adaptive harmony search method for structural optimization. ASCE J. Struct. Eng. 2010, 136, 419–431. [Google Scholar] [CrossRef]
  14. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. CAD Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  15. Yang, X.-S.; Gandomi, A.H. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
  16. Kazemzadeh Azad, S.; Hasancebi, O.; Kazemzadeh Azad, S.; Erol, O.K. Upper bound strategy in optimum design of truss structures: A big bang-big crunch algorithm based application. Adv. Struct. Eng. 2013, 16, 1035–1046. [Google Scholar] [CrossRef]
  17. Liu, Z.; Atamturktur, S.; Juang, C.H. Reliability based multi-objective robust design optimization of steel moment resisting frame considering spatial variability of connection parameters. Eng. Struct. 2014, 76, 393–403. [Google Scholar] [CrossRef]
  18. Kaveh, A.; Bakhshpoori, T.; Azimi, M. Seismic optimal design of 3D steel frames using cuckoo search algorithm. Struct. Des. Tall Spec. Build. 2015, 24, 210–217. [Google Scholar] [CrossRef]
  19. Kaveh, A.; Bakhshpoori, T. A new metaheuristic for continuous structural optimization: Water evaporation optimization. Struct. Multidiscip. Optim. 2016, 54, 23–43. [Google Scholar] [CrossRef]
  20. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp swarm algorithm: A bioinspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  21. Farshchin, M.; Maniat, M.; Camp, C.V.; Pezeshk, S. School-based optimization algorithm for design of steel frames. Eng. Struct. 2018, 171, 326–335. [Google Scholar] [CrossRef]
  22. Keshtegar, B.; Hao, P.; Wang, Y.; Hu, Q. An adaptive response surface method and Gaussian global-best harmony search algorithm for optimization of aircraft stiffened panels. Appl. Soft Comput. 2018, 66, 196–207. [Google Scholar] [CrossRef]
  23. Jafari, S.; Nikolaidis, T. Meta-heuristic global optimization for aircraft engines modelling and controller design; A review, research challenges and exploring the future. Prog. Aerosp. Sci. 2019, 104, 40–53. [Google Scholar] [CrossRef]
  24. Kaveh, A.; Biabani Hamedani, K.; Milad Hosseini, S.; Bakhshpoori, T. Optimal design of planar steel frame structures utilizing meta-heuristic optimization algorithms. Structures 2020, 25, 335–346. [Google Scholar] [CrossRef]
  25. Korkmaz, F.F.; Subran, M.; Yildiz, A.R. Optimal design of aerospace structures using recent meta-heuristic algorithms. Mater. Test. 2021, 11, 1025–1031. [Google Scholar] [CrossRef]
  26. Ficarella, E.; Lamberti, L.; Degertekin, S.O. Comparison of three novel hybrid metaheuristic algorithms for structural optimization problems. Comput. Struct. 2021, 244, 106395. [Google Scholar] [CrossRef]
  27. Gupta, S.; Abderazek, H.; Yildiz, B.S.; Yildiz, A.R.; Mirjalili, S.; Said, S.M. Comparison of metaheuristic algorithms for solving constrained mechanical design optimization problems. Exp. Syst. Appl. 2021, 183, 115351. [Google Scholar] [CrossRef]
  28. Abualigah, L.; Elaziz, M.A.; Khasawneh, A.M.; Alshinwan, M.; Ibrahim, R.A.; Al-ganess, M.A.A.; Mirjalili, S.; Sumari, P.; Gandomi, A.H. Meta-heuristic optimization algorithms for solving real-word mechanical engineering design problems: A comprehensive survey, applications, comparative analysis, and results. Neural Comput. Appl. 2022, 34, 4081–4110. [Google Scholar] [CrossRef]
  29. Ozturk, H.T.; Kahraman, H.T. Meta-heuristic search algorithms in truss optimization: Research on stability and complexity analyses. Appl. Soft Comp. 2023, 145, 110573. [Google Scholar] [CrossRef]
  30. Pham, V.H.S.; Nguyen Dang, N.T.; Nguyen, V.N. Efficient truss design: A hybrid geometric mean optimizer for better performance. Appl. Comput. Intell. Soft Comput. 2024, 2024, 4216718. [Google Scholar] [CrossRef]
  31. Houssein, E.H.; Gafar, M.H.A.; Fawzy, N.; Sayed, A.Y. Recent metaheuristic algorithms for solving some civil engineering optimization problems. Sci. Rep. 2025, 15, 7929. [Google Scholar] [CrossRef] [PubMed]
  32. Wang, J.; Ouyang, H.; Li, S.; Ding, W.; Gao, L. Equilibrium optimizer-based harmony search algorithm with nonlinear dynamic domains and its application to real-world optimization problems. Artif. Intell. Rev. 2024, 57, 166. [Google Scholar] [CrossRef]
  33. Akcay, O.; Ilkilic, C. Light-weight design of aerospace components using genetic algorithm and dandelion optimization algorithm. Int. J. Aeronaut. Space Sci. 2025, 26, 2898–2909. [Google Scholar] [CrossRef]
  34. Phuekpan, K.; Khammee, R.; Panagant, N.; Bureerat, S.; Pholdee, N.; Wansasueb, K. A comparison of modern metaheuristics for multi-objective optimization of transonic aeroelasticity in a tow-steered composite wing. Aerospace 2025, 12, 101. [Google Scholar] [CrossRef]
  35. Hossain, M.S.; Chao, O.Z.; Ismail, Z.; Noroozi, S.; Khoo, S.Y. Artificial neural networks for vibration based inverse parametric identifications: A review. Appl. Soft Comput. 2017, 52, 203–219. [Google Scholar] [CrossRef]
  36. Ficarella, E.; Lamberti, L.; Degertekin, S.O. Mechanical identification of materials and structures with optical methods and metaheuristic optimization. Materials 2019, 12, 2133. [Google Scholar] [CrossRef]
  37. Cuevas, E.; Gálvez, J.; Avalos, O. Recent Metaheuristics Algorithms for Parameter Identification; Springer: Cham, Switzerland, 2020. [Google Scholar]
  38. Hassan, H.; Tallman, T.Y. A comparison of metaheuristic algorithms for solving the piezoresistive inverse problem in self-sensing materials. IEEE Sens. J. 2021, 21, 659–666. [Google Scholar] [CrossRef]
  39. Di Lecce, M.; Onaizah, O.; Lloyd, P.; Chandler, J.H.; Valdastri, P. Evolutionary inverse material identification: Bespoke characterization of soft materials using a metaheuristic algorithm. Front. Robot. AI 2022, 8, 790571. [Google Scholar] [CrossRef] [PubMed]
  40. Zhai, H.; Hao, H.; Yeo, J. Benchmarking inverse optimization algorithms for materials design. APL Mater. 2024, 12, 021107. [Google Scholar] [CrossRef]
  41. Oliveira, M.A.; Inman, D.J. Performance analysis of simplified fuzzy ARTMAP and probabilistic neural networks for identifying structural damage growth. Appl. Soft Comput. 2017, 52, 53–63. [Google Scholar] [CrossRef]
  42. Li, H.; Zhang, Q.; Qin, X.; Yuantao, S. Raw vibration signal pattern recognition with automatic hyper-parameter-optimized convolutional neural network for bearing fault diagnosis. Proc. Inst. Mech. Eng. C. J. Mech. Eng. Sci. 2019, 234, 343–360. [Google Scholar] [CrossRef]
  43. da Silva Lopes Alexandrino, S.; Ferreira Gomes, G.; Simões Cunha Jr, S. A robust optimization for damage detection using multiobjective genetic algorithm, neural network and fuzzy decision making. Inverse Probl. Sci. Eng. 2020, 28, 21–46. [Google Scholar] [CrossRef]
  44. Guo, J.; Liu, C.; Cao, J.; Jiang, D. Damage identification of wind turbine blades with deep convolutional neural networks. Renew. Energy 2021, 174, 122–133. [Google Scholar] [CrossRef]
  45. Benaissa, B.; Hocine, N.A.; Khatir, S.; Riahi, M.K.; Mirjalili, S. YUKI algorithm and POD-RBF for elastostatic and dynamic crack identification. J. Comput. Sci. 2021, 55, 101451. [Google Scholar] [CrossRef]
  46. Hassani, S.; Dackermann, U. Optimization-based damage detection in composite structures using incomplete measurements. Structures 2023, 56, 104825. [Google Scholar] [CrossRef]
  47. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  48. Sharma, I.; Kumar, V.; Sharma, S. A comprehensive survey on grey wolf optimization. Recent Adv. Comput. Sci. Commun. 2022, 15, 323–333. [Google Scholar] [CrossRef]
  49. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Zamani, H.; Bahreininejad, A. GGWO: Gaze cues learning-based grey wolf optimizer and its applications for solving engineering problems. J. Comput. Sci. 2022, 61, 101636. [Google Scholar] [CrossRef]
  50. Tsai, H.-C.; Shi, J.-Y. Potential corrections to grey wolf optimizer. Appl. Soft Comp. 2024, 161, 111776. [Google Scholar] [CrossRef]
  51. Rao, R.V. Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems. Int. J. Ind. Eng. Comput. 2016, 7, 19–34. [Google Scholar] [CrossRef]
  52. Kumar, A.; Wu, G.; Ali, M.Z.; Mallipeddi, R.; Suganthan, P.N.; Das, S. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 2020, 56, 100693. [Google Scholar] [CrossRef]
  53. Kumar, A.; Das, S.; Zelinka, I. A self-adaptive spherical search algorithm for real-world constrained optimization problems. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, Cancún, Mexico, 8–12 July 2020; pp. 13–14. [Google Scholar]
  54. Sallam, K.M.; Elsayed, S.M.; Chakrabortty, R.K.M.; Ryan, M.J. Multi-operator differential evolution algorithm for solving real-world constrained optimization problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar]
  55. Gurrola-Ramos, J.; Hernàndez-Aguirre, A.; Dalmau-Cedeño, O. COLSHADE for real-world single-objective constrained optimization problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar]
  56. Tang, H.; Lee, J. Adaptive initialization LSHADE algorithm enhanced with gradient-based repair for real-world constrained optimization. Knowl.-Based Syst. 2022, 246, 108696. [Google Scholar] [CrossRef]
  57. Yuan, Z.; Peng, L.; Dai, G.; Wang, M.; Zhang, W.; Zhou, Q.; Zheng, W. An improved multi-operator differential evolution via a knowledge-guided information sharing strategy for global optimization. Exp. Syst. Appl. 2025, 269, 126403. [Google Scholar] [CrossRef]
  58. Lee, J.; Mendoza, R.; Mendoza, V.M.P.; Jung, E. Differential evolution with spherical search algorithm for nonlinear engineering and infectious disease optimization problems. Appl. Soft Comp. 2025, 168, 112446. [Google Scholar] [CrossRef]
  59. Sharma, D.; Jabeen, S.D. Hybridizing interval method with a heuristic for solving real-world constrained engineering optimization problems. Structures 2023, 56, 104993. [Google Scholar] [CrossRef]
  60. Hu, G.; Guo, Y.; Zhong, J.; Wei, G. IYDSE: Ameliorated Young’s double-slit experiment optimizer for applied mechanics and engineering. Comput. Methods Appl. Mech. Eng. 2023, 412, 116062. [Google Scholar] [CrossRef]
  61. Hu, G.; Gong, C.; Li, X.; Xu, Z. CGKOA: An enhanced Kepler optimization algorithm for multi-domain optimization problems. Comput. Methods Appl. Mech. Eng. 2024, 425, 116964. [Google Scholar] [CrossRef]
  62. Shu, B.; Hu, G.; Cheng, M.; Zhang, C. MSFPSO: Multi-algorithm integrated particle swarm optimization with novel strategies for solving complex engineering design problems. Comput. Methods Appl. Mech. Eng. 2025, 437, 117791. [Google Scholar] [CrossRef]
  63. You, G.; Hu, Y.; Yang, Z.; Li, Y. Enhanced snow ablation optimizer using dynamic tangential flight and elite guidance strategy. Sci. Rep. 2025, 15, 10036. [Google Scholar] [CrossRef]
  64. Fu, S.; Huang, H.; Ma, C.; Wei, J.; Li, Y.; Fu, Y. Improved dwarf mongoose optimization algorithm using novel nonlinear control and exploration strategies. Exp. Syst. Appl. 2023, 233, 120904. [Google Scholar] [CrossRef]
  65. Yang, Q.; Liu, J.; Wu, Z.; He, S. A fusion algorithm based on whale and grey wolf optimization algorithm for solving real-world optimization problems. Appl. Soft Comp. 2023, 146, 110701. [Google Scholar] [CrossRef]
  66. Wang, Z.; Dai, D.; Zeng, Z.; He, D.; Chan, S. Multi-strategy enhanced Grey Wolf Optimizer for global optimization and real world problems. Clust. Comput. 2024, 27, 10671–10715. [Google Scholar] [CrossRef]
  67. Naik, A. Marine predators social group optimization: A hybrid approach. Evol. Intell. 2024, 17, 2355–2386. [Google Scholar] [CrossRef]
  68. Kaur, S.; Kaur, L.; Lal, M. Artificial hummingbird algorithm with chaotic-opposition-based population initialization for solving real-world problems. Neural Comput. Appl. 2024, in press. [Google Scholar] [CrossRef]
  69. Zhang, C.; Li, H.; Long, S.; Yue, X.; Ouyang, H.; Chen, Z.; Li, S. Piranha predation optimization algorithm (PPOA) for global optimization and engineering design problems. Appl. Soft Comput. 2024, 165, 112085. [Google Scholar] [CrossRef]
  70. Zhang, Y.; Chi, A.; Mirjalili, S. Enhanced Jaya algorithm: A simple but efficient optimization method for constrained engineering design problems. Knowl.-Based Syst. 2021, 233, 107555. [Google Scholar] [CrossRef]
  71. Ankrah, J.C.; Boafo Effah, F.; Twumasi, E. An enhanced semisteady-state Jaya algorithm with a control coefficient and a self-adaptive multipopulation strategy. J. Electr. Comput. Eng. 2025, 1, 3036909. [Google Scholar] [CrossRef]
  72. Qiu, Y.; Yang, X.; Chen, S. An improved gray wolf optimization algorithm solving to functional optimization and engineering design problems. Sci. Rep. 2024, 14, 14190. [Google Scholar] [CrossRef] [PubMed]
  73. Zhang, Y.; Cai, Y. Adaptive dynamic self-learning grey wolf optimization algorithm for solving global optimization problems and engineering problems. Math. Biosci. Eng. 2024, 21, 3910–3943. [Google Scholar] [CrossRef]
  74. Abdel-Basset, M.; Mohamed, R.; Azeem, S.A.A.; Jameel, M.; Abouhawwash, M. Kepler optimization algorithm: A new metaheuristic algorithm inspired by Kepler’s laws of planetary motion. Knowl.-Based Syst. 2023, 268, 110454. [Google Scholar] [CrossRef]
  75. Azizi, M.; Aickelin, U.; Khorshidi, H.A.; Shishehgarkhaneh, M.B. Energy valley optimizer: A novel metaheuristic algorithm for global and engineering optimization. Sci. Rep. 2023, 13, 226. [Google Scholar] [CrossRef]
  76. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng. 2022, 392, 114616. [Google Scholar] [CrossRef]
  77. Li, K.; Huang, H.; Fu, S.; Ma, C.; Fan, Q.; Zhu, Y. A multi-strategy enhanced northern goshawk optimization algorithm for global optimization and engineering design problems. Comput. Methods Appl. Mech. Eng. 2023, 415, 116199. [Google Scholar] [CrossRef]
  78. Zhao, Y.; Huang, C.; Zhang, M.; Lv, C. COLMA: A chaos-based mayfly algorithm with opposition-based learning and Levy flight for numerical optimization and engineering design. J. Supercomput. 2023, 79, 19699–19745. [Google Scholar] [CrossRef]
  79. Wang, K.; Guo, M.; Dai, C.; Li, Z. A novel heuristic algorithm for solving engineering optimization and real-world problems: People identity attributes-based information-learning search optimization. Comput. Methods Appl. Mech. Eng. 2023, 416, 116307. [Google Scholar] [CrossRef]
  80. Wu, L.; Wu, J.; Wang, T. The improved grasshopper optimization algorithm with Cauchy mutation strategy and random weight operator for solving optimization problems. Evol. Intell. 2024, 17, 1751–1781. [Google Scholar] [CrossRef]
  81. Luan, T.-M.; Khatir, S.; Tran, M.-T.; De Baets, B.; Cuong-Le, T. Exponential-trigonometric optimization algorithm for solving complicated engineering problems. Comput. Methods Appl. Mech. Eng. 2024, 432, 117411. [Google Scholar] [CrossRef]
  82. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123. [Google Scholar] [CrossRef]
  83. Lang, Y.; Gao, Y. Dream Optimization Algorithm (DOA): A novel metaheuristic optimization algorithm inspired by human dreams and its applications to real-world engineering problems. Comput. Methods Appl. Mech. Eng. 2025, 436, 117718. [Google Scholar] [CrossRef]
  84. Xiao, Y.; Cui, H.; Khurma, R.A.; Castillo, P.A. Artificial lemming algorithm: A novel bionic meta-heuristic technique for solving real-world engineering optimization problems. Artif. Intell. Rev. 2025, 58, 84. [Google Scholar] [CrossRef]
  85. Zhong, C.; Li, G.; Meng, Z.; Li, H.; Yildiz, A.R.; Mirjalili, S. Starfish optimization algorithm (SFOA): A bio-inspired metaheuristic algorithm for global optimization compared with 100 optimizers. Neural Comput. Appl. 2025, 37, 3641–3683. [Google Scholar] [CrossRef]
  86. Cai, X.; Wang, W.; Wang, Y. Multi-strategy enterprise development optimizer for numerical optimization and constrained problems. Sci. Rep. 2025, 15, 10538. [Google Scholar] [CrossRef]
  87. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Mixed variable structural optimization using Firefly Algorithm. Comput. Struct. 2011, 89, 2325–2336. [Google Scholar] [CrossRef]
  88. Youn, B.D.; Choi, K.K. A new response surface methodology for reliability-based design optimization. Comput. Struct. 2004, 82, 241–256. [Google Scholar] [CrossRef]
89. Gu, L.; Yang, R.; Tho, C.-H.; Makowski, M.; Faruque, O.; Li, Y. Optimisation and robustness for crashworthiness of side impact. Int. J. Veh. Des. 2001, 26, 348–360. [Google Scholar] [CrossRef]
  90. Youn, B.D.; Choi, K.; Yang, R.-J.; Gu, L. Reliability-based design optimization for crashworthiness of vehicle side impact. Struct. Multidiscip. Optim. 2004, 26, 272–283. [Google Scholar] [CrossRef]
  91. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338. [Google Scholar] [CrossRef]
  92. Coello Coello, C.A. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Comput. Methods Appl. Mech. Eng. 2002, 191, 1245–1287. [Google Scholar] [CrossRef]
  93. Eberhart, R.C.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995. [Google Scholar]
  94. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
95. Yang, X.S.; Deb, S. Engineering optimisation by cuckoo search. Int. J. Math. Model. Numer. Optim. 2010, 1, 330–343. [Google Scholar] [CrossRef]
  96. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  97. Omran, M.G.; Al-Sharhan, S. Improved continuous Ant Colony Optimization algorithms for real-world engineering optimization problems. Eng. Appl. Artif. Intell. 2019, 85, 818–829. [Google Scholar] [CrossRef]
  98. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4831–4845. [Google Scholar] [CrossRef]
  99. Banharnsakun, A.; Achalakul, T.; Sirinaovakul, B. The best-so-far selection in artificial bee colony algorithm. Appl. Soft Comput. 2011, 11, 2888–2901. [Google Scholar] [CrossRef]
  100. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  101. Peraza-Vazquez, H.; Pena-Delgado, A.; Merino-Trevino, M.; Morales-Cepeda, A.B.; Sinha, N. A novel metaheuristic inspired by horned lizard defense tactics. Artif. Intell. Rev. 2024, 57, 59. [Google Scholar] [CrossRef]
  102. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  103. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Dwarf mongoose optimization algorithm. Comput. Methods Appl. Mech. Eng. 2022, 391, 114570. [Google Scholar] [CrossRef]
  104. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
105. Mohamed, A.W.; Hadi, A.A.; Fattouh, A.M.; Jambi, K.M. LSHADE with semi-parameter adaptation hybrid with CMA-ES for solving CEC 2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia-San Sebastián, Spain, 5–8 June 2017; pp. 145–152. [Google Scholar]
Figure 1. Flowchart of the HALSGWJA algorithm developed in this research.
Figure 2. Schematic of vehicle parts (adapted from [7]) optimized in the car side impact problem.
Figure 3. Convergence curves recorded for HALSGWJA and its competitors (SHGWJA [6], COLSHADE [55,65,68], En(L)SHADE [56], DESS [58], SASS [53,59,68], KOA/GKOA [57,61,74], MPSGO [67], ISO [79]) in the CEC2020 real-world mechanical engineering problems. Each sub-illustration included in (a–t) refers to a test case of the CEC2020 library.
Figure 4. Convergence curves recorded for HALSGWJA and its competitors (FHGWJA [7], ISO [79], ETO [81]) in the car side impact problem.
Table 1. Description of the CEC2020 real-world mechanical engineering test problems.
Problem | Description | Objective | NDV | G | H | Optimum
1 | Speed reducer | Minimize weight | 7 | 11 | 0 | 2.9944244658 × 10³
2 | Industrial refrigeration system design | Min. fabrication/operation costs | 14 | 15 | 0 | 3.2213000814 × 10⁻²
3 | Tension/compression spring design (1) | Minimize weight | 3 | 3 | 0 | 1.2665232788 × 10⁻²
4 | Pressure vessel design | Minimize fabrication cost | 4 | 4 | 0 | 5.8853327736 × 10³
5 | Welded beam design | Minimize fabrication cost | 4 | 5 | 0 | 1.6702177263 × 10⁰
6 | Three-bar truss design | Minimize weight | 2 | 3 | 0 | 2.6389584338 × 10²
7 | Multiple disk clutch brake design | Minimize mass | 5 | 7 | 0 | 2.3524245790 × 10⁻¹
8 | Planetary gear train design optimization | Minimize max errors in gear ratios | 9 | 10 | 1 | 5.2576870748 × 10⁻¹
9 | Step-cone pulley problem | Minimize weight | 5 | 8 | 3 | 1.6069868725 × 10¹
10 | Robot gripper problem | Minimize difference max/min forces | 7 | 7 | 0 | 2.5287918415 × 10⁰
11 | Hydrostatic thrust bearing design | Minimize bearing power loss | 4 | 7 | 0 | 1.6161197651 × 10³
12 | Four-stage gearbox problem | Minimize weight | 22 | 86 | 0 | 3.5359231973 × 10¹
13 | Ten-bar truss design | Minimize weight | 10 | 3 | 0 | 5.2445076066 × 10²
14 | Rolling element bearing | Maximize load carrying capacity | 10 | 9 | 0 | 1.4614135715 × 10⁴
15 | Gas transmission compressor design | Minimize cost of gas delivery | 4 | 1 | 0 | 2.9648954173 × 10⁶
16 | Tension/compression spring design (2) | Minimize steel wire volume | 3 | 8 | 0 | 2.6138840583 × 10⁰
17 | Gear train design | Match target total gear ratio | 4 | 1 | 1 | 0.0000000000 × 10⁰
18 | Himmelblau's function | Minimize quadratic model | 5 | 6 | 0 | −3.0665538672 × 10⁴
19 | Topology optimization | Material layout for min compliance | 30 | 30 | 0 | 2.6393464970 × 10⁰
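The Optimum column of Table 1 lists the reference target values against which the optimized costs reported in the following tables can be judged. As a simple illustration of how a relative deviation from these targets can be obtained, the short Python sketch below (not part of the original study; the problem and values are taken purely as an example) evaluates the percentage gap between HALSGWJA's best cost for the ten-bar truss (problem 13, Table 2) and the target optimum listed above.

# Minimal sketch (not from the original study): percentage deviation of an
# optimized cost from the target optimum listed in Table 1, illustrated with
# the ten-bar truss (problem 13) using HALSGWJA's best cost from Table 2.
def percent_deviation(found, target):
    return 100.0 * (found - target) / abs(target)

target_p13 = 5.2445076066e2   # Table 1, problem 13 (ten-bar truss)
best_p13 = 524.5888           # Table 2, HALSGWJA best cost for problem 13
print(round(percent_deviation(best_p13, target_p13), 4))   # ≈ 0.0263 (%)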
Table 2. Comparison of optimization results obtained by HALSGWJA and its competitors in the CEC2020 mechanical engineering problems.
Problem | HALSGWJA Present | EnMODE [54,65,68] [69] | COLSHADE [55,65,68] [69] | En(L)SHADE [56] | IMODE-KG [57] | DESS [58] | SASS [53,59,68] [69] | SDDS-SABC [59] | IYDSE [57,60]
1
Speed reducer
(B) 2994.4252994.42994.42.99 × 1032994.4252.89 × 1032994.4252963.911 12.99 × 103
(A) 2994.4252994.42994.42.99 × 1032994.4252.99 × 1032994.4252975.081 12.99 × 103
(W) 2994.4252994.42994.42.99 × 1032994.425N/A2994.4252982.083 13.10 × 103
(STD) 04.6412 × 10−134.5475 × 10−1301.1003 × 10−7N/A4.641 × 10−131.326910.216
(B) 2326
(A/D) 2538 ± 116100,000100,000100,00010,000100,000100,00020,00030,000
(W) 2728{2277}{12,930} {51,430}
2
Industrial
refrigeration system
0.03221300.0322130.0322133.22 × 10−2N/A3.22 × 10−20.0322130.0321969 13.29 × 10−2
0.03221320.0322130.0322133.22 × 10−2N/A3.22 × 10−20.0322130.03341434.99 × 10−2
0.03221600.0322130.0322133.22 × 10−2N/AN/A0.0322130.03354630.115
6.7082 × 10−73.1672 × 10−1800N/AN/A1.416 × 10−170.264390.0233
4695
5478 ± 1003200,000200,000200,000N/A200,000200,00020,00030,000
6258{9948}{37,190} {174,700}
3
Tension/compression spring (Case 1)
0.0126650.0126650.0126651.27 × 10−20.01267421.27 × 10−20.0126650.0112423 11.27 × 10−2
0.0126680.0127100.0126651.27 × 10−20.01327061.27 × 10−20.0126650.0121864 11.27 × 10−2
0.0126700.0127190.0126651.27 × 10−20.0149735N/A
N/A
0.0126650.01367561.27 × 10−2
2.1152 × 10−62.0138 × 10−51.0625 × 10−704.7759 × 10−401.589 × 10−31.34 × 10−7
2103
3316 ± 1137100,000100,000100,00010,000100,000100,00020,00030,000
4097{47,560}{14,630} {43,890}
4
Pressure vessel
6059.714 (5885.331)6059.76059.76.06 × 1036059.7146.06 × 1036059.7145979.9856103.368
6059.714
(5885.331)
6059.76062.26.06 × 1036256.7376.06 × 1036059.7145981.0446385.287
6059.714
(5885.331)
6059.76090.56.09 × 1036410.087N/A6059.7145985.0266810.762
0 (0)9.2825 × 10−138.359111.5155.130N/A3.713 × 10−122.96641185.361
3852 (3510)
5104 ± 1374
(4672 ± 1203)
100,000100,000100,00010,000100,000100,00020,00010,000
6193 (5713){N/A}{N/A} {N/A}
5
Welded beam
1.6702181.67021.67021.671.670221.670221.6702181.452753 11.67
1.6702181.67021.67021.671.670221.670221.6702181.580213 11.67
1.6702181.67021.67021.671.67022N/A1.6702181.6900361.67
00001.2677 × 10−9N/A2.266 × 10−160.968521.16 × 10−5
2975
3224 ± 285100,000100,000100,00010,000100,000100,00020,00030,000
3691{22,930}{16,060} {67,660}
6
Three-bar truss
263.8915263.90263.902.64 × 102N/A2.64 × 102263.8958263.6053 12.64 × 102
263.8915263.90263.902.64 × 102N/A2.64 × 102263.8958263.6724 12.64 × 102
263.8915263.90263.902.64 × 102N/AN/A
N/A
263.8958264.21982.64 × 102
0000N/A5.802 × 10−141.275 × 10−34.12 × 10−14
705
961 ± 168100,000100,000100,000N/A100,000100,00020,00030,000
1224{8581}{5594} {12,760}
7
Multiple disk clutch brake
0.2352430.235240.235240.2350.235240.2350.2352420.218881 10.235
0.2352430.235240.235240.2350.235240.2350.2352420.2364670.235
0.2352430.235240.235240.2350.23524N/A
N/A
0.2352420.2519750.235
01.1331 × 10−16001.4021 × 10−162.833 × 10−172.656 × 10−23.37 × 10−11
792
900 ± 85100,000100,000100,00010,000100,000100,00020,00030,000
1193{13,040}{4858} {12,910}
8
Planetary gear train
0.5232500.525770.525770.5260.5265810.5260.5259670.5232740.526
0.5257140.526910.541030.6330.5586750.5321.0015240.5321760.528
0.5291880.531210.746671.3000.674039N/A
N/A
3.5216560.5387560.537
1.2440 × 10−31.4402 × 10−30.0425730.2310.0365390.7251845.416 × 10−42.81 × 10−3
2095
3260 ± 1039100,000100,000100,00010,000100,000100,00020,00030,000
4364{N/A}{N/A} {N/A}
9
Step-cone
Pulley
16.0360116.07016.07016.1N/A16.116.0698712.30213 116.1
16.0403316.07016.07016.1N/A16.116.0698715.96217 116.1
16.0452916.07016.07016.1N/AN/A
N/A
16.0698717.1074616.1
4.6437 × 10−33.3335 × 10−1400N/A3.626 × 10−152.017844.30 × 10−6
4278
4424 ± 322100,000100,000100,000N/A100,000100,00020,00030,000
5000{38,090}{14,090} {4826}
10
Robot gripper
2.54378522.54382.54382.54N/A2.542.5437862.6731692.70
2.54378542.54382.54382.54N/A2.542.5437862.6789533.14
2.54378572.54382.54382.54N/AN/A2.5437862.6810853.47
2.8263 × 10−71.3501 × 10−1200N/AN/A00.2643870.218
3988
4645 ± 787100,000100,000100,000N/A100,000100,00020,00030,000
5302{N/A}{N/A} {N/A}
11
Hydrostatic thrust bearing
1616.1201616.11616.11.62 × 1031783.9541.62 × 1031616.1201839.6412483.159
1616.1201616.11639.01.62 × 1031918.3891.62 × 1031616.1201841.9073201.317
1616.1201616.12129.11.62 × 1032242.035N/A1616.1231845.0634174.048
01.7751 × 10−111007.30115.871N/A9.425 × 10−41.42788469.274
7400
8624 ± 724100,000100,000100,00010,000100,000100,00020,00010,000
9527{43,930}{66,620} {64,410}
12
Four-stage
gearbox
35.2926935.35935.35936.2N/A36.236.250443.1429514.92
35.4101235.72836.61139.6N/A39.538.514144.0125933.92
35.6514137.26840.93144.1N/AN/A45.506445.3639767.9
0.16870.599111.36772.0N/AN/A2.114820.4972514.9
591
1021 ± 318200,000200,000200,000N/A200,000200,00020,00030,000
1531{139,000}{111,700} {61,630}
13
Ten-bar truss
524.5888524.45524.455.24 × 102524.4805.24 × 102524.4576522.9644 1535.9481
524.6004524.45524.455.24 × 102525.0755.24 × 102524.4692523.0724 1545.6267
524.6111524.45524.455.24 × 102528.814N/A524.4820525.0326559.1333
0.01135193.7602 × 10−702.01 × 10−90.96481N/A6.620 × 10−33.572 × 10−25.8317
1911
3321 ± 1092100,000100,000100,00010,000100,000100,00020,00010,000
4801{27,750}{19,660} {N/A}
14
Rolling element bearing
16,958.20216,95816,9581.70 × 10416,958.2021.44 × 10414,614.13614,811.08516,981.1262
16,958.20216,95816,9581.70 × 10416,958.2041.58 × 10414,614.13614,835.50117,045.0362
16,958.20216,95816,9581.70 × 10416,958.238N/A14,614.13614,842.88617,203.7132
03.7130 × 10−12006.6535 × 10−3N/A03.0732249.492
2492
3117 ± 385100,000100,000100,00010,000100,000100,00020,00010,000
3681{17,970}{N/A} {52,340}
15
Gas transmission compressor
2,964,893.9372.9649 × 1062.9649 × 1062.96 × 1062,964,895.422.96 × 1062,964,895.42,851,884.4 12,965,825.50
2,964,893.9372.9649 × 1062.9649 × 1062.96 × 1062,964,895.422.96 × 1062,964,895.42,975,324.92,972,596.89
2,964,893.9372.9649 × 1062.9649 × 1062.96 × 1062,964,895.42N/A2,964,895.42,992,643.53,011,188.39
01.4258 × 10−901.43 × 10−99.2584 × 10−6N/A4.753 × 10−105.084167938.995
4377
5661 ± 932100,000100,000100,00010,000100,000100,00020,00010,000
6587{11,630}{8999} {58,880}
16
Tension/compression spring (Case 2)
2.6587222.65862.65862.662.6585592.662.6585592.6842432.65
2.6595502.81492.66182.662.6585592.662.6585592.6942802.77
2.6609993.63592.69952.662.658559N/A2.6585592.7108752.93
9.4671 × 10−40.36570.01110504.5168 × 10−16N/A4.532 × 10−164.179 × 10−20.103
4223
4705 ± 325100,000100,000100,00010,000100,000100,00020,00030,000
4996{N/A}{N/A} {N/A}
17
Gear train
2.3078 × 10−210007.7037 × 10−346.71 × 10−2701.67125 × 10−31.75 × 10−21
3.5365 × 10−2101.8807 × 10−1607.9172 × 10−148.22 × 10−161.8106 × 10−181.83165 × 10−33.25 × 10−15
4.9608 × 10−2101.2074 × 10−1506.5771 × 10−13N/A4.4913 × 10−172.64190 × 10−34.38 × 10−14
1.1169 × 10−2103.8137 × 10−1601.4824 × 10−13N/A8.9800 × 10−184.25832 × 10−39.83 × 10−15
860
1170 ± 456100,000100,000100,00010,000100,000100,00020,00030,000
1870{215}{220} {360}
18
Himmelblau
−30,664.873−30,666−30,666−3.07 × 104−30,665.539−3.07 × 104−30,665.539−30,665.834−30,660.325
−30,664.848−30,666−30,666−3.07 × 104−30,665.539−3.07 × 104−30,665.539−30,653.100−30,619.973
−30,664.834−30,666−30,666−3.07 × 104−30,665.539N/A−30,665.539−30,698.769−30,561.855
0.01956873.7130 × 10−12006.2543 × 10−5N/A7.426 × 10−120.26876525.447
4494
4694 ± 219100,000100,000100,00010,000100,000100,00020,00010,000
5001{21,190}{2267} {46,340}
19
Topology
Optimization
2.639352.63932.63932.642.6393472.642.6393472.63782412.64
2.639352.63932.63932.642.6393592.642.6393472.63819812.64
2.639352.63932.63932.642.639449N/A2.6393472.6395202.64
01.0175 × 10−15002.8581 × 10−3N/A4.532 × 10−163.153 × 10−41.34 × 10−15
2613
3021 ± 237200,000200,000200,00010,000200,000200,00020,00030,000
3526{31,740}{6496} {7605}
Problem | HALSGWJA Present | KOA-CGKOA [74]*, [57]+, [61] | MSFPSO [62] | ESAO [63] | IDMO [64] | LMWOAGWO [65] | CSELGWO [66] | MPSGO [67] | MAHA [68] | EJAYA [57,70]
1
Speed reducer
(B) 2994.4252995.351+2994.422994.42994.4N/AN/A2994.42994.4252994.430
(A) 2994.4252996.7722994.422994.42994.43.00 × 10329942994.42994.4252994.459
(W) 2994.4252998.9942994.42N/AN/AN/AN/A2994.42994.4252994.526
(STD) 00.9420861.40 × 10−129.3445 × 10−66.8768 × 10−50.9404.623 × 10−135.8987 × 10−54.5475 × 10−130.0250189
(B) 2326
(A/D) 2538 ± 11610,000N/A15,00050,000100,000100,000100,000100,00010,000
(W) 2728
2
Industrial
refrigeration system
0.03221300.0322130.0320.0322130.033960N/AN/A0.0322130.032213N/A
0.03221320.76148484.5624.6809 × 10130.0379294.01 × 10−20.032560.0322170.032213N/A
0.03221602.9499690.477N/AN/AN/AN/A0.0322260.032213N/A
6.7082 × 10−71.29582166.8482.0934 × 10143.5133 × 10−34.43 × 10−31.868 × 10−33.3268 × 10−62.0812 × 10−17N/A
4695
5478 ± 100350,000N/A15,00050,000200,000200,000200,000200,000N/A
6258
3
Tension/compression spring (Case 1)
0.0126650.0126650.012670.0126720.012665N/AN/A0.0126650.01266520.012665
0.0126680.0126650.012700.0126890.0126651.27 × 10−20.012670.0126650.01266530.012668
0.0126700.0126650.01272N/AN/AN/AN/A0.0126650.01266620.012687
2.1152 × 10−602.64 × 10−51.1829 × 10−52.1898 × 10−85.32 × 10−56.232 × 10−94.6867 × 10−92.7129 × 10−74.6331 × 10−6
2103
3316 ± 113750,000N/A15,00050,000100,000100,000100,000100,00015,000
4097
4
Pressure vessel
6059.714 (5885.331)5885.434*6032.556059.86059.7N/AN/A6059.75885.3545885.333
6059.714
(5885.331)
5885.4346052.116060.66063.86.58 × 10360596059.75885.4145885.886
6059.714
(5885.331)
N/A6090.53N/AN/AN/AN/A6059.75885.8545894.777
0 (0)1.265 × 10−820.940.6372510.4744585.5342.6584 × 10−50.162481.734
3852 (3510)
5104 ± 1374
(4672 ± 1203)
50,000N/A15,00050,000100,000100,000100,000100,00016,000
6193 (5713)
5
Welded beam
1.6702181.67021.670221.67091.6702N/AN/A1.67021.6702181.670397
1.6702181.67021.670221.67151.67021.671.6701.67021.6702181. 670742
1.6702181.67021.67022N/AN/AN/AN/A1.67021.6702181.671558
07.10 × 10−169.11 × 10−164.9309 × 10−45.8316 × 10−94.03 × 10−43.93 × 10−161.1292 × 10−72.2205 × 10−163.1593 × 10−4
2975
3224 ± 28550,000N/A15,00050,000100,000100,000100,000100,00010,000
3691
6
Three-bar truss
263.8915263.8958*263.8915263.90263.90N/AN/A263.90263.8958N/A
263.8915263.8958263.8915263.90263.902.64 × 102263.9263.90263.8958N/A
263.8915263.8958263.8915N/AN/AN/AN/A263.90263.8958N/A
0001.2012 × 10−91.7053 × 10−131.02 × 10−42.076 × 10−141.0469 × 10−41.7053 × 10−13N/A
705
961 ± 16850,000N/A15,00050,000100,000100,000100,000100,000N/A
1224
7
Multiple disk clutch brake
0.2352430.235240.235240.235240.23524N/AN/A0.235240.2352430.235243
0.2352430.235240.235240.236060.235240.2350.23520.235240.2352430.235243
0.2352430.235240.23524N/AN/AN/AN/A0.235240.2352430.235243
01.14 × 10−161.14 × 10−162.2368 × 10−32.8715 × 10−161.68 × 10−72.411 × 10−161.1117 × 10−1601.952 × 10−12
792
900 ± 8550,000N/A15,00050,000100,000100,000100,000100,00010,000
1193
8
Planetary gear train
0.5232500.52577N/A0.535710.52524N/AN/A0.525770.5259670.527347
0.5257140.52696N/A0.564430.527590.5260.52990.528150.5308090.542460
0.5291880.53319N/AN/AN/AN/AN/A0.533200.5438460.673032
1.2440 × 10−30.0015832N/A0.0438322.8802 × 10−35.60 × 10−44.480 × 10−31.9494 × 10−34.2605 × 10−30.0256885
2095
3260 ± 103950,000N/A15,00050,000100,000100,000100,000100,00010,000
4364
9
Step-cone
pulley
16.036018.5633♣ 316.0902716.11616.090N/AN/A16.07016.0699N/A
16.040338.5633 316.5950016.76316.09016.516.216.07016.0699N/A
16.045298.5633 317.13636N/AN/AN/AN/A16.07016.0699N/A
4.6437 × 10−39.27 × 10−150.521570.315693.8548 × 10−80.2090.26084.6376 × 10−73.5527 × 10−15N/A
4278
4424 ± 32250,000N/A15,00050,000100,000100,000100,000100,000N/A
5000
10
Robot gripper
2.54378522.54552.543792.59292.5528N/AN/A2.54422.6389582.601873
2.54378542.63822.682502.87682.6979N/AN/A2.54962.6389583.135123
2.54378572.79913.11950N/AN/AN/AN/A2.59662.6389583.883629
2.8263 × 10−70.0717850.232850.254760.12596N/AN/A0.0105194.4409 × 10−160.429461
3988
4645 ± 78750,000N/A15,00050,000N/AN/A100,000100,00010,000
5302
11
Hydrostatic thrust bearing
1616.1202403.717+137.08396 41918.41640.9N/AN/A1643.11624.9901625.443
1616.1203168.443143.78383 43224.51744.51.83 × 1031.616 × 1032350.31625.4311631.510
1616.1204551.426270.73097 4N/AN/AN/AN/A2475.11625.4501767.661
0514.89829.8803876.251.21555.212.572.54560.090142926.272
7400
8624 ± 72410,000N/A15,00050,000100,000100,000100,000100,000150,000
9527
12
Four-stage
gearbox
35.2926936.601298.6941.69737.461N/AN/A35.35934.63987 5N/A
35.4101243.8296115448.924.8979 × 10142.7239 × 1015N/AN/A35.69939.19013N/A
35.6514163.54411255.69N/AN/AN/AN/A37.25848.81480N/A
0.16877.3255126054.209.372 × 10146.3419 × 1015N/AN/A0.569233.48167N/A
591
1021 ± 31850,000N/A15,00050,000N/AN/A200,000200,000N/A
1531
13
10 bar truss
524.5888529.4834+524.50833525.98524.45N/AN/A524.45524.4095525.0392
524.6004534.2848527.44007532.62527.075.29 × 10 2524.4524.45524.4591528.3298
524.6111542.7236532.35484N/AN/AN/AN/A524.45524.5082535.6228
0.01135193.620973.025643.58572.96322.324.989 × 10−32.9397 × 10−30.02578012.55928
1911
3321 ± 109210,000N/A15,00050,000100,000100,000100,000100,00010,000
4801
14
Rolling element bearing
16,958.20216,960.674+16,958.2016,95816,958N/AN/A16,95814,614.13616,958.202
16,958.20216,970.53116,958.2016,95816,9581.70 × 1041.461 × 10416,95814,614.13616,958.203
16,958.20216,986.50616,958.20N/AN/AN/AN/A16,95814,614.13616,958.206
07.0760600.0139766.6483 × 10−89.2915.5701.8190 × 10−127.0516 × 10−4
2492
3117 ± 38510,000N/A15,00050,000100,000100,000100,000100,00010,000
3681
15
Gas transmission compressor
2,964,893.9372,965,375.3+1,227,929.1542.9649 × 1062.9649 × 106N/AN/A2.9649 × 1062,964,895.0592,964,896.4
2,964,893.9372,967,464.31,227,929.1542.9790 × 1062.9790 × 1062.96 × 1062.965 × 1062.9649 × 1062,964,895.3402,964,916.3
2,964,893.9372,971,180.31,227,929.154N/AN/AN/AN/A2.9649 × 1062,964,895.9452,965,063.0
01506.4042.39 × 10−1016,0728.33 × 10−106.151.125 × 10−92.5505 × 10−90.18320934.9261
4377
5661 ± 93210,000N/A15,00050,000100,000100,000100,000100,00010,000
6587
16
Tension/compression spring (Case 2)
2.6587222.65862.658562.65862.6586N/AN/A2.65862.6585582.658560
2.6595502.66062.668792.93132.65862.712.6662.65862.6585592.662352
2.6609992.69952.69949N/AN/AN/AN/A2.65862.6585592.700421
9.4671 × 10−49.1532 × 10−30.018190.24269.982 × 10−110.09480.016441.6402 × 10−122.7129 × 10−70.0103828
4223
4705 ± 32550,000N/A15,00050,000100,000100,000100,000100,00010,000
4996
17
Gear train
2.3078 × 10−210006.7660 × 10−19N/AN/A001.350 × 10−18
3.5365 × 10−216.57 × 10−22002.3015 × 10−169.53 × 10−187.917 × 10−15001.315 × 10−13
4.9608 × 10−216.90 × 10−210N/AN/AN/AN/A001.444 × 10−12
1.1169 × 10−212.02 × 10−21003.7492 × 10−161.54 × 10−173.838 × 10−14003.559 × 10−13
860
1170 ± 45650,000N/A15,00050,000100,000100,000100,000100,00010,000
1870
18
Himmelblau
−30,664.873−30,663.305+−30,665.54−30,666−30,666N/AN/A−30,666−30,665.539−30,665.54
−30,664.848−30,652.862−30,665.54−30,666−30,666−3.07 × 104−3.067 × 104−30,666−30,665.539−30,665.54
−30,664.834−30,625.231−30,665.54N/AN/AN/AN/A−30,666−30,665.539−30,665.53
0.01956878.6337909.9915 × 10−55.4957 × 10−80.3181.109 × 10−115.8638 × 10−81.4552 × 10−119.6829 × 10−4
4494
4694 ± 21910,000N/A15,00050,000100,000100,000100,000100,00010,000
5001
19
Topology
Optimization
2.639352.63932.639352.63932.6393N/AN/A2.63932.6393472.639364
2.639352.63932.639352.63932.65832.642.6392.63932.6393472.661795
2.639352.63932.63935N/AN/AN/AN/A2.63932.6393472.747870
01.39 × 10−159.11 × 10−162.8925 × 10−105.4519 × 10−21.36 × 10−153.392 × 10−1504.4409 × 10−160.0281439
2613
3021 ± 23750,000N/A15,00050,000200,000200,000200,000200,00010,000
3526
1 The optimized design corresponding to this solution was not shown in Ref. [59]. 2 The optimized design corresponding to this solution was not shown in Ref. [60]. 3 The optimized design corresponding to this solution was not shown in Ref. [61]. 4 The optimized design corresponding to this solution was not shown in Ref. [62]. 5 The optimized design corresponding to this solution was not shown in Ref. [68]. Average number of analyses for the successful optimization runs of Ref. [69]. * The reported data are taken from Ref. [74]. + The reported data are taken from Ref. [57]. The reported data are taken from Ref. [61].
Table 3. HALSGWJA ranks obtained in the CEC2020 mechanical engineering problems.
Test Problem | Best | Average | Worst | STD | No. Analyses
Speed reducer | 1 | 1 | 1 | 1 | 1
Industrial refrigeration system design | 1 | 7 (0.0322132 vs. 0.032213) | 5 (0.0322160 vs. 0.032213) | 6 (6.7082 × 10⁻⁷ vs. 0) | 1
Tension/compression spring design (1) | 1 | 7 (0.012668 vs. 0.012665) | 5 (0.012670 vs. 0.012665) | 8 (2.1152 × 10⁻⁶ vs. 0) | 1
Pressure vessel design | 1 | 1 | 1 | 1 | 1
Welded beam design | 1 | 1 | 1 | 1 | 1
Three-bar truss design | 1 | 1 | 1 | 1 | 1
Multiple disk clutch brake design | 1 | 1 | 1 | 1 | 1
Planetary gear train design optimization | 1 | 1 | 1 | 4 | 1
Step-cone pulley problem | 1 | 1 | 1 | 10 (4.6437 × 10⁻³ vs. 0) | 1
Robot gripper problem | 1 | 1 | 1 | 6 (2.8263 × 10⁻⁷ vs. 0) | 1
Hydrostatic thrust bearing design | 1 | 1 | 1 | 1 | 1
Four-stage gearbox problem | 2 | 2 | 1 | 1 | 1
Ten-bar truss design | 12 (524.5885 vs. 522.9644) | 10 (524.6004 vs. 523.0724) | 7 (524.6111 vs. 524.45) | 7 (0.0113519 vs. 0) | 1
Rolling element bearing | 2 | 2 | 2 | 1 | 1
Gas transmission compressor design | 2 | 1 | 1 | 1 | 1
Tension/compression spring design (2) | 12 (2.658722 vs. 2.658558) | 6 (2.659550 vs. 2.658559) | 5 (2.660999 vs. 2.658559) | 7 (9.4671 × 10⁻⁴ vs. 0) | 1
Gear train design | 13 (2.3078 × 10⁻²¹ vs. 0) | 8 (3.5365 × 10⁻²¹ vs. 0) | 6 (4.9608 × 10⁻²¹ vs. 0) | 7 (1.1169 × 10⁻²⁰ vs. 0) | 1
Himmelblau's function | 14 (−30,664.873 vs. −30,665.54) | 15 (−30,664.848 vs. −30,665.54) | 10 (−30,664.834 vs. −30,665.53) | 13 (0.0195687 vs. 0) | 1
Topology optimization | 2 | 2 | 1 | 1 | 1
Table 4. Number of first rankings achieved by HALSGWJA and its competitors in the CEC2020 real-world mechanical engineering problems.
Algorithm | Best | Average | Worst | STD | No. Analyses | B+A+W | B+A+W+STD | TOTAL
HALSGWJA-Present | 11 | 10 | 12 | 10 | 19 | 33 | 43 | 62
EnMODE [54,65,68] | 14 | 13 | 14 | 3 | 0 | 41 | 44 | 44
COLSHADE [55,65,68] | 13 | 12 | 13 | 11 | 0 | 38 | 49 | 49
En(L)SHADE [56] | 16 | 14 | 16 | 14 | 0 | 46 | 60 | 60
IMODE-KG [57] | 10 | 8 | 9 | 0 | 2 | 27 | 27 | 29
DESS [58] | 14 | 13 | N/A | N/A | 0 | 27 | 27 | 27
SASS [53,59,68] | 12 | 13 | 12 | 3 | 0 | 37 | 40 | 40
SDDS-SABC [59] | 8 | 6 | 1 | 0 | 0 | 15 | 15 | 15
IYDSE [57,60] | 10 | 5 | 5 | 0 | 0 | 20 | 20 | 20
KOA [57,74]-CGKOA [61] | 10 | 8 | 6 | 2 | 0 | 24 | 26 | 26
MSFPSO [62] | 12 | 9 | 7 | 4 | 0 | 28 | 32 | 32
ESAO [63] | 11 | 5 | N/A | 1 | 0 | 16 | 17 | 17
IDMO [64] | 12 | 7 | N/A | 0 | 0 | 19 | 19 | 19
LMWOAGWO [65] | N/A | 8 | N/A | 0 | 0 | 8 | 8 | 8
CSELGWO [66] | N/A | 9 | N/A | 0 | 0 | 9 | 9 | 9
MPSGO [67] | 13 | 13 | 12 | 3 | 0 | 38 | 41 | 41
MAHA [68] | 14 | 12 | 10 | 2 | 0 | 36 | 38 | 38
EJAYA [57,70] | 10 | 5 | 3 | 0 | 1 | 18 | 18 | 19
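The last three columns of Table 4 are running totals of the per-indicator counts of first rankings: B+A+W sums the counts for the best, average, and worst costs, B+A+W+STD adds the standard-deviation count, and TOTAL adds the count for the number of analyses. The minimal Python sketch below (illustrative only; it transcribes three rows of the table and treats N/A entries as zero) reproduces these aggregate columns.

# Minimal sketch (illustrative only): recomputes the aggregate columns of
# Table 4 from the per-indicator counts of first rankings; N/A entries are
# treated as zero. Only three rows of the table are transcribed here.
first_ranks = {
    # algorithm: (Best, Average, Worst, STD, No. Analyses); None = N/A
    "HALSGWJA-Present": (11, 10, 12, 10, 19),
    "En(L)SHADE [56]":  (16, 14, 16, 14, 0),
    "LMWOAGWO [65]":    (None, 8, None, 0, 0),
}

def aggregate(counts):
    best, avg, worst, std, nsa = (c or 0 for c in counts)
    baw = best + avg + worst       # B+A+W column
    baws = baw + std               # B+A+W+STD column
    total = baws + nsa             # TOTAL column
    return baw, baws, total

for name, counts in first_ranks.items():
    print(name, aggregate(counts))
# Expected: (33, 43, 62), (46, 60, 60) and (8, 8, 8), as in Table 4.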
Table 5. Results of ranking-based Friedman test carried out for all performance indicators.
Algorithm | Best | Average | Worst | STD | No. Analyses | Cumul. Aver. Rank ΣAR (Partial Aver B+A+W) | Mean Cumul. Aver. Rank ΣAR/5
HALSGWJA-Present | 3.684 | 3.632 | 2.947 | 4.105 | 1 | 15.368 (10.263) | 3.074
EnMODE [54,65,68] | 1.368 | 2.947 | 2.211 | 5.421 | 6.632 | 18.579 (6.526) | 3.716
COLSHADE [55,65,68] | 1.579 | 3.684 | 2.842 | 4.579 | 5.474 | 18.158 (8.105) | 3.632
En(L)SHADE [56] | 2.052 | 2.789 | 1.947 | 3.053 | 9.316 | 19.157 (6.788) | 3.831
SASS [53,59,68] | 2.842 | 3.421 | 2.947 | 5.421 | 7.263 | 21.894 (9.210) | 4.379
MPSGO [67] | 2.614 | 3.052 | 2.895 | 5.631 | 9.316 | 23.508 (8.561) | 4.702
MAHA [68] | 2.895 | 3.316 | 3.473 | 5.789 | 9.316 | 24.789 (9.684) | 4.958
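In Table 5, each entry under Best, Average, Worst, STD and No. Analyses is the average Friedman rank obtained over the 19 CEC2020 problems for that indicator; ΣAR is the sum of the five average ranks (the value in parentheses being the partial sum over Best, Average and Worst), and the last column divides ΣAR by the five indicators. The sketch below (illustrative only; it starts from the average ranks already reported in the first row of the table rather than from the raw per-problem ranks) reproduces the HALSGWJA entries.

# Minimal sketch (illustrative only): starts from the average Friedman ranks
# already listed in the HALSGWJA row of Table 5, not from the raw
# per-problem ranks, and recomputes the two aggregate columns.
avg_rank = {  # indicator -> average rank over the 19 CEC2020 problems
    "Best": 3.684, "Average": 3.632, "Worst": 2.947, "STD": 4.105, "NSA": 1.0,
}

partial_baw = sum(avg_rank[k] for k in ("Best", "Average", "Worst"))
sigma_ar = sum(avg_rank.values())        # cumulative average rank, ΣAR
mean_rank = sigma_ar / len(avg_rank)     # ΣAR/5

print(round(partial_baw, 3), round(sigma_ar, 3), round(mean_rank, 3))
# Expected: 10.263 15.368 3.074 (HALSGWJA row of Table 5)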
Table 6. Optimization results obtained by HALSGWJA and its competitors in the car side impact problem.
Algorithms | x1 (mm) | x2 (mm) | x3 (mm) | x4 (mm) | x5 (mm) | x6 (mm) | x7 (mm) | x8 | x9 | x10 (mm) | x11 (mm) | Structural Weight (kg)
HALSGWJA Present 0.500001.213010.500000.776110.500001.489750.500000.3450.345−29.10350.0001121.37798
FHGWJA [7]0.500001.212040.500000.779080.500001.490040.500000.3450.345−28.97810.0001021.38340
EJAYA [70]0.500001.116310.500001.302280.500001.500000.500000.3450.32680−19.57090.0083822.84297
IGWO [72]0.500000.886410.500001.257810.648090.913720.500001.000000.524831.9479015.371921.39473
Chaotic GWO0.84252 0.50000 0.50000 1.361860.83852 0.86445 0.576790.94066 0.24622 4.55062 12.3084 21.46164
CSO [72,95]0.78544 0.56882 0.50000 1.34514 0.82107 0.86975 0.74586 0.89302 0.39281 0.89510 1.1799722.00444
TSA [72,96]0.50315 0.89471 0.50000 1.40533 0.86396 0.84245 0.59580 0.86533 0.06579 0.12364 2.06338 22.70296
ASGWO [73]0.500041.134540.500091.279050.500201.499960.500050.344960.33248−16.3332−2.1491222.87188
SMO [76]0.50001.116340.50001.302240.50001.500000.50000.3450.345−19.5660.00000122.84298
LIACOR [76,97]0.50001.115930.50001.302930.50001.500000.50000.1920.345−19.640−0.00000322.84299
KH [76,98]0.5000 1.14747 0.5000 1.26118 0.5000 1.5000 0.5000 0.345 0.345 −13.998 −0.8984 22.88596
Best ABC [76,99]0.50001.305390.50001.103120.50000.500000.50000.3450.34514.21320.330622.88605
CLPSO [76,94]0.50611.173790.50131.247060.50371.495600.50000.3450.345−9.59853.362723.06244
WOA [76,100]0.50001.092760.50001.412330.50001.454970.50000.3450.192−24.038−3.178923.12717
ISO [79]0.50001.116420.50001.302110.5000 1.5000 0.50000.3450.345−19.5525−0.0015266322.843
ETO [81]0.502821.24140.516041.22010.603341.38780.50000.748320.067472.2526−7.281823.2574
DOA [83]0.50811.20210.53181.30520.57191.49540.55570.30300.2585−24.81713.404723.9682
HLOA [83,101]0.50001.06690.80161.0704 0.50401.48730.50000.1920.192−29.97863.211923.6956
ALA [84]0.50001.11630.50001.30200.50001.50000.50001.00000.2391−19.5747−0.002222.84238
HHO [84,102]0.50001.11370.50001.30650.50001.50000.50001.00000.5997−20.04300.001022.84277
AOA [84,104]0.50001.11790.50001.30050.50001.50000.50000.85250.1021−19.2807−1.309822.84756
DMO [84,103]0.50001.10770.50001.31750.50001.50000.50000.66470.5054−21.1276−0.261622.84771
LSHADE-SPACMA [84,105]0.50001.10590.50001.3213 0.50001.50000.50000.73550.9547−21.42760.401822.84992
SFOA [85]0.50001.23400.50001.18700.87500.89200.40000.3450.1921.50000.572023.56158
FA [87]0.500001.360000.500001.202000.500001.120000.500000.3450.1928.87307−18.998122.84298
PSO [87,93]0.500001.116700.500001.30208 0.500001.500000.500000.3450.192−19.5494−0.0043122.84474
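The structural weight in the last column of Table 6 can be cross-checked against the linear weight expression commonly used for this benchmark (see, e.g., Refs. [89,90]). Whether this exact expression is the one implemented in the present study is an assumption of the sketch below, which reproduces the HALSGWJA and FHGWJA weights from their reported designs.

# Minimal sketch (assumption: the widely used deterministic weight model of
# the car side impact benchmark, W = 1.98 + 4.90*x1 + 6.67*x2 + 6.98*x3
# + 4.01*x4 + 1.78*x5 + 2.73*x7; see, e.g., Refs. [89,90]). It only
# cross-checks the last column of Table 6, not the ten safety constraints.
def side_impact_weight(x):
    x1, x2, x3, x4, x5, _, x7 = x[:7]
    return 1.98 + 4.90*x1 + 6.67*x2 + 6.98*x3 + 4.01*x4 + 1.78*x5 + 2.73*x7

halsgwja = [0.50000, 1.21301, 0.50000, 0.77611, 0.50000, 1.48975, 0.50000]
fhgwja   = [0.50000, 1.21204, 0.50000, 0.77908, 0.50000, 1.49004, 0.50000]

print(round(side_impact_weight(halsgwja), 5))   # ≈ 21.37798 kg (Table 6, first row)
print(round(side_impact_weight(fhgwja), 5))     # ≈ 21.38342 kg (21.38340 in Table 6)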
Table 7. Comparison of statistical data on optimized weight and computational cost for HALSGWJA and its competitors in the car side impact problem.
Algorithms | B−W (kg) | A−W (kg) | W−W (kg) | STD−W (kg) | B−NSA | A−NSA | W−NSA | STD−NSA
HALSGWJA Present | 21.37798 | 21.37798 | 21.37798 | 0 | 1240 | 1291 | 1387 | 85
FHGWJA [7]21.3834021.390521.41450.007125136713091606/1110343
EJAYA [70] | 22.84297 | 22.94398 | 23.26191 | 0.17098 | N/A | 27,000 | N/A | N/A
IGWO [72] | 21.39473 | N/A | N/A | N/A | N/A | 30,000 | N/A | N/A
Chaotic GWO | 21.46164 | N/A | N/A | N/A | N/A | 30,000 | N/A | N/A
CSO [72,95] | 22.00444 | N/A | N/A | N/A | N/A | 30,000 | N/A | N/A
TSA [72,96] | 22.70296 | N/A | N/A | N/A | N/A | 30,000 | N/A | N/A
ASGWO [73] | 22.87188 | N/A | N/A | N/A | N/A | 15,000 | N/A | N/A
SMO [76] | 22.84298 | N/A | N/A | N/A | N/A | 30,000 | N/A | N/A
LIACOR [76,97] | 22.84299 | N/A | N/A | N/A | N/A | 30,000 | N/A | N/A
KH [76,98] | 22.88596 | N/A | N/A | N/A | N/A | 30,000 | N/A | N/A
Best ABC [76,99] | 22.88605 | N/A | N/A | N/A | N/A | 30,000 | N/A | N/A
CLPSO [76,94] | 23.06244 | N/A | N/A | N/A | N/A | 30,000 | N/A | N/A
WOA [76,100] | 23.12717 | N/A | N/A | N/A | N/A | 30,000 | N/A | N/A
ISO [79] | 22.843 | 22.8569 | 23.2612 | 0.0764 | N/A | 15,000 | N/A | N/A
ETO [81] | 23.2574 | N/A | N/A | N/A | N/A | 30,000 | N/A | N/A
DOA [83] | 23.9682 | 25.0673 | 25.9506 | 0.5111 | N/A | 2000 | N/A | N/A
HLOA [83,101] | 23.6956 | 28.4803 | 34.5482 | 2.5047 | N/A | 2000 | N/A | N/A
ALA [84] | 22.84238 | N/A | N/A | N/A | N/A | 15,000 | N/A | N/A
HHO [84,102] | 22.84277 | N/A | N/A | N/A | N/A | 15,000 | N/A | N/A
AOA [84,104] | 22.84756 | N/A | N/A | N/A | N/A | 15,000 | N/A | N/A
DMO [84,103] | 22.84771 | N/A | N/A | N/A | N/A | 15,000 | N/A | N/A
LSHADE-SPACMA [84,105] | 22.84992 | N/A | N/A | N/A | N/A | 15,000 | N/A | N/A
SFOA [85] | 23.56158 | 23.56160 | 23.56203 | 8.23 × 10⁻⁵ | N/A | 50,000 | N/A | N/A
FA [87] | 22.84298 | 22.89376 | 24.06623 | 0.16667 | N/A | 20,000 | N/A | N/A
PSO [87,93] | 22.84474 | 22.89429 | 23.21354 | 0.15017 | N/A | 20,000 | N/A | N/A
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
