Article

Mechanical and Civil Engineering Optimization with a Very Simple Hybrid Grey Wolf—JAYA Metaheuristic Optimizer

by Chiara Furio 1, Luciano Lamberti 1,* and Catalin I. Pruncu 2
1 Department of Mechanics, Mathematics and Management, Polytechnic University of Bari, Via Edoardo Orabona, 4, 70125 Bari, Italy
2 School of Engineering and the Built Environment, Buckinghamshire New University, 59 Walton Street, Aylesbury HP21 7OG, UK
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(22), 3464; https://doi.org/10.3390/math12223464
Submission received: 8 October 2024 / Revised: 26 October 2024 / Accepted: 1 November 2024 / Published: 6 November 2024
(This article belongs to the Special Issue Mathematical Applications in Mechanical and Civil Engineering)

Abstract: Metaheuristic algorithms (MAs) are now the standard in engineering optimization. Progress in computing power has favored the development of new MAs, improved versions of existing methods, and hybrid MAs. However, most MAs (especially hybrid algorithms) have very complicated formulations. The present study demonstrated that it is possible to build a very simple hybrid metaheuristic algorithm by combining basic versions of classical MAs and including very simple modifications in the optimization formulation to maximize computational efficiency. The very simple hybrid metaheuristic algorithm (SHGWJA) developed here combines two classical optimization methods, namely the grey wolf optimizer (GWO) and JAYA, which are widely used in engineering problems and continue to attract the attention of the scientific community. SHGWJA overcame the limitations of GWO and JAYA in the exploitation phase using simple elitist strategies. The proposed SHGWJA was tested very successfully on seven "real-world" engineering optimization problems taken from various fields, such as civil engineering, aeronautical engineering, mechanical engineering (included in the CEC 2020 test suite on real-world constrained optimization problems) and robotics; these problems include up to 14 optimization variables and 721 nonlinear constraints. Two representative mathematical optimization problems (i.e., the Rosenbrock and Rastrigin functions) including up to 1000 variables were also solved. Remarkably, SHGWJA always outperformed or was very competitive with other state-of-the-art MAs, including CEC competition winners and high-performance methods, in all test cases. In fact, SHGWJA always found the global optimum or a best cost at most 0.0121% larger than the target optimum. Furthermore, SHGWJA was very robust: (i) in most cases, SHGWJA obtained a zero or near-zero standard deviation and all optimization runs practically converged to the target optimum solution; (ii) the standard deviation on the optimized cost was at most 0.0876% of the best design; (iii) the standard deviation on function evaluations was at most 35% of the average computational cost. Lastly, SHGWJA always ranked 1st or 2nd for average computational speed, and its fastest optimization runs outperformed or were highly competitive with those recorded for the best MAs.

1. Introduction

Metaheuristic algorithms (MAs) nowadays represent the standard approach to complex engineering optimization problems. The popularity of these algorithms is demonstrated by the wide variety of applications of MAs to various fields of science, engineering and technology. Representative areas include static and dynamic structural optimization [1,2], mechanical characterization of materials and structural identification (also including damage identification) [3,4,5], vehicle routing optimization [6], optimization of solar devices [7], forest fire mapping [8], urban water demand [9], 3D printing process optimization [10], identification of COVID-19 infection and cancer classification [11,12], and image processing including feature extraction/selection [13].
Unlike gradient-based optimizers, MAs are stochastic algorithms that do not use gradient information to perturb design variables. In metaheuristic optimization, new trial solutions are randomly generated according to the inspiring principle of the selected algorithm. The trial solutions generated in each iteration attempt to improve the best record obtained so far. Exploration and exploitation are the two typical phases of metaheuristic search. In exploration, optimization variables are perturbed to a great extent to try to quickly identify the best regions of the design space, while exploitation performs local searches in a set of selected neighborhoods of the most promising solutions. Exploration governs the optimization search in the early iterations, while exploitation dominates as soon as the optimizer converges toward the global optimum.
Several classifications of MAs have been proposed in the literature. The first distinction is between single-solution algorithms where the optimizer updates the position of only one search agent and population-based algorithms where the optimizer operates on a population of candidate designs or search agents. However, the single-point search is limited to a few classical algorithms, such as simulated annealing and tabu search. Hence, the vast majority of MAs are population-based optimizers and their classification relies on the inspiring principle that drives the metaheuristic search. In this regard, MAs can be roughly divided into four categories: (i) evolutionary algorithms; (ii) science-based algorithms; (iii) human-based algorithms; (iv) swarm intelligence-based algorithms.
Evolutionary algorithms imitate evolution theory and evolutionary processes. Genetic algorithms (GAs) [14,15], differential evolution (DE) [16,17], evolutionary programming (EP) [18], evolution strategies (ES) [19], biogeography-based optimization (BBO) [20] and black widow optimization (BWO) [21] fall in this category. GAs and DE algorithms are certainly the most popular evolutionary algorithms. Their success is demonstrated by the roughly 7000 citations gathered by Refs. [14,15] and the 23,030 citations gathered by Ref. [16] in the Scopus database over about 35 years (a total of about 860 citations/year up to October 2024). GAs are based on Darwin's concepts of natural selection. Selection, crossover and mutation operators are used for creating a new generation of designs starting from the parent designs stored in the previous iteration. DE includes four basic steps: random initialization of the population, mutation, recombination and selection. GAs and DE mainly differ in the selection process for generating the next generation of designs that will be stored in the population.
Science-based MAs mimic the laws of physics, chemistry, astronomy, astrophysics and mathematics. Simulated annealing (SA) [22,23], charged system search (CSS) [24], magnetic charged system search (MCSS) [25], ray optimization (RO) [26], colliding bodies optimization (CBO) [27], water evaporation optimization (WAO) [28], thermal exchange optimization (TEO) [29], equilibrium optimizer (EO) [30] and light spectrum optimizer (LSO) [31] are examples of physics-based MAs. Generally speaking, the above-mentioned methods tend to reach the equilibrium condition of mechanical, electro-magnetic or thermal systems under external perturbations. Optics-based methods such as RO and LSO utilize the concepts of refraction and dispersion of light to set directions for exploring the search space. SA is certainly the most popular science-based metaheuristic algorithm, considering the roughly 33,940 citations gathered by Ref. [22] in the Scopus database over 42 years (about 810 citations/year up to October 2024). SA mimics the annealing process in liquid or solid materials that reach the lowest energy level (a globally stable condition) as temperature decreases. The SA search strategy is rather simple: (i) a new trial solution replaces the current best record if it improves it; (ii) otherwise, a probabilistic criterion indicates whether the solution may improve the current best record in the next iteration. The "temperature" parameter is used in SA for computing this probability and is progressively updated as the optimization progresses. SA has an inherent hill-climbing capacity given by search strategy (ii), which allows local minima to be eventually bypassed in the next iterations.
The artificial chemical reaction optimization algorithm (ACROA) [32], gas Brownian motion optimization (GBMO) [33] and Henry gas solubility optimization (HGSO) [34] are examples of chemistry-based MAs that also rely on important physics concepts such as Brownian motion and Henry's law. ACROA simulates interactions between chemical reactants: the positions of search agents correspond to concentrations and potentials of reactants, and they are no longer perturbed when no more reactions can take place. GBMO utilizes the law of motion, the Brownian motion of gases and turbulent rotational motion to search for an optimal solution; search agents correspond to molecules and their performance is measured by their positions. HGSO mimics the state of equilibrium of a gas mixture in a liquid; search agents correspond to gases and their optimal positions correspond to the equilibrium distribution of gases in the mixture. HGSO is the most popular algorithm in this sub-category, considering the roughly 780 citations (up to October 2024) gathered by Ref. [34] five years after its release.
Big bang–big crunch optimization (BB-BC) [35], the gravitational search algorithm (GSA) [36], the galaxy-based search algorithm (GBSA) [37], the black hole algorithm [38], the astrophysics-inspired grey wolf algorithm [39] and the supernova optimizer [40] are examples of MAs that mimic astronomical or astrophysical phenomena, such as the expansion (big bang)–contraction (big crunch) cycles that lead to the formation of new star–planetary systems, the spreading of stellar material following supernova explosions, gravitational interactions between masses, interactions between black holes and stars, and the movement of spiral galaxy arms, which also introduces the concept of the elliptical orbit in the hunting process of wolves. GSA and BB-BC are the most popular algorithms in this sub-category, considering the roughly 6030 and 1280 citations, respectively, gathered by Refs. [35,36] in the Scopus database over almost two decades (a total of about 415 citations per year).
The sine cosine algorithm (SCA) [41], the Runge–Kutta optimizer (RUN) [42] and the arithmetic optimization algorithm (AOA) [43] are inspired by mathematics. In the SCA, candidate solutions fluctuate outwards or towards the best solution using a mathematical model based on sine and cosine functions. RUN combines a slope calculation scheme based on the Runge–Kutta method with an enhanced solution quality mechanism to increase the quality of trial designs. The AOA uses the four basic arithmetic operators (i.e., addition, subtraction, multiplication, division) to perturb the design variables of the current best record: in particular, multiplication and division drive the exploration phase, while addition and subtraction drive the exploitation phase. The AOA and RUN, respectively, gathered about 1830 and 705 citations in Scopus within only 3 years of their release (a total of 845 citations/year). The SCA also achieved a considerable number of citations, about 4040 over 8 years, with an average of about 505 citations/year up to October 2024.
Human-based MAs mimic human activities (e.g., talking, walking, playing, exploring new places/situations), behaviors (e.g., the natural tendency to look for the best and avoid the worst, parental care, teaching–learning, decision processes) and social sciences (including politics). Tabu search (TS) [44,45] was the first human-based MA and was developed about 40 years ago. TS is inspired by the ancestral concept that sacred things cannot be touched: hence, in the optimization process, local search is performed in the neighborhood of candidate solutions, and solutions that do not improve the design are stored in a database so that the optimizer will not explore them or their neighborhoods further. Harmony search optimization (HS), which simulates the music improvisation process of jazz players [46,47], teaching–learning-based optimization (TLBO) [48], which simulates the teaching–learning mechanisms in a classroom, and JAYA [49], which uses just one equation to perturb the optimization variables so as to approach the current best record and escape from the worst candidate solution, are the most popular human-based MAs according to the number of citations reported in the Scopus database. In particular, TLBO achieved the highest average number of citations/year (about 300) and had been cited in about 3920 papers since its release in 2011 as of October 2024. TLBO is followed by JAYA (about 250 citations/year, for about 2000 citations gathered over 8 years, plus a huge number of variants and hybrid schemes) and HS (about 235 citations/year, for 5450 citations over 23 years).
The learning process is an important source of inspiration in human-based MAs. This is demonstrated by the group teaching optimization algorithm (GTOA) [50], another teaching–learning-based algorithm where students (i.e., candidate solutions) are divided into groups according to defined rules; the teacher (current best record) adopts specific teaching methods to improve the knowledge of each group. The mother optimization algorithm (MOA) [51] mimics the human interaction between a mother and her children, simulating the mother's care of her children in education, advice and upbringing. The preschool education optimization algorithm (PEOA) [52] simulates the phases of children's preschool education, including (i) the gradual growth of the preschool teacher's educational influence, (ii) individual knowledge development guided by the teacher and (iii) individual increases in knowledge and self-awareness. The learning cooking algorithm (LCA) [53] mimics the cooking learning activity of humans: children learn from their mothers, and children and mothers learn from a chef. The decision-making behavior of humans is instead simulated by the collective decision optimization algorithm (CDOA) [54]: candidate solutions are generated by operators reproducing the different phases of the decision process, which can be experience-based, others-based, group thinking-based, leader-based and innovation-based.
The imperialist competitive algorithm (ICA) [55] and the political optimizer (PO) [56] are based on politics, another important human activity. The ICA simulates the international relationships between countries: search agents represent countries that are categorized into colonies and imperialist states; powerful empires take possession of the former colonies of weak empires. Imperialistic competitions direct the search process toward the powerful imperialist or the optimum points. PO simulates all the major phases of politics (i.e., constituency allocation, party switching, election campaign, inter-party election and parliamentary affairs); the population is divided into political parties and constituencies, thus facilitating each candidate to update its position with respect to the party leader and the constituency winner. Learning behaviors of the politicians from the previous election are also accounted for in the optimization process.
Swarm intelligence-based algorithms reproduce the social/individual behavior of animals (insects, terrestrial animals, birds, and aquatic animals) in reproduction, food search, hunting, migration, etc. Most of the newly published MAs belong to this category. Particle swarm optimization (PSO) [57,58,59], developed in 1995, is the most popular MA overall: in particular, the seminal studies [57,58] gathered about 73,550 citations over 29 years in the Scopus database; the average citation rate is about 2535 articles/year, more than three times higher than for SA. PSO simulates interactions between individuals of bird/fish swarms. If one leading individual or a group of leaders sees a desirable path to go through (for food, protection, etc.), the rest of the swarm quickly follows the leader(s), even in the absence of direct connections. In the optimization process, a population of candidate designs (the particles) is generated. Particles move through the search space, and their positions and velocities are updated based on the position of the leader(s) and the best positions of individual particles in each iteration until the optimum solution is reached.
Insect behavior has also inspired MA experts to a great extent. Ant system and colony optimization (AS, ACO) [60,61,62], which mimic the cooperative search technique in the foraging behavior of real-life ant colonies, artificial bee colony (ABC) [63], which simulates the nectar search carried out by bees, the firefly algorithm (FFA) [64,65], which simulates the social behavior of fireflies and their bioluminescent communication, and ant lion optimizer (ALO) [66], which simulates the hunting mechanism of ant lions, fall in this sub-category. The interest of the optimization community in insect-based MAs is confirmed by the high number of citations achieved by the algorithms mentioned above: about 11,320 for AS, ACO (i.e., 405 citations/year since 1996), 6335 for ABC (i.e., 370 citations/year since 2007), 2720 for ALO (i.e., about 300 citations/year since 2015) and 2895 for FFA (i.e., 207 citations/year since 2010) up to October 2024.
The grey wolf optimizer (GWO) [67], coyote optimization algorithm (COA) [68], snake optimizer (SO) [69] and snow leopard optimization algorithm (SLOA) [70] are MAs simulating the behavior of terrestrial animals. GWO and COA, respectively, mimic the hunting behavior of grey wolves and coyotes; SO mimics the mating behavior of snakes. SLOA is somewhat more general than GWO, COA and SO, as it perturbs optimization variables by means of operators simulating a wide variety of behaviors of a snow leopard, including travel routes and movement, hunting, reproduction and mortality. GWO is one of the most popular MAs, considering the roughly 13,700 citations reported in Scopus from its release in 2014 up to October 2024: the average number of citations per year is about 1370, the second best amongst all MAs after PSO.
Cuckoo search (CS) [71,72], the crow search algorithm (CSA) [73], the starling murmuration optimizer (SMO) [74] and the bat algorithm (BA) [75,76] are MAs that mimic the behavior of birds and bats. While BA updates the population of candidate designs by simulating the echolocation behavior of bats, CS reproduces the parasitic behavior of some cuckoo species that mix their eggs with those of other birds to guarantee the survival of their chicks. CSA simulates how crows hide their excess food and retrieve it when needed. SMO explores the search space by reproducing the flying behavior of starlings: exploration is carried out by means of the separating and diving search strategies, while exploitation relies on the whirling strategy. CS is the most popular MA in this sub-category: the Scopus database reports about 8570 citations for Refs. [71,72] over 15 years, with about 570 citations/year up to October 2024. CS is followed by BA (about 6030 citations for Refs. [75,76] since 2010, with about 430 citations/year) and CSA (about 1765 citations since 2016, with about 220 citations/year).
Among the MAs inspired by aquatic animals, several methods deserve mention. The dolphin echolocation algorithm [77] simulates the hunting strategy of dolphins based on the echolocation of prey. The whale optimization algorithm (WOA) [78] mimics the social behavior of humpback whales and, in particular, their bubble-net hunting strategy. The salp swarm algorithm (SSA) [79] simulates the swarming and foraging behaviors of ocean salps. The marine predators algorithm (MPA) [80] updates design variables by simulating the random walk movements (essentially Brownian motion and Lévy flight) of ocean predators. The giant trevally optimizer (GTO) [81] mimics the hunting strategies of the giant trevally marine fish, including Lévy flight movement. WOA is the most popular algorithm in this sub-category: the Scopus database reports about 9790 citations since 2016, with an average rate of about 1225 citations/year. WOA is followed by SSA, which gathered about 3920 citations since 2017 with about 560 citations/year, and by MPA, which gathered about 1600 citations since 2020 with about 400 citations/year up to October 2024.
A large number of improved/enhanced variants of existing MAs, hybrid algorithms combining MAs with gradient-based optimizers, and hybrid algorithms combining two or more MAs have been developed by optimization experts (see, for example, Refs. [82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102], published in the last two decades). High-performance MAs were often selected as component algorithms in developing hybrid formulations. The common denominator of all those studies was to find the best balance between the exploration and exploitation phases that hopefully resulted in the following: (i) the optimizer's ability to avoid local minima (i.e., hill-climbing capability, especially for algorithms including SA-based operators), keeping the diversity in the population, and avoiding stagnation and premature convergence to false optima; (ii) a reduction in the number of function evaluations (i.e., analyses such as structural analyses) required in the optimization process; (iii) high robustness in terms of low dispersion on optimized cost function values; (iv) a reduction in the number of internal parameters as well as in the level of heuristics entailed by the search process. In general, these tasks were accomplished by (i) combining highly explorative methods with highly exploitative methods; (ii) forcing the optimizer to switch back to exploration if the exploitation phase started prematurely or the design did not improve for a certain number of iterations in the exploitation phase; (iii) introducing new perturbation strategies for the optimization variables tailored to the specific MA under consideration (for example, chaotic perturbation of the design; mutation operators to avoid stagnation and increase the diversity of the population; Lévy flights/movements in swarm-based methods, etc.).
Continuous increases in computing power have greatly favored the development of new metaheuristic algorithms (including enhanced variants and hybrid formulations). However, some aspects should be carefully considered in metaheuristic optimization: (i) according to the No Free Lunch theorem [103,104], no metaheuristic algorithm can always outperform all other MAs in all optimization problems; (ii) MAs often require a very large number of function evaluations (analyses) to complete the optimization process, even 1–2 orders of magnitude higher than their classical gradient-based optimizer counterparts; (iii) the ease of implementation, which is typically highlighted as a definite strength of MAs, is very often nullified by the complexity of algorithmic variants or hybrid algorithms combining multiple methods; and (iv) sophisticated MA formulations often include many internal parameters that may be difficult to tune.
It should be noted that newly developed MAs often add very little to the optimization field, and their appeal quickly vanishes just a few years after their release. This is confirmed by the literature survey presented in this introduction. The "classical" MAs developed prior to 2014, such as GA, DE, SA, PSO, ACO, CS, BA, HS, GSA, BB-BC and GWO, gathered a much higher number of citations/year (up to 2535 until October 2024) than the MAs developed in 2015–2024, except for WOA, SCA, AOA/RUN, SSA, MPA, ALO, JAYA and CSA (up to 1225 citations/year). For this reason, the present study focused on improving available MAs rather than formulating a new MA from scratch. The main goal was to prove that a very efficient hybrid metaheuristic algorithm can be built by simply combining the basic formulations of two well-established MAs without complicating the formulation of the hybrid optimizer, both in terms of the number of strategies/operators used for perturbing the design variables and the number of internal parameters (including the parameters that regulate the switching from one optimizer to another).
In view of the arguments above, a very simple hybrid metaheuristic algorithm (SHGWJA, where the acronym stands for Simple Hybrid Grey Wolf JAYA) able to solve engineering optimization problems with less computational effort than the other currently available MAs was developed by combining two classical population-based MAs, namely the Grey Wolf Optimizer (GWO) and the Jaya Algorithm (JAYA). GWO and JAYA were selected in this study as the component algorithms of the new optimizer SHGWJA because of their simple formulations without internal parameters, and in view of the high interest of metaheuristic optimization experts in these two methods, proven by the various applications and many algorithmic variants documented in the literature. These motivations are detailed as follows.
(1)
GWO, originally developed by Mirjalili et al. [67] in 2014, is the second most cited metaheuristic algorithm after PSO in terms of citations/year. However, GWO has a much simpler formulation than PSO and, unlike PSO, does not require any setting of internal parameters. GWO mimics the hierarchy of leadership and group hunting of the grey wolves in nature: the optimization search is driven by the three best individuals that catch the prey and attract the rest of the hunters. Applications of GWO to various fields of science, engineering and technology are reviewed in Refs. [105,106].
(2)
JAYA, originally developed by Rao [49] in 2016, is one of the most cited MAs released in the last decade. It utilizes the most straightforward search scheme ever presented in the metaheuristic optimization literature for a population-based MA: to approach the population's best solution and move away from the worst solution, thus achieving significant convergence capability. Hence, optimization variables are perturbed by JAYA using only one equation, thus minimizing the computational complexity of the search process. In spite of its inherent simplicity, JAYA is a powerful metaheuristic algorithm that has proven able to efficiently solve a wide variety of optimization problems (see, for example, the reviews presented in [107,108]). An interesting argument made in [107] may explain JAYA's versatility. JAYA combines the basic features of evolutionary algorithms, in terms of the fittest individual's survivability (such a feature is, however, common to all population-based MAs), and of swarm-based algorithms, where the swarm normally follows the leader during the search for an optimal solution. This hybrid nature, in addition to its inherent algorithmic simplicity, makes JAYA an ideal candidate to be selected as a component of new hybrid metaheuristic algorithms.
(3)
Neither GWO nor JAYA has internal parameters that must be set by the user, except for the parameters common to all population-based MAs, such as the population size and the limit on the number of optimization iterations.
(4)
Both algorithms have good exploration capabilities but have weaknesses in the exploitation phase. The challenge is to combine their exploration capabilities to directly approach the global optimum. Elitist strategies may facilitate the exploitation phase, which only has to focus on a limited number of potentially optimal solutions to be refined.
Here, GWO and JAYA were simply merged, and the JAYA perturbation strategy was applied to improve the positions of the leading wolves, thus avoiding the search stagnation that may occur in GWO if the three best individuals of the population remain the same for many iterations. The original formulations of GWO and JAYA combined into the hybrid SHGWJA algorithm were slightly modified to maximize the search capability of the new algorithm and reduce the number of function evaluations required by the GWO and JAYA engines. However, while algorithmic variants of GWO and JAYA (including hybrid optimizers) usually include specific operators/schemes to increase the diversity of the population and/or improve the exploitation capability (this, however, increases the computational complexity of the optimizer) [105,106,107,108], SHGWJA adopts a very straightforward elitist approach. In fact, SHGWJA always attempts to improve the current best design of the population, regardless of whether the new trial solutions were generated in the exploration or the exploitation phase. Trial solutions that are unlikely to improve the current best record stored in the population are directly rejected without evaluating constraints. This allows SHGWJA to reduce the computational cost of the optimization process by always directing the search process towards the best regions of the search space and eliminating any unnecessary exploitation of trial solutions that cannot improve the current best record.
The proposed SHGWJA was successfully tested on seven "real-world" engineering problems selected from civil engineering, mechanical engineering and robotics. The selected test problems, including up to 14 optimization variables and 721 nonlinear constraints, concerned: (i) shape optimization of a concrete gravity dam (volume minimization); (ii) optimal design of a tension/compression spring (weight minimization); (iii) optimal design of a welded beam (minimization of fabrication cost); (iv) optimal design of a pressure vessel (minimization of forming, material, and welding costs); (v) optimal design of an industrial refrigeration system; (vi) 2D path planning (minimization of trajectory length); (vii) mechanical characterization of a flat composite panel under axial compression (by matching experimental data and finite element simulations). Test problems (ii) through (v) were included in the CEC 2020 (IEEE Congress on Evolutionary Computing) test suite for constrained mechanical engineering problems. Example (vii) is a typical highly nonlinear inverse problem in the fields of aeronautical engineering and mechanics of materials. Besides the seven "real-world" engineering problems listed above, two classical mathematical optimization problems (i.e., Rosenbrock's banana function and Rastrigin's function) with up to 1000 variables were solved in this study to evaluate the scalability of the proposed algorithm.
For all test cases, the optimization results obtained by SHGWJA were compared with those quoted in the literature for the best-performing algorithms. SHGWJA was compared with at least 10 other MAs, each of which had been reported in the literature to have outperformed, in turn, up to 35 other MAs. Comparisons with high-performance algorithms and CEC competition winners, such as the LSHADE and IMODE variants, as well as with other MAs that were reported to outperform CEC winners (i.e., MPA and GTO, among others), are also presented in the article.
The rest of the article is structured as follows. Section 2 recalls formulations of GWO and JAYA, while Section 3 describes the new hybrid optimization algorithm SHGWJA developed in this research. Test problems and implementation details are presented in Section 4, while optimization results are presented and discussed in Section 5. In the last section, the main findings are summarized, and directions of future research are outlined.

2. The GWO and JAYA Algorithms

2.1. The GWO Algorithm

The grey wolf optimization (GWO) algorithm reproduces the leadership hierarchy and hunting mechanism of grey wolves in nature [67]. GWO simulates the leadership hierarchy by defining four types of grey wolves, namely α, β, δ and ω. The hunting mechanism is simulated by different operations corresponding to searching for prey, encircling prey and attacking prey. The α wolf is the group leader and makes decisions regarding hunting, sleeping place, time to wake, etc. The β wolf helps the α in decision making or other pack activities. The lowest-ranking grey wolf is the ω, which must submit to dominant wolves. Delta wolves (scouts, sentinels, elders, hunters and caretakers) must submit to α and β wolves, but they dominate the ω wolves. Alpha, beta, and delta wolves estimate the position of the prey, and other wolves update their positions randomly around the prey.
GWO starts with defining the population of NPOP wolves covering the search space. The best three individuals stored in the population are ranked as α, β and δ. The rest of the population is ranked as ω. Prey is encircled by wolves during the hunt. This behavior is mathematically described as follows:
$\vec{D} = \left| \vec{C} \cdot \vec{X}_p - \vec{X}_i^{it} \right|$ (1)
$\vec{X}_i^{it+1} = \vec{X}_p - \vec{A} \cdot \vec{D}$ (2)
where it refers to the current iteration, $\vec{A}$ and $\vec{C}$ are coefficient vectors, $\vec{X}_p$ is the position vector of the prey and $\vec{X}_i^{it}$ is the generic grey wolf (i.e., search agent) included in the population. The "$\cdot$" notation denotes the term-by-term multiplication between vectors, which results in another vector.
Vectors A and C are defined as follows:
$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}$ (3)
$\vec{C} = 2\vec{r}_2$ (4)
where the components of the $\vec{a}$ vector decrease linearly from 2 to 0 as the optimization progresses, and $\vec{r}_1$, $\vec{r}_2$ are random vectors in the [0, 1] interval.
In the hunting phase, grey wolves recognize their prey's location and encircle it. However, in an optimization search, the location of the optimum (prey) is usually not known a priori. To simulate wolves' hunting behavior, GWO assumes that the three best search agents corresponding to the α, β and δ wolves have a better knowledge of the potential prey's location (i.e., the target optimum). Hence, all other agents must update their positions based on the positions of the best three search agents, $\vec{X}_\alpha$, $\vec{X}_\beta$ and $\vec{X}_\delta$. Such a task is accomplished by the following equations for the generic solution $\vec{X}_i^{it}$ included in the population:
$\vec{D}_\alpha = \left| \vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}_i^{it} \right|, \quad \vec{D}_\beta = \left| \vec{C}_2 \cdot \vec{X}_\beta - \vec{X}_i^{it} \right|, \quad \vec{D}_\delta = \left| \vec{C}_3 \cdot \vec{X}_\delta - \vec{X}_i^{it} \right|$ (5)
$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta$ (6)
$\vec{X}_i^{it+1} = \left( \vec{X}_1 + \vec{X}_2 + \vec{X}_3 \right) / 3$ (7)
where the vectors $\vec{X}_1$, $\vec{X}_2$ and $\vec{X}_3$ are defined for each individual $\vec{X}_i$ of the population.
Equations (5) through (7) are utilized to update the positions of all search agents. The new population is sorted with respect to penalized cost function values to update the positions $\vec{X}_\alpha$, $\vec{X}_\beta$ and $\vec{X}_\delta$. The prey's position corresponds to the current best record. The process is repeated until the limit number of iterations or function evaluations defined by the user is reached.
In summary, in the optimization iterations, the α, β and δ wolves estimate the prey's position (the target optimal solution corresponding to the current best record) and each search agent updates its distance from the prey. The $\vec{a}$ parameter decreases from 2 to 0 in order to govern the exploration and exploitation phases, respectively. Candidate solutions tend to diverge from the prey and search for another prey in the exploration phase when $|\vec{A}| > 1$, while they converge towards the prey in the exploitation phase when $|\vec{A}| < 1$.
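To make the update scheme concrete, the GWO step of Equations (1) through (7) can be condensed into a short routine. The following Python sketch is only a minimal rendering of the description above (the array layout, function name and random-number generator are illustrative assumptions, not the authors' implementation):

    import numpy as np

    def gwo_step(pop, x_alpha, x_beta, x_delta, it, it_max, rng):
        # Components of vector a decrease linearly from 2 to 0 (exploration -> exploitation).
        a = 2.0 * (1.0 - it / it_max)
        new_pop = np.empty_like(pop)
        for i, x in enumerate(pop):
            estimates = []
            for leader in (x_alpha, x_beta, x_delta):
                A = 2.0 * a * rng.random(x.size) - a   # Eq. (3)
                C = 2.0 * rng.random(x.size)           # Eq. (4)
                D = np.abs(C * leader - x)             # Eq. (5)
                estimates.append(leader - A * D)       # Eq. (6)
            new_pop[i] = sum(estimates) / 3.0          # Eq. (7): average of the three estimates
        return new_pop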
The main drawbacks of GWO are premature convergence and low diversity because the population is moved towards only the three best search agents, the wolves α, β and δ. For this reason, Nadimi-Shahraki et al. [105] developed the gaze cues learning-based grey wolf optimizer (GGWO). GGWO implemented new search strategies based on other typical behaviors of grey wolves, such as gaze cues learning: wolves may direct their attention towards some target simply by following the sight of other wolves that found the target sufficiently interesting. In order to implement this mechanism in the optimization search, GGWO defines three candidate solutions for updating each design $\vec{X}_i$ of the population: (i) the $\vec{X}_i^{GWO}$ solution obtained with classical GWO, Equation (7); (ii) the $\vec{X}_i^{NGCL}$ solution generated with the neighbor gaze cues learning (NGCL) search strategy, where $\vec{X}_i$ is updated considering the alpha wolf $\vec{X}_\alpha$ and the neighborhood of $\vec{X}_i$ formed by two wolves ranked immediately below $\vec{X}_i$ and two wolves ranked immediately above $\vec{X}_i$; (iii) the $\vec{X}_i^{RGCL}$ solution generated with the random gaze cues learning (RGCL) search strategy, where $\vec{X}_i$ is updated considering two randomly selected wolves from the population. The fitness values of the three candidate solutions $\vec{X}_i^{GWO}$, $\vec{X}_i^{NGCL}$ and $\vec{X}_i^{RGCL}$ are compared, and the fittest of the three individuals is provisionally set as the new $\vec{X}_i$, which replaces the old $\vec{X}_i$ solution if it has a better fitness than the old one.
Very recently, Tsai and Shi [106] proposed two GWO variants that correct the original formulation of GWO to remove causes of potential biases in the search process. The GWO–C1 variant no longer uses the $\vec{C}$ vector and the absolute value to compute the $\vec{D}$ vectors in Equation (5). Furthermore, in the GWO–C2 variant, Equation (6) is modified by replacing $\vec{X}_\alpha$, $\vec{X}_\beta$ and $\vec{X}_\delta$ with the current position $\vec{X}_i^{it}$. Both the GWO–C2 and GGWO algorithms were very competitive in solving CEC test suites, but GGWO fits better in the present study because it was tested in [105] on real engineering problems.

2.2. The JAYA Algorithm

The JAYA method is the simplest metaheuristic algorithm ever presented in the optimization literature [49]. It only utilizes one equation to perturb the design and does not include any internal parameter besides population size NPOP and the limit on the number of iterations Nitermax. Let Xj,i,it be the jth optimization variable (j = 1,…, NDV) for the ith search agent (i = 1,…, NPOP) in the itth iteration. The Xj,i,it value is modified in the current iteration as follows:
$X_{j,i,it}^{new} = X_{j,i,it} + r_{1,j,it} \left( X_{j,best,it} - \left| X_{j,i,it} \right| \right) - r_{2,j,it} \left( X_{j,worst,it} - \left| X_{j,i,it} \right| \right)$ (8)
where $X_{j,i,it}^{new}$ is the updated value of $X_{j,i,it}$; $r_{1,j,it}$ and $r_{2,j,it}$, respectively, are two random numbers extracted from the [0, 1] interval for the jth variable; $X_{j,best,it}$ and $X_{j,worst,it}$, respectively, are the values of the jth variable for the best individual $X_{best,it}$ and the worst individual $X_{worst,it}$ currently included in the population.
The term $r_{1,j,it} \left( X_{j,best,it} - \left| X_{j,i,it} \right| \right)$ describes JAYA's ability to approach the best solution $X_{best,it}$, while the term $- r_{2,j,it} \left( X_{j,worst,it} - \left| X_{j,i,it} \right| \right)$ describes its ability to escape from the worst solution $X_{worst,it}$. The random factors $r_1$ and $r_2$ allow the design space to be efficiently explored, while the absolute value $\left| X_{j,i,it} \right|$ in Equation (8) further enhances exploration [49]. It should be noted that using the absolute value $\left| X_{j,i,it} \right|$ in Equation (8) does not affect the search process in real engineering problems because optimization variables can usually take only positive values. However, in mathematical optimization problems where variables may take optimal values of 0, using the absolute value $\left| X_{j,i,it} \right|$ in Equation (8) actually limits the movements $\left( X_{j,best,it} - \left| X_{j,i,it} \right| \right)$ and $\left( X_{j,worst,it} - \left| X_{j,i,it} \right| \right)$, thus reducing biases in the convergence to zero values.
The new trial solution $X_i^{new}$ defined by Equation (8) is compared with the previous solution $X_i^{pre}$ in the population. If $X_i^{new}$ is better than $X_i^{pre}$, the population is updated by replacing $X_i^{pre}$ with $X_i^{new}$. This is reiterated until the limit number of iterations/function evaluations is reached. In most implementations, the computational cost of JAYA is NPOP × Nitermax function evaluations. JAYA variants [93,109] have tried to save computational cost by directly rejecting trial solutions worse than the search agents stored in the population; however, this strategy failed to find global optima in many structural optimization problems other than truss weight minimization. A similar approach was followed in [110]: the new trial solution $X_i^{new}$ generated by the SJAYA variant replaces the $X_i^{pre}$ solution that is being currently perturbed not only if it improves $X_i^{pre}$ but also if it has the same cost (or penalized cost in constrained optimization).
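Since the whole JAYA loop body fits in a few lines, it can be sketched directly. The Python fragment below is an illustrative rendering of Equation (8) with the greedy replacement described above; the cost function f and the NumPy random generator rng are assumed inputs:

    import numpy as np

    def jaya_step(pop, cost, f, rng):
        x_best = pop[np.argmin(cost)]    # current best individual
        x_worst = pop[np.argmax(cost)]   # current worst individual
        for i in range(pop.shape[0]):
            r1 = rng.random(pop.shape[1])
            r2 = rng.random(pop.shape[1])
            # Eq. (8): approach the best solution and escape from the worst one.
            x_new = pop[i] + r1 * (x_best - np.abs(pop[i])) - r2 * (x_worst - np.abs(pop[i]))
            f_new = f(x_new)
            if f_new < cost[i]:          # greedy selection: keep the better of old/new
                pop[i], cost[i] = x_new, f_new
        return pop, cost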
Improving the balance of exploration and exploitation in JAYA is the most important concern of optimization experts. In this regard, Rao and Saroj [111] implemented in their SAMP-JAYA algorithm an adaptive scheme to divide the population into m sub-populations. Sub-populations are independently updated and then merged. Their number is increased to (m + 1) if the best record $X_{best,it}$ is improved in the current iteration or decreased to (m − 1) if $X_{best,it}$ is not improved. In this way, exploration characterizes the search process if iterations are successful, while exploitation is activated if the optimizer cannot find a better solution over the whole search space covered by the sub-populations.
In [98], JAYA was hybridized with the Rao-1 optimizer into the EHRJAYA algorithm. Optimization variables are perturbed with the JAYA or Rao-1 schemes based on a probabilistic threshold. This improves population diversity and resolves situations of insufficient exploration and exploitation that may occur if the same perturbation strategy is utilized throughout the optimization process. A self-adaptive classification learning strategy assigns the perturbation equation to each individual based on its rank in the population. Each perturbation equation includes coefficients that vary adaptively during the optimization process. The population size is progressively reduced to facilitate a smooth transition from exploration to local exploitation.
Zhang et al. [112] developed the variant EJAYA, which uses population information more efficiently than classical JAYA to balance exploration and exploitation. EJAYA updates optimization variables considering the current best and worst solutions (like classical JAYA), the current mean solution (obtained by averaging the NPOP search agents of the population), and the historical solutions (a population of candidate solutions initially generated besides the standard population and probabilistically permuted in the optimization process). In EJAYA, local exploitation relies on defined upper and lower local attractors (weighted averages of best/worst and average solution), while global exploration is guided by the historical population.
Fuzzy clustering competitive learning, experience learning and Cauchy mutation mechanisms were recently included in the I-JAYA algorithm [113]. The fuzzy clustering competitive learning mechanism effectively utilizes population information, speeding up convergence. Experience learning allows for balancing between exploiting previously visited regions and exploring new regions of the search space. Cauchy mutation reduces the risk of being stuck in local optima by fine-tuning the quality of the current best solution.

3. The SHGWJA Algorithm

The simple hybrid metaheuristic algorithm SHGWJA developed in this research combined the GWO and JAYA methods to improve search by means of elitist strategies. The new algorithm is very simple and efficiently explores the search space. This allows for limiting the number of function evaluations. The new algorithm is now described in detail. A population of NPOP candidate designs (i.e., wolves) is randomly generated as follows:
$x_j^i = x_j^L + \rho_j^i \left( x_j^U - x_j^L \right), \quad j = 1, \ldots, NDV; \ i = 1, \ldots, NPOP$ (9)
where NDV is the number of optimization variables; $x_j^L$ and $x_j^U$, respectively, are the lower and upper bounds of the jth variable; $\rho_j^i$ is a random number extracted in the interval (0, 1).
Considering the classical statement of the optimization problem where the goal is to minimize the function $W(\vec{X})$ of NDV variables (stored in the design vector $\vec{X}$), subject to NCON inequality/equality constraint functions of the form $g_k(\vec{X}) \le 0$, the penalized cost function $W_p(\vec{X})$ is defined as follows:
$W_p(\vec{X}) = W(\vec{X}) + p \cdot \psi$ (10)
where p is the penalty coefficient. The penalty function $\psi$ is defined as follows:
$\psi = \sum_{k=1}^{NCON} \left[ \max \left( 0, g_k \right) \right]^2$ (11)
The penalized cost function $W_p(\vec{X})$ obviously coincides with the cost function $W(\vec{X})$ if the trial solution $\vec{X}$ is feasible. Candidate solutions are sorted with respect to penalized cost function values: the current best record corresponds to the lowest value of $W_p$.
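A minimal Python sketch of Equations (9) through (11) is given below; the function names and the value of the penalty coefficient p are illustrative assumptions:

    import numpy as np

    def init_population(n_pop, x_low, x_up, rng):
        # Eq. (9): random designs uniformly distributed within the variable bounds.
        rho = rng.random((n_pop, x_low.size))
        return x_low + rho * (x_up - x_low)

    def penalized_cost(x, cost_fn, constr_fn, p=1.0e6):
        # Eq. (11): quadratic penalty of the violated constraints g_k(x) <= 0.
        g = np.asarray(constr_fn(x))
        psi = np.sum(np.maximum(0.0, g) ** 2)
        # Eq. (10): the penalized cost equals cost_fn(x) for feasible designs (psi = 0).
        return cost_fn(x) + p * psi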
  • Step 1. Generation of new trial designs with classical GWO
The best three search agents achieving the lowest penalized cost values are set as the wolves α, β and δ. Let $\vec{X}_\alpha$, $\vec{X}_\beta$ and $\vec{X}_\delta$ be the corresponding design vectors. It obviously holds that $W_p(\vec{X}_\alpha) < W_p(\vec{X}_\beta) < W_p(\vec{X}_\delta)$. The α wolf is also set as the current best record $\vec{X}_{opt}$, and the corresponding penalized cost function $W_p(\vec{X}_\alpha)$ is set as $W_{p,opt}$.
Each individual $\vec{X}_i$ of the population is provisionally updated with the classical GWO Equations (1) through (7). As in [67], the components of the $\vec{a}$ vector decrease linearly from 2 to 0 as the optimization progresses in SHGWJA. Let $\vec{X}_{i,tr}$ denote the new trial solution obtained by perturbing the generic individual $\vec{X}_i$ stored in the population in the previous iteration.
  • Step 2. Evaluation and refinement of new trial designs: Elitist strategy and JAYA schemes
Each new trial design $\vec{X}_{i,tr}$ generated in Step 1 via classical GWO with Equations (1) through (7) is evaluated by SHGWJA using two new operators. In GWO implementations, each new design $\vec{X}_{i,tr}$ is compared with its counterpart solution $\vec{X}_i$ previously stored in the population. If the new design $\vec{X}_{i,tr}$ is better than the old design $\vec{X}_i$, it is stored in the updated population in the current iteration, replacing the old design. However, this task requires a new evaluation of the constraint functions for each new $\vec{X}_{i,tr}$.
In order to reduce the computational cost, SHGWJA implements an elitist strategy, retaining only the trial solutions that are likely to improve the current best record. Hence, SHGWJA initially compares only the cost function $W(\vec{X}_{i,tr})$ of the new trial solution $\vec{X}_{i,tr}$ with 1.1 times the cost function $W(\vec{X}_{opt})$ of the current best record. If the new trial design is unlikely to improve the current best record, that is, if $W(\vec{X}_{i,tr}) > 1.1W(\vec{X}_{opt})$ holds, it is not necessary to process it further and the old design $\vec{X}_i$ is provisionally maintained in the new population.
The $1.1W(\vec{X}_{opt})$ threshold has proven effective in all test problems solved in this study. This behavior may be explained as follows. In the exploration phase, the optimizer assigns large movements to the design variables and the probability of improving the design is high: hence, the scenario $W(\vec{X}_{i,tr}) > 1.1W(\vec{X}_{opt})$ is unlikely to occur. In the exploitation phase, the optimizer should bypass local minima to find the global optimum: like SA, the optimizer may accept candidate solutions slightly worse than $\vec{X}_{opt}$. Looking at the probabilistic acceptance/rejection criterion used by advanced SA formulations [84,93], it can be seen that the threshold level of acceptance probability is 0.9 if the ratio γ between the cost function increment recorded for the trial solution $\vec{X}_{i,tr}$ with respect to the current best record $\vec{X}_{opt}$ and the annealing temperature T is 0.1; hence, the probability of provisionally accepting some design worse than $\vec{X}_{opt}$ and improving it in the next iterations is 90%. Since the initial value of the temperature T set in SA corresponds to the expected optimum cost (or the cost value of the current best record), it may reasonably be assumed that trial solutions up to 10% worse than $\vec{X}_{opt}$ would become better than $\vec{X}_{opt}$ in the next iterations.
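As a one-line check of the quoted figure, assuming the usual Metropolis-type acceptance rule $P = e^{-\Delta W / T}$, a ratio $\gamma = \Delta W / T = 0.1$ gives $P = e^{-0.1} \approx 0.905$, that is, an acceptance probability of about 90%.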
However, the new design $\vec{X}_{i,tr}$ is generated by classical GWO to try to approach the position of the best three individuals, the wolves α, β and δ. To avoid stagnation, the $\vec{X}_i$ design is perturbed using a JAYA-based scheme if the classical GWO generation was unsuccessful and the trial solution $\vec{X}_{i,tr}$ did not satisfy the condition $W(\vec{X}_{i,tr}) \le 1.1W(\vec{X}_{opt})$. That is, the following holds:
$(\vec{X}_{i,tr})' = \vec{X}_i + \vec{\lambda}_1 \cdot \left( \vec{X}_{opt} - \left| \vec{X}_i \right| \right) - \vec{\lambda}_2 \cdot \left( \vec{X}_{worst} - \left| \vec{X}_i \right| \right)$ (12)
If the classical GWO generation was successful and the trial design $\vec{X}_{i,tr}$ satisfied the condition $W(\vec{X}_{i,tr}) \le 1.1W(\vec{X}_{opt})$, the JAYA scheme is applied directly to $\vec{X}_{i,tr}$ as follows:
$(\vec{X}_{i,tr})' = \vec{X}_{i,tr} + \vec{\lambda}_1 \cdot \left( \vec{X}_{opt} - \left| \vec{X}_{i,tr} \right| \right) - \vec{\lambda}_2 \cdot \left( \vec{X}_\delta - \left| \vec{X}_{i,tr} \right| \right)$ (13)
In Equations (12) and (13), $\vec{\lambda}_1$ and $\vec{\lambda}_2$ are two vectors including NDV random numbers in the interval [0, 1]. The absolute values of the optimization variables (like the $|X_{j,i,it}|$ values in Equation (8)) are taken for each component $X_{j,i}$ or $X_{j,i,tr}$ of the vectors $\vec{X}_i$ or $\vec{X}_{i,tr}$, respectively.
The cost function $W((\vec{X}_{i,tr})')$ is also evaluated for the new trial design $(\vec{X}_{i,tr})'$ defined by Equation (12) or (13) and is compared with $1.1W(\vec{X}_{opt})$. The modified trial design $(\vec{X}_{i,tr})'$ is always directly rejected if it certainly cannot improve the current best record, that is, if the condition $W((\vec{X}_{i,tr})') > 1.1W(\vec{X}_{opt})$ is satisfied. If both $W((\vec{X}_{i,tr})') \le 1.1W(\vec{X}_{opt})$ and $W(\vec{X}_{i,tr}) \le 1.1W(\vec{X}_{opt})$ hold, the two costs are compared: $(\vec{X}_{i,tr})'$ is used to update the population if $W((\vec{X}_{i,tr})') \le W(\vec{X}_{i,tr})$, while $\vec{X}_{i,tr}$ is used if $W((\vec{X}_{i,tr})') > W(\vec{X}_{i,tr})$.
Equation (12) is related to exploration. In fact, it tries to involve the whole population in the formation of the new trial design $(\vec{X}_{i,tr})'$ because the α, β and δ wolves could not move the other search agents (i.e., $\vec{X}_i$) to better positions of the search space near the prey (i.e., $\vec{X}_{opt}$) with respect to the previous iteration. Since the main goal of the population renewal task is to improve each individual $\vec{X}_i$, SHGWJA searches along the descent direction $(\vec{X}_{opt} - \vec{X}_i)$ with respect to $\vec{X}_i$ (the cost function certainly improves moving from a generic individual towards the current best record) and escapes from the worst individual of the population, $\vec{X}_{worst}$, which certainly may not improve $\vec{X}_i$.
Equation (13) is instead related to exploitation because it operates on a good trial solution dictated by the wolves α, β and δ. This design is very likely to be close to $\vec{X}_{opt}$ or even to improve it. The δ wolf is temporarily selected as the worst individual of the population. This forces SHGWJA to search locally in a region of the design space containing high-quality solutions like the three best individuals of the population.
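The whole of Step 2 can thus be condensed into a few lines for each search agent. The Python sketch below is an illustrative rendering of the elitist screening and of Equations (12) and (13); the cost function w and all names are assumptions (in the actual algorithm, each cost value would of course be evaluated only once and cached; the repeated calls to w are kept here for brevity):

    import numpy as np

    def refine_trial(x_i, x_tr, x_opt, x_worst, x_delta, w, w_opt, rng):
        lam1 = rng.random(x_i.size)
        lam2 = rng.random(x_i.size)
        if w(x_tr) <= 1.1 * w_opt:
            # Successful GWO move: exploit near the trial design, Eq. (13).
            x_ref = x_tr + lam1 * (x_opt - np.abs(x_tr)) - lam2 * (x_delta - np.abs(x_tr))
        else:
            # Unsuccessful GWO move: explore again from the stored design, Eq. (12).
            x_ref = x_i + lam1 * (x_opt - np.abs(x_i)) - lam2 * (x_worst - np.abs(x_i))
        # Retain only candidates passing the 1.1*W(X_opt) threshold and keep the cheaper one;
        # if both fail, the old design X_i is kept in the population.
        candidates = [x for x in (x_ref, x_tr) if w(x) <= 1.1 * w_opt]
        return min(candidates, key=w) if candidates else x_i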
  • Step 3. Re-sort the population and update the α, β, δ wolves, $\vec{X}_{opt}$ and $\vec{X}_{worst}$
When all $\vec{X}_i$ have been updated by SHGWJA via the classical GWO scheme based on Equations (1) through (7) or the JAYA-based schemes of Equations (12) and (13), and their quality has been evaluated using the elitist strategy $W(\vec{X}_{i,tr}) \le 1.1W(\vec{X}_{opt})$, the population is re-sorted and the NPOP individuals are ranked with respect to penalized cost function values. Should $W_p(\vec{X}_{i,tr})$ or $W_p((\vec{X}_{i,tr})')$ be greater than $W_p(\vec{X}_i)$, all new trial solutions generated/refined for $\vec{X}_i$ are rejected and the old design $\vec{X}_i$ is retained in the population.
SHGWJA sets the best three individuals of the new population as the wolves α, β and δ, with design vectors $\vec{X}_\alpha$, $\vec{X}_\beta$ and $\vec{X}_\delta$, respectively. The best and worst solutions are set as $\vec{X}_{opt}$ and $\vec{X}_{worst}$, respectively.
The present algorithm attempts to avoid stagnation by checking the ranking of the wolves α, β and δ with an elitist criterion. This check is carried out each time the wolves' positions are not updated in the current iteration. The elitist criterion adopted by SHGWJA relies on the concept of descent direction. $\vec{S}_\beta = \vec{X}_{opt} - \vec{X}_\beta$ and $\vec{S}_\delta = \vec{X}_{opt} - \vec{X}_\delta$ are obviously descent directions with respect to the positions $\vec{X}_\beta$ and $\vec{X}_\delta$ of the wolves β and δ. Hence, SHGWJA perturbs $\vec{X}_\beta$ and $\vec{X}_\delta$ along these descent directions. The positions of the β and δ wolves are "mirrored" with respect to $\vec{X}_{opt}$ as follows:
$\vec{X}_\beta^{mirr} = \left( 1 + \eta_{mirr,\beta} \right) \vec{X}_{opt} - \eta_{mirr,\beta} \vec{X}_\beta, \quad \vec{X}_\delta^{mirr} = \left( 1 + \eta_{mirr,\delta} \right) \vec{X}_{opt} - \eta_{mirr,\delta} \vec{X}_\delta$ (14)
In Equation (14), the random numbers $\eta_{mirr,\beta}$ and $\eta_{mirr,\delta}$ are extracted in the (0, 1) interval. They limit the step sizes to reduce the probability of generating infeasible positions. The best three positions amongst $\vec{X}_\alpha$, $\vec{X}_\beta$, $\vec{X}_\delta$, $\vec{X}_\beta^{mirr}$ and $\vec{X}_\delta^{mirr}$ are set as the α, β and δ wolves for the next iteration. The two worst positions are compared with the rest of the population to see if they can replace $\vec{X}_{worst}$ and the second worst design of the old population. The latter check also covers the scenario where $\vec{X}_\beta^{mirr}$ and $\vec{X}_\delta^{mirr}$ could not improve any of the α, β and δ wolves.
Figure 1 illustrates the rationale of the mirroring strategy adopted by SHGWJA. The figure shows the following: (i) the original positions of the α, β and δ wolves (respectively, points Popt ≡ Pα, Pβ and Pδ of the search space); (ii) the positions of the mirror wolves βmirr and δmirr (respectively, points Pβ,mirr and Pδ,mirr of the search space); (iii) the cost function gradient vector $\nabla W_{opt}$ evaluated at $\vec{X}_{opt}$, that is, at wolf α. The "mirror" wolves βmirr and δmirr, i.e., $\vec{X}_\beta^{mirr}$ and $\vec{X}_\delta^{mirr}$ defined by Equation (14), lie on descent directions and may even improve $\vec{X}_{opt}$ (i.e., the position of wolf α). In fact, the conditions $\nabla W_{opt} \times (\vec{X}_\beta^{mirr} - \vec{X}_{opt}) < 0$ and $\nabla W_{opt} \times (\vec{X}_\delta^{mirr} - \vec{X}_{opt}) < 0$ hold, where "$\times$" denotes the scalar product between two vectors. Since $\vec{X}_\delta^{mirr} - \vec{X}_{opt}$ is in all likelihood a steeper descent direction than $\vec{X}_\beta^{mirr} - \vec{X}_{opt}$, the δ wolf may have a higher probability than the β wolf of replacing wolf α, even though wolf β occupies a better position than wolf δ in the search space. The consequence of this elitist approach is that SHGWJA must perform a new exploration of the search space instead of attempting to exploit trial solutions that did not improve the design in the last iteration. Furthermore, the replacement of the two worst designs of the population improves the average quality of the search agents and increases the probability of defining higher-quality trial solutions in the next iteration.
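In code, the mirroring of Equation (14) amounts to a reflected step along the two descent directions. The sketch below (illustrative Python, names assumed) returns the two mirror positions, which are then ranked against $\vec{X}_\alpha$, $\vec{X}_\beta$ and $\vec{X}_\delta$ as described above:

    def mirror_leaders(x_opt, x_beta, x_delta, rng):
        # Random step sizes in (0, 1) limit the risk of generating infeasible positions.
        eta_b = rng.random()
        eta_d = rng.random()
        # Eq. (14): reflect the beta and delta wolves about the best record X_opt.
        x_beta_mirr = (1.0 + eta_b) * x_opt - eta_b * x_beta
        x_delta_mirr = (1.0 + eta_d) * x_opt - eta_d * x_delta
        return x_beta_mirr, x_delta_mirr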
  • Step 4. Convergence check
The standard deviations of the design variables and of the cost function values of the search agents decrease as the search approaches the global optimum. Therefore, SHGWJA normalizes the standard deviations with respect to the average design $\vec{X}_{aver} = \sum_{i=1}^{NPOP} \vec{X}_i / NPOP$ and the average cost function $W_{aver} = \sum_{i=1}^{NPOP} W(\vec{X}_i) / NPOP$. The convergence criterion used by SHGWJA is the following:
$\max \left\{ \dfrac{STD \left( \vec{X}_1 - \vec{X}_{aver}, \vec{X}_2 - \vec{X}_{aver}, \ldots, \vec{X}_{NPOP} - \vec{X}_{aver} \right)}{\vec{X}_{aver}} ; \; \dfrac{STD \left( W_1, W_2, \ldots, W_{NPOP} \right)}{W_{aver}} \right\} \le \varepsilon_{conv}$ (15)
where the convergence limit $\varepsilon_{conv}$ is equal to $10^{-7}$. Equation (15) is based on the following rationale. The classical convergence criteria of optimization algorithms compare the cost function values with $W_{opt}$, as well as the best solutions set as $\vec{X}_{opt}$ obtained in the last iterations, and stop the search process if both these quantities do not change by more than the fixed convergence limit. Accounting for the variation in the best solution $\vec{X}_{opt}$ allows local minima to be avoided if the optimizer enters a large region of the design space containing many competitive solutions with the same cost function values. However, this approach may not be effective in population-based algorithms, where all candidate designs stored in the population should cooperate in searching for the optimum. For example, it may occur that the optimizer updates only sub-optimal solutions and leaves $\vec{X}_{opt}$ unchanged over many iterations. Should this occur, the optimizer would stop the search process while there is still a significant level of diversity in the population, which is a typical scenario of the exploration phase dominating the early stage of the optimization process. Equation (15) assumes instead that all individuals are in the best region of the search space, which hosts the global optimum, and cooperate in the exploitation phase: hence, diversity must decrease and the search agents must aggregate in the neighborhood of $\vec{X}_{opt}$. For this reason, in Equation (15), the standard deviation of the search agents' positions with respect to the average solution is normalized to the average solution to quantify population diversity: as all solutions come very close to $\vec{X}_{opt}$, they coincide with their average and the search process finally converges; normalizing by the average solution also yields a dimensionless convergence parameter. The same approach is followed for the cost function values by normalizing their standard deviation with respect to the average cost: competitive solutions almost coincide with the optimum only when they are effectively close to it, that is, when the search process is near its end.
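The convergence check of Equation (15) may be sketched as follows (illustrative Python; how the vector-valued standard deviation and the average design are reduced to scalars is not fully specified above, so the Euclidean norms used here are an assumption):

    import numpy as np

    def has_converged(pop, costs, eps_conv=1.0e-7):
        x_aver = pop.mean(axis=0)     # average design
        w_aver = np.mean(costs)       # average cost
        # Normalized spread of the search agents around the average design.
        spread_x = np.std(np.linalg.norm(pop - x_aver, axis=1)) / np.linalg.norm(x_aver)
        # Normalized spread of the cost function values.
        spread_w = np.std(costs) / w_aver
        return max(spread_x, spread_w) <= eps_conv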
Steps 1 through 3 are repeated until SHGWJA converges to the global optimum.
  • Step 5. Terminate optimization process
SHGWJA terminates the optimization process and writes output data in the results file.
Algorithm 1 presents the SHGWJA pseudo-code. Figure 2 shows the flow chart of the proposed hybrid optimizer.
Algorithm 1 Pseudo-code of SHGWJA
START SHGWJA.
  • Set population size NPOP and randomly generate a population of NPOP candidate designs X_i (i = 1,…,NPOP) using Equation (9).
  • For i = 1,…,NPOP
  •       Compute cost function and constraints of the given optimization problem for each candidate design X_i.
  •       Compute penalized cost function Wp(X_i) for each candidate design X_i using Equations (10) and (11).
  • end for
  • Sort population by the values Wp(X_i) in ascending order. Set the best three individuals with the lowest Wp values as wolves α, β and δ. Let X_α, X_β and X_δ be the design vectors of wolves α, β and δ, respectively.
  • Set the α wolf as the current best record X_opt ≡ X_α with penalized cost Wp,opt = Wp(X_α).
  • For i = 1,…,NPOP
  •       Step 1. Use classical GWO Equations (1)–(7) to provisionally update each individual X_i of the population to X_i,tr
  •       Step 2. Evaluate the trial design X_i,tr and/or the additional/new trial design X′_i,tr
  •    If W(X_i,tr) ≤ 1.1·W(X_opt)
  •       Keep trial design X_i,tr and define the additional trial design X′_i,tr using the JAYA strategy of Equation (13).
  •    else
  •       Reject trial design X_i,tr and define the new trial design X′_i,tr using the JAYA strategy of Equation (12).
  •    end if
  •    If W(X_i,tr) > 1.1·W(X_opt) & W(X′_i,tr) > 1.1·W(X_opt)
  •       Reject also X′_i,tr and keep individual X_i in the new population.
  •    end if
  •    If W(X_i,tr) ≤ 1.1·W(X_opt) & W(X′_i,tr) ≤ 1.1·W(X_opt) & W(X_i,tr) < W(X′_i,tr)
  •       Use trial design X_i,tr to update the population.
  •    end if
  •    If W(X_i,tr) ≤ 1.1·W(X_opt) & W(X′_i,tr) ≤ 1.1·W(X_opt) & W(X_i,tr) > W(X′_i,tr)
  •       Use trial design X′_i,tr to update the population.
  •    end if
  •    If Wp(X_i,tr) or Wp(X′_i,tr) < Wp(X_i)
  •       Keep X_i,tr or X′_i,tr in the updated population.
  •    else
  •       Discard X_i,tr or X′_i,tr and keep X_i in the updated population.
  •    end if
  • end for
  • Step 3. Re-sort population, define new wolves α, β and δ, update X_opt and X_worst
  • Sort the updated population by the values of Wp in ascending order.
  • Update the positions X_α, X_β and X_δ of wolves α, β and δ, respectively.
  • Use the elitist mirror strategy of Equation (14) to avoid the stagnation of wolves α, β and δ.
  • Update, if necessary, the positions X_α, X_β and X_δ or at least the worst two individuals of the population.
  • Set the α wolf as the current best record X_opt ≡ X_α with penalized cost Wp,opt = Wp(X_α).
  • Step 4. Check for convergence
  • If the convergence criterion stated by Equation (15) is satisfied
  • Terminate the optimization process.
  • Output the optimal design X_opt and the optimal cost value W(X_opt).
  • else
  • Continue the optimization process.
  • Go to line 6.
  • end if
END SHGWJA.
In summary, SHGWJA is a grey wolf-based optimizer that updates the population by checking whether the α, β and δ wolves may effectively improve the current best record. The JAYA operators (indicated in red on the flowchart) and the elitist strategies included in SHGWJA enhance the exploration and exploitation phases, forcing the algorithm to increase population diversity and select high-quality trial solutions without performing too many function evaluations. The classical GWO algorithmic structure is modified by the elitist strategy W(X_i,tr) ≤ 1.1·W(X_opt), by the JAYA-based strategies stated in Equations (12) and (13) to generate high-quality trial designs, and by the mirroring strategy stated in Equation (14) to increase population diversity over the whole optimization search. These four operators introduced in the GWO formulation can be regarded as "simple modifications" because they are very easy to implement.
It should be noted that since classical GWO and JAYA formulations do not include any specific operator taking care of the exploitation phase, they may suffer from a limited exploitation capability. Conversely, SHGWJA performs exploration or exploitation based on the current trend in the optimization history, and, in particular, on the quality of the currently generated trial solution. If a trial solution does not improve the current best record, SHGWJA performs exploration to search for higher-quality trial solutions over the whole search space (JAYA strategy of Equation (12)). If a trial solution is good, SHGWJA performs exploitation to further improve the current best record (JAYA strategy of Equation (13)). These phases alternate over the whole optimization history and, hence, are dynamically balanced. Furthermore, SHGWJA continuously explores the search space to avoid stagnation of the α, β and δ wolves.
Since JAYA operators modify the trial solutions generated by GWO, the proposed algorithm is a “high-level” hybrid optimizer where both components concur to form the new design. Interestingly, SHGWJA does not require any new internal parameters with respect to classical GWO and JAYA. This feature is not very common in metaheuristic optimization because hybrid algorithms usually utilize new heuristic internal parameters to switch the search process from one component optimizer to another.
Another interesting issue is the selection of the population size NPOP. Increasing the population size in metaheuristic optimization may lead to performing too many function evaluations. This occurs because the total number of evaluations is usually determined as the product of the population size NPOP and the limit number of iterations. Using a larger population may improve the exploration capability, but the computational cost may increase significantly. However, a large population is not necessary if the optimizer can consistently generate high-quality designs that continuously improve the current best record or the currently perturbed search agents. Furthermore, grey wolves hunt in nature in groups of at most 10–20 individuals (a family pack is typically formed by 5–11 animals; however, a pack can be composed of up to 2–3 families). For this reason, all SHGWJA optimizations carried out here for engineering design problems were performed with a population of 10 individuals. Sensitivity analysis confirmed the validity of this setting also for mathematical optimization, where the population size was increased to 30 or 50, consistent with the referenced studies on other MAs.
The last issue is the computational (i.e., time) complexity of the proposed SHGWJA method. In general, this parameter is obtained by summing the complexity of the different algorithm steps, such as search agent initialization, perturbation of the optimization variables and population sorting. The computational complexity of classical GWO over Niter iterations is O(NPOP × NDV + Niter × NPOP × (NDV + logNPOP)). In each optimization iteration of SHGWJA, the following occurs: (i) the elitist strategy W(X_i,tr) ≤ 1.1·W(X_opt) introduces NPOP new operations; (ii) each JAYA-based strategy (Equations (12) and (13)) introduces NPOP × NDV new operations; (iii) the mirroring strategy of Equation (14) introduces 2NDV new operations. In summary, SHGWJA performs NPOP + (NPOP + 2) × NDV more operations per iteration than classical GWO. For example, for the largest test problem solved in this study, with NDV = 14 and NPOP = 10, SHGWJA performed in each iteration at most 10 + (12 × 14) = 178 more operations than classical GWO. In the worst-case scenario where all operators are used for all agents, the computational complexity of SHGWJA increases to O(140 + Niter × 328) from only O(140 + Niter × 150) for classical GWO. The levels of computational complexity reported in the literature for very efficient GWO/JAYA variants such as GGWO [105] and EHRJAYA [98] are O(140 + Niter × 150) and O(Niter × 160), respectively, significantly lower than for SHGWJA. However, the additional operations performed by SHGWJA in each iteration are always aimed at generating high-quality trial designs, thus allowing the present algorithm to explore/exploit the search space better than its competitors.

4. Test Problems and Implementation Details

4.1. Mathematical Optimization

The SHGWJA method developed here was preliminarily tested with two classical mathematical optimization problems: the Rosenbrock problem ("banana function") and the Rastrigin problem. The NDV-dimensional Rosenbrock problem is stated as follows:
$$\text{Minimize } W(\mathbf{X})=\sum_{j=1}^{N_{DV}-1}\left[100\left(x_{j+1}-x_j^2\right)^2+\left(1-x_j\right)^2\right]\qquad(16)$$
$$\text{Subject to } -a\le x_j\le a \qquad j=1,\ldots,N_{DV}$$
where a is the bound of optimization variables, which may range from 2 to 100 according to the selected test suite. The Rosenbrock problem is classified in mathematical optimization as a nonlinear unimodal problem; the global minimum of W is 0 for all optimization variables equal to 1. Unimodal optimization problems give information on the exploitation capability of metaheuristic algorithms.
The NDV-dimensional Rastrigin problem is stated as follows:
$$\text{Minimize } W(\mathbf{X})=\sum_{j=1}^{N_{DV}}\left[x_j^2-10\cos\left(2\pi x_j\right)+10\right]\qquad(17)$$
$$\text{Subject to } -5.12\le x_j\le 5.12 \qquad j=1,\ldots,N_{DV}$$
The Rastrigin problem is a multimodal problem with the global minimum W = 0 for all optimization variables equal to 0. Multimodal optimization problems give information on the exploration capability of metaheuristic algorithms because the number of local optima exponentially increases with NDV, although convergence to the global minimum may become faster.
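For reference, both benchmark functions can be coded in a few lines. The sketch below (in Python, used here only for illustration since the actual SHGWJA code was written in MATLAB) reproduces Equations (16) and (17) and verifies the known global minima:

```python
import numpy as np

def rosenbrock(x):
    # Eq. (16): unimodal banana function, global minimum W = 0 at x_j = 1.
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def rastrigin(x):
    # Eq. (17): multimodal function, global minimum W = 0 at x_j = 0.
    x = np.asarray(x, dtype=float)
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

print(rosenbrock(np.ones(30)))    # 0.0
print(rastrigin(np.zeros(30)))    # 0.0
```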
The most common settings used in the literature for these mathematical optimization problems are NPOP = 30 and NDV = 30, or NPOP = 50 and NDV = 50. However, in order to cover the different studies presented in the literature, the Rosenbrock and Rastrigin functions were minimized in this research for the following combinations of population size NPOP and number of variables NDV:
(i) NPOP = 10 and NDV = 10, 30, 50, 100;
(ii) NPOP = 30 and NDV = 30, 100, 500, 1000;
(iii) NPOP = 50 and NDV = 30, 50, 100.
SHGWJA was extensively compared with the best-performing MAs applied to these test problems, such as the following: (i–ii) improved GWO [39] and JAYA [95,110,111,113] variants that also include chaotic perturbation of the design variables and hybridization with sequential quadratic programming; (iii–iv) equilibrium optimizer (EO) [30] and its improved version (mEO) [92]; (v) light spectrum optimizer (LSO) [31]; (vi) Runge–Kutta optimizer (RUN) [42]; (vii–viii) the arithmetic optimization algorithm (AOA) [43] and its improved version IAOA that forces the optimizer to switch back to exploration if the design does not improve over a certain number of iterations [96]; (ix) the mother optimization algorithm (MOA) [51]; (x) the preschool education optimization algorithm (PEOA) [52]; (xi) the learning cooking algorithm (LCA) [53]; (xii) the snow leopard optimization algorithm (SLOA) [70]; (xiii) the giant trevally optimizer (GTO) [81]; (xiv–xvi) the basic marine predators algorithm (MPA) [80], its improved version by self-adaptive weight parameter tuning and a dynamic social mechanism (IMPA) [101], and a hybrid algorithm combining MPA and the gorilla troops optimizer (EGTO) [102]; (xvii) the hybridized differential evolution naked mole-rat algorithm (SaDN) [94]; (xviii) the modified sine cosine algorithm (MSCA) [99]; (xix) hybrid slime mould simulated annealing (hSM-SA) [100]; (xx–xxi) particle swarm optimization variants [114] based on the global best (GPSO) or on the presence of an aging leader with challengers (ALC-PSO), and the NDWPSO hybrid algorithm [115] combining PSO, differential evolution and whale optimization (three of the most cited MAs); (xxii–xxv) high-performance optimizers such as the covariance matrix adaptation evolution strategy (CMA-ES) [116] and IEEE CEC competition winners such as success history-based adaptive differential evolution with linear population size reduction (LSHADE) [117], its variant including covariance matrix learning with Euclidean neighborhood (LSHADE-cnEpSin) [118] and the hybrid LSHADE-SPACMA algorithm [119] combining LSHADE and CMA-ES; and (xxvi–xxviii) the hybrid harmony search (hybrid HS), hybrid big bang–big crunch (hybrid BBBC) and hybrid fast simulated annealing (HFSA) algorithms developed in [93].

4.2. Concrete Gravity Dam Shape Optimization

The first design problem regarded civil engineering: the shape optimization of a concrete gravity dam. The dam cross-section to be optimized is shown in Figure 3. The concrete volume Vdam per unit width was minimized by varying the nine shape variables (x1,x2,x3,x4,x5,x6,x7,x8,x9) in Figure 3. The dam height is Hdam = 150 m, the normal-state water elevation upstream of the dam is 145 m, and the sediment height is 7 m.
This optimization problem included eight constraints (six of which are nonlinear) and is stated as follows:
$$\begin{aligned}
&\text{Minimize } W(\mathbf{X})=H_{dam}X_1+\tfrac{1}{2}X_2X_3+X_3\left(X_4+X_6\right)+\tfrac{1}{2}X_4X_5+X_5X_6+\tfrac{1}{2}X_6X_7+\tfrac{1}{2}X_8X_9\\
&\text{Subject to }\begin{cases}X_2+X_4+X_6=H_{dam}\\ X_4+X_6-X_8\le 0\\ SFS\ge 4\\ SFO\ge 1.5\\ 0\le\sigma_U\le\sigma_{max}\\ 0\le\sigma_D\le\sigma_{max}\end{cases}
\end{aligned}\qquad(18)$$
In Equation (18), SFS and SFO, respectively, are the safety factors against dam sliding and overturning; σU, σD and σmax, respectively, are the vertical fatigue specific loads in the upstream and downstream, and their fatigue limit. Shape variables were limited as follows: 1 ≤ X1,X3 ≤ 20 m; 1 ≤ X2 ≤ 50 m; 5 ≤ X4,X5,X6,X7 ≤ 100 m; and 50 ≤ X8,X9 ≤ 150 m.
SFS = (f·∑Fv + σ·b)/∑FH, where f is the static friction coefficient, σ is the allowable shear tension of cutting surface and b = (X1 + X3 + X5 + X7 + X9) is the dam base length. The dam is subject to the vertical forces Fv corresponding to the concrete weight W, the vertical components of the water pressure Fhv and sediment pressure PSV, the uplift force generated by the dam basement Fuplift, and the earthquake force FE. Forces W, Fhv and PSV are directed downward while forces Fuplift and FE are directed upward. The horizontal forces FH acting on dam in the positive X-direction are the horizontal components of the water pressure Fhh and sediment pressure PSO. Furthermore, SFO = ∑MR/∑Mo; MR and Mo, respectively, are torques of the resisting forces (W, Fhv, PSV) and driving forces (Fuplift, Fhh, PSO) acting on dam. The vertical fatigue loads σU and σD are defined as (∑Fv/b − 6∑Mo/b2) and (∑Fv/b + 6∑Mo/b2), respectively.
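For illustration only, the stability checks entering Equation (18) can be sketched as follows; every numerical value below (friction coefficient, allowable shear tension, base length, force and torque resultants) is a hypothetical placeholder rather than data from this study:

```python
# Minimal sketch of the dam stability checks of Eq. (18); all inputs are
# hypothetical placeholders, not values taken from the paper.
f_friction  = 0.7        # static friction coefficient f (assumed)
sigma_shear = 0.1        # allowable shear tension of the cutting surface (assumed)
b = 120.0                # dam base length X1+X3+X5+X7+X9 (assumed, m)

sum_Fv = 250.0e3         # net vertical force (W + Fhv + PSV - Fuplift - FE)
sum_FH = 105.0e3         # net horizontal force (Fhh + PSO)
sum_MR = 18.0e6          # resisting torque of W, Fhv, PSV
sum_Mo = 9.5e6           # driving torque of Fuplift, Fhh, PSO

SFS = (f_friction * sum_Fv + sigma_shear * b) / sum_FH   # sliding safety factor
SFO = sum_MR / sum_Mo                                    # overturning safety factor
sigma_U = sum_Fv / b - 6.0 * sum_Mo / b**2               # upstream fatigue load
sigma_D = sum_Fv / b + 6.0 * sum_Mo / b**2               # downstream fatigue load

print(f"SFS = {SFS:.2f} (>= 4?), SFO = {SFO:.2f} (>= 1.5?)")
print(f"sigma_U = {sigma_U:.1f}, sigma_D = {sigma_D:.1f} (within [0, sigma_max]?)")
```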
More details on this test problem are in [93,120,121]. The SHGWJA algorithm developed in this research was compared with the best optimizers quoted in the literature for this design example: the hybrid harmony search (hybrid HS), hybrid big bang–big crunch (hybrid BBBC) and hybrid fast simulated annealing (HFSA) algorithms developed in [93]. Such a comparison was very indicative because the hybrid optimizers of Ref. [93] combined metaheuristic search with approximate line search and gradient information while SHGWJA utilized only classical GWO and JA operators augmented by elitist strategies. The classical grey wolf optimizer (GWO), classical and improved JAYA formulations [93], multi-level cross entropy optimizer (MCEO) [120], flying squirrel optimizer (FSO) [121], modified harmony search optimization with adaptive parameter updating (mAHS, derived from [122]), modified big bang–big crunch with an upper bound strategy (mBBBC–UBS, derived from [123]) and modified sinusoidal differential evolution (MsinDE, derived from [124]) were also compared with SHGWJA. In particular, the original formulations of algorithms in Refs. [122,123,124] were enhanced by including the elitist strategy W( X i , t r ) ≤ 1.1W( X o p t ) implemented in SHGWJA.

4.3. Optimal Design of Tension/Compression Spring

The second design problem solved in this study regarded the weight minimization of the tension/compression spring shown in Figure 4. This optimization problem, taken from the field of mechanical design, included three design variables: wire diameter d (x1), mean coil diameter D (x2) and number of active coils N (x3). The spring must be designed under four constraints (three of which are nonlinear) on minimum deflection, shear stress and surge frequency, plus a geometric limit on the outside diameter.
The optimization problem was formulated as follows:
$$\begin{aligned}
&\text{Minimize } W\left(x_1,x_2,x_3\right)=\left(x_3+2\right)x_2x_1^2\\
&\text{Subject to }\begin{cases}g_1(\bar{X})=1-\dfrac{x_2^3x_3}{71785\,x_1^4}\le 0\\[1mm] g_2(\bar{X})=\dfrac{4x_2^2-x_1x_2}{12566\left(x_1^3x_2-x_1^4\right)}+\dfrac{1}{5108\,x_1^2}-1\le 0\\[1mm] g_3(\bar{X})=1-\dfrac{140.45\,x_1}{x_2^2x_3}\le 0\\[1mm] g_4(\bar{X})=\dfrac{x_1+x_2}{1.5}-1\le 0\end{cases}
\end{aligned}\qquad(19)$$
Design variables could vary as follows: 0.05 ≤ x1 ≤ 2 in; 0.25 ≤ x2 ≤ 1.3 in; 2 ≤ x3 ≤ 15. More details on this example of mechanical engineering design optimization can be found in [30,31,43,51,52,53,69,74,80,94,98,100,102,105,111,112,125,126,127,128]. This test problem was included in the mechanical engineering section of the CEC 2020 test suite on real-world constrained problems from various fields [74,126].
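A compact evaluation of Equation (19) is sketched below; the design vector used in the usage example is close to the optimum commonly quoted in the literature for this benchmark (best cost of about 0.012665) and is included only to check feasibility:

```python
import numpy as np

def spring_cost_and_constraints(x):
    # Eq. (19): spring weight and the four constraints g1..g4 (g <= 0 is feasible).
    x1, x2, x3 = x
    W = (x3 + 2.0) * x2 * x1**2
    g = np.array([
        1.0 - (x2**3 * x3) / (71785.0 * x1**4),
        (4.0*x2**2 - x1*x2) / (12566.0*(x1**3*x2 - x1**4))
            + 1.0 / (5108.0 * x1**2) - 1.0,
        1.0 - 140.45 * x1 / (x2**2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ])
    return W, g

W, g = spring_cost_and_constraints([0.0517, 0.3567, 11.289])
print(f"W = {W:.6f}, max constraint = {max(g):+.4e}")
```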
In this study, the proposed SHGWJA method was compared in detail with the following MAs: (i) the gaze cues learning-based grey wolf optimizer (GGWO) [105], currently the most advanced GWO variant; (ii) the astrophysics-based GWO (MGWO-III) [39] (another powerful GWO variant not directly compared with GGWO in [105]); (iii) JAYA variants such as EHRJAYA [98] (a hybrid optimizer combining the JAYA and Rao-1 algorithms, reported to be the best amongst 34 MAs), SAMP-JAYA [111] (which utilizes an adaptively variable number of sub-populations) and EJAYA [112] (which utilizes larger sets of solutions than the best/worst individuals of the population to form the new trial solutions); (iv) the equilibrium optimizer (EO) [30]; (v) the light spectrum optimizer (LSO) [31] (reported to be the best amongst 32 MAs); (vi–vii) the arithmetic optimization algorithm (AOA) [43] and its improved version IAOA, which forces the optimizer to switch back to exploration if the design does not improve over a certain number of iterations [96]; (viii) the mother optimization algorithm (MOA) [51]; (ix) the preschool education optimization algorithm (PEOA) [52]; (x) the learning cooking algorithm (LCA) [53]; (xi) the snake optimizer (SO) [69]; (xii) the starling murmuration optimizer (SMO) [74]; (xiii–xiv) the basic marine predators algorithm (MPA) [80] and the hybrid EGTO algorithm combining the marine predators and gorilla troops optimizers [102]; (xv) the hybridized differential evolution naked mole-rat algorithm (SaDN) [94]; (xvi) hybrid slime mould simulated annealing (hSM-SA) [100]; (xvii) improved multi-operator differential evolution (IMODE) [125]; (xviii) success history-based adaptive differential evolution with gradient-based repair (En(L)SHADE) [126]; and (xix) the nine algorithms analyzed in Ref. [127], which were run with a computational budget of 18,000 function evaluations. These comparisons should be considered very indicative because SHGWJA's competitors are other GWO and JAYA formulations, other state-of-the-art metaheuristic methods (EO, AOA, MOA, PEOA, LCA, SO, SMO, SaDN, hSM-SA, MPA and EGTO, as well as the MAs analyzed in Ref. [127]), and high-performance optimizers (IMODE and En(L)SHADE) inherently suited for nonlinear optimization.

4.4. Optimal Design of Welded Beam

The third design problem solved in this study regarded the minimization of the fabrication cost of the welded beam shown in Figure 5. The optimization problem included four design variables: weld thickness (x1), length of attached part of bar (x2), bar height (x3) and bar thickness (x4). The weld must be designed under seven constraints on shear stress, the bending stress of the beam, the end deflection of the beam, the buckling load of the bar, geometric constraints on weld thickness and the weld/beam thickness aspect ratio, and the weld fabrication cost, which must not exceed five.
The optimization problem is formulated as follows:
$$\begin{aligned}
&\text{Minimize } W\left(x_1,x_2,x_3,x_4\right)=1.10471\,x_1^2x_2+0.04811\,x_3x_4\left(x_2+14\right)\\
&\text{Subject to }\begin{cases}g_1(\bar{X})=\tau(\bar{X})-\tau_{max}\le 0\\ g_2(\bar{X})=\sigma(\bar{X})-\sigma_{max}\le 0\\ g_3(\bar{X})=\delta(\bar{X})-\delta_{max}\le 0\\ g_4(\bar{X})=P-P_c(\bar{X})\le 0\\ g_5(\bar{X})=x_1-x_4\le 0\\ g_6(\bar{X})=0.125-x_1\le 0\\ g_7(\bar{X})=1.10471\,x_1^2x_2+0.04811\,x_3x_4\left(x_2+14\right)-5.0\le 0\end{cases}
\end{aligned}\qquad(20)$$
where τ( X ¯ ) is the bar shear stress, which must not exceed the limit τmax = 13,600 psi; σ( X ¯ ) is the bar bending stress, which must not exceed the limit σmax = 30,000 psi; δ( X ¯ ) is the maximum deflection of the bar tip, which should not exceed the limit δmax = 0.25 in; P = 6000 lb is the applied load, which must be lower than the buckling load Pc( X ¯ ) of the beam modeled as a narrow rectangular bar.
Design variables could vary as follows: 0.1 ≤ x1,x4 ≤ 2 in; 0.1 ≤ x2,x3 ≤ 10 in. More details on this example of mechanical engineering design optimization are given in [30,31,43,51,52,53,69,74,80,94,100,102,105,111,112,115,121,125,127,129,130,131,132]. This test problem was included in the mechanical engineering section of the CEC 2020 test suite on real-world constrained problems from various fields [74,126,129,130,131]. Several variants of this problem have been proposed, thus leading to different target optimal solutions. In the classical formulation (variant 1), the moment of inertia of the structure J, the end deflection of the beam δ(X̄) and the buckling load Pc(X̄) are expressed, respectively, as $J=2\sqrt{2}\,x_1x_2\left[\frac{x_2^2}{12}+\left(\frac{x_1+x_3}{2}\right)^2\right]$, $\delta(\bar{X})=\frac{6PL^3}{Ex_3^2x_4}$ and $P_c(\bar{X})=\frac{4.013E\sqrt{x_3^2x_4^6/36}}{L^2}\left(1-\frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)$; the corresponding target optimum is 1.72485. In another formulation (variant 2), the moment of inertia is changed to $J=2\sqrt{2}\,x_1x_2\left[\frac{x_2^2}{4}+\left(\frac{x_1+x_3}{2}\right)^2\right]$, with a corresponding target optimum of 1.695. Finally, the CEC 2020 library (variant 3) reported the expressions $J=2\sqrt{2}\,x_1x_2\left[\frac{x_2^2}{4}+\left(\frac{x_1+x_3}{2}\right)^2\right]$, $\delta(\bar{X})=\frac{4PL^3}{Ex_3^2x_4}$ and $P_c(\bar{X})=\frac{4.013E\sqrt{x_3^2x_4^6/30}}{L^2}\left(1-\frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)$, with a corresponding target optimum of 1.670.
In this study, the proposed SHGWJA algorithm was compared with GGWO [105], MGWO-III [39], EHRJAYA [98] (reported to be the best amongst 32 MAs), SAMP-JAYA [111], EJAYA [112], EO [30], LSO [31] (reported to be the best amongst 27 MAs), AOA [43], MOA [51], PEOA [52], LCA [53], SO [69], SMO [74], MPA [80], SaDN [94], hSM-SA [100], EGTO [102], NDWPSO [115] (combining PSO, differential evolution and whale optimization, three of the most cited MAs), the high-performance hybrid LSHADE-SPACMA algorithm, which combines success history-based adaptive differential evolution with linear population size reduction and the covariance matrix adaptation evolution strategy [119,129], FSO [121], IMODE [125], En(L)SHADE [126], self-adaptive spherical search (SASS, another very efficient algorithm for CEC 2020 real-world problems) [130,131], and the Taguchi method integrated harmony search algorithm (TIHHSA, which can optimize the selection of internal parameters based on statistical analysis) [132]. SHGWJA was also compared with the nine MAs analyzed in Ref. [127], which operate on a computational budget of 18,000 function evaluations. Comparisons with the optimization literature were indicative also for this test problem given the variety of competitors, which included inherently robust optimization algorithms such as TIHHSA.

4.5. Optimal Design of Pressure Vessel

The fourth design problem solved in this study regarded the minimization of the forming, material and welding costs of the pressure vessel shown in Figure 6a. The vessel must be designed according to the ASME boiler code requirements. The optimization problem included four design variables schematized in Figure 6b: the thickness of cylindrical segment Ts (x1), the thickness of spherical caps Th (x2), the radius of curvature of spherical caps R (x3) and the length of cylindrical segment L (x4).
The optimization problem included four constraints and was formulated as follows:
$$\begin{aligned}
&\text{Minimize } W\left(x_1,x_2,x_3,x_4\right)=0.6224\,x_1x_3x_4+1.7781\,x_2x_3^2+3.1661\,x_1^2x_4+19.84\,x_1^2x_3\\
&\text{Subject to }\begin{cases}g_1(\bar{X})=-x_1+0.0193\,x_3\le 0\\ g_2(\bar{X})=-x_2+0.00954\,x_3\le 0\\ g_3(\bar{X})=-\pi x_3^2x_4-\tfrac{4}{3}\pi x_3^3+1296000\le 0\\ g_4(\bar{X})=x_4-240\le 0\end{cases}
\end{aligned}\qquad(21)$$
In the continuous version of this design problem, optimization variables can vary as follows: 0 ≤ x1,x2 ≤ 100 in; 10 ≤ x3,x4 ≤ 200 in. In the mixed variable version of the problem, the thickness variables x1,x2 must be multiple integers of 0.0625 in, which is the available thickness of rolled steel plates. More details on this mechanical design example are in [30,31,43,51,52,53,69,74,80,81,92,94,100,102,105,111,112,115,121,125,127,129,132]. This test problem was included in the mechanical engineering section of the CEC 2020 test suite on real-world constrained problems from various fields [74,126,129,130,131].
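The cost and constraint evaluation of Equation (21) can be sketched as follows; the design used in the usage example is the mixed-variable optimum commonly quoted in the literature (cost of about 6059.71), and the thickness-snapping helper is a hypothetical utility for the mixed-variable version of the problem:

```python
import numpy as np

def vessel_cost_and_constraints(x):
    # Eq. (21): total cost and the four constraints g1..g4 (g <= 0 is feasible).
    x1, x2, x3, x4 = x
    W = (0.6224*x1*x3*x4 + 1.7781*x2*x3**2
         + 3.1661*x1**2*x4 + 19.84*x1**2*x3)
    g = np.array([
        -x1 + 0.0193*x3,
        -x2 + 0.00954*x3,
        -np.pi*x3**2*x4 - (4.0/3.0)*np.pi*x3**3 + 1296000.0,
        x4 - 240.0,
    ])
    return W, g

def snap_thickness(t, step=0.0625):
    # Mixed-variable version: thicknesses must be multiples of 0.0625 in.
    return step * round(t / step)

W, g = vessel_cost_and_constraints([0.8125, 0.4375, 42.0984, 176.6366])
print(f"W = {W:.2f}, max constraint = {max(g):+.4e}")
print(snap_thickness(0.79))   # 0.8125
```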
In this study, SHGWJA was compared with other metaheuristic optimizers like GGWO [105], MGWO-III [39], EHRJAYA [98] (reported to be the best amongst 33 MAs), SAMP-JAYA [111], EJAYA [112], EO [30] and mEO [92], LSO [31] (reported to be the best amongst 31 MAs), AOA [43] and its improved version IAOA [96], MOA [51], PEOA [52], LCA [53], SO [69], SMO [74], MPA [80], GTO [81], SaDN [94], hSM-SA [100], EGTO [102], NDWPSO [115], FSO [121], IMODE [125], En(L)SHADE [126], SASS [130,131], and TIHHSA [132], as well as the nine algorithms analyzed in Ref. [127] (which used a computational budget of 25,000 function evaluations), following the same rationale outlined in Section 4.3 and Section 4.4 for the spring and welded beam design problems.

4.6. Optimal Design of Industrial Refrigeration System

The fifth design problem regarded the optimal design of an industrial refrigeration system. This example, originally developed by Paul and Tay [133], became a part of the CEC-2020 benchmark test suite of real-world mechanical engineering problems [74,126,130,131,134,135]. The objective of the optimization is to minimize the fabrication and operation costs of the refrigeration system with constraints derived from the geometry and heat transfer characteristics. The mathematical formulation of this test problem is very complex because it presents 14 optimization variables, a highly nonlinear cost function, and 15 highly nonlinear inequality constraints. The optimization variables are the following: shell diameter of the evaporator Dse (x1), shell diameter of the condenser Dsc (x2), shell thickness of the evaporator Tse (x3), shell thickness of the condenser Tsc (x4), tubesheet thickness for the evaporator tse (x5), tubesheet thickness for the condenser tsc (x6), fluid velocity in the evaporator Ve (x7), fluid velocity in the condenser Vc (x8), number of tube passes for the evaporator Pe (x9), number of tube passes for the condenser Pc (x10), tube length for the evaporator Le (x11), tube length for the condenser Lc (x12), condenser effectiveness ε (x13), and inlet/outlet fluid temperature difference ΔTc (x14).
The optimization problem is stated as follows:
$$\begin{aligned}
\text{Minimize } W(\mathbf{X})=\;&63098.88\,x_2x_4x_{12}+5441.5\,x_2^2x_{12}+115055.5\,x_2^{1.664}x_6+6172.27\,x_2^2x_6\\
&+63098.88\,x_1x_3x_{11}+5441.5\,x_1^2x_{11}+115055.5\,x_1^{1.664}x_5+6172.27\,x_1^2x_5\\
&+140.53\,x_1x_{11}+281.29\,x_1x_3+70.26\,x_1^2+281.29\,x_3x_{11}+281.29\,x_3^2\\
&+14437\,x_1^2x_7x_8^{1.8812}x_9^{-1}x_{10}x_{12}^{0.3424}x_{14}^{-1}+20470.2\,x_1^2x_7^{2.893}x_{11}^{0.316}
\end{aligned}$$
$$\begin{aligned}
\text{Subject to: }\;&g_1(\bar{X})=1.524\,x_7^{-1}-1\le 0\\
&g_2(\bar{X})=1.524\,x_8^{-1}-1\le 0\\
&g_3(\bar{X})=0.07789\,x_1^2x_7^{-1}x_9-1\le 0\\
&g_4(\bar{X})=7.05305\,x_1^2x_2^{-1}x_8^{-1}x_9^{-1}x_{10}x_{14}^{-1}-1\le 0\\
&g_5(\bar{X})=0.0833\,x_{13}^{-1}x_{14}-1\le 0\\
&g_6(\bar{X})=47.136\,x_2^{0.333}x_{10}^{-1}x_{12}-1.333\,x_8x_{13}^{2.1195}+62.08\,x_8^{0.2}x_{10}^{-1}x_{12}^{-1}x_{13}^{2.1195}-1\le 0\\
&g_7(\bar{X})=0.04771\,x_8^{1.8812}x_{10}x_{12}^{0.3424}-1\le 0\\
&g_8(\bar{X})=0.0488\,x_7^{1.893}x_9x_{11}^{0.316}-1\le 0\\
&g_9(\bar{X})=0.0099\,x_1x_3^{-1}-1\le 0\\
&g_{10}(\bar{X})=0.0193\,x_2x_4^{-1}-1\le 0\\
&g_{11}(\bar{X})=0.0298\,x_1x_5^{-1}-1\le 0\\
&g_{12}(\bar{X})=0.056\,x_2x_6^{-1}-1\le 0\\
&g_{13}(\bar{X})=2\,x_9^{-1}-1\le 0\\
&g_{14}(\bar{X})=2\,x_{10}^{-1}-1\le 0\\
&g_{15}(\bar{X})=x_{12}x_{11}^{-1}-1\le 0
\end{aligned}\qquad(22)$$
All design variables vary between 0.001 and 5. The target optimum of this problem is located at P_OPT = [0.001; 0.001; 0.001; 0.001; 0.001; 0.001; 1.524; 1.524; 5; 2; 0.001; 0.001; 0.0072934; 0.087556], with the global optimum cost of 0.032213 (see, for example, Ref. [134]).
The proposed SHGWJA algorithm was compared with the hybrid HS, hybrid BBBC and hybrid HSA algorithms developed in [93]; the SMO algorithm [74]; classical GWO; classical and improved JAYA formulations [93]; mAHS (derived from [122]); mBBBC–UBS (derived from [123]); MsinDE (derived from [124]); success history-based adaptive differential evolution with gradient-based repair (En(L)SHADE) [126]; self-adaptive spherical search (SASS) [130,131]; the hybrid algorithm SDDS-SABC combining Split–Detect–Discard–Shrink (which uses partitions to identify promising regions of the search space) and Sophisticated Artificial Bee Colony (with modifications in the initialization process, employed and scout bees searching strategy) [131]; and cuckoo search (CS) variants [134], following the same rationale outlined in Section 4.2, Section 4.3, Section 4.4, Section 4.5.

4.7. Two-Dimensional Path Planning

The sixth engineering problem solved in this study is a 2D path planning problem taken from the field of robotics. The goal was to search for the shortest path 𝒞 between the start point A and the end point B, trying not to collide with seven obstacles located between A and B. Figure 7 shows the design space of this test problem (all quantities are expressed in mm). The start/end points to be connected are A(1,1) and B(29,29), while the seven obstacles O1(6,6), O2(7,24), O3(9,15), O4(16,23), O5(20,11), O6(23,6) and O7(24,23) bound circular areas of radii 3, 1, 3, 2, 2, 1 and 2 mm, respectively.
Because of the presence of obstacles between points A and B, the 𝒞 trajectory is no longer a straight line and becomes a complex non-smooth curve. The 2D path planning problem for a trajectory with NCP control points (xcr,ycr) connected by (NCP − 1) segments is generally formulated as follows:
$$\text{Minimize } W(\mathbf{X})=\sum_{r=1}^{N_{CP}-1}\sqrt{\left(x_{c_{r+1}}-x_{c_r}\right)^2+\left(y_{c_{r+1}}-y_{c_r}\right)^2}+Penalty\qquad(23)$$
where the cost function W represents the length of the 𝒞 trajectory to be minimized. The first and last control points, respectively, are the start point A and the end point B. The “Penalty” term is 0 only if 𝒞 does not collide with the NOB obstacles.
The 𝒞 trajectory passes through NBP base points whose coordinates (xb,yb) are included as optimization variables. The trajectory is fitted by cubic splines through the base points and the start/end points A and B. As the base points are updated by the optimizer, the splines are sampled at NCP points to describe the path trajectory. If an obstacle is defined by a circular region centered at (xp,yp) with radius ρp, nonlinear optimization constraints are defined by the non-collision conditions:
$$\text{Subject to }\begin{cases}G(\mathbf{X})=\dfrac{\rho_p}{\sqrt{\left(x_{c_r}-x_p\right)^2+\left(y_{c_r}-y_p\right)^2}}-1<0\\[2mm] G(\mathbf{X})=\dfrac{\rho_p}{\sqrt{\left(x_{b_i}-x_p\right)^2+\left(y_{b_i}-y_p\right)^2}}-1<0\end{cases}\quad\begin{array}{l}p=1,\ldots,N_{OB}\\ r=1,\ldots,N_{CP}\\ i=1,\ldots,N_{BP}\end{array}\qquad(24)$$
Equation (24) states that the distance between any control point (xcr,ycr) or any base point (xbi,ybi) of the trajectory and the center (xp,yp) of an obstacle must be larger than the radius ρp of the circular region bounded by the obstacle. This avoids any intersection between the trajectory and the obstacle.
In general, the optimization problem stated by Equations (23) and (24) includes 2·NBP design variables and (NCP + NBP) × NOB nonlinear constraints. More details on the formulation of path planning problems in robotics can be found in Refs. [136,137,138].
In the present study, there were NBP = 3 base points, NOB = 7 obstacles and the trajectory was sampled by NCP = 100 points. The coordinates of the three base points defined six optimization variables, that could vary between 1 and 29 mm. This test problem was certainly challenging as it included a highly nonlinear cost function, 721 (i.e., (100 + 3) × 7) nonlinear constraints and fewer base points than obstacles. The “Penalty” term in Equation (23) was defined as the sum of the squares of violations of constraint inequalities (24) multiplied by 1000 in order to amplify the effect of collisions.
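A minimal sketch of the resulting penalized cost function is reported below; the spline parameterization (a uniform abscissa over the base points) and the example base-point coordinates are assumptions made only for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Obstacle data from Section 4.7: (x_p, y_p, rho_p) in mm.
OBST = np.array([[6, 6, 3], [7, 24, 1], [9, 15, 3], [16, 23, 2],
                 [20, 11, 2], [23, 6, 1], [24, 23, 2]], dtype=float)
A, B = np.array([1.0, 1.0]), np.array([29.0, 29.0])

def path_cost(base_pts, n_cp=100, pen_factor=1000.0):
    # Fit cubic splines through A, the NBP base points and B, using a
    # uniform parameterization (an assumption made for this sketch).
    pts = np.vstack([A, base_pts.reshape(-1, 2), B])
    t = np.linspace(0.0, 1.0, len(pts))
    ts = np.linspace(0.0, 1.0, n_cp)
    path = np.column_stack([CubicSpline(t, pts[:, 0])(ts),
                            CubicSpline(t, pts[:, 1])(ts)])
    # Eq. (23): trajectory length over the (n_cp - 1) segments.
    length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    # Penalty: squared violations of Eq. (24) at control and base points, x1000.
    penalty = 0.0
    for xp, yp, rho in OBST:
        for q in np.vstack([path, pts]):
            gap = rho / np.hypot(q[0] - xp, q[1] - yp) - 1.0
            if gap > 0.0:
                penalty += pen_factor * gap**2
    return length + penalty

# Example with three (hypothetical) base points, i.e., six design variables.
print(path_cost(np.array([8.0, 12.0, 15.0, 18.0, 24.0, 24.0])))
```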
The proposed SHGWJA algorithm was compared with the hybrid HS, hybrid BBBC and hybrid SA algorithms developed in [93], classical GWO, classical and improved JAYA, mAHS (derived from [122]), mBBBC–UBS (derived from [123]), MsinDE (derived from [124]) and dynamic differential annealed optimization (DDAO) [134]. The original computational budget selected in [136] for this problem was 10,000 function evaluations. The comparison with the above-mentioned algorithms is very indicative because all methods tested in Ref. [93] converged to their optimal solutions on average within only 6000 function evaluations.

4.8. Mechanical Characterization of a Flat Composite Panel Under Axial Compression

The seventh engineering problem solved in this study regarded the identification of elastic properties and ply orientations of a flat IM7/977-2 graphite–epoxy composite panel (43 cm long, 16.5 cm wide and 3 mm thick) used in aircraft structures. The panel is unstiffened and subject to axial compression that may cause buckling. The panel manufacturer specified both panel layup —1/3 of the layers oriented at 0° (longitudinal direction L), 1/3 at 90° (transverse direction T) and 1/3 at ±45°—and target values of the elastic constants—EL = 148,300 MPa, ET = 7450 MPa, GLT = 4140 MPa and νLT = 0.01510.
The inverse problem of identifying mechanical and stiffness properties of the axially compressed flat composite panel was formulated as an optimization problem including seven unknown material/orientation parameters as design variables: the four elastic constants EL, ET, GLT and νLT and the three ply orientations θo, θ90 and θ45 of the laminate (corresponding, respectively, to nominal ply angles 0°, 90° and ±45°). The optimization process entailed by this identification problem minimized the error functional Ω corresponding to the difference between the panel’s fundamental buckling mode shape measured experimentally and its counterpart computed by a finite element model simulating the experimental measurements. The buckling mode shape was normalized with respect to the maximum out-of-plane displacement wmax at the buckling onset. Hence, the target quantity of the optimization process was the normalized out-of-plane displacement wnorm defined as w/wmax. The optimization problem is stated as follows:
$$\begin{aligned}
&\text{Minimize }\Omega\left(E_L,E_T,G_{LT},\nu_{LT},\theta_0,\theta_{90},\theta_{45}\right)=\sum_{p=1}^{N_{CNT}}\left(\frac{w_{norm,FEM}^{\,p}-w_{norm,EXP}^{\,p}}{w_{norm,EXP}^{\,p}}\right)^2\\
&\text{Subject to }\begin{cases}E_L-E_T>0\\ 1-\nu_{LT}\left(E_T/E_L\right)>0\end{cases}
\end{aligned}\qquad(25)$$
In Equation (25), wnorm,EXPp and wnorm,FEMp, respectively, denote the experimentally measured and computed normalized out-of-plane displacements of the panel at the selected p-th control point. The error functional Ω was evaluated at NCNT control points. The inverse problem stated by Equation (25) is highly nonlinear because its cost function results from the combination of experimental measurements, numerical simulations and optimization. The optimizer must be able to deal with experimental uncertainties as well as modeling assumptions made in the finite element simulations. In this regard, the number of control points considered here was 170 vs. only 86 in Ref. [3] to increase the level of noise in the optimization process.
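The error functional of Equation (25) reduces to a few lines of code once the measured and computed mode shapes are available at the control points; in the sketch below both data sets are synthetic (an Euler-type half-wave with 1% artificial model mismatch), since the real moiré measurements obviously cannot be reproduced here:

```python
import numpy as np

def error_functional(w_fem, w_exp):
    # Eq. (25): relative discrepancy between computed and measured
    # normalized out-of-plane displacements at the NCNT control points.
    w_fem = np.asarray(w_fem, dtype=float)
    w_exp = np.asarray(w_exp, dtype=float)
    return np.sum(((w_fem - w_exp) / w_exp)**2)

# Synthetic illustration with 170 control points (values are made up):
rng = np.random.default_rng(0)
w_exp = np.sin(np.linspace(0.1, np.pi - 0.1, 170))        # Euler-like half-wave
w_fem = w_exp * (1.0 + 0.01 * rng.standard_normal(170))   # 1% model mismatch
print(f"Omega = {error_functional(w_fem, w_exp):.4e}")
```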
Figure 8a shows the white light double illumination projection moiré setup used in [3] for measuring the buckling mode shape of the panel. The panel was axially compressed by end-shortening its top edge while the bottom edge was fixed. The end-shortening was progressively increased up to 1.5 mm by moving the testing machine cross-bar downward. The testing machine, the GRlo grip applying the axial load to the panel’s top edge and the GRkc grip applying the kinematic constraints to the panel’s bottom edge, and the load cell LC measuring the load transferred to the panel are also shown in the figure. The moiré setup included two slide projectors (SPR 1 and SPR 2) and one standard digital camera (DC) mounted on a tripod. The optical axis of the camera is orthogonal to the panel surface. The projectors are symmetrically placed about the camera’s optical axis. Each projector projects a system of lines and the corresponding light wave fronts interfere to form an equivalent grating that travels in the space. This grating is projected onto the specimen surface and then modulated as the panel deforms under the applied load. By comparing the pattern of lines projected onto the deformed surface with the one projected onto the undeformed surface (reference configuration), it is possible to recover the out-of-plane displacement field of the panel. More details on the projection moiré technique and the experimental setup of Figure 8a are given in [3,139,140,141]. The phase distribution of the moiré pattern corresponding to the onset of buckling is shown in Figure 8b. The recorded images were processed with the HoloStrain Version 2.0 software [142].
The moiré measurements were simulated by the finite element model of Figure 8c. The FEM model was implemented in ANSYS® using 2465 eight-node shell elements. Convergence analysis was carried out to obtain mesh independent solutions preserving the correspondence between control nodes and image pixels. The panel’s critical load and corresponding buckled shape were determined with linear buckling analysis.
The error functional Ω in Equation (25) was defined by comparing moiré data and w-displacements computed by ANSYS at 170 control nodes located on the control paths AB, CD, EF and GH sketched in Figure 8c. The vertical path AB was selected in [3] because the observed panel's buckling mode is a typical Euler mode with one half-wave and maximum out-of-plane displacement at the center of the panel. Here, we added the horizontal control paths to increase the number of control nodes involved in the definition of the Ω functional. The following bounds were set for the unknown parameters identified in the optimization process: 10,000 ≤ ET,EL ≤ 1,000,000 MPa, 500 ≤ GLT ≤ 20,000 MPa and 0.001 ≤ νLT ≤ 0.1; −50° ≤ θo ≤ 50°, 0° ≤ θ90 ≤ 91°, 10° ≤ θ45 ≤ 70°. The selected bounds were large enough in order not to affect the results of the identification process.
The proposed SHGWJA algorithm was compared with the hybrid fast harmony search HFHS, hybrid fast big bang–big crunch HFBBBC and hybrid fast simulated annealing HFSA algorithms specifically developed in [3] for material characterization inverse problems, standard GWO and JAYA, improved JAYA [93], mAHS (derived from [122]), mBBBC–UBS (derived from [123]), and MsinDE (derived from [124]).

4.9. Implementation Details

SHGWJA was implemented as a standalone code in the MATLAB® R2024 software environment. Optimization runs were carried out on a standard PC equipped with a 3.1 GHz AMD processor and 16 GB of RAM. In order to draw statistically significant conclusions, 20 independent optimization runs were carried out for each test problem. For the two mathematical problems (i.e., the Rosenbrock and Rastrigin functions), the number of design variables, population size and limit number of optimization iterations were set according to the literature in order to complete the same number of function evaluations (see also the details on parameter settings in Section 4.1). For the seven engineering design optimization examples, the population size and limit number of iterations were set equal to 10 and 5000, respectively. In the engineering problems, should all iterations be completed and all population individuals be evaluated in each iteration, the total computational budget of each optimization run would be 50,000 function evaluations. However, the proposed algorithm was able to complete the optimizations within a much lower number of function evaluations, with no need to complete all iterations. The parameter settings of SHGWJA's competitors are those reported in the referenced literature.
The initial populations were defined with the following characteristics: (i) the population covered the whole design space, being generated with the classical Equation (9), $x_j^i=x_j^L+\rho_j^i\,(x_j^U-x_j^L)$, for the j-th variable of the i-th search agent stored in the population; (ii) the initial designs were all close to the lower bounds of the optimization variables, being generated with the equation $x_j^i=x_j^L+\rho_j^i\,(x_j^U-x_j^L)/100{,}000$; and (iii) the initial designs were all close to the upper bounds of the optimization variables, being generated with the equation $x_j^i=x_j^U-\rho_j^i\,(x_j^U-x_j^L)/100{,}000$. Some initial designs significantly violated the constraints (by up to 1000% in the trajectory problem) and the corresponding cost function values were up to two orders of magnitude larger than the target optima. This served to test the ability of SHGWJA to quickly approach the best region of the search space containing the optimum solution.
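The three initialization schemes can be sketched as follows (the function and parameter names are illustrative, not those of the actual MATLAB code):

```python
import numpy as np

def init_population(n_pop, x_lo, x_hi, mode="uniform", squeeze=1.0e5, seed=None):
    """Three initialization schemes used in the SHGWJA tests.

    mode = "uniform": Eq. (9), agents spread over the whole design space;
    mode = "lower"/"upper": agents squeezed near the variable bounds.
    """
    rng = np.random.default_rng(seed)
    rho = rng.random((n_pop, len(x_lo)))          # rho_j^i in [0, 1)
    x_lo, x_hi = np.asarray(x_lo), np.asarray(x_hi)
    if mode == "uniform":
        return x_lo + rho * (x_hi - x_lo)
    if mode == "lower":
        return x_lo + rho * (x_hi - x_lo) / squeeze
    if mode == "upper":
        return x_hi - rho * (x_hi - x_lo) / squeeze
    raise ValueError(mode)

pop = init_population(10, [0.05, 0.25, 2.0], [2.0, 1.3, 15.0], mode="lower", seed=1)
print(pop.min(axis=0), pop.max(axis=0))   # all agents hug the lower bounds
```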
The constraint handling technique of each optimization algorithm compared with SHGWJA was always specified in the selected literature for each test problem. The most common approach was to use a static penalty function strategy with a constant penalty factor over the whole optimization process. The optimization runs specifically carried out in this study for SHGWJA, standard GWO, standard JAYA, mAHS, mBBBC-UBS, and MsinDE always adopted the penalty function approach stated by Equations (10) and (11). A sensitivity analysis was carried out by varying the penalty factor p in Equation (10) from 1 to 1,000,000 as 10⁰, 10¹, 10², 10³, 10⁴, 10⁵, and 10⁶. This was carried out because setting large values for the penalty factor may amplify the effect of constraint violations and make it harder for the metaheuristic algorithm to generate feasible trial solutions. Remarkably, in all test problems, the largest deviation from the target optimum cost obtained by SHGWJA for the different penalty factor values never exceeded 0.001% of the target optimum cost.
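Since Equations (10) and (11) are not reproduced in this section, the sketch below assumes a generic static penalty of the form Wp = W + p·Σ max(0, gk)², which may differ from the exact expressions used in the paper; it only illustrates how the sensitivity sweep of the penalty factor p = 10⁰, …, 10⁶ can be organized:

```python
import numpy as np

def penalized_cost(W, g, p):
    # Hypothetical static-penalty form; the actual Eqs. (10)-(11) of the
    # paper may differ (e.g., in how violations are normalized or combined).
    viol = np.maximum(0.0, np.asarray(g))
    return W + p * np.sum(viol**2)

# Sensitivity sweep of the penalty factor p = 10^0 ... 10^6.
W, g = 1.724852, np.array([-1.0e-3, 2.5e-4])   # example cost and constraint values
for p in 10.0 ** np.arange(7):
    print(f"p = {p:9.0f} -> Wp = {penalized_cost(W, g, p):.6f}")
```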

5. Results and Discussion

5.1. Mathematical Optimization

Table 1 compares the optimization results obtained by SHGWJA and the other 31 MAs for the Rosenbrock problem. For each selected combination (NDV;NPOP), the table lists (when available) the best, average and worst optimized values, the corresponding standard deviation on the optimized cost (always indicated as ST Dev in all result tables), and the required function evaluations (NFE) for all optimizers. The limit number of function evaluations for SHGWJA was set equal to 15,000, which is the smallest number of function evaluations indicated in the literature for practically all SHGWJA’s competitors.
It can be seen from the table that SHGWJA always converged to the lowest values of the cost function, ranging from 1.000·10⁻¹⁵ for the parameter settings (NDV = 30; NPOP = 30) and (NDV = 30; NPOP = 50) commonly used in the literature to 5.179·10⁻¹³ for (NDV = 10; NPOP = 10). Remarkably, SHGWJA's performance was practically insensitive to the selected combination of problem dimensionality and population size. In particular, the present algorithm always converged to a best solution very close to the target optimum cost of 0 within 15,000 function evaluations: the optimized cost obtained in the best runs of all (NDV;NPOP) settings never exceeded 5.179·10⁻¹³, while the standard deviation on the optimized cost never exceeded 2.669·10⁻¹². Interestingly, for all (NDV;NPOP) settings, the average optimized cost and standard deviation on the optimized cost obtained by SHGWJA dropped to 10⁻²⁸ when the convergence limit ε_conv in Equation (15) was reduced to 10⁻¹⁵ and the computational budget was increased to 30,000 function evaluations, the same as for LCA and much less than for MOA (50,000), to obtain their reported null standard deviations.
The above results indicate that SHGWJA was the best optimizer overall. In fact, only the mother optimization algorithm (MOA) [51] and the learning cooking algorithm (LCA) [53] reached the 0 target solution in all optimization runs, but this required, respectively, 50,000 and 30,000 function evaluations. However, for the (NDV = 30; NPOP = 30) setting used by LCA, the best/average/worst optimized costs and standard deviation reached by SHGWJA within at most 15,000 function evaluations ranged between 1.001·10⁻¹⁵ and 7.545·10⁻¹⁴, numerically very close to the 0 target value. Furthermore, for the NPOP = 50 setting used in the MOA optimizations, SHGWJA obtained its best values: 10⁻¹⁵ for NDV = 30 and 100, with corresponding averages between 1.1·10⁻¹⁵ and 1.1·10⁻¹⁴, again very close to 0. The convergence curves provided in [51] for MOA and in [53] for LCA have low resolution and cannot be compared directly with those recorded for the present algorithm. However, SHGWJA's intermediate solutions reached the cost of 10⁻⁷ within only 1000–1500 function evaluations in all optimization runs. Such behavior was fully consistent with the trends shown in [51,53].
The performance of SHGWJA was comparable to that of the hybrid harmony search (hybrid HS), hybrid big bang–big crunch (hybrid BBBC) and hybrid fast simulated annealing (HFSA) algorithms developed in [93]. In fact, the average optimized cost resulting from Table 1 for SHGWJA is 1.1·10⁻¹², while the average optimized costs of hybrid HS/BBBC/SA were always above 1.197·10⁻¹¹. The standard deviation on the optimized cost was on average 1.478·10⁻¹² vs. the 7.4·10⁻¹² average deviation of hybrid HS/BBBC/SA. However, the algorithms of Ref. [93] were run with a convergence limit ε_conv of 10⁻¹⁵. The inspection of the convergence curves reveals that hybrid HS/BBBC/SA actually stopped improving the cost function at about 80% of the search process in spite of their tighter convergence limit. This was due to the use of gradient information in the generation of new trial solutions. Conversely, SHGWJA kept reducing the cost function until the very last iterations of its search process.
SHGWJA also outperformed the two particle swarm optimization variants based on the global best (GPSO) or including the presence of an aging leader with challengers (ALC-PSO) [114]. In fact, these PSO variants completed their best optimization runs within less than 6500 function evaluations, but their best optimized cost was no better than 3.7·10⁻⁷ vs. the 1.001·10⁻¹⁵ to 4.789·10⁻¹³ obtained by SHGWJA for the same number of optimization variables (30) and similar population sizes (10 and 30). Furthermore, the average optimized cost and standard deviation of the PSO variants, respectively, ranged between 7.6 and 11.7, and between 6.7 and 15, vs. only 2.832·10⁻¹² obtained by SHGWJA in the worst case.
Limiting the analysis to the classical setting (NDV = 30; NPOP = 30 ± 10) used in the metaheuristic optimization of unimodal functions and the very small computational budget of 15,000 function evaluations, the algorithms ranked as follows in terms of average optimized cost: SHGWJA, mEO, ALC-PSO, GPSO, hSM-SA, AOA, EO, IAOA and LSHADE-SPACMA. When the computational budget increased to 30,000 function evaluations with (NDV;NPOP) ≤ 50, the ranking changed as follows: SHGWJA, MOA and LCA, SAMP-JAYA, GTO, IMPA, PEOA, mEO, ALC-PSO, EGTO, GPSO, hSM-SA, MGWO-III, SLOA, RUN, AOA, EO, MPA, CMA-ES, IAOA, LSHADE-SPACMA, LSHADE and LSHADE-cnEpSin. Increasing the population size to 70 and the number of function evaluations to 80,000 allowed the hybrid CJAYA-SQP algorithm (combining chaotic perturbation in JAYA's exploration and sequential quadratic programming for exploitation) [95] to rank ninth, between mEO and ALC-PSO; the hybrid NDWPSO algorithm (combining PSO, differential evolution and whale optimization) [115] ranked right before the high-performance algorithm LSHADE [117]. Interestingly, all JAYA variants except the steady-state JAYA (SJAYA) [110] ranked in the top 10 algorithms, while high-performance algorithms like the LSHADE variants [117,118,119] and the basic formulation of the marine predators algorithm (MPA) [80] were not very efficient in the Rosenbrock problem. This confirms the advantage of selecting JAYA as a component algorithm for hybrid metaheuristic optimizers.
As mentioned before, SHGWJA was insensitive to parameter settings, while all other algorithms showed a marked decay in performance as the number of optimization variables increased. For example, fixing the population size at 30 and varying the problem dimension from 30 to 1000, the average optimized cost went up to almost 1000, except for the giant trevally optimizer (GTO) [81] and the equilibrium optimizer with a mutation strategy (mEO) [92], which limited this increase to 3.57·10⁻⁶ for NDV = 1000 and 6.2 for NDV = 500, respectively. For NDV = 1000, SHGWJA ranked first, followed by hybrid HS/BBBC/SA and GTO with average optimized costs of at most 2.53·10⁻⁸; mEO had an average optimized cost still below 0.5; CJAYA-SQP, MGWO-III [39] (an astrophysics-based GWO variant), EO, leopard snow optimizer (LSO) [91] and SaDN [94] (hybridized differential evolution naked mole-rat) obtained average optimized costs between 13 and 99.
Table 2 presents the optimization results obtained by SHGWJA and its 29 MA competitors in the Rastrigin problem. The data arrangement is the same as in Table 1. The limit number of function evaluations for SHGWJA was also set to 15,000 for this multimodal problem. SHGWJA's performance was again insensitive to the settings of NDV and NPOP. SHGWJA's competitors were also much less sensitive to parameter settings than in the case of the Rosenbrock problem. Such behavior was somewhat expected considering that convergence of the Rastrigin function to the global optimum becomes "easier" as the problem dimensionality increases.
It can be seen from Table 2 that most of the algorithms (including the astrophysics-based GWO variant MGWO-III [39], the giant trevally optimizer (GTO) [81], the improved marine predators algorithm (IMPA) [101], and the improved arithmetic optimization algorithm (IAOA) [96]) converged to the target global optimum of 0 with a null average and standard deviation already for the lowest number of optimization variables. SHGWJA performed very well also in this multimodal optimization problem, converging to the best cost of 1.776·10⁻¹⁵ in eight cases, 2.842·10⁻¹⁴ in one case, and from 1.492·10⁻¹³ to 2.934·10⁻¹³ in the remaining two cases, always very close to the 0 target value. Furthermore, SHGWJA achieved a 0 standard deviation in three cases, with all independent optimization runs converging to 1.776·10⁻¹⁵, and practically null standard deviation values ranging between 5.13·10⁻¹⁶ and 9.175·10⁻¹⁶ in the other five cases where the best value was 1.776·10⁻¹⁵. The average optimized cost and average standard deviation on the optimized cost obtained by SHGWJA over all (NDV;NPOP) combinations, respectively, were 3.171·10⁻¹³ and 6.606·10⁻¹³, hence 1–2 orders of magnitude smaller than the counterpart values recorded for hybrid HS/BBBC/SA. Again, the hybrid HS/BBBC/SA algorithms of Ref. [93] stopped significantly improving the cost function well before the end of the optimization iterations.
Interestingly, SHGWJA obtained cost function values and corresponding standard deviation values on the order of 10⁻²⁸ by setting the convergence limit ε_conv in Equation (15) to 10⁻¹⁵ and increasing the computational budget to 25,000 function evaluations, that is, (i) the same as for MPA, EGTO, SaDN and MSCA, (ii) less than for MGWO-III, LCA and GTO (30,000), and (iii) much less than for LSO, MOA, PEOA, SLOA and IMPA (50,000), and CJAYA-SQP (80,000), to obtain their reported null standard deviations.
SHGWJA always required on average fewer than 14,000 function evaluations (i.e., the computational cost of NDWPSO, the fastest algorithm to reach the 0 target solution with a 0 standard deviation) to successfully complete the optimization process when the problem dimensionality was at most 100. The fastest optimization runs of SHGWJA converging to 1.776·10⁻¹⁵ were always completed within 12,877 function evaluations, less than the 14,000 evaluations required by NDWPSO for the single setting (NDV = 50; NPOP = 70); a more detailed inspection of the convergence curves revealed that SHGWJA's intermediate solutions always reached the cost of 10⁻⁷ within about 1000 function evaluations, practically the same behavior observed for NDWPSO. The I-JAYA algorithm (augmenting classical JAYA with fuzzy clustering competitive learning, experience learning and Cauchy mutation) reduced the cost function value to the 10⁻¹⁰ target value set in [113] within only 15,700 function evaluations. ALC-PSO also performed well in terms of optimized cost, which was reduced to 7.105·10⁻¹⁵, but it required almost five times more function evaluations than SHGWJA (i.e., 74,206 vs. at most 15,000).
Since SHGWJA practically converged to the target optimum cost of 0 with a null or nearly null average optimized cost and standard deviation on the optimized cost within the lowest number of function evaluations in all cases, it should be considered the best optimizer also for the Rastrigin problem. The MAs listed in Table 2 that missed the 0 average cost and 0 standard deviation by more than 10⁻⁷ obviously occupy the worst seven positions in the algorithm ranking, as follows: AOA, MPA, LSHADE, LSHADE-SPACMA, SAMP-JAYA, LSHADE-cnEpSin and CMA-ES. All of these MAs were executed with NDV = 30 except LSHADE-SPACMA, which was run with NDV = 50.

5.2. Shape Optimization of Concrete Gravity Dam

Table 3 presents the optimization results for the concrete gravity dam design problem. The table also reports the average number of function evaluations NFE (structural analyses), the corresponding standard deviation, and the number of function evaluations required by the fastest optimization run for SHGWJA and some of its competitors.
The multi-level cross entropy optimizer (MCEO) [120] and the flying squirrel optimizer (FSO) [121] reduced the concrete volume per unit width from the 10,502.1 m³ of the real structure to only 7448.774 m³. The hybrid harmony search, big bang–big crunch and simulated annealing algorithms developed in [93] found a similar solution to the improved JA [93] (a JAYA variant including line search to reduce the number of function evaluations entailed by the optimization) and significantly reduced the dam's concrete volume to about 6831 m³. However, hybrid HS/BBBC/SA completed the optimization process within fewer function evaluations and converged to practically feasible designs, while JAYA's solution violated the constraints by 4.06%. The 32% constraint violation reported for the MCEO and FSO solutions refers to the fact that the constraint on the dam's height X2 + X4 + X6 = 150 m (see Section 4.2) was not considered in [120,121].
It can be seen from Table 3 that the simple hybrid algorithm SHGWJA proposed in this study was the best optimizer overall. In fact, it could reduce the dam's concrete volume by 2.68% with respect to the best solutions quoted in [93], reaching the optimal volume of 6647.874 m³. This optimal solution practically satisfied the design constraints, achieving a very low violation of 1.284·10⁻³%. The number of function evaluations performed in the SHGWJA optimizations was on average 50% lower than for the metaheuristic algorithms of Ref. [93] (i.e., only 10,388 analyses vs. 14,560 to 16,200 analyses) and 70% lower than for MCEO [120], which required 35,000 function evaluations. FSO was reported in [121] to be considerably faster than particle swarm optimization and genetic algorithms, thus achieving a convergence speed similar to that of MCEO.
Standard JAYA converged to the same optimal solution as SHGWJA, 6647.874 m³, but required 2.5 times the average number of function evaluations of SHGWJA (i.e., JAYA's computational budget was set to 25,000 function evaluations while SHGWJA converged within only 10,388 evaluations) and almost four times the function evaluations of SHGWJA's fastest optimization run (i.e., 25,000 vs. only 6584). Standard GWO obtained a slightly worse solution than SHGWJA and standard JAYA in terms of concrete dam volume (6654.927 m³ vs. 6647.874 m³). GWO's solution was also practically feasible, but the constraint violation increased to 7.209·10⁻³% vs. the 1.284·10⁻³% recorded for SHGWJA and standard JAYA. Furthermore, the computational cost of GWO was about five times higher than for SHGWJA (i.e., 50,000 analyses vs. only 10,388 analyses). This confirms the validity of the hybrid search scheme used by SHGWJA.
The modified sinusoidal differential evolution (MsinDE) (derived from [124]), the modified big bang–big crunch with upper bound strategy (mBBBC–UBS) (derived from [123]), and the modified harmony search optimization with adaptive parameter updating (mAHS) (derived from [122]) ranked third, fifth and sixth, respectively, with optimized volumes very close to the global optimum found by SHGWJA and standard JAYA, ranging from 6652.513 to 6658.024 m3 vs. only 6647.874 m3. It should be noted that SHGWJA's elitist strategy $W(\mathbf{X}_{i,tr}) \leq 1.1\,W(\mathbf{X}_{opt})$, implemented also in mAHS, mBBBC–UBS and MsinDE, was very effective because it allowed these algorithms to improve the best solution quoted in the literature and to significantly reduce the gap with respect to the best optimizer. In particular, the optimized volumes of mAHS, mBBBC–UBS and MsinDE were at most 0.15% larger than SHGWJA's optimum volume (i.e., only a 10.150 m3 gap vs. 6647.874 m3), compared with the 0.89% gap of the original algorithms with respect to hybrid BBBC, the best optimizer in Ref. [93]. Furthermore, the optimized designs obtained by mAHS, mBBBC–UBS and MsinDE in their best runs were always practically feasible: in fact, constraint violations were, respectively, 1.311·10−3%, 3.376·10−3% and 3.653·10−3% vs. the 1.437·10−3%, 6.733·10−3% and 6.598·10−3% reported in [93]. The computational cost of the best optimization runs was also reduced compared to [93], saving about 10,000 function evaluations in the case of mAHS. However, mBBBC–UBS, mAHS and MsinDE still required on average more than three times the function evaluations required by SHGWJA (i.e., from 31,690 to 35,208 analyses vs. only 10,388 analyses), and their best optimization runs required 4.9 to 5.7 times more function evaluations than the present algorithm (i.e., from 32,004 to 37,545 analyses vs. only 6584 analyses).
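A minimal Python sketch of this elitist filter may clarify how it operates: a trial design is processed further only if its cost does not exceed the current best cost by more than 10%. Function and variable names are illustrative; the complete SHGWJA logic is that defined in Section 3.

```python
def passes_elitist_filter(cost_trial: float, cost_best: float,
                          factor: float = 1.1) -> bool:
    """Elitist acceptance rule W(X_trial) <= 1.1 * W(X_opt):
    trial designs whose cost exceeds the current best cost by
    more than 10% are discarded before any further processing."""
    return cost_trial <= factor * cost_best
```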
Remarkably, SHGWJA achieved a 100% success rate, converging to the same optimal solution of 6647.874 m3 in all independent optimization runs with a null standard deviation on the optimized cost. None of the other algorithms referred to in Table 3 obtained, in all of their optimization runs, designs better than the best volume of 6830.235 m3 quoted in [93] for hybrid BBBC. The standard deviation on the number of function evaluations required by SHGWJA was about 35% of the average number of function evaluations. These results prove the robustness of SHGWJA.
Further information on SHGWJA's convergence behavior and robustness can be gathered from Figure 9, which compares the optimization history of the proposed algorithm with those of its competitors. The curves relative to the best and average optimization runs of SHGWJA are shown in the figure. Since SHGWJA converged to the global optimum of 6647.874 m3 in all optimization runs, its best run also corresponds to the fastest one. For the sake of clarity, the plot is limited to the first 18,000 function evaluations. The initial cost function value for the best individual of all algorithms ranged from 13,265.397 m3 to 17,351.728 m3, well above the global optimum of 6647.874 m3. The better convergence behavior of SHGWJA with respect to its competitors is clear from the very beginning of the search process. In fact, the average cost of the feasible intermediate designs generated by SHGWJA within the 6584 function evaluations of the best optimization run was only 0.933% larger than the target global optimum. Interestingly, in its best run, SHGWJA generated a feasible intermediate design just 1% worse than the global optimum at approximately 69% of the optimization process. The mBBBC–UBS algorithm was the only optimizer close to SHGWJA in terms of convergence speed, for the first 630 function evaluations and between 5400 and 6600 function evaluations.
Table 4 lists the optimized designs obtained by SHGWJA and its main competitors. The SHGWJA and standard JAYA designs coincide, as these algorithms converged to the same optimal solution, and differ only slightly from the optimized designs of standard GWO and MsinDE. The new optimal solution yielding a 6647.874 m3 concrete volume was very similar to the optimal solutions of the algorithms of Ref. [93]: all variables changed by at most 5% except X3, which was set to its lower bound by SHGWJA and close to the lower bound by GWO and MsinDE.
Figure 10 compares the dam's optimized shapes (coordinates are expressed in meters) found by SHGWJA and MsinDE with the dam configurations reported in [93,120,121]. Since standard JAYA found the same optimum design as SHGWJA, while standard GWO, mAHS, mBBBC–UBS and MsinDE obtained configurations very similar to SHGWJA's, only the plot referring to MsinDE (the best optimizer after SHGWJA and standard JAYA) is shown in the figure for the sake of clarity.
Remarkably, the dam safety factors against sliding and overturning corresponding to the SHGWJA’s optimal solution became 6.318 and 3.515, respectively, vs. their counterpart values of 6.213 and 3.453 indicated in [93]. Hence, reducing the concrete volume of the dam from 6831 m3 to 6647.874 m3 not only resulted in a more convenient design but also allowed a higher level of structural safety to be achieved. This proves the suitability of the proposed SHGWJA algorithm for civil engineering design problems.

5.3. Optimal Design of Tension/Compression Spring

Table 5 presents the optimization results obtained by SHGWJA and its competitors in the spring design problem. Statistical data on the optimization runs (i.e., best cost, average cost, worst cost and standard deviation of the optimized cost over the independent optimization runs), the number of function evaluations NFE and the optimal design are listed in the table for all the algorithms compared in this study. The values of the standard deviation on optimized cost listed in the table for the referenced algorithms are exactly those reported in the literature.
It can be seen that the SHGWJA algorithm proposed in this study performed very well: its best solution, 0.0126665, practically coincided with the optimized costs of MGWO-III, the astrophysics-based GWO variant of Ref. [39] (0.0126662), the preschool education optimization algorithm (PEOA) [52] (0.012667), and the modified sine cosine algorithm (MSCA) [99] (0.012668). SHGWJA's optimized cost was only 0.0121% larger than the 0.012665 global optimum obtained by the JAYA variants EHRJAYA [98], SAMP-JAYA [111] and EJAYA [112], the mother optimization algorithm (MOA) [51], the marine predators algorithm (MPA) [80], the hybridized differential evolution naked mole-rat algorithm (SaDN) [94] and improved multi-operator differential evolution (IMODE) [125]. The gaze cues learning-based grey wolf optimizer (GGWO) ([105], currently the best known GWO variant), the light spectrum optimizer (LSO) [31], the starling murmuration optimizer (SMO) [74], the hybrid EGTO algorithm combining the marine predators algorithm and gorilla troops optimizer [102] and the queuing search algorithm (QSA, which mimics human queuing activities and was the best among the nine MAs compared in [127]) converged to 0.0126652, the same value as for EHRJAYA, MOA, MPA, SaDN and IMODE up to the fifth significant digit. The high-performance algorithm success history-based adaptive differential evolution with gradient-based repair (En(L)SHADE) was reported in [126] to obtain an optimized cost of 1.27·10−2, corresponding to the 0.0126652 cost rounded up.
The arithmetic optimization algorithm (AOA) [43] and its improved variant IAOA [96] found the smallest optimized costs, 0.012124 and 0.012018, respectively, but the corresponding optimal designs were infeasible, violating the second constraint equation, g2(X) in Equation (19), by 8.05% and 10.8%. All other algorithms compared in Table 5 obtained practically feasible solutions, violating constraints by less than 0.143%. The learning cooking algorithm (LCA) was reported in [53] to converge within 30,000 function evaluations to the optimized cost of 0.0125 for the solution X* = (0.05566; 0.4591; 7.194). However, the actual cost function value obtained by substituting X* into Equation (19) is 0.013107, much higher than the 0.012665 optimum quoted in Table 5.
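This inconsistency can be verified directly. The sketch below assumes the standard tension/compression spring weight function f(X) = (x3 + 2)·x2·x1², which is the form commonly associated with Equation (19); x1 is the wire diameter, x2 the mean coil diameter and x3 the number of active coils.

```python
def spring_weight(x1: float, x2: float, x3: float) -> float:
    """Standard tension/compression spring cost (weight):
    f(X) = (x3 + 2) * x2 * x1**2."""
    return (x3 + 2.0) * x2 * x1 ** 2

# Recomputing the cost of the LCA solution quoted in [53]:
print(spring_weight(0.05566, 0.4591, 7.194))  # ~0.0131, well above 0.012665
```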
The success rate of SHGWJA was 100% also for this test problem: in fact, the present algorithm found designs very close to the global optimum in all independent optimization runs. SHGWJA obtained the seventh-lowest standard deviation on optimized cost out of the 29 MAs compared in this study (21 listed in Table 5 and the other 8 from Ref. [127]; no information was available for LSO and its 31 competitors compared in [31]), thus ranking in the first quartile. The standard deviation on optimized cost was only 0.0876% of the best design found by SHGWJA. Furthermore, the average and worst optimized costs of SHGWJA were at most 0.134% and 0.213% larger than the 0.012665 target value, respectively. Statistical dispersion on the optimized cost was very similar for MGWO-III, EHRJAYA and PEOA. However, EHRJAYA exhibited a larger average optimized cost than SHGWJA, while the worst solutions of MGWO-III and PEOA had larger costs than that of SHGWJA.
As far as computational cost is concerned, SHGWJA was on average the third fastest optimizer overall after EHRJAYA and LSO. However, the fastest optimization run of SHGWJA converging to the best solution of 0.0126665 was completed within only 2247 function evaluations, practically the same as for EHRJAYA, while the LSO convergence curve shown in [31] for the best optimization run covered 5000 function evaluations. Among the other JAYA variants, SAMP-JAYA was just slightly slower than SHGWJA (6861 function evaluations vs. only 6347). Furthermore, the present algorithm was on average about (i) 4.7 times faster than MGWO-III, LCA, PEOA, SO, SMO and IMODE; (ii) 4 times faster than MPA, EGTO and MSCA; (iii) 3.6 times faster than GGWO; (iv) 2.8 times faster than QSA and the other algorithms of Ref. [127]; and (v) 2.4 times faster than EJAYA, EO, AOA, IAOA, SaDN and hSM-SA. It should be noted that Table 5 compares the average number of function evaluations required by SHGWJA either with the fixed computational budget of all optimization runs or with the computational cost of the best optimization run. Furthermore, SHGWJA was robust enough in terms of computational cost: the standard deviation on the number of function evaluations was about 35.4% of the average number of evaluations.
Figure 11 shows a more detailed comparison of the optimization history of SHGWJA with those of EGTO [102] and QSA [127]. For the sake of clarity, the plot is limited to the first 3000 function evaluations. It can be seen that the convergence curves of the best and average optimization runs recorded for the present algorithm practically coincided after only 700 function evaluations, that is, at about 31% of the optimization history of the best run. Interestingly, EGTO started its optimization search from a much better initial design than those of SHGWJA and QSA: the initial cost evaluated for its best individual was only 0.016 vs. 0.06 for QSA, 0.1 for the best SHGWJA run and 0.1528 for the average SHGWJA run. In spite of this, the present algorithm recovered the gap with respect to EGTO already in the early optimization cycles, and the convergence curves of EGTO and the best SHGWJA run crossed each other after 650 function evaluations. Furthermore, SHGWJA generated on average better feasible intermediate designs than QSA after only 135 function evaluations, keeping this ability over the first 750 function evaluations. After about 2500 function evaluations, the cost function values of 0.012673 and 0.012697 achieved by EGTO and QSA in their best optimization runs, respectively, were still higher than the best cost of 0.0126665, finally reached by SHGWJA after only 2250 function evaluations.
The data presented in this section clearly demonstrate the computational efficiency of SHGWJA. In fact, the present algorithm generated very competitive results compared to the 21 state-of-the-art MAs, each of which was in turn proven in the literature to outperform 5 to 34 other metaheuristic algorithms and their variants.

5.4. Optimal Design of Welded Beam

Table 6 presents the optimization results for the welded beam design problem. The values of the standard deviation on optimized cost listed in the table for the referenced algorithms are exactly those reported in the literature. It can be seen that the proposed SHGWJA method was the best optimizer overall. In fact, the optimum cost of 1.724852 corresponds to the target solution of the classical formulation (see Section 4.4) and practically coincided with the optimized cost values obtained by all SHGWJA competitors except the hybrid JAYA variant EHRJAYA [98], the arithmetic optimization algorithm (AOA) [43], the hybridized differential evolution naked mole-rat algorithm (SaDN) [94], improved multi-operator differential evolution (IMODE) [125], and the Taguchi method integrated harmony search algorithm (TIHHSA) [132]. In particular, IMODE and SaDN converged to the largest costs, about 63.8% and 14.6% higher than the optimum cost of 1.72485, respectively, while TIHHSA converged to an optimized cost of 1.74026, close enough to the global optimum reached by SHGWJA.
The arithmetic optimization algorithm (AOA) [43] converged to an optimized solution yielding a slightly lower cost than SHGWJA: 1.7164 vs. 1.72485. However, this solution violates the shear stress constraint of the classical problem formulation, g1(X) in Equation (20), by 25.3%. All other algorithms of Table 6 solving the classical problem formulation converged to feasible designs.
SHGWJA found the target optimum cost of 1.6702 in all optimization runs (zero standard deviation) also in problem variant 3 of the CEC 2020 library, which uses $J = 2\sqrt{2}\,x_1 x_2 \left[ \frac{x_2^2}{4} + \left( \frac{x_1 + x_3}{2} \right)^2 \right]$, beam tip displacement $\delta(\bar{X}) = \frac{4 P L^3}{E x_3^2 x_4}$, and critical load $P_c(\bar{X}) = \frac{4.013 E}{L^2} \sqrt{\frac{x_3^2 x_4^6}{30}} \left( 1 - \frac{x_3}{2L} \sqrt{\frac{E}{4G}} \right)$. The high-performance algorithms En(L)SHADE [126], LSHADE-SPACMA [119,129] and SASS [130,131] also converged to the same target optimal solution with 0 or near-0 standard deviations. However, SHGWJA required on average fewer than 4450 function evaluations vs. the 20,000 to 100,000 evaluations required by SASS, LSHADE-SPACMA and En(L)SHADE, thus confirming the computational efficiency of the proposed algorithm.
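For illustration, the variant-3 response quantities reconstructed above can be coded directly. The constant values below (P = 6000 lb, L = 14 in, E = 30·10^6 psi, G = 12·10^6 psi) are the ones commonly adopted for the welded beam problem and are assumed here, since they are not restated in this section.

```python
import math

P, L, E, G = 6000.0, 14.0, 30.0e6, 12.0e6  # assumed standard constants

def variant3_quantities(x1: float, x2: float, x3: float, x4: float):
    """Polar moment J, tip displacement delta and critical load Pc
    of welded beam problem variant 3, as reconstructed above."""
    J = 2.0 * math.sqrt(2.0) * x1 * x2 * (x2**2 / 4.0
                                          + ((x1 + x3) / 2.0) ** 2)
    delta = 4.0 * P * L**3 / (E * x3**2 * x4)
    Pc = (4.013 * E / L**2) * math.sqrt(x3**2 * x4**6 / 30.0) \
         * (1.0 - x3 / (2.0 * L) * math.sqrt(E / (4.0 * G)))
    return J, delta, Pc
```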
The success rate of SHGWJA was again 100%: in fact, it converged to the global optima of the three problem variants (i.e., 1.72485, 1.69525 and 1.6702) in all independent optimization runs. Zero or practically zero dispersion on optimized cost was achieved only by (i) SHGWJA, GGWO, SAMP-JAYA, EJAYA, MOA, EGTO and QSA in the classical problem variant 1; (ii) SHGWJA and NDWPSO in problem variant 2; and (iii) SHGWJA, En(L)SHADE, LSHADE-SPACMA and SASS in problem variant 3. These data confirm the robustness of the present algorithm.
SHGWJA was robust also in terms of computational cost. In fact, the ratio between the standard deviation on the number of function evaluations and the corresponding average value was only 11.9%, 9.7% and 9.6% for problem variants 1, 2 and 3, respectively. In problem variant 1 (classical formulation), the present algorithm was the fastest optimizer together with SAMP-JAYA, completing its best optimization run within only 3670 function evaluations, about 27% fewer than the third fastest optimizer LSO, which required 5000 function evaluations. Furthermore, SHGWJA was on average 3.3 times faster than GGWO and 7 times faster than MGWO-III. The EO, AOA, SaDN and hSM-SA algorithms ranked fifth because they were slightly slower than GGWO; however, AOA converged to an infeasible solution while SaDN missed the global optimum. All other MAs listed in Table 6 were 4.2 to 7 times slower than SHGWJA, requiring between 18,000 (e.g., QSA and all other MAs of [127]) and 30,000 (i.e., the high-performance algorithm IMODE) function evaluations.
In problem variant 2, SHGWJA was on average slightly faster than FSO (i.e., 4201 vs. 4500 function evaluations) but also about six times faster than MSCA and about five times faster than NDWPSO. Its fastest optimization run required only 3635 function evaluations, close enough to the 2000 function evaluations required by EHRJAYA. The other JAYA variants of Ref. [98] were also slower than SHGWJA, as their optimization runs required between 4000 and 5000 function evaluations. Finally, in problem variant 3, SHGWJA required on average only 4438 function evaluations vs. the fixed computational budgets of 20,000, 40,000 and 100,000 evaluations, respectively, set in [119,126,129,130,131] for the high-performance algorithms SASS, LSHADE-SPACMA and En(L)SHADE.
Figure 12 presents a detailed comparison of the convergence curves of SHGWJA, LSO [31], EGTO [102], FSO [121] and QSA [127] for the three problem variants. As in the dam case, the best optimization runs of SHGWJA also correspond to the fastest ones, as the present algorithm achieved 0 standard deviation on the optimized cost in all problem variants. For the sake of clarity, the plot is limited to the first 4500 function evaluations. All algorithms except LSO started from very large costs compared to the target optima: (i) between 2.6 and 6.273 vs. only 1.72485 for problem variant 1; (ii) between 3.144 and 4.675 vs. only 1.69525 for problem variant 2; (iii) between 3.465 and 4.5 vs. only 1.6702 for problem variant 3. The analysis of the convergence curves plotted in Figure 12 and its inset confirms the superiority of SHGWJA over its competitors also for this test problem. In fact, the convergence curves of the best and average SHGWJA optimization runs practically coincided after only 2000 (i.e., 54.5% of the best run's optimization history), 2100 (i.e., 57.8%) and 1700 (i.e., 44.8%) function evaluations for problem variants 1, 2 and 3, respectively.
In problem variant 1, LSO started from a cost function value of 1.745, just 1.168% larger than the target optimum cost of 1.72485, while SHGWJA started from 6.273. However, SHGWJA recovered such a large gap in cost function with respect to LSO at 60% of its optimization history and then reduced the cost function to a slightly lower value than LSO (i.e., 1.724852 vs. 1.724866). EGTO started its best run from a cost of 2.6, about 58.5% lower than SHGWJA's initial cost. The present algorithm recovered the gap before completing 68% of its optimization history also in this case. QSA's best run started from a lower cost than SHGWJA's (i.e., 4 vs. 6.273), but the present algorithm generated better feasible intermediate designs right after the first optimization iteration and for the first 400 function evaluations. The convergence curves of QSA and SHGWJA then crossed each other, with the former algorithm generating better intermediate designs until about 2000 function evaluations: in particular, SHGWJA reached a cost of 1.72507 after 1968 function evaluations while QSA's cost was 1.725 after 2000 function evaluations. However, SHGWJA again outperformed QSA from that point to the end of the search process, further reducing the cost function to the target value of 1.72485 over the remaining 1692 function evaluations, while QSA completed the same task over a further 5000 function evaluations.
In problem variant 2, SHGWJA and FSO started from similar costs (4.237 vs. 4.675), but the present algorithm approached the target solution of 1.69525 long before FSO. In particular, SHGWJA generated in its best optimization run a feasible intermediate design only 0.6% higher in cost than the target minimum (i.e., 1.70526 vs. 1.69525) already at 46% of its optimization history (i.e., after only 1674 function evaluations), while FSO's cost was still 2.26653. The optimization histories coincided after 2200 function evaluations, with both algorithms practically converging to the target cost of 1.69525.
The data presented in this section clearly prove the computational efficiency of SHGWJA also in this test problem. In fact, the present algorithm generated very competitive results compared to 24 state-of-the-art MAs, each of which was in turn proven in the literature to outperform 5 to 32 other metaheuristic algorithms and their variants.

5.5. Optimal Design of Pressure Vessel

Table 7 presents the optimization results for the pressure vessel design problem. The values of the standard deviation on optimized cost listed in the table for the referenced algorithms are exactly those reported in the literature. In the continuous version of the problem, the SHGWJA approach developed in this study and the hybrid EGTO algorithm combining the marine predators algorithm and gorilla troops optimizer [102] found the lowest cost overall, 5885.331, in all of their independent optimization runs (zero standard deviation). This fully feasible solution was practically the same as those obtained by the gaze cues learning-based grey wolf optimizer (GGWO [105], the best GWO variant known so far; 5885.3328), the starling murmuration optimizer (SMO; 5885.3329) [74], EJAYA [112] (5885.333 for this JAYA variant, which utilizes a larger set of solutions than classical JAYA to form new trial solutions), the marine predators algorithm (MPA; 5885.3353) [80], and the equilibrium optimizer (EO) [30] (5885.329, with just 0.000042% violation of constraint g3(X) in Equation (21)).
The light spectrum optimizer (LSO) [31], learning cooking algorithm (LCA) [53], hybrid slime mould simulated annealing (hSM-SA) algorithm [100], snake optimizer (SO) [69], giant trevally optimizer (GTO) [81] and NDWPSO [115] (combining PSO, differential evolution and whale optimization) ranked right after SHGWJA and the other top MAs in this example, finding optimized cost values ranging from 5885.434 to 5890. The modified sine cosine algorithm (MSCA) [99], Taguchi method integrated harmony search algorithm (TIHHSA) [132], and arithmetic optimization algorithm (AOA) [43] converged to significantly larger optimized costs than SHGWJA and EGTO: from 5917.5 to 6059.7 vs. only 5885.331. Interestingly, LSO missed the global optimum of the continuous problem, although it was superior to the other 18 MAs compared in [31] (see Ref. [31] for details).
The optimized designs reported in Table 7 for the continuous version of the pressure vessel problem were all feasible except for those of the algorithms that converged to lower costs than SHGWJA and EGTO: (i–ii) the mother optimization algorithm (MOA) [51] and preschool education optimization algorithm (PEOA) [52] (i.e., 5882.901 with 0.040% violation of constraint g3(X) in Equation (21)); (iii) the astrophysics-based GWO variant MGWO-III [39] (5884.0616 with 0.039% violation of constraint g3(X) in Equation (21)); (iv) the hybrid JAYA variant EHRJAYA [98] (5734.9132 with 3.57% violation of constraint g1(X) in Equation (21)); and (v) the improved arithmetic optimization algorithm (IAOA) [96] (5813.551 with 3.85% violation of constraint g1(X) in Equation (21)). SAMP-JAYA [111], the JAYA variant using subpopulations to optimize the balance between exploration and exploitation, converged to an infeasible design of cost 5872.213 within practically the same number of function evaluations as SHGWJA's fastest optimization run (i.e., 6513 vs. 6732).
SHGWJA was the fastest optimizer overall, up to one order of magnitude faster than the giant trevally optimizer (GTO) [81], which even missed the target optimum cost of 5885.331 and exhibited the third worst standard deviation on optimized cost. In particular, SHGWJA required on average the following: (i) 25% fewer function evaluations than GGWO; (ii) about 1.5 times fewer function evaluations than EJAYA, EO and hSM-SA; (iii) about 2 times fewer function evaluations than MSCA; (iv) about 2.5 times fewer function evaluations than MPA and EGTO; and (v) about 3 times fewer function evaluations than MGWO-III, PEOA, LCA, SO, SMO and IMODE. It should be noted that the fastest optimization run of SHGWJA was completed within only 6732 function evaluations, very close to (i) the 4000 function evaluations required by EHRJAYA [98] to prematurely converge to an infeasible solution (the other JAYA variants analyzed in [98] required between 8000 and 10,000 function evaluations), and (ii) the 5000 function evaluations covered by the LSO convergence curve shown in [31]. However, in its fastest optimization run, SHGWJA generated a feasible intermediate design better than the optimized cost of LSO (5885.431 vs. 5885.434) within only 4880 function evaluations.
In the mixed-variable version of the pressure vessel problem, SHGWJA and its competitors all converged to the target optimum cost of 6059.714. However, zero or almost-zero standard deviation on optimized cost was achieved only by SHGWJA and the self-adaptive spherical search (SASS) [130,131], not even by the high-performance algorithm En(L)SHADE [126]. This occurred in spite of the fact that the present algorithm was on average (i) one order of magnitude faster than En(L)SHADE; (ii) about 54% faster than EO [30], its improved version mEO [92] and the hybridized differential evolution naked mole-rat algorithm (SaDN) [94]; (iii) about 2 times faster than SASS; and (iv) about 2.6 times faster than the queuing search algorithm (QSA, the best among the nine MAs compared in [127]). The fastest optimization run of SHGWJA required 7604 function evaluations, practically the same as the flying squirrel optimizer (FSO) [121]. It should be noted that SHGWJA did not implement any specific strategy for handling the discrete variables x1 and x2 other than rounding the continuous values generated for these variables to the nearest multiples of 0.0625.
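The discrete-variable handling mentioned above reduces to a simple snap-to-grid rounding, sketched below.

```python
def snap_to_grid(value: float, step: float = 0.0625) -> float:
    """Round a continuous trial value to the nearest multiple of the
    manufacturing step (0.0625 in for the vessel shell/head thicknesses)."""
    return round(value / step) * step

print(snap_to_grid(0.8103))  # -> 0.8125 (= 13 * 0.0625)
```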
SHGWJA was robust enough also with respect to computational cost, because the ratio between the standard deviation on the number of function evaluations and the corresponding average value was 26.2% and 28.1% for the continuous and mixed-variable problem versions, respectively. The excellent convergence behavior of SHGWJA is confirmed by the optimization histories shown in Figure 13 for the best and average optimization runs recorded for the two problem variants with continuous or mixed variables. Again, the best SHGWJA runs correspond to the fastest ones, since the present algorithm achieved a null standard deviation on optimized cost in both problem variants. The plot is limited to 7500 function evaluations for the sake of clarity. It can be seen that the convergence curves of the best and average SHGWJA optimization runs practically coincided after only 4500 (i.e., 66.8% of the best run's optimization history) and 4000 (i.e., 52.6% of the best run's optimization history) function evaluations for problem variants 1 and 2, respectively. Such behavior is highlighted in detail by the inset in Figure 13.
The optimization histories of SHGWJA, LSO [31] and EGTO [102] recorded for problem variant 1 are compared in Figure 13. LSO and EGTO started their optimizations from 10,128.57 and 22,233.47, respectively, while SHGWJA started its search process from the much larger cost of 526,765. In spite of such a large gap, LSO generated worse intermediate designs than the present algorithm over the whole optimization history and converged to a higher optimized cost than the target solution reached by SHGWJA (i.e., 5885.434 vs. 5885.331). Furthermore, the optimization histories of SHGWJA and EGTO practically coincided for the first 100 function evaluations; EGTO then reduced the cost function to 6359.256 at 150 function evaluations, while SHGWJA obtained a similar cost function value at 200 function evaluations. However, the present algorithm kept reducing the cost, while EGTO exhibited a large step in the cost function up to 2200 function evaluations: at that NFE, SHGWJA had already reduced the cost to about 5887.3 (just 0.034% more than the 5885.331 target optimum), while the cost function values of EGTO's intermediate designs remained fixed at 6359.256.
The optimization histories recorded for SHGWJA, FSO [121] and QSA [127] in problem variant 2 again started from significantly larger costs than the 6059.714 target: 263,805.7, 35,000 and 19,097.22, respectively (see Figure 13). FSO was much slower than the present algorithm, recording a cost of 14,179 after about 5780 function evaluations, that is, when SHGWJA's intermediate designs had already reduced the cost function to 6061.85, just 0.035% higher than the target optimum. SHGWJA recovered the initial gap with respect to QSA within the first 60 function evaluations, and the optimization histories of the two algorithms practically coincided after 3500 function evaluations.
The data presented in this section clearly demonstrate the computational efficiency of SHGWJA in this further test problem. In fact, the present algorithm performed very competitively compared to 27 state-of-the-art MAs, each of which was in turn proven in the literature to outperform 5 to 31 other metaheuristic algorithms and their variants.

5.6. Optimal Design of Industrial Refrigeration System

Table 8 presents the optimization results for the industrial refrigeration system design problem. The values of the standard deviation on optimized cost listed in the table for the referenced algorithms are exactly those reported in the literature. All algorithms listed in the table converged to fully feasible solutions. The new algorithm SHGWJA was very competitive with the other optimizers selected for comparison. In fact, it reached the global optimum cost of 0.032213 (and the target design $\mathbf{X}_{opt}$ = (0.001; 0.001; 0.001; 0.001; 0.001; 0.001; 1.524; 1.524; 5; 2; 0.001; 0.001; 0.0072934; 0.087556)) along with the standard JAYA and improved JAYA [93] variants, the starling murmuration optimizer (SMO) [74], the hybrid fast simulated annealing (HFSA) algorithm [93], success history-based adaptive differential evolution with gradient-based repair (En(L)SHADE) [126], the self-adaptive spherical search (SASS) algorithm [130,131], and improved cuckoo search (CS-DP1) [134]. The hybrid HS and hybrid BBBC algorithms developed in [93], mBBBC–UBS (derived from [123]) and MsinDE (derived from [124]) ranked right after the best algorithms, achieving best optimized costs of 0.032214, 0.032215, 0.032218 and 0.032220, respectively. The mAHS algorithm (derived from [122]) captured the target optimum only up to the third significant digit (i.e., 0.032239 vs. 0.032213). As in the gravity dam design problem, the elitist strategy involving the condition $W(\mathbf{X}_{i,tr}) \leq 1.1\,W(\mathbf{X}_{opt})$ implemented in SHGWJA was effective also for mBBBC–UBS, mAHS and MsinDE, in that it improved their optimized solutions with respect to the original formulations, but not enough to outperform SHGWJA. The hybrid Split–Detect–Discard–Shrink–Sophisticated Artificial Bee Colony algorithm (SDDS-SABC) [131] and the standard grey wolf optimizer (GWO) found the worst solutions overall, with a maximum cost of 0.033740.
The success rate of SHGWJA was above 80%, considering only the solutions yielding optimized costs between the global minimum of 0.032213 and the average cost of 0.032215, hence within only a 0.00621% penalty with respect to the global minimum. Overall, SHGWJA was sufficiently competitive with all the other algorithms in terms of robustness. In fact, the average optimized cost coincided with its counterparts for the other algorithms up to the fourth significant digit (i.e., 0.03221). For SHGWJA, the ratio of the standard deviation on optimized cost to the average optimized cost was only 0.0114%. Standard GWO and CS-DP1 exhibited the largest dispersions on optimized cost. The high-performance algorithms En(L)SHADE [126] and SASS [130,131] achieved zero standard deviation on optimized cost, but their computational budgets were, respectively, 100,000 (fixed in the CEC 2020 benchmark) and 20,000 function evaluations vs. the only 8517 function evaluations required on average by SHGWJA. Interestingly, when the convergence limit $\varepsilon_{conv}$ in Equation (14) was reduced to 10−10, SHGWJA completed all optimization runs within 10,000 function evaluations on average, converging to the target global optimum of 0.032213 with null standard deviation.
It can be seen from Table 8 that SHGWJA required a considerably lower number of function evaluations than standard GWO and JAYA to complete the optimization process. The convergence rate increased by about 2.4 times with respect to standard JAYA and by a factor of 6 with respect to standard GWO. Hence, the hybrid GWO–JAYA scheme implemented by SHGWJA was very effective in increasing the diversity of the trial solutions generated from the α, β and δ wolves in the exploration phase and in enhancing the exploitation phase through the generation of very high-quality designs.
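To illustrate the kind of hybridization discussed here, the sketch below combines a standard GWO position update with a standard JAYA move. It is a conceptual outline only: the exact SHGWJA operators, elitist strategies and constraint handling are those defined in Section 3.

```python
import numpy as np

rng = np.random.default_rng()

def gwo_move(x, x_alpha, x_beta, x_delta, a):
    """Standard GWO update: average of the moves towards the alpha,
    beta and delta wolves; the parameter a decreases from 2 to 0
    over the optimization history."""
    new = np.zeros_like(x)
    for leader in (x_alpha, x_beta, x_delta):
        A = 2.0 * a * rng.random(x.size) - a
        C = 2.0 * rng.random(x.size)
        new += leader - A * np.abs(C * leader - x)
    return new / 3.0

def jaya_move(x, x_best, x_worst):
    """Standard JAYA update: move towards the best solution and
    away from the worst one."""
    r1, r2 = rng.random(x.size), rng.random(x.size)
    return x + r1 * (x_best - np.abs(x)) - r2 * (x_worst - np.abs(x))
```

In a hybrid scheme of this type, the JAYA move can be applied to the wolves' trial positions to diversify exploration, with elitist filters deciding which candidate is retained.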
SHGWJA was robust enough in terms of computational cost, limiting the ratio between the standard deviation on the number of function evaluations and the corresponding average value to 27.8%. SHGWJA required on average 1.6 times more function evaluations than the hybrid HS/BBBC/SA algorithms of Ref. [93], about 4% more function evaluations than mAHS, mBBBC–UBS and MsinDE, and slightly fewer function evaluations than the improved JAYA. However, the fastest optimization run of SHGWJA converging to the global optimum of 0.032213 was completed within only 5165 function evaluations, practically at the same convergence rate as the fastest algorithms reported in the literature. This confirms the high computational efficiency of the present algorithm.
The superior convergence behavior of SHGWJA with respect to its competitors is confirmed by Figure 14, which compares the optimization histories of the best runs for the algorithms listed in Table 8. The average convergence curve of SHGWJA is also shown in the figure. The plot is limited to 9000 function evaluations for the sake of clarity. It can be seen that the best and average optimization runs’ convergence curves of SHGWJA practically coincided after only 2700 function evaluations, that is, at 52.3% of the best run’s optimization history. All algorithms started their optimization runs from much larger initial costs than the target optimum: from 50 to 235.562 vs. only 0.032213. SHGWJA rapidly approached the target design, reducing the cost function to 0.0325 (less than 1% larger than the target cost) within only 900 function evaluations vs. the 3000 function evaluations performed by hybrid fast simulated annealing (HFSA) [93], the fastest competitor to achieve the same intermediate solution of 0.0325.
The mAHS algorithm (derived from [122]) was faster than the present algorithm over the first 170 function evaluations but then exhibited two large steps: one lasting about 2950 function evaluations, in which the cost function decreased from 0.0612 to 0.04, and one lasting about 3050 function evaluations, in which the cost function decreased from 0.04 to 0.034. The elitist strategy $W(\mathbf{X}_{i,tr}) \leq 1.1\,W(\mathbf{X}_{opt})$ of SHGWJA allowed for significantly improving the convergence speed in the early optimization iterations for mBBBC–UBS (derived from [123]) and MsinDE (derived from [124]) as well. In particular, the average optimization history of SHGWJA and the best optimization run of mBBBC–UBS practically coincided between 320 and 1200 function evaluations, while the best optimization runs of SHGWJA and MsinDE overlapped for the first 120 function evaluations. However, similar to what was reported above for mAHS, the optimization histories of mBBBC–UBS and MsinDE presented large steps yielding very little improvement in the cost function. In summary, the $W(\mathbf{X}_{i,tr}) \leq 1.1\,W(\mathbf{X}_{opt})$ strategy is very effective when the cost function has to be rapidly reduced, but it should be complemented by other elitist strategies such as those implemented in SHGWJA.

5.7. Two-Dimensional Path Planning

Table 9 presents the optimization results for the sixth engineering design problem solved in this study, 2-D path planning. All data listed in the table correspond to feasible solutions. The new algorithm SHGWJA was the best optimizer overall, designing the shortest path of 41.057 mm. The component algorithms of SHGWJA, the standard grey wolf optimizer (GWO) and standard JAYA, were less efficient than the hybrid algorithm developed here: they designed longer trajectories than SHGWJA (41.116 mm and 41.083 mm, respectively, vs. only 41.057 mm).
The other algorithms compared with SHGWJA ranked as follows: hybrid BBBC [93], improved JAYA [93], mBBBC–UBS (derived from [123]), mAHS (derived from [122]), hybrid harmony search (hybrid HS) [93], hybrid fast simulated annealing (HFSA) [93], MsinDE (derived from [124]) and dynamic differential annealed optimization (DDAO) [136]. Optimized trajectory lengths ranged from 41.083 mm to 41.104 mm, except for DDAO, which designed a very long trajectory of 42.653 mm. Remarkably, the simple formulation of SHGWJA solved a highly nonlinear optimization problem such as 2-D path planning more efficiently than algorithms including approximate line search and gradient information. This happened because SHGWJA is inherently able to balance the exploration and exploitation phases: the JAYA operator enhances the exploration capability of the α, β and δ wolves and forces the optimizer to search for very high-quality solutions in the exploitation phase. As in the industrial refrigeration system design problem, the high nonlinearity of the 2-D path planning problem was handled better by SHGWJA than by the hybrid HS/BBBC/SA algorithms that included line search and gradient information. Approximate line searches and gradient information are introduced in the optimization formulation to try to minimize the number of function evaluations. However, this may limit the quality of the approximation and, hence, the ability of the optimizer to generate trial solutions that are effectively located in regions of the search space where the current best record can be improved. As in the gravity dam and refrigeration system design problems, the elitist strategy $W(\mathbf{X}_{i,tr}) \leq 1.1\,W(\mathbf{X}_{opt})$ implemented in SHGWJA was effective also for mBBBC–UBS, mAHS and MsinDE and improved their optimized solutions with respect to the original BBBC–UBS, SAHS and sinDE algorithms. However, the overall rank BBBC–JAYA–HS–SA–GWO–DE of the base algorithms after SHGWJA was substantially confirmed.
The success rate of SHGWJA was 100%, considering that the solutions of all independent optimization runs corresponded to shorter trajectories than the best solutions obtained by its competitors. The present algorithm was very competitive with all other optimizers in terms of robustness. In fact, it achieved the best average optimized path length of only 41.066 mm, followed by the 41.093 mm of improved JAYA and the 41.095 mm of standard JAYA and hybrid BBBC. The standard deviation on the optimized path length was only 0.0181% of the shortest path length designed by SHGWJA. Among the top five algorithms that designed the shortest trajectories (i.e., SHGWJA, standard JAYA, hybrid BBBC, improved JAYA, and mBBBC–UBS), SHGWJA presented the lowest ratio between the worst and best solutions. Interestingly, mBBBC–UBS ranked third in terms of the best solution together with improved JAYA but only seventh in terms of its average optimized path length, 41.103 mm. Standard GWO again exhibited the largest dispersion on optimized cost, considering that only 10 independent optimization runs were executed for DDAO in [136] vs. the 20 runs carried out for standard GWO.
SHGWJA was computationally efficient also for this test problem. In this regard, Table 9 shows that the proposed SHGWJA algorithm required on average far fewer function evaluations than standard GWO and JAYA to converge to the optimal solution. In fact, SHGWJA was about 4.3 and 14.1 times faster than standard JAYA and standard GWO, respectively. The computational speed of SHGWJA was practically the same as for the HS/BBBC/SA hybrid algorithms and improved JAYA. The fastest optimization run of SHGWJA, which converged to the global optimum path length of 41.057 mm, required 3416 function evaluations, falling in the middle of the 3122 to 3615 evaluation interval recorded for the fastest algorithms reported in the literature. The mBBBC–UBS algorithm ranked sixth overall (right after HFSA, and on average about 11% slower than SHGWJA), while MsinDE and DDAO were the slowest optimizers, requiring about 50% more function evaluations than SHGWJA. The present algorithm was very robust also with respect to computational cost: just 7.6% dispersion on the number of function evaluations, followed by hybrid fast simulated annealing with 9.6%.
The excellent convergence behavior of SHGWJA is confirmed by Figure 15, which compares the optimization histories of the best runs of the algorithms listed in Table 9. The average convergence curve of SHGWJA is also shown in the figure. The plot is limited to 4200 function evaluations for the sake of clarity. The convergence curves of the best and average SHGWJA optimization runs practically coincided after only 1450 function evaluations, that is, at 42.5% of the best run's optimization history. The best optimization run of SHGWJA started from the very large cost of 254.713 mm (i.e., about 6.2 times the globally optimum length of 41.057 mm found by the present algorithm), while the initial cost for all other optimizers ranged between 50.01 and 59.62 mm (i.e., at most 45.2% longer than the shortest trajectory designed by SHGWJA). As in the other test problems discussed in this section, the present algorithm immediately recovered the initial gap in the cost function with respect to its competitors. The hybrid fast simulated annealing (HFSA) and hybrid big bang–big crunch algorithms of Ref. [93] were the only algorithms able to compete in convergence speed with SHGWJA, for the first 230 and 450 function evaluations, respectively.
The optimized trajectories obtained in the best optimization runs of SHGWJA and its competitors are compared in Figure 16. SHGWJA, hybrid HS/BBBC/SA, mBBBC–UBS and improved JAYA substantially reduced the path length between the obstacles O1 and O7 with respect to the solution of DDAO. Furthermore, the present code reduced the curvature of the central part of the designed trajectory with respect to hybrid HS/BBBC/SA, improved JAYA and mBBBC–UBS (the trajectories of mAHS and MsinDE are not shown in the figure because they are very similar to the one plotted for mBBBC–UBS), thus shortening the path length. This is confirmed by comparing the optimal solution of SHGWJA with that of hybrid BBBC, the second best solution overall. The optimal positions of the base points found by SHGWJA were (10.001; 5.778), (17.183; 16.589) and (19.696; 20.495) mm, while they were (12.362; 8.940), (15.060; 13.460) and (18.198; 18.446) mm for hybrid BBBC, which had obtained the best solution quoted in the literature so far [93]. The base points of SHGWJA may be fitted in the XY plane by a linear regression with R2 = 1, while for the base points of hybrid BBBC, the correlation coefficient is only R2 = 0.999.
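The collinearity claim can be checked numerically with a linear least-squares fit of the base point coordinates quoted above; a minimal sketch:

```python
import numpy as np

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """Coefficient of determination of a linear least-squares fit."""
    slope, intercept = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

x_shgwja = np.array([10.001, 17.183, 19.696])
y_shgwja = np.array([5.778, 16.589, 20.495])
x_bbbc = np.array([12.362, 15.060, 18.198])
y_bbbc = np.array([8.940, 13.460, 18.446])

print(r_squared(x_shgwja, y_shgwja))  # ~1.000 for the SHGWJA base points
print(r_squared(x_bbbc, y_bbbc))      # lower (quoted as 0.999 above)
```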

5.8. Mechanical Characterization of Flat Composite Panel Under Axial Compression

Table 10 presents the results obtained in the last engineering problem solved in this study. The values of the material properties and ply orientations of the axially compressed flat composite panel identified by SHGWJA and its competitors in their best optimization runs are listed in the table. The average and maximum errors on the identified parameters and on the panel's fundamental buckling mode shape for the best optimization runs of all the algorithms are also listed in the table, along with the number of finite element (FEM) analyses required in the identification process.
It can be seen that SHGWJA identified the panel's properties more accurately than its competitors. In fact, the average error on the material/layup parameters recorded for SHGWJA was only 0.03888%, while the corresponding errors for the other algorithms ranged between 0.197% (hybrid fast simulated annealing, HFSA, of Ref. [3]) and 5.214% (improved JAYA variant of Ref. [3]). The largest error on the parameters for the present algorithm was only 0.07080% (on the longitudinal modulus EL), while the other algorithms could not reduce this error below 0.389% (the best SHGWJA competitor again was HFSA). The superiority of SHGWJA over its competitors emerges clearly from the analysis of the identified parameters. In particular, SHGWJA identified the 0° ply angle to the fourth digit, while the other algorithms could at most identify it to the third digit. Furthermore, the errors on the 90° and ±45° ply orientations were only 0.01222% and 0.008889%, respectively.
The maximum residual error on the panel's buckling mode shape evaluated for the SHGWJA solution was only 1.035% vs. 2.281% for HFSA, while the other algorithms obtained error values ranging from 2.504% (hybrid fast harmony search optimization, HFHS, of Ref. [3]) to 4.421% (standard GWO). The average error on the buckling mode shape for the present algorithm was only 0.674% vs. at least 1.554% for its competitors. The largest errors on the w-displacements were localized in the regions near points A and B of the vertical control path sketched in Figure 8c, that is, where the panel is fixed to the testing machine and displacements should be close to zero. The selected mesh size allowed for avoiding numerical noise in the finite element analyses performed in the optimization search. Interestingly, SHGWJA minimized the error functional Ω of Equation (25) considering 170 control points, while the hybrid fast HS/BBBC/SA algorithms used in Ref. [3] considered only the 86 control points lying on the AB path parallel to the loading direction. In spite of this, SHGWJA significantly reduced the error on the buckling pattern with respect to the optimizers of Ref. [3].
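Equation (25) is not restated in this section. Purely as an illustration of the displacement-matching step, an error functional of this type can be sketched as a normalized least-squares residual over the control points; the normalization adopted below is an assumption, not necessarily the exact form of Equation (25).

```python
import numpy as np

def error_functional(w_fem: np.ndarray, w_meas: np.ndarray) -> float:
    """Illustrative residual between FEM-computed and optically measured
    out-of-plane displacements at the control points; the functional
    actually minimized in this study is Equation (25)."""
    scale = np.max(np.abs(w_meas))  # assumed normalization
    return float(np.sum(((w_fem - w_meas) / scale) ** 2))
```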
SHGWJA outperformed its component algorithms, standard GWO and JAYA, which failed to identify the target value of the shear modulus GLT (4317 and 4308 MPa, respectively, with 4.275% and 4.058% errors on this elastic property). Furthermore, the present algorithm required only 366 finite element analyses vs. the 5000 and 2000 analyses required by standard GWO and JAYA, respectively. The improved JAYA variant used in [3] required 975 analyses but prematurely converged to the worst solution overall, with a 20.749% error on the shear modulus. The hybrid GWO–JAYA formulation implemented by SHGWJA was thus very effective also in this highly nonlinear optimization problem.
As in the gravity dam design, refrigeration system design and 2-D path planning problems, the elitist strategy $W(\mathbf{X}_{i,tr}) \leq 1.1\,W(\mathbf{X}_{opt})$ implemented in SHGWJA was effective also for mAHS (derived from [122]) and mBBBC–UBS (derived from [123]). In fact, mAHS and mBBBC–UBS significantly improved their performance with respect to the original AHS and BBBC–UBS algorithms used in Ref. [3]. In particular, mAHS reduced the identification errors on the transverse elastic modulus ET and Poisson's ratio νLT from 2.470% to only 0.282% and from 1.589% to only 0.199%, respectively. Ply angles were also identified much more accurately than in [3]: the errors on the 90° and ±45° orientations decreased from 2.631% to 0.771% and from 1.796% to 0.620%, respectively. The 0° ply angle was identified to the third digit, reducing its value by more than one order of magnitude with respect to [3]: 0.007373° vs. 0.08884°. Furthermore, mAHS completed its search process within only 1044 finite element analyses, while the original AHS algorithm used in [3] required 1650 analyses.
The mBBBC–UBS algorithm reduced the identification errors on the transverse elastic modulus ET, shear modulus GLT and Poisson's ratio νLT from 1.503% to only 0.175%, from 3.478% to 0.773%, and from 1.987% to 0.861%, respectively. Ply angles were also identified much more accurately than in [3]: the errors on the 90° and ±45° orientations decreased from 1.652% to 0.638% and from 2.227% to 0.698%, respectively. The 0° ply angle was identified to the third digit, its value decreasing by almost a factor of 4.8, from 0.03891° to 0.008162°. Furthermore, mBBBC–UBS completed the identification process within only 819 finite element analyses, while the original BBBC–UBS algorithm required 1250 analyses in [3].
The MsinDE algorithm (derived from [124]), augmented by SHGWJA's $W(\mathbf{X}_{i,tr}) \leq 1.1\,W(\mathbf{X}_{opt})$ elitist strategy, also performed well, ranking fifth overall in terms of accuracy in panel property identification and buckling pattern reconstruction, after SHGWJA and the hybrid fast HS/BBBC/SA algorithms. MsinDE ranked sixth overall in terms of computational cost, after SHGWJA, the hybrid fast HS/BBBC/SA algorithms and mBBBC–UBS, completing the identification process within 968 finite element analyses.
The statistical dispersion on the identified panel properties and on the residual error for the buckling pattern, evaluated over the 20 independent runs, remained below 0.75%, thus confirming the robustness of SHGWJA. In its worst optimization run, SHGWJA was still able to limit the maximum errors on the panel properties and on the buckling mode to 0.08% and 1.0625%, respectively, while its best competitor, HFSA, recorded 0.389% and 2.281% for the same quantities. The present algorithm was also very robust in terms of computational cost: in fact, the standard deviation on the number of finite element analyses required by SHGWJA in the identification process over the different runs was only 10.2% of the corresponding average number of analyses. The present algorithm was on average 35.6%, 19.7% and 10.9% faster than hybrid fast SA, hybrid fast HS and hybrid fast BBBC, respectively, and 2.1 to 2.7 times faster than mBBBC–UBS, MsinDE and mAHS.
More information on the convergence behavior of SHGWJA and its competitors is available in Figure 17, which compares the convergence curves relative to the best optimization runs of the algorithms listed in Table 10. The average convergence curve of SHGWJA is also shown in the figure. The plot is limited to 1050 function evaluations for the sake of clarity. It can be seen from the figure that SHGWJA generated better intermediate designs than its competitors over the whole search process. The convergence curves of the best and average SHGWJA optimization runs practically coincided after 275 function evaluations, that is, at 75.1% of the best run's optimization history. The convergence curve recorded for the best optimization run of the hybrid fast harmony search (HFHS) algorithm was close enough to the average convergence curve of SHGWJA for the first 40 finite element analyses and from 150 to 210 analyses. HFHS and HFBBBC (hybrid fast big bang–big crunch) were the only algorithms able to reduce the error functional value below 10−4 within 350–360 structural analyses, that is, by the time SHGWJA concluded its optimization process with the final cost function value ΩOPT = 1.97·10−6. Since SHGWJA started its best run from an initial population whose best individual had Ω = 2.049, while the best optimization runs of HFHS and HFBBBC started from populations whose best individuals had Ω = 0.891, the present algorithm proved once again its inherent ability to quickly approach the global optimum solution.
In order to check whether the residual errors on the panel buckling pattern evaluated for each optimization run were caused by some inherent limitation of the SHGWJA formulation, the identification problem stated by Equation (25) was also solved in silico. For that purpose, the target buckling pattern of the panel was computed by ANSYS for the target panel properties, and SHGWJA performed the optimizations to reconstruct the target displacement field generated via FEM. Remarkably, the present algorithm always converged to the target properties, reproducing the panel displacement field with 0 residual error in all independent optimization runs. The residual errors on the buckling pattern mentioned above for the 20 SHGWJA optimization runs were thus caused by the uncertainties usually entailed by the hybrid characterization process, a very complicated task of matching experimental data and FEM results. Optically measured displacements certainly are a good target to select, because the double projection moiré method is a highly sensitive full-field measurement technique. However, in this study, control points were located at critical locations where even small changes in the measured quantities may greatly affect the success of the identification process. Hence, the present results are very satisfactory.

5.9. Statistical Analysis of Results of Engineering Optimization Problems

Besides the classical statistical analysis carried out for the above engineering optimization problems, which relied on the best, average and worst solutions and the corresponding standard deviations of the optimized cost over the independent optimization runs, the optimized designs obtained in the independent runs by SHGWJA and its competing algorithms (standard GWO, standard JAYA, improved JAYA [93], hybrid HS/BBBC/SA [93], hybrid fast HS/BBBC/SA (specifically developed for inverse problems) [3], mAHS (derived from [122]), mBBBC–UBS (derived from [123]) and MsinDE (derived from [124])) were also analyzed by performing a Wilcoxon test with a significance level of 0.05. Interestingly, the p-values determined for each pair of algorithm solutions were always smaller than 0.05, thus confirming the superiority of SHGWJA over its competitors.
In this second part of the statistical analysis, SHGWJA and its competitors are ranked with respect to five performance criteria: (i) best optimized cost (BEST); (ii) average optimized cost (AVERAGE); (iii) worst optimized cost (WORST); (iv) standard deviation on optimized cost (STD); and (v) number of function evaluations (or finite element analyses) (NFE). Table 11 lists the corresponding ranks achieved by SHGWJA in each engineering optimization problem. The table actually includes 10 entries for each ranking criterion because the welded beam and pressure vessel design problems have three and two variants, respectively. For each test problem, the total score (TOTSCO) assigned to SHGWJA (also reported in Table 11) was determined by summing the ranks achieved for the different criteria: TOTSCO = 5 may be obtained only if the optimizer ranks first with respect to all performance criteria.
It can be seen that SHGWJA ranked 1st overall 9 times out of 10 for best optimized cost, 8 times out of 10 for average and worst optimized cost, 7 times out of 10 for standard deviation on optimized cost, and 6 times out of 10 for number of function evaluations. The low rankings obtained by SHGWJA in the tension/compression spring design problem for best optimized cost (14th among the 22 MAs compared here), average optimized cost (8th), worst optimized cost (6th) and standard deviation on optimized cost (7th) do not represent a major issue if one considers that SHGWJA's best solution was 0.0126665, practically the same as the target value of 0.012665; moreover, the average and worst optimized costs were also very close to 0.012665. The same arguments apply to the refrigeration system design problem, where SHGWJA ranked fifth and seventh (among the 15 MAs compared here) with respect to the average/worst optimized cost and the standard deviation on optimized cost. In fact, the average and worst optimized costs found by SHGWJA were, respectively, 0.032215 and 0.032219, practically the same as the target optimum of 0.032213. Furthermore, the two algorithms that obtained lower standard deviations on optimized cost than SHGWJA could never converge to the 0.032213 target value in their independent runs. In the 2-D path planning problem, SHGWJA ranked fourth (among the 11 MAs compared here) with respect to the standard deviation on optimized cost; indeed, since SHGWJA's worst solution was better than the best solutions obtained by all its competitors, SHGWJA's actual rank should be first.
Table 11 shows that SHGWJA obtained a very large value of TOTSCO (i.e., 37) only in the tension/compression spring design problem and quite a large total score (i.e., 20) in the industrial refrigeration system problem. In view of this, the SHGWJA competitors that obtained cost function values lower than 0.0126665 were analyzed with respect to all performance criteria and the total score: the corresponding results are listed in Table 12. It can be seen that SHGWJA practically reached the same overall performance as GGWO and EHRJAYA, the most advanced GWO and JAYA variants considered for this test problem: 37 vs. 31 and 36 total score values. While GGWO and EHRJAYA converged to the target optimum, they were less efficient than SHGWJA with respect to three out of the other four performance indicators. Interestingly, SHGWJA would be the fifth best algorithm overall if the contribution of the best convergence run were removed from the total score (see the last column of the table), thus recovering the small gaps with GGWO and EHRJAYA and reducing its original 22-point gap from the top-ranked algorithms MPA and En(L)SHADE to only 9 points.
The same analysis of Table 12 was then carried out for the welded beam and pressure vessel design problems, limiting the set of SHGWJA’s competitors to GGWO, EHRJAYA, EJAYA, MPA, EGTO, SaDN, En(L)SHADE and QSA. These algorithms were selected because (i) all the data necessary for determining the performance indicators were available from the literature for at least one test problem, and (ii) they had achieved better total score values than SHGWJA in the tension/compression spring problem. Table 13 compares the performance indicators and total score values obtained by SHGWJA and its competitors in the welded beam and pressure vessel problems; the first entries of each row refer to the welded beam problem. Since some algorithms were run for only one of the welded beam or pressure vessel problem variants, the values of the performance indicators and total score reported for SHGWJA represent the average of the corresponding values listed in Table 12.
It can be seen from Table 13 that SHGWJA significantly improved its rank with respect to the other algorithms. En(L)SHADE remained the best algorithm over the spring/welded beam/vessel problems, but SHGWJA reduced the gap to only 7.3 points (47.3 vs. 40), nearly matching the 2nd-ranked GGWO (47.3 vs. 45). SHGWJA actually ranks above both MPA, which reached a total score of 44 for only the tension/compression spring and welded beam problems, and EHRJAYA, which converged to an infeasible solution in the pressure vessel problem and obtained a total score of 45 for only the tension/compression spring and welded beam problems: even adding only 5 points for the missing problem (best-case scenario), the total score would rise to 49 for MPA and 50 for EHRJAYA, above the 47.3 score of SHGWJA.
Interestingly, SHGWJA and En(L)SHADE can also be compared in the industrial refrigeration system design problem, while no data are available for the other algorithms. The ranks of En(L)SHADE were 〈1, 1, 1, 1, 14〉 for a total score of 18, lower than the score of 20 obtained by SHGWJA (see Table 11). However, the En(L)SHADE results were collected for a computational budget of 100,000 function evaluations, while the present algorithm required on average only 8517 function evaluations to complete its optimization runs. As mentioned in Section 5.6, when the convergence limit εconv in Equation (14) was reduced to 10−10, SHGWJA completed all optimization runs within only 10,000 function evaluations on average (a whole order of magnitude fewer than En(L)SHADE), always converging to the target global optimum of 0.032213 with a null standard deviation. Hence, the total score of SHGWJA in the industrial refrigeration system problem would be 5. Consequently, the total score of SHGWJA over the four CEC 2020 test problems (i.e., tension/compression spring, welded beam, pressure vessel, industrial refrigeration system) would be only 37 + 5.3 + 5 + 5 = 52.3, better than the total score of 15 + 8 + 17 + 18 = 58 achieved by En(L)SHADE. This confirms the high efficiency of the present algorithm.
The last part of the statistical analysis regards the trajectories of the optimization variables during the search process. It should be noted that analyzing the trajectory of only a few (very often just one or two) optimization variables of one individual of the population, as is usually done in the literature, may provide only partial information on the convergence behavior and robustness of an optimization algorithm if the selected variables do not drive the search process. For example, some optimization variables may fluctuate to a large extent over the whole search process, while other variables may converge to their target values within just a few iterations. The former may occur if the optimizer becomes stuck in a region of the search space hosting many competitive solutions that depend only weakly on the fluctuating variable(s). To overcome this limitation, the worst-case scenario was considered in this study by monitoring the evolution of the largest percent variation experienced by the optimization variables of the whole population as the search progressed. If the optimizer truly converges to the optimal solution, the design variables should not fluctuate. The selected representation is thus a sort of normalized trajectory of the variables that, cycle by cycle, have the most difficulty converging; the normalized trajectory is hence formed by the trajectories of many different variables. The maximum variation in the optimization variables over the whole population also gives information about population diversity, which tends to zero as the global optimum is reached.
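The monitored metric can be computed with a few lines of code. The sketch below is a minimal implementation under the assumption that the percent variation of each variable is measured between two consecutive iterations and normalized by the value at the previous iteration; the mock search loop only serves to show that the metric decays toward zero as the population converges.

```python
import numpy as np

def max_percent_variation(pop_prev, pop_curr):
    """Largest percent change of any design variable of any individual
    between two consecutive iterations.

    pop_prev, pop_curr: arrays of shape (n_individuals, n_variables).
    """
    # Guard against division by (near-)zero variable values
    denom = np.where(np.abs(pop_prev) > 1e-12, np.abs(pop_prev), 1e-12)
    return 100.0 * np.max(np.abs(pop_curr - pop_prev) / denom)

# Toy usage: track the metric over a mock search with shrinking moves
rng = np.random.default_rng(0)
pop = rng.uniform(0.5, 1.5, size=(30, 5))
trajectory = []
for it in range(1, 101):
    new_pop = pop + rng.normal(scale=1.0 / it, size=pop.shape)
    trajectory.append(max_percent_variation(pop, new_pop))
    pop = new_pop
print(f"start: {trajectory[0]:.1f}%  end: {trajectory[-1]:.2f}%")
```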
Figure 18 shows the plots for the best runs of SHGWJA for the engineering optimization problems solved in this study. In order to evaluate the overall behavior of SHGWJA, normalized trajectories were plotted with respect to the percent fraction of the best run’s convergence history: (i) Figure 18a presents the data obtained for the tension/compression spring, welded beam and pressure vessel test problems; and (ii) Figure 18b shows the data obtained for the concrete dam, industrial refrigeration system, 2D path planning and composite panel characterization test problems. For the sake of clarity, the scale of the vertical axes of the plot was limited to 45%.
It can be seen from Figure 18 that the largest variation in the optimization variables became smaller than 1–2% at about half of the optimization history. Such behavior was observed for all the engineering design test problems solved in this study. Oscillations in the normalized trajectory (with peaks up to 45%) were significant over the first 10% of the optimization history for the spring/beam/vessel problems, while they lasted until 25–30% of the optimization history for the dam/refrigeration system/2D path planning/composite panel problems. This was expected in view of the larger number of optimization variables and higher complexity of the latter group of test problems. In all cases, the oscillations were progressively smoothed out as the search progressed. For example, in variant 2 of the pressure vessel problem, the largest percent variation in the variables was 45% at 3% of the search, 25% at 6%, 15% at 9%, and only 2% at 17% of the search. In the composite panel identification problem, the largest percent variation in the variables was 42% at 4% of the search, 20% at 14%, 8% at 20%, 5% at 25% and only 2.5% at 30% of the search.

6. Conclusions and Future Work

This paper described SHGWJA, a novel hybrid metaheuristic optimization algorithm combining the grey wolf optimizer (GWO) and JAYA. The new algorithm utilizes an elitist approach and JAYA operators to minimize the number of function evaluations and optimize the balance between exploration and exploitation. The rationale of this study was to develop an efficient hybrid algorithm for carrying out engineering optimizations without overly complicating the formulation of the new optimizer.
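For reference, the two canonical operators that SHGWJA builds on are the JAYA move of Rao [49] and the GWO move of Mirjalili et al. [67], sketched below in Python. The elitist strategies that SHGWJA superimposes on these operators, and the way the two moves are combined, are described in the body of the paper and are not repeated in this sketch; the usage at the bottom relies on hypothetical leader/best/worst vectors.

```python
import numpy as np

def jaya_move(x, best, worst, rng):
    """Canonical JAYA update (Rao, 2016): move toward the best solution
    and away from the worst solution."""
    r1, r2 = rng.random(x.size), rng.random(x.size)
    return x + r1 * (best - np.abs(x)) - r2 * (worst - np.abs(x))

def gwo_move(x, alpha, beta, delta, a, rng):
    """Canonical GWO update (Mirjalili et al., 2014): average of the moves
    dictated by the three leading wolves; `a` decreases from 2 to 0."""
    leaders = np.stack([alpha, beta, delta])
    A = 2.0 * a * rng.random(leaders.shape) - a
    C = 2.0 * rng.random(leaders.shape)
    return np.mean(leaders - A * np.abs(C * leaders - x), axis=0)

# Hypothetical usage with random leader/best/worst vectors
rng = np.random.default_rng(0)
dim = 4
x = rng.uniform(-5.0, 5.0, dim)           # current design
pop_best = rng.uniform(-1.0, 1.0, dim)    # best individual so far
pop_worst = rng.uniform(3.0, 5.0, dim)    # worst individual so far
print(jaya_move(x, pop_best, pop_worst, rng))
alpha, beta, delta = (rng.uniform(-1.0, 1.0, dim) for _ in range(3))
print(gwo_move(x, alpha, beta, delta, a=1.0, rng=rng))
```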
SHGWJA was successfully tested in two classical mathematical optimization problems (the Rosenbrock and Rastrigin functions) with up to 1000 variables, and in seven “real-world” engineering optimization problems (four of which were taken from the CEC 2020 test suite), covering very different fields such as civil engineering (shape optimization of a concrete gravity dam), mechanical engineering (tension/compression spring, welded beam, pressure vessel, industrial refrigeration system), aeronautical engineering (mechanical characterization of a flat composite panel under axial compression) and robotics (2D path planning). The seven engineering problems included up to 14 optimization variables and 721 nonlinear constraints.
SHGWJA was extensively compared with the metaheuristic optimization literature: in particular, with 31, 29, 11, 21, 24, 27, 14, 10 and 10 state-of-the-art optimizers in the nine test problems, respectively (each of which had in turn been proven in the literature to outperform up to 34 other MAs), including the algorithms providing the best solutions in the literature for each problem, high-performance algorithms and CEC competition winners. The comparison carried out in this article was very indicative considering that most of the selected competitors were (i) improved variants of the most commonly used MAs, (ii) recently developed methods that are very competitive with high-performance algorithms and CEC winners, and (iii) hybrid algorithms that use approximate line searches and gradient information to generate new trial designs.
The present algorithm always converged to the global optima quoted in the literature or to solutions very close to them. In particular, in the mathematical optimization problems with target optima of 0, SHGWJA’s best costs were about 10−15, with average standard deviation values on the order of 10−13 to 10−12. In the “real-world” problems, a very small deviation (only 0.0121% of the target global optimum cost) was observed only for the spring design problem. SHGWJA always ranked first or second in terms of optimized cost.
The computational cost of SHGWJA was always lower than those of the standard GWO and JAYA formulations, as well as those of the advanced GWO and JAYA variants. Such behavior confirmed the validity of the proposed hybrid formulation, which significantly improved the performance of the GWO and JAYA components. Remarkably, the present algorithm always ranked first or second overall with respect to average computational speed, and its fastest optimization runs were better than or highly competitive with those of the best MAs.
SHGWJA was very robust, achieving a very high success rate in all the test problems. A null or near-zero standard deviation on optimized cost was obtained in most cases. Furthermore, the standard deviation of the number of function evaluations required in the optimization process never exceeded 35% of the average number of function evaluations.
The results presented in this paper fully support the conclusion that SHGWJA is a very efficient tool for engineering design optimization. The proposed algorithm does not present major theoretical limitations that could affect its search ability and robustness. However, the selected engineering test problems, despite covering many different fields, included at most only 14 optimization variables. For this reason, further research is currently being carried out to maximize the convergence speed of SHGWJA in constrained optimization problems with a larger number of design variables. This may be achieved by introducing additional JAYA operators into the definitions of wolves α, β and δ. For example, preliminary results obtained in the weight minimization of a planar 200-bar truss structure optimized with 29 sizing variables (the structure must carry three independent loading conditions, the optimization problem includes 1200 nonlinear constraints on bar stresses, and the sizing variables correspond to the cross-sectional areas of the elements belonging to each group) confirm the validity of the above-mentioned approach: SHGWJA converged to the target optimum weight of 11,542.4 kg. Furthermore, numerical tests carried out on the whole CEC 2020 test suite of mechanical engineering benchmark problems, which includes 19 test cases also covering discrete optimization, equality constraints and topology optimization (with up to 30 design variables and 86 nonlinear inequality/equality constraints), indicate that the present algorithm can converge to the target optima within a significantly smaller number of function evaluations than the computational budget allowed in CEC competitions.
An interesting issue is the scalability of SHGWJA. While mathematical functions (e.g., Rosenbrock and Rastrigin) may easily be scaled because the analytical formulation of the optimization problem does not change with the problem dimension, the same is not in general true for engineering optimization problems. In this regard, other preliminary investigations are currently being conducted on a classical large-scale structural optimization problem: the weight minimization of a planar 200-bar truss structure subject to five independent loading conditions, with 3500 nonlinear constraints on nodal displacements and element stresses. Because of the symmetry of the structure, the bars can be grouped into 96, 105 or 200 groups: hence, the optimization problem may be solved with 96, 105 or 200 sizing variables. The lowest weights reported in the literature [143] for the 96-, 105- and 200-variable problem variants are 12,823.808 kg, 13,062.339 kg and 13,054.841 kg, respectively. However, these designs are not consistent with the amount of design freedom included in the optimization: increasing the number of independent sizing variables should never increase the minimum attainable weight. Interestingly, this problem can be solved by SHGWJA, which obtains optimized structural weights of 12,822.902 kg, 12,821.543 kg and 12,820.289 kg for the 96-, 105- and 200-variable problem variants, respectively.
SHGWJA was applied in this study to single-objective optimization problems. However, since both of its component algorithms, GWO and JAYA, have already been successfully used in multi-objective optimization problems (see, for example, Refs. [108,144]), and our hybrid SHGWJA formulation does not entail any theoretical limitations in this regard, further studies will focus on developing a suitable version of the proposed algorithm for multi-objective optimization problems. For example, multiple populations could be used for independently solving each single-objective problem, and a “good” solution weighted over the single-objective solutions could be generated for the multi-objective optimization problem.

Author Contributions

Conceptualization, methodology, and validation, C.F. and L.L.; formal analysis, C.F. and C.I.P.; software, investigation, and data curation, C.F.; writing—original draft preparation, C.F.; writing—review and editing, L.L. and C.I.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data relevant to this study are available upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Degertekin, S.O.; Minooei, S.M.; Santoro, L.; Trentadue, B.; Lamberti, L. Large-scale truss-sizing optimization with enhanced hybrid HS algorithm. Appl. Sci. 2021, 11, 3270. [Google Scholar] [CrossRef]
  2. Degertekin, S.O.; Yalcin Bayar, G.; Lamberti, L. Parameter-free Jaya algorithm for truss sizing-layout optimization under natural frequency constraints. Comput. Struct. 2021, 245, 106461. [Google Scholar] [CrossRef]
  3. Ficarella, E.; Lamberti, L.; Degertekin, S.O. Mechanical identification of materials and structures with optical methods and metaheuristic optimization. Materials 2019, 12, 2133. [Google Scholar] [CrossRef]
  4. Li, H.; Zhang, Q.; Qin, X.; Yuantao, S. Raw vibration signal pattern recognition with automatic hyper-parameter-optimized convolutional neural network for bearing fault diagnosis. Proc. Inst. Mech. Eng. C J. Mech. Eng. Sci. 2019, 234, 343–360. [Google Scholar] [CrossRef]
  5. Guo, J.; Liu, C.; Cao, J.; Jiang, D. Damage identification of wind turbine blades with deep convolutional neural networks. Renew. Energy 2021, 174, 122–133. [Google Scholar] [CrossRef]
  6. Silvestrin, P.V.; Ritt, M. An iterated tabu search for the multi-compartment vehicle routing problem. Comput. Oper. Res. 2017, 81, 191–202. [Google Scholar] [CrossRef]
  7. Afzal, A.; Buradi, A.; Jilte, R.; Shaik, S.; Kaladgi, A.R.; Arici, M.; Lee, C.T.; Nižetić, S. Optimizing the thermal performance of solar energy devices using meta-heuristic algorithms: A critical review. Renew. Sustain. Energy Rev. 2023, 173, 112903. [Google Scholar] [CrossRef]
  8. Bui, Q.-T. Metaheuristic algorithms in optimizing neural network: A comparative study for forest fire susceptibility mapping in Dak Nong, Vietnam. Geomat. Nat. Hazards Risk 2019, 10, 136–150. [Google Scholar] [CrossRef]
  9. Zubaidi, S.L.; Abdulkareem, I.H.; Hashim, K.S.; Al-Bugharbee, H.; Ridha, H.M.; Gharghan, S.K.; Al-Qaim, F.F.; Muradov, M.; Kot, P.; Al-Khaddar, R. Hybridised artificial neural network model with slime mould algorithm: A novel methodology for prediction of urban stochastic water demand. Water 2020, 12, 2692. [Google Scholar] [CrossRef]
  10. Brion, D.A.J.; Pattinson, S.W. Generalisable 3D printing error detection and correction via multi-head neural networks. Nat. Commun. 2022, 13, 4654. [Google Scholar] [CrossRef]
  11. Anter, A.M.; Oliva, D.; Thakare, A.; Zhang, Z. AFCM-LSMA: New intelligent model based on Lévy slime mould algorithm and adaptive fuzzy C-means for identification of COVID-19 infection from chest X-ray images. Adv. Eng. Inform. 2021, 49, 101317. [Google Scholar] [CrossRef]
  12. Meenachi, L.; Ramakrishnan, S. Metaheuristic search based feature selection methods for classification of cancer. Pattern Recognit. 2021, 119, 108079. [Google Scholar] [CrossRef]
  13. Oliva, D.; Ortega-Sanchez, N.; Hinojosa, S.; Perez-Cisneros, M. Modern Metaheuristics in Image Processing; CRC Press: Boca Raton, FL, USA, 2023. [Google Scholar]
  14. Goldberg, D.E. Genetic Algorithms in Search, Optimization, and Machine Learning; Addison-Wesley: Reading, MA, USA, 1989. [Google Scholar]
  15. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–72. [Google Scholar] [CrossRef]
  16. Storn, R.M.; Price, K.V. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  17. Price, K.V.; Storn, R.M.; Lampinen, J.A. Differential Evolution A Practical Approach to Global Optimization; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  18. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar]
  19. Beyer, H.G.; Schwefel, H.P. Evolution strategies—A comprehensive introduction. Nat. Comput. 2002, 1, 3–52. [Google Scholar] [CrossRef]
  20. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  21. Hayyolalam, V.; Kazem, A.A.P. Black widow optimization algorithm: A novel meta-heuristic approach for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 87, 103249. [Google Scholar] [CrossRef]
  22. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  23. Van Laarhoven, P.J.M.; Aarts, E.H.L. Simulated Annealing: Theory and Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1987. [Google Scholar]
  24. Kaveh, A.; Talatahari, S. A novel heuristic optimization method: Charged system search. Acta Mech. 2010, 213, 267–289. [Google Scholar] [CrossRef]
  25. Kaveh, A.; Motie Share, M.A.; Moslehi, M. Magnetic charged system search: A new meta-heuristic algorithm for optimization. Acta Mech. 2013, 224, 85–107. [Google Scholar] [CrossRef]
  26. Kaveh, A.; Khayat Azad, M. A new meta-heuristic method: Ray optimization. Comput. Struct. 2012, 112–113, 283–294. [Google Scholar] [CrossRef]
  27. Kaveh, A.; Mahdavi, V.R. Colliding bodies optimization: A novel meta-heuristic method. Comput. Struct. 2014, 139, 18–27. [Google Scholar] [CrossRef]
  28. Kaveh, A.; Bakhshpoori, T. A new metaheuristic for continuous structural optimization: Water evaporation optimization. Struct. Multidiscip. Optim. 2016, 54, 23–43. [Google Scholar] [CrossRef]
  29. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84. [Google Scholar] [CrossRef]
  30. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  31. Abdel-Basset, M.; Mohamed, R.; Sallam, K.M.; Chakrabortty, R.K. Light spectrum optimizer: A novel physics-inspired metaheuristic optimization algorithm. Mathematics 2022, 10, 3466. [Google Scholar] [CrossRef]
  32. Alatas, B. ACROA: Artificial chemical reaction optimization algorithm for global optimization. Expert Syst. Appl. 2011, 38, 13170–13180. [Google Scholar] [CrossRef]
  33. Abdechiri, M.; Meybodi, M.R.; Bahrami, H. Gases Brownian motion optimization: An algorithm for optimization (GBMO). Appl. Soft Comput. 2013, 13, 2932–2946. [Google Scholar] [CrossRef]
  34. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  35. Erol, O.K.; Eksin, I. A new optimization method: Big Bang-Big Crunch. Adv. Eng. Softw. 2006, 37, 106–111. [Google Scholar] [CrossRef]
  36. Rashedi, E.; Nezamabadipour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inform. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  37. Hosseini, H.S. Principal components analysis by the galaxy-based search algorithm: A novel metaheuristic for continuous optimisation. Int. J. Comput. Sci. Eng. 2011, 6, 132–140. [Google Scholar] [CrossRef]
  38. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inform. Sci. 2013, 222, 175–184. [Google Scholar] [CrossRef]
  39. Kumar, V.; Kumar, D. An astrophysics-inspired Grey wolf algorithm for numerical optimization and its application to engineering design problems. Adv. Eng. Softw. 2017, 112, 231–254. [Google Scholar] [CrossRef]
  40. Amjad, A.; Hussam, N. Supernova optimizer: A novel natural inspired meta-heuristic. Mod. Appl. Sci. 2018, 12, 32–50. [Google Scholar]
  41. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  42. Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  43. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  44. Glover, F. Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 1986, 13, 533–549. [Google Scholar] [CrossRef]
  45. Glover, F.; Laguna, M. Tabu Search; Kluwer Academic Publishers: Boston, MA, USA, 1997. [Google Scholar]
  46. Geem, Z.W.; Kim, J.H.; Loganathan, G. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  47. Lee, K.S.; Geem, Z.W. A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Comput. Methods Appl. Mech. Eng. 2005, 194, 3902–3933. [Google Scholar] [CrossRef]
  48. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  49. Rao, R.V. Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems. Int. J. Ind. Eng. Comput. 2016, 7, 19–34. [Google Scholar]
  50. Zhang, Y.; Jin, Z. Group teaching optimization algorithm: A novel metaheuristic method for solving global optimization problems. Expert Syst. Appl. 2020, 148, 113246. [Google Scholar] [CrossRef]
  51. Matoušová, I.; Trojovský, P.; Dehghani, M.; Trojovská, E.; Kostra, J. Mother optimization algorithm: A new human-based metaheuristic approach for solving engineering optimization. Sci. Rep. 2023, 13, 10312. [Google Scholar] [CrossRef]
  52. Trojovský, P. A new human-based metaheuristic algorithm for solving optimization problems based on preschool education. Sci. Rep. 2023, 13, 21472. [Google Scholar] [CrossRef] [PubMed]
  53. Gopi, S.; Mohapatra, P. Learning cooking algorithm for solving global optimization problems. Sci. Rep. 2024, 14, 13359. [Google Scholar] [CrossRef] [PubMed]
  54. Zhang, Q.; Wang, R.; Yang, J.; Ding, K.; Li, Y.; Hu, J. Collective decision optimization algorithm: A new heuristic optimization method. Neurocomputing 2017, 221, 123–137. [Google Scholar] [CrossRef]
  55. Kaveh, A.; Talatahari, S. Optimum design of skeletal structures using imperialist competitive algorithm. Comput. Struct. 2010, 88, 1220–1229. [Google Scholar] [CrossRef]
  56. Askari, Q.; Younas, I.; Saeed, M. Political Optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl. Based Syst. 2020, 195, 105709. [Google Scholar] [CrossRef]
  57. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  58. Eberhart, R.C.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995. [Google Scholar]
  59. Clerc, M. Particle Swarm Optimization; ISTE Publishing Company: London, UK, 2006. [Google Scholar]
  60. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. 1996, 26, 29–41. [Google Scholar] [CrossRef] [PubMed]
  61. Bonabeau, E.; Dorigo, M.; Theraulaz, G. Inspiration for optimization from social insect behaviour. Nature 2000, 406, 39–42. [Google Scholar] [CrossRef] [PubMed]
  62. Dorigo, M.; Stutzle, T. Ant Colony Optimization; MIT Press: Cambridge, MA, USA, 2004. [Google Scholar]
  63. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  64. Yang, X.-S. Firefly algorithm, stochastic test functions and design optimization. Int. J. Bio-Inspir. Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  65. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Mixed variable structural optimization using firefly algorithm. Comput. Struct. 2011, 89, 2325–2336. [Google Scholar] [CrossRef]
  66. Mirjalili, S. The antlion optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  67. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  68. Pierezan, J.; Dos Santos Coelho, L. Coyote Optimization Algorithm: A new metaheuristic for global optimization problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation, Rio de Janeiro, Brazil, 8–13 July 2018; 8477769. [Google Scholar]
  69. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl. Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  70. Coufal, P.; Hubálovský, S.; Hubálovská, M.; Balogh, Z. Snow Leopard Optimization algorithm: A new nature-based optimization algorithm for solving optimization problems. Mathematics 2021, 9, 2382. [Google Scholar] [CrossRef]
  71. Yang, X.-S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature and Biologically Inspired Computing, Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
  72. Yang, X.S.; Deb, S. Engineering optimisation by cuckoo search. Int. J. Math. Model. 2010, 1, 330–343. [Google Scholar] [CrossRef]
  73. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  74. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng. 2022, 392, 114616. [Google Scholar] [CrossRef]
  75. Yang, X.-S. A new metaheuristic bat-inspired algorithm. In Studies in Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2010; Volume 284, pp. 65–74. [Google Scholar]
  76. Yang, X.-S.; Gandomi, A.H. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
  77. Kaveh, A.; Farhoudi, N. A new optimization method: Dolphin echolocation. Adv. Eng. Softw. 2013, 59, 53–70. [Google Scholar] [CrossRef]
  78. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Soft. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  79. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  80. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  81. Sadee, H.T.; Abdulazeez, A.M. Giant trevally optimizer (GTO): A novel metaheuristic algorithm for global optimization and challenging engineering problems. IEEE Access 2022, 10, 121615–121640. [Google Scholar] [CrossRef]
  82. Hwang, S.F.; He, R.S. Improving real-parameter genetic algorithm with simulated annealing for engineering problems. Adv. Eng. Softw. 2006, 37, 406–418. [Google Scholar] [CrossRef]
  83. Fesanghary, M.; Mahdavi, M.; Minary-Jolandan, M.; Alizadeh, Y. Hybridizing harmony search algorithm with sequential quadratic programming for engineering optimization problems. Comput. Methods Appl. Mech. Eng. 2008, 197, 3080–3891. [Google Scholar] [CrossRef]
  84. Lamberti, L. An efficient simulated annealing algorithm for design optimization of truss structures. Comput. Struct. 2008, 86, 1936–1953. [Google Scholar] [CrossRef]
  85. Kaveh, A.; Talatahari, S. Particle swarm optimizer, ant colony strategy and harmony search scheme hybridized for optimization of truss structures. Comput. Struct. 2009, 87, 267–283. [Google Scholar] [CrossRef]
  86. Park, J.-B.; Jeong, Y.-W.; Shin, J.-R.; Lee, K.Y. An improved particle swarm optimization for nonconvex economic dispatch problems. IEEE Trans. Power Syst. 2010, 25, 156–166. [Google Scholar] [CrossRef]
  87. Degertekin, S.O. Improved harmony search algorithms for sizing optimization of truss structures. Comput. Struct. 2012, 92–93, 229–241. [Google Scholar] [CrossRef]
  88. Yildiz, A.R. A new hybrid differential evolution algorithm for the selection of optimal machining parameters in milling operations. Appl. Soft Comput. 2013, 13, 1561–1566. [Google Scholar] [CrossRef]
  89. Kaveh, A.; Bakhshpoori, T.; Afshari, E. An efficient hybrid particle swarm and swallow swarm optimization algorithm. Comput. Struct. 2014, 143, 40–59. [Google Scholar] [CrossRef]
  90. Ting, T.O.; Yang, X.S.; Cheng, S.; Huang, K. Hybrid metaheuristic algorithms: Past, present, and future. In Recent Advances in Swarm Intelligence and Evolutionary Computation; Springer: Cham, Switzerland, 2015; Volume 585, pp. 73–81. [Google Scholar]
  91. Nenavath, H.; Jatoth, R.K. Hybridizing sine cosine algorithm with differential evolution for global optimization and object tracking. Appl. Soft Comput. 2018, 62, 1019–1043. [Google Scholar] [CrossRef]
  92. Gupta, S.; Deep, K.; Mirjalili, S. An efficient equilibrium optimizer with mutation strategy for numerical optimization. Appl. Soft Comput. 2020, 96, 106542. [Google Scholar] [CrossRef]
  93. Ficarella, E.; Lamberti, L.; Degertekin, S.O. Comparison of three novel hybrid metaheuristic algorithms for structural optimization problems. Comput. Struct. 2021, 244, 106395. [Google Scholar] [CrossRef]
  94. Salgotra, S.; Singha, U.; Singh, G.; Mittal, N.; Gandomi, A.H. A self-adaptive hybridized differential evolution naked mole-rat algorithm for engineering optimization problems. Comput. Methods Appl. Mech. Eng. 2021, 383, 113916. [Google Scholar] [CrossRef]
  95. Welhazi, Y.; Guesmi, T.; Alshammari, B.M.; Alqunun, K.; Alateeq, A.; Almalaq, Y.; Alsabhan, R.; Abdallah, H.H. A novel hybrid chaotic Jaya and Sequential Quadratic Programming method for robust design of power system stabilizers and static VAR compensator. Energies 2022, 15, 860. [Google Scholar] [CrossRef]
  96. Zheng, R.; Jia, H.; Abualigah, L.; Liu, Q.; Wang, S. An improved arithmetic optimization algorithm with forced switching mechanism for global optimization problems. Math. Biosci. Eng. 2022, 19, 473–512. [Google Scholar] [CrossRef] [PubMed]
  97. Bouaouda, A.; Sayouti, Y. Hybrid meta-heuristic algorithms for optimal sizing of hybrid renewable energy system: A review of the state-of-the-art. Arch. Computat. Methods Eng. 2022, 29, 4049–4083. [Google Scholar] [CrossRef]
  98. Zhang, Y.J.; Wang, Y.F.; Tao, L.W.; Yan, Y.X.; Zhao, J.; Gao, Z.M. Self-adaptive classification learning hybrid JAYA and Rao-1 algorithm for large-scale numerical and engineering problems. Eng. Appl. Artif. Intell. 2022, 114, 105069. [Google Scholar] [CrossRef]
  99. Shang, C.; Zhou, T.; Liu, S. Optimization of complex engineering problems using modified sine cosine algorithm. Sci. Rep. 2022, 12, 20528. [Google Scholar] [CrossRef]
  100. Ch, L.K.; Kamboj, V.K.; Bath, S.K. Hybridizing slime mould algorithm with simulated annealing algorithm: A hybridized statistical approach for numerical and engineering design problems. Complex Intell. Syst. 2023, 9, 1525–1582. [Google Scholar] [CrossRef]
  101. Chun, Y.; Hua, X.; Qi, C.; Yao, Y.X. Improved marine predators algorithm for engineering design optimization problems. Sci. Rep. 2024, 14, 13000. [Google Scholar] [CrossRef]
  102. Hassan, M.H.; Kamel, S.; Mohamed, A.W. Enhanced gorilla troops optimizer powered by marine predator for global optimization and engineering design. Sci. Rep. 2024, 14, 7650. [Google Scholar] [CrossRef]
  103. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  104. Ho, Y.C.; Pepyne, D.L. Simple explanation of the No-Free-Lunch Theorem and its implications. J. Optim. Theory Appl. 2002, 115, 549–570. [Google Scholar] [CrossRef]
  105. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Zamani, H.; Bahreininejad, A. GGWO: Gaze cues learning-based grey wolf optimizer and its applications for solving engineering problems. J. Comput. Sci. 2022, 61, 101636. [Google Scholar] [CrossRef]
  106. Tsai, H.-C.; Shi, J.-Y. Potential corrections to grey wolf optimizer. Appl. Soft Comput. 2024, 161, 111776. [Google Scholar] [CrossRef]
  107. Zitar, R.A.; Al-Betar, M.A.; Awadallah, M.A.; Doush, I.A.; Assaleh, K. An intensive and comprehensive overview of JAYA algorithm, its versions and applications. Arch. Comput. Methods Eng. 2022, 29, 763–792. [Google Scholar] [CrossRef]
  108. DaSilva, L.S.A.; Lucio, Y.L.S.; Coelho, L.D.; Mariani, V.C.; Rao, R.V. A comprehensive review on Jaya optimization algorithm. Artif. Intell. Rev. 2023, 56, 4329–4361. [Google Scholar] [CrossRef]
  109. Degertekin, S.O.; Lamberti, L.; Ugur, I.B. Discrete sizing/layout/topology optimization of truss structures with an advanced Jaya algorithm. Appl. Soft Comput. 2019, 79, 363–390. [Google Scholar] [CrossRef]
  110. Chakraborty, U.K. Semi-steady-state Jaya algorithm for optimization. Appl. Sci. 2020, 10, 5388. [Google Scholar] [CrossRef]
  111. Rao, R.V.; Saroj, A. A self-adaptive multi-population based Jaya algorithm for engineering optimization. Swarm Evol. Comput. 2017, 37, 1–26. [Google Scholar]
  112. Zhang, Y.; Chi, A.; Mirjalili, S. Enhanced Jaya algorithm: A simple but efficient optimization method for constrained engineering design problems. Knowl.-Based Syst. 2021, 233, 107555. [Google Scholar] [CrossRef]
  113. Zhang, G.; Wan, C.; Xue, S.; Xie, L. A global-local hybrid strategy with adaptive space reduction search method for structural health monitoring. Appl. Math. Model. 2023, 121, 231–251. [Google Scholar] [CrossRef]
  114. Chen, W.N.; Zhang, J.; Lin, Y.; Chen, N.; Zhan, Z.H.; Chang, H.; Li, Y.; Shi, Y. Particle swarm optimization with an aging leader and challengers. IEEE Trans. Evol. Comput. 2013, 17, 241–258. [Google Scholar] [CrossRef]
  115. Qiao, J.; Wang, G.; Yang, Z.; Luo, X.; Chen, J.; Li, K.; Liu, P. A hybrid particle swarm optimization algorithm for solving engineering problem. Sci. Rep. 2024, 14, 8357. [Google Scholar] [CrossRef] [PubMed]
  116. Hansen, N.; Müller, S.D.; Koumoutsakos, P. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol. Comput. 2003, 11, 1–18. [Google Scholar] [CrossRef] [PubMed]
  117. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation, Beijing, China, 6–11 July 2014; pp. 1658–1665. [Google Scholar]
  118. Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation, Donostia-San Sebastian, Spain, 5–8 June 2017; pp. 372–379. [Google Scholar]
  119. Ali, W.; Mohamed, A.W.; Hadi, A.A.; Fattouh, A.M.; Jambi, K.M. LSHADE with semi-parameter adaptation hybrid with CMA-ES for solving CEC2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation, Donostia-San Sebastian, Spain, 5–8 June 2017; pp. 145–152. [Google Scholar]
  120. Miar Naeimi, F.; Azizyan, G.; Rashki, M. Multi-level cross entropy optimizer (MCEO): An evolutionary optimization algorithm for engineering problems. Eng. Comput. 2018, 34, 719–739. [Google Scholar] [CrossRef]
  121. Azizyan, G.; Miar Naeimi, F.; Rashki, M.; Shabakhty, S. Flying Squirrel Optimizer (FSO): A novel SI-based optimization algorithm for engineering problems. Iran. J. Optim. 2019, 11, 177–205. [Google Scholar]
  122. Hasancebi, O.; Erdal, F.; Saka, M.P. Adaptive harmony search method for structural optimization. ASCE J. Struct. Eng. 2010, 136, 419–431. [Google Scholar] [CrossRef]
  123. Kazemzadeh Azad, S.; Hasancebi, O.; Kazemzadeh Azad, S.; Erol, O.K. Upper bound strategy in optimum design of truss structures: A big bang-big crunch algorithm based application. Adv. Struct. Eng. 2013, 16, 1035–1046. [Google Scholar] [CrossRef]
  124. Draa, A.; Bouzoubia, S.; Boukhalfa, I. A sinusoidal differential evolution algorithm for numerical optimisation. Appl. Soft Comput. 2015, 27, 99–126. [Google Scholar] [CrossRef]
  125. Karam, M.S.; Saber, M.E.; Ripon, K.C.; Michael, J.R. Improved multi-operator differential evolution algorithm for solving unconstrained problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation, Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar]
  126. Tang, H.; Lee, J. Adaptive initialization LSHADE algorithm enhanced with gradient-based repair for real-world constrained optimization. Knowl.-Based Syst. 2022, 246, 108696. [Google Scholar] [CrossRef]
  127. Gupta, S.; Abderazek, H.; Yildiz, B.S.; Yildiz, A.R.; Mirjalili, S.; Sait, S.M. Comparison of metaheuristic optimization algorithms for solving constrained mechanical design optimization problems. Expert Syst. Appl. 2021, 183, 115351. [Google Scholar] [CrossRef]
  128. Tzanetos, A.; Blondin, M. A qualitative systematic review of metaheuristics applied to tension/compression spring design problem: Current situation, recommendations, and research direction. Eng. Appl. Artif. Intell. 2023, 118, 105521. [Google Scholar] [CrossRef]
  129. Chakraborty, S.; Saha, A.K.; Sharma, S.; Sahoo, S.K.; Pal, G. Comparative performance analysis of differential evolution variants on engineering design problems. J. Bionic Eng. 2022, 19, 1140–1160. [Google Scholar] [CrossRef]
  130. Kumar, A.; Das, S.; Zelinka, I. A self-adaptive spherical search algorithm for real-world constrained optimization problems. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, Cancún, Mexico, 8–12 July 2020; pp. 13–14. [Google Scholar]
  131. Sharma, D.; Jabeen, S.D. Hybridizing interval method with a heuristic for solving real-world constrained engineering optimization problems. Structures 2023, 56, 104993. [Google Scholar] [CrossRef]
  132. Uray, E.; Carbas, S.; Geem, Z.W.; Kim, S. Parameters optimization of Taguchi Method integrated Hybrid Harmony Search algorithm for engineering design problems. Mathematics 2022, 10, 327. [Google Scholar] [CrossRef]
  133. Paul, H.; Tay, A.O. Optimal design of an industrial refrigeration system. In Proceedings of the International Conference on Optimization Techniques and Applications, Singapore, 8–10 April 1987; pp. 427–435. [Google Scholar]
  134. Tsipianitis, A.; Tsompanakis, Y. Improved Cuckoo Search algorithmic variants for constrained nonlinear optimization. Adv. Eng. Softw. 2020, 149, 102865. [Google Scholar] [CrossRef]
  135. Kumar, A.; Wu, G.; Ali, M.Z.; Mallipeddi, R.; Suganthan, P.N.; Das, S. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 2020, 56, 100693. [Google Scholar] [CrossRef]
  136. Ghafil, H.N.; Jármai, K. Dynamic differential annealed optimization: New metaheuristic optimization algorithm for engineering applications. Appl. Soft Comput. 2020, 93, 106392. [Google Scholar] [CrossRef]
  137. Gasparetto, A.; Zanotto, V. Optimal trajectory planning for industrial robots. Adv. Eng. Softw. 2010, 41, 548–556. [Google Scholar] [CrossRef]
  138. Li, B.; Shao, Z. Simultaneous dynamic optimization: A trajectory planning method for nonholonomic car-like robots. Adv. Eng. Softw. 2020, 87, 30–42. [Google Scholar] [CrossRef]
  139. Sciammarella, C.A.; Sciammarella, F.M. Experimental Mechanics of Solids; Wiley: Chichester, UK, 2012. [Google Scholar]
  140. Sciammarella, C.A.; Lamberti, L.; Boccaccio, A. A general model for moiré contouring. Part I: Theory. Opt. Eng. 2008, 47, 033605. [Google Scholar] [CrossRef]
  141. Sciammarella, C.A.; Lamberti, L.; Boccaccio, A.; Cosola, E.; Posa, D. A general model for moiré contouring. Part II: Applications. Opt. Eng. 2008, 47, 033606. [Google Scholar] [CrossRef]
  142. General Stress Optics, Inc. Holo-Moiré Strain Analyzer (HoloStrain); Version 2.0; General Stress Optics, Inc.: Chicago, IL, USA, 2013; Available online: http://www.stressoptics.com (accessed on 20 September 2024).
  143. Lamberti, L.; Pappalettere, C. Move limits definition in structural optimization with sequential linear programming. Part II: Numerical examples. Comput. Struct. 2003, 81, 215–238. [Google Scholar] [CrossRef]
  144. Mirjalili, S.; Saremi, S.; Mirjalili, S.M.; Coelho, L.D. Multi-objective grey wolf optimizer: A novel algorithm for multi-criterion optimization. Expert Syst. Appl. 2016, 47, 106–119. [Google Scholar] [CrossRef]
Figure 1. Explanation of elitist mirroring strategy implemented by SHGWJA.
Figure 2. Flowchart of SHGWJA.
Figure 3. Schematic of concrete gravity dam.
Figure 4. Schematic of tension/compression spring.
Figure 5. Schematic of welded beam.
Figure 6. (a) Three-dimensional view and (b) schematic of pressure vessel.
Figure 7. Design space of 2D path planning problem.
Figure 8. (a) The experimental moiré setup used in the composite panel identification; (b) the phase of the moiré pattern at the buckling onset; (c) the finite element model of the tested specimen with indication of the control paths used for building the error functional Ω.
Figure 9. Comparison of convergence curves obtained in the concrete dam problem.
Figure 10. Comparison of concrete dam optimized shapes obtained by different algorithms.
Figure 11. Comparison of convergence curves obtained in the tension/compression spring problem. On the vertical axis, the comma is used as the decimal separator.
Figure 12. Comparison of convergence curves obtained in the welded beam design problem. On the vertical axis, the comma is used as the decimal separator.
Figure 13. Comparison of convergence curves obtained in the pressure vessel design problem.
Figure 14. Comparison of convergence curves obtained in the industrial refrigeration system problem. On the vertical axis, the comma is used as the decimal separator.
Figure 15. Comparison of convergence curves obtained in the 2D path planning problem. On the vertical axis, the comma is used as the decimal separator.
Figure 16. Comparison of optimal trajectories obtained in the 2D path planning problem by different MAs.
Figure 17. Comparison of convergence curves obtained in the identification problem of the axially compressed flat composite panel. On the vertical axis, the comma is used as the decimal separator.
Figure 18. Normalized trajectories of optimization variables recorded for SHGWJA in: (a) tension/compression spring, welded beam and pressure vessel design problems; (b) concrete dam, refrigeration system, 2D path planning and composite panel problems. On the vertical axis, the comma is used as the decimal separator.
Table 1. Optimization results for the unimodal Rosenbrock function.

| Algorithm | NDV | NPOP | Best | Average | Worst | ST Dev | NFE |
| SHGWJA—Present | 10 | 10 | 5.179·10−13 | 1.044·10−12 | 2.291·10−12 | 1.117·10−12 | 15,000 |
| | 30 | 10 | 4.789·10−13 | 2.832·10−12 | 3.706·10−12 | 1.029·10−12 | 15,000 |
| | 50 | 10 | 4.998·10−13 | 3.959·10−12 | 6.803·10−12 | 2.669·10−12 | 15,000 |
| | 100 | 10 | 8.460·10−15 | 1.032·10−14 | 1.239·10−14 | 1.606·10−14 | 15,000 |
| | 30 | 30 | 1.001·10−15 | 3.846·10−14 | 7.309·10−14 | 7.545·10−14 | 14,187 ± 1319 |
| | 100 | 30 | 4.189·10−13 | 4.213·10−13 | 2.423·10−12 | 9.620·10−13 | 15,000 |
| | 500 | 30 | 2.284·10−14 | 1.018·10−13 | 2.030·10−13 | 7.813·10−13 | 15,000 |
| | 1000 | 30 | 7.028·10−14 | 5.407·10−13 | 6.704·10−13 | 3.875·10−13 | 15,000 |
| | 30 | 50 | 1.006·10−15 | 1.072·10−15 | 1.710·10−15 | 1.563·10−16 | 14,233 ± 861 |
| | 50 | 50 | 4.541·10−13 | 3.144·10−12 | 6.331·10−12 | 1.336·10−12 | 15,000 |
| | 100 | 50 | 1.000·10−15 | 1.092·10−14 | 9.215·10−14 | 2.120·10−14 | 13,997 ± 1190 |
| MGWO-III [39] | 30 | 30 | N/A | 16.3 | N/A | 12.5 | 30,000 |
| | 50 | 30 | N/A | 35.0 | N/A | 19.2 | 30,000 |
| | 100 | 30 | N/A | 66.1 | N/A | 41.8 | 30,000 |
| CJAYA-SQP [95] | 30 | 40 | N/A | 0.312 | N/A | 1.32 | 80,000 |
| | 100 | 40 | N/A | 13.0 | N/A | 33.9 | 80,000 |
| SJAYA [110] | 30 | 100 | 0.0015 | 25.453 | N/A | 28.876 | 30,000 |
| SAMP-JAYA [111] | 30 | Variab. subpopul. | N/A | 1.801·10−9 | N/A | 1.844·10−8 | 50,000 |
| Hybrid HS [93] | 100 | 20;200;500 (all algorithms of [93]) | N/A | 1.197·10−11 | N/A | aver. of optim. runs of [93] | 9509 ± 420 |
| Hybrid BBBC [93] | 100 | 20;200;500 (all algorithms of [93]) | N/A | 1.369·10−11 | N/A | aver. of optim. runs of [93] | 9865 ± 680 |
| HFSA [93] | 100 | 20;200;500 (all algorithms of [93]) | N/A | 1.480·10−11 | N/A | 7.4·10−12 | 9896 ± 1406 |
| EO [30] | 30 | 30 | N/A | 25.3233 | N/A | 0.16958 | 15,000 * |
| | 50 | 30 | N/A | 45.55 | N/A | 0.4772 | 15,000 * |
| | 100 | 30 | N/A | 96.73 | N/A | 1.109 | 15,000 * |
| | 500 | 30 | N/A | 497.6 | N/A | 0.1441 | 15,000 * |
| mEO [92] | 30 | 30 | N/A | 0.07952 | N/A | 0.05427 | 15,000 * |
| | 50 | 30 | N/A | 0.2291 | N/A | 0.2360 | 15,000 * |
| | 100 | 30 | N/A | 0.4783 | N/A | 0.4249 | 15,000 * |
| | 500 | 30 | N/A | 6.2 | N/A | 5.142 | 15,000 * |
| LSO [31] | 100 | 20 | N/A | 97.0 | N/A | 0.618 | 50,000 |
| RUN [42] | 30 | 50 | 22.9 | 24.5 | N/A | 1.04 | 25,000 * |
| AOA [43] | 30 | 30 | N/A | 24.9 | N/A | 0.364 | 15,000 * |
| | 100 | 30 | N/A | 97.9 | N/A | 0.655 | 15,000 * |
| | 500 | 30 | N/A | 497 | N/A | 0.532 | 15,000 * |
| | 1000 | 30 | N/A | 921 | N/A | 969 | 15,000 * |
| IAOA [96] | 30 | 30 | 26.470 | 27.941 | N/A | 0.2065 | 15,000 * |
| | 500 | 30 | 495.848 | 496.902 | N/A | 0.1511 | 15,000 * |
| | 1000 | 30 | 996.137 | 997.156 | N/A | 0.1662 | 15,000 * |
| MOA [51] | N/A | 50 | 0 | 0 | 0 | 0 | 50,000 * |
| PEOA [52] | N/A | 50 | 4.577·10−7 | 4.425·10−4 | 3.201·10−3 | 9.18·10−4 | 50,000 * |
| LCA [53] | 30 | 30 | 0 | 0 | 0 | 0 | 30,000 * |
| SLOA [70] | 30 | 50 | N/A | 24.25997 | N/A | 1.51·10−14 | 50,000 * |
| GTO [81] | 30 | 30 | N/A | 2.86·10−8 | N/A | 5.221·10−9 | 30,000 * |
| | 100 | 30 | N/A | 2.53·10−8 | N/A | 4.619·10−9 | 30,000 * |
| | 500 | 30 | N/A | 5.1·10−7 | N/A | 9.311·10−8 | 30,000 * |
| | 1000 | 30 | N/A | 3.57·10−6 | N/A | 6.518·10−7 | 30,000 * |
| MPA [80] | 30 | 50 | 25.943 | 26.618 | 27.502 | 0.3975 | 25,000 * |
| | 50 | 50 | 43.803 | 45.117 | 47.741 | 0.951 | 25,000 * |
| IMPA [101] | 50 | 50 | N/A | 2.298·10−5 | N/A | 2.461·10−5 | 50,000 * |
| EGTO [102] | 30 | 50 | 4.29·10−7 | 9.2257 | 24.696 | 11.601 | 25,000 * |
| | 50 | 50 | 3.99·10−6 | 10.7996 | 44.761 | 19.178 | 25,000 * |
| SaDN [94] | 100 | 50 | N/A | 98.9 | N/A | 3.01·10−2 | 25,000 * |
| MSCA [99] | 30 | 50 | N/A | 25.5 | N/A | 0.166 | 25,000 * |
| hSM-SA [100] | 30 | 30 | 1.146·10−2 | 11.778 | 28.355 | 7.059·10−3 | 15,000 * |
| ALC-PSO [114] | 30 | 20 | 3.729·10−7 | 7.613 | N/A | 6.658 | 6406 |
| GPSO [114] | 30 | 20 | 2.362·10−4 | 11.657 | N/A | 14.999 | 5348 |
| NDWPSO [115] | 50 | 70 | N/A | 46.2 | N/A | 0.65 | 14,000 * |
| CMA-ES [116] | 30 | 30 | 1.777 | 27.0 | 53.812 | 0.6222 | 30,000 * |
| | 50 | 50 | N/A | 52.11 | N/A | 23.15 | 25,000 * |
| LSHADE [117] | 30 | Lin. decr. from 18·NDV = 540 to 4 | 3.970 | 46.38 | 84.12 | 29.30 | 30,000 |
| LSHADE-cnEpSin [118] | 50 | Lin. decr. from 18·NDV = 900 to 4 | N/A | 59.253 | N/A | 29.012 | 25,000 |
| LSHADE-SPACMA [119] | 30 | Lin. decr. from 18·NDV = 540 to 4 | N/A | 28.8255 | N/A | 0.8242 | 15,000 |

* Determined as the product between the population size and the limit number of optimization iterations.
Table 2. Optimization results for the multimodal Rastrigin function.

| Algorithm | NDV | NPOP | Best | Average | Worst | ST Dev | NFE |
| SHGWJA—Present | 10 | 10 | 1.776·10−15 | 2.186·10−15 | 3.553·10−15 | 7.793·10−16 | 13,770 ± 978 |
| | 30 | 10 | 1.776·10−15 | 1.776·10−15 | 1.776·10−15 | 0 | 13,400 ± 1253 |
| | 50 | 10 | 1.776·10−15 | 2.265·10−15 | 3.553·10−15 | 9.175·10−16 | 13,960 ± 865 |
| | 100 | 10 | 2.842·10−14 | 3.908·10−14 | 8.882·10−14 | 3.224·10−14 | 13,989 ± 288 |
| | 30 | 30 | 1.776·10−15 | 2.131·10−15 | 3.553·10−15 | 7.493·10−16 | 13,736 ± 1123 |
| | 100 | 30 | 1.776·10−15 | 1.973·10−15 | 5.329·10−15 | 8.375·10−16 | 13,803 ± 1316 |
| | 500 | 30 | 1.492·10−13 | 2.065·10−12 | 1.114·10−11 | 3.771·10−12 | 15,000 |
| | 1000 | 30 | 2.934·10−13 | 1.368·10−12 | 7.212·10−12 | 3.459·10−12 | 15,000 |
| | 30 | 50 | 1.776·10−15 | 1.776·10−15 | 1.776·10−15 | 0 | 13,250 ± 1125 |
| | 50 | 50 | 1.776·10−15 | 1.924·10−15 | 3.553·10−15 | 5.130·10−16 | 13,630 ± 724 |
| | 100 | 50 | 1.776·10−15 | 1.776·10−15 | 1.776·10−15 | 0 | 13,578 ± 1397 |
| CJAYA-SQP [95] | 30 | 40 | 0 | 0 | 0 | 0 | 80,000 |
| | 30 | 100 | 0 | 0 | 0 | 0 | 80,000 |
| SAMP-JAYA [111] | 30 | Variab. subpopul. | N/A | 68.516 | N/A | 32.543 | 50,000 |
| I-JAYA [113] | 30 | 100 | 1.0·10−10 | N/A | N/A | N/A | 15,700 |
| MGWO-III [39] | 30 | 30 | 0 | 0 | 0 | 0 | 30,000 * |
| | 50 | 30 | 0 | 0 | 0 | 0 | 30,000 * |
| | 100 | 30 | 0 | 0 | 0 | 0 | 30,000 * |
| Hybrid HS [93] | 100 | 20;200;500 (all algorithms of [93]) | N/A | 2.988·10−12 | N/A | aver. of optim. runs of [93] | 10,877 ± 538 |
| Hybrid BBBC [93] | 100 | 20;200;500 (all algorithms of [93]) | N/A | 1.606·10−11 | N/A | aver. of optim. runs of [93] | 9703 ± 1096 |
| HFSA [93] | 100 | 20;200;500 (all algorithms of [93]) | N/A | 1.192·10−11 | N/A | 7.3·10−12 | 9875 ± 677 |
| EO [30] | 30 | 30 | 0 | 0 | 0 | 0 | 15,000 * |
| | 50 | 30 | 0 | 0 | 0 | 0 | 15,000 * |
| | 100 | 30 | 0 | 0 | 0 | 0 | 15,000 * |
| | 500 | 30 | N/A | 9.095·10−14 | N/A | 2.775·10−13 | 15,000 * |
| mEO [92] | 30 | 30 | 0 | 0 | 0 | 0 | 15,000 * |
| | 50 | 30 | 0 | 0 | 0 | 0 | 15,000 * |
| | 100 | 30 | 0 | 0 | 0 | 0 | 15,000 * |
| | 500 | 30 | 0 | 0 | 0 | 0 | 15,000 * |
| LSO [31] | 100 | 20 | 0 | 0 | 0 | 0 | 50,000 |
| AOA [43] | 30 | 30 | N/A | 3.42·10−7 | N/A | 3.42·10−7 | 15,000 * |
| | 100 | 30 | N/A | 8.46·10−6 | N/A | 9.24·10−6 | 15,000 * |
| | 500 | 30 | N/A | 1.58·10−2 | N/A | 1.58·10−2 | 15,000 * |
| | 1000 | 30 | N/A | 3.99·10−2 | N/A | 2.64·10−2 | 15,000 * |
| IAOA [96] | 30 | 30 | 0 | 0 | 0 | 0 | 15,000 * |
| MOA [51] | N/A | 50 | 0 | 0 | 0 | 0 | 50,000 * |
| PEOA [52] | N/A | 50 | 0 | 0 | 0 | 0 | 50,000 * |
| LCA [53] | 30 | 30 | 0 | 0 | 0 | 0 | 30,000 * |
| SLOA [70] | 30 | 50 | 0 | 0 | 0 | 0 | 50,000 * |
| GTO [81] | 30 | 30 | 0 | 0 | 0 | 0 | 30,000 * |
| MPA [80] | 30 | 50 | 4.32·10−7 | 1.43·10−4 | 1.393·10−3 | 3.07·10−4 | 25,000 * |
| | 50 | 50 | 0 | 0 | 0 | 0 | 25,000 * |
| IMPA [101] | 50 | 50 | 0 | 0 | 0 | 0 | 50,000 * |
| EGTO [102] | 30 | 50 | 0 | 0 | 0 | 0 | 25,000 * |
| | 50 | 50 | 0 | 0 | 0 | 0 | 25,000 * |
| SaDN [94] | 30 | 50 | 0 | 0 | 0 | 0 | 25,000 * |
| MSCA [99] | 30 | 50 | 0 | 0 | 0 | 0 | 25,000 * |
| hSM-SA [100] | 30 | 30 | 0 | 0 | 0 | 0 | 15,000 * |
| ALC-PSO [114] | 30 | 20 | 7.105·10−15 | 2.528·10−14 | N/A | 1.376·10−14 | 74,206 |
| NDWPSO [115] | 50 | 70 | 0 | 0 | 0 | 0 | 14,000 * |
| CMA-ES [116] | 30 | 30 | 141.952 | 157.610 | 165.209 | 7.324 | 30,000 * |
| | 30 | 50 | N/A | 30.91 | N/A | 5.383 | 25,000 * |
| LSHADE [117] | 30 | Lin. decr. from 18·NDV = 540 to 4 | 1.275·10−6 | 4.983·10−2 | 0.99504 | 0.2225 | 30,000 |
| LSHADE-cnEpSin [118] | 50 | Lin. decr. from 18·NDV = 900 to 4 | N/A | 76.052 | N/A | 9.15 | 25,000 |
| LSHADE-SPACMA [119] | 30 | Lin. decr. from 18·NDV = 540 to 4 | N/A | 67.542 | N/A | 10.016 | 15,000 |

* Determined as the product between the population size and the limit number of optimization iterations.
Table 3. Optimization results obtained for the concrete gravity dam design problem.

| Algorithm | Best (m³) | Average (m³) | Worst (m³) | ST Dev (m³) | NFE | Constraint Violation (%) |
| SHGWJA (Present) | 6647.874 | 6647.874 | 6647.874 | 0 | 10,388 ± 3628; 6584 | 1.284·10−3 |
| Standard GWO | 6654.927 | 6752.644 | 6841.402 | 97.723 | 50,000 | 7.209·10−3 |
| Standard JAYA | 6647.874 | 6722.392 | 6836.902 | 95.216 | 25,000 | 1.284·10−3 |
| Improved JAYA [93] | 6833.391 | 6835.337 | 6839.574 | 1.1078 | 23,662 ± 8263; 19,531 | 4.06 |
| Hybrid HS [93] | 6830.976 | 6831.519 | 6832.542 | 0.7349 | 16,193 ± 965; 16,703 | 3.398·10−5 |
| Hybrid BBBC [93] | 6830.235 | 6830.998 | 6831.732 | 0.6248 | 15,846 ± 948; 15,487 | 7.284·10−6 |
| HFSA [93] | 6832.164 | 6833.612 | 6834.344 | 1.0277 | 14,560 ± 184; 14,447 | 1.396·10−5 |
| mAHS (der. from [122]) | 6658.024 | 6768.352 | 6848.309 | 96.455 | 34,159 ± 6898; 36,310 | 1.311·10−3 |
| mBBBC−UBS (der. from [123]) | 6656.341 | 6776.125 | 6845.265 | 93.866 | 31,690 ± 9902; 32,004 | 3.376·10−3 |
| MsinDE (der. from [124]) | 6652.513 | 6771.478 | 6851.634 | 100.189 | 35,208 ± 7169; 37,545 | 3.653·10−3 |
| MCEO [120] | 7448.74 | N/A | N/A | 3.274·10−2 | 35,000 | 32 |
| FSO [121] | 7448.74 | N/A | N/A | N/A | N/A | 32 |
Table 5. Optimization results for tension/compression spring design problem.

| Algorithm | Best | Average | Worst | ST Dev | NFE | X1 (in) | X2 (in) | X3 |
| SHGWJA—Present | 0.0126665 | 0.012682 | 0.012692 | 1.11·10−5 | 6347 ± 2246 ♣ | 0.0514648 | 0.35134343 | 11.611476 |
| GGWO [105] | 0.0126652 | 0.0127 | 0.0127 | 0.0000 | 22,640 | 0.0516 | 0.3540 | 11.4515 |
| MGWO-III [39] | 0.0126662 | 0.012679 | 0.012721 | 1.4·10−5 | 30,000 * | 0.051658 | 0.355975 | 11.33351 |
| EHRJAYA [98] | 0.012665 | 0.012716 | 0.012869 | 5.24·10−5 | 2000 | 0.0517098 | 0.357218 | 11.2597 |
| SAMP-JAYA [111] | 0.012665 | 0.012714 | 0.013193 | 9.252·10−5 | 6861 (aver.) | N/A | N/A | N/A |
| EJAYA [112] | 0.012665 | 0.012668 | 0.012687 | 4.633·10−6 | 15,000 | 0.0517431 | 0.3580205 | 11.213015 |
| EO [30] | 0.012666 | 0.013017 | 0.013997 | 3.91·10−4 | 15,000 * | 0.0516199 | 0.35505438 | 11.387968 |
| LSO [31] | 0.0126652 | N/A | N/A | N/A | 5000 ** | 0.051689 | 0.35671 | 11.290 |
| AOA [43] | 0.012124 | N/A | N/A | N/A | 15,000 * | 0.05 | 0.349809 | 11.8637 |
| IAOA [96] | 0.012018 | N/A | N/A | N/A | 15,000 * | 0.0500825 | 0.363061 | 11.197508 |
| MOA [51] | 0.012665 | 0.012665 | 0.012665 | 9.85·10−19 | N/A | N/A | N/A | N/A |
| PEOA [52] | 0.012667 | 0.01268 | 0.01272 | 2·10−5 | 30,000 * | 0.05161 | 0.35472 | 11.4068 |
| SO [69] | 0.0126725 | 0.013634 | 0.017773 | 1.2·10−3 | 30,000 * | 0.0511 | 0.3418 | 12.2222 |
| SMO [74] | 0.0126652 | N/A | N/A | N/A | 30,000 * | 0.0516759 | 0.35640097 | 11.307562 |
| MPA [80] | 0.012665 | 0.012665 | 0.012665 | 5.55·10−8 | 25,000 * | 0.0517245 | 0.35757003 | 11.239196 |
| EGTO [102] | 0.0126652 | 0.012666 | 0.012668 | 1.13·10−6 | 25,000 * | 0.0516840 | 0.35659664 | 11.296069 |
| SaDN [94] | 0.012665 | 0.012678 | 0.012872 | 6.73·10−8 | 15,000 | 0.051622 | 0.355105 | 11.384415 |
| MSCA [99] | 0.0126668 | 0.012817 | 0.013342 | 1.90·10−4 | 25,000 * | 0.051782 | 0.3589448 | 11.160789 |
| hSM-SA [100] | 0.012715 | 0.013757 | 0.017731 | 1.561·10−3 | 15,000 * | 0.050059 | 0.3187474 | 13.919190 |
| IMODE [125] | 0.012665 | 0.0135 | 0.0145 | 6.32·10−4 | 30,000 | 0.0519 | 0.3623 | 11 |
| En(L)SHADE [126] ♦ | 1.27·10−2 | 1.27·10−2 | 1.27·10−2 | 0 | 100,000 | N/A | N/A | N/A |
| QSA (best alg. [127]) | 0.0126652 | 0.012667 | 0.012674 | 2.589·10−6 | 18,000 | 0.051688 | 0.3567 | 11.28999 |

♣ The fastest optimization run of SHGWJA converging to the best solution required only 2247 function evaluations. * Determined as the product between the population size and the limit number of optimization iterations. ** Maximum number of function evaluations indicated in the convergence curve plot. ♦ The number of significant digits given in Ref. [126] was only 3.
Table 6. Optimization results for welded beam design problem.
Table 6. Optimization results for welded beam design problem.
| Algorithm | Best | Average | Worst | ST Dev | NFE | X1 (in) | X2 (in) | X3 (in) | X4 (in) |
|---|---|---|---|---|---|---|---|---|---|
| *Problem variant 1* |  |  |  |  |  |  |  |  |  |
| SHGWJA (Present) | 1.724852 | 1.724852 | 1.724852 | 0 | 4276 ± 509 ♣ | 0.20573 | 3.470489 | 9.036624 | 0.20573 |
| GGWO [105] | 1.72485 | 1.72485 | 1.72485 | 0 | 13,920 | 0.2057 | 3.4705 | 9.0366 | 0.2057 |
| MGWO-III [39] | 1.724984 | 1.725156 | 1.725420 | 1.37·10⁻⁴ | 30,000 * | 0.20567 | 3.4719 | 9.03668 | 0.20573 |
| SAMP-JAYA [111] | 1.724852 | 1.724852 | 1.724852 | 6.7·10⁻¹⁶ | 3618 (aver) | N/A | N/A | N/A | N/A |
| EJAYA [112] | 1.724852 | 1.724852 | 1.724852 | 5.563·10⁻¹⁰ | 24,000 | 0.20573 | 3.470489 | 9.036624 | 0.20573 |
| EO [30] | 1.724853 | 1.726482 | 1.736725 | 3.257·10⁻³ | 15,000 * | 0.2057 | 3.4705 | 9.03664 | 0.2057 |
| LSO [31] | 1.724866 | N/A | N/A | N/A | 5000 ** | 0.20572 | 3.4707 | 9.0366 | 0.20573 |
| AOA [43] | 1.7164 | N/A | N/A | N/A | 15,000 * | 0.194475 | 2.57092 | 10.000 | 0.201827 |
| MOA [51] | 1.724852 | 1.724852 | 1.724852 | 6.9·10⁻¹⁶ | N/A | N/A | N/A | N/A | N/A |
| PEOA [52] | 1.724856 | 1.724892 | 1.724948 | 3.11·10⁻⁵ | 30,000 * | 0.20573 | 3.47048 | 9.03664 | 0.20573 |
| SO [69] | 1.724852 | 1.769949 | 2.455685 | 0.137 | 30,000 * | 0.2057 | 3.4705 | 9.0366 | 0.2057 |
| SMO [74] | 1.724852 | N/A | N/A | N/A | 30,000 * | 0.20573 | 3.47049 | 9.03662 | 0.20573 |
| MPA [80] | 1.724853 | 1.724861 | 1.724873 | 6.41·10⁻⁶ | 25,000 * | 0.20573 | 3.47051 | 9.03662 | 0.20573 |
| EGTO [102] | 1.724852 | 1.724852 | 1.724852 | 5.32·10⁻¹⁶ | 25,000 * | 0.20573 | 3.47049 | 9.03662 | 0.20573 |
| SaDN [94] | 1.9773 | 1.9828 | 1.9947 | 5.5·10⁻⁶ | 15,000 | 0.2444 | 6.21787 | 8.2915 | 0.2444 |
| hSM-SA [100] | 1.725134 | 1.778021 | 2.321843 | 0.143522 | 15,000 * | 0.20573 | 3.47113 | 9.03548 | 0.20578 |
| IMODE [125] | 2.82578 | 4.44 | 7.55 | 1.01 | 30,000 | 0.2344 | 4.9662 | 7.5300 | 0.3674 |
| QSA (best alg. [127]) | 1.724852 | 1.724852 | 1.724852 | 1.121·10⁻¹⁵ | 18,000 | 0.20573 | 3.47049 | 9.03662 | 0.20573 |
| TIHHSA [132] | 1.74026 | N/A | N/A | N/A | N/A | 0.19823 | 3.64539 | 9.02857 | 0.206407 |
| *Problem variant 2* |  |  |  |  |  |  |  |  |  |
| SHGWJA (Present) | 1.69525 | 1.69525 | 1.69525 | 0 | 4201 ± 406 ♣♣ | 0.20573 | 3.25312 | 9.036624 | 0.20573 |
| EHRJAYA [98] | 1.6952 | 1.696 | 1.6989 | 9.44·10⁻⁴ | 2000 | 0.20573 | 3.2531 | 9.0366 | 0.20573 |
| MSCA [99] | 1.6971 | 1.70209 | 1.72216 | 5.816·10⁻³ | 25,000 * | 0.20519 | 3.26607 | 9.033801 | 0.20591 |
| NDWPSO [115] | 1.6978 | N/A | N/A | 2.76·10⁻⁹ | 20,000 * | 0.206 | 3.25 | 9.04 | 0.2066 |
| FSO [121] | 1.69525 | 1.69673 | 1.70139 | 7.24·10⁻⁴ | 4500 | 0.20573 | 3.25312 | 9.036624 | 0.20573 |
| *Problem variant 3* |  |  |  |  |  |  |  |  |  |
| SHGWJA (Present) | 1.670218 | 1.670218 | 1.670218 | 0 | 4438 ± 424 ♣♣♣ | 0.198832 | 3.337365 | 9.192024 | 0.198832 |
| En(L)SHADE [126] ♦ | 1.670 | 1.670 | 1.670 | 0 | 100,000 | N/A | N/A | N/A | N/A |
| LSHADE-SPACMA [119,129] | 1.67021 | 1.67021 | N/A | 1.977·10⁻¹⁶ | 40,000 | N/A | N/A | N/A | N/A |
| SASS [130,131] | N/A | 1.670218 | N/A | 2.27·10⁻¹⁶ | 20,000 | N/A | N/A | N/A | N/A |

♣ The fastest optimization run of SHGWJA for problem variant 1 required only 3670 function evaluations. ♣♣ The fastest optimization run of SHGWJA for problem variant 2 required only 3635 function evaluations. ♣♣♣ The fastest optimization run of SHGWJA for problem variant 3 required only 3791 function evaluations. * Determined as the product between the population size and the limit number of optimization iterations. ** Maximum number of function evaluations indicated in the convergence curve plot. ♦ The number of significant digits given in Ref. [126] was only 4.
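All three welded beam variants in Table 6 share the classical cost function (weld thickness X1, weld length X2, bar height X3, bar thickness X4) and apparently differ only in their constraint sets, which are not reproduced here. A minimal sketch, assuming the standard cost expression:

```python
# Classical welded-beam cost function (standard formulation, assumed to match
# Table 6); the shear-stress, bending-stress, buckling and deflection
# constraints of the full problem are omitted in this sketch.

def welded_beam_cost(x1, x2, x3, x4):
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

# SHGWJA best designs from Table 6
print(welded_beam_cost(0.20573, 3.470489, 9.036624, 0.20573))    # ~1.724852 (variant 1)
print(welded_beam_cost(0.20573, 3.25312, 9.036624, 0.20573))     # ~1.69525  (variant 2)
print(welded_beam_cost(0.198832, 3.337365, 9.192024, 0.198832))  # ~1.670218 (variant 3)
```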
Table 7. Optimization results for the pressure vessel design problem.
| Algorithm | Best | Average | Worst | ST Dev | NFE | X1 (in) | X2 (in) | X3 (in) | X4 (in) |
|---|---|---|---|---|---|---|---|---|---|
| *Continuous variables* |  |  |  |  |  |  |  |  |  |
| SHGWJA (Present) | 5885.331 | 5885.331 | 5885.331 | 0 | 9773 ± 2561 ♣ | 0.778168 | 0.384649 | 40.31962 | 200 |
| GGWO [105] | 5885.3328 | 5885.3328 | 5885.3328 | 0 | 12,860 | 0.7782 | 0.3846 | 40.3196 | 200 |
| MGWO-III [39] | 5884.0616 | 5884.7820 | 5886.3961 | 0.977 | 30,000 * | 0.778197 | 0.38468 | 40.31549 | 199.9593 |
| EHRJAYA [98] | 5734.9132 | 5735.3873 | 5741.0678 | 1.5336 | 4000 | 0.742434 | 0.370198 | 40.31962 | 200 |
| EJAYA [112] | 5885.333 | 5885.886 | 5894.777 | 1.734 | 16,000 | 0.778169 | 0.384649 | 40.31962 | 199.9999 |
| EO [30] | 5885.3279 | N/A | N/A | N/A | 15,000 * | 0.778169 | 0.384649 | 40.31962 | 199.9999 |
| LSO [31] | 5585.434 | N/A | N/A | N/A | 5000 ** | 0.77818 | 0.38466 | 40.320 | 200 |
| AOA [43] | 6048.7844 | N/A | N/A | N/A | 15,000 * | 0.830374 | 0.416206 | 42.75127 | 169.3454 |
| IAOA [96] | 5813.551 | N/A | N/A | N/A | 15,000 * | 0.763721 | 0.370546 | 41.5666 | 184.1352 |
| MOA [51] | 5882.901 | 5882.901 | 5882.901 | 1.89·10⁻¹² | N/A | N/A | N/A | N/A | N/A |
| PEOA [52] | 5882.901 | 5883.043 | 5884.245 | 0.3161283 | 30,000 * | 0.778027 | 0.384579 | 40.31228 | 200 |
| LCA [53] | 5886 | 5887 | 5890 | 0.9632 | 30,000 * | N/A | N/A | N/A | N/A |
| SO [69] | 5887.5298 | 5989.8091 | 6247.6170 | 104 | 30,000 * | 0.7819 | 0.3857 | 40.5752 | 196.5499 |
| SMO [74] | 5585.3329 | N/A | N/A | N/A | 30,000 * | 0.778169 | 0.3846492 | 40.31962 | 199.9999 |
| MPA [80] | 5885.3353 | N/A | N/A | N/A | 25,000 * | 0.778169 | 0.3846497 | 40.31962 | 199.9999 |
| EGTO [102] | 5885.331 | 5885.331 | 5885.331 | 1.43·10⁻¹² | 25,000 * | 0.778168 | 0.384649 | 40.31962 | 200 |
| GTO [81] | 5889.5 | 5967.464 | 6175.568 | 14.2397 | 90,000 * | 0.778834 | 0.385442 | 40.34289 | 199.6848 |
| MSCA [99] | 5917.5098 | 6029.244 | 6396.551 | 113.247 | 25,000 * | 0.780583 | 0.391756 | 40.41908 | 198.9641 |
| hSM-SA [100] | 5885.789 | 6182.999 | 7318.734 | 451.541 | 15,000 * | 0.778348 | 0.384789 | 40.32887 | 199.8715 |
| NDWPSO [115] | 5890 | N/A | N/A | 4.973·10⁻⁵ | 20,000 * | 0.778 | 0.385 | 40.3 | 200 |
| IMODE [125] | 6499.445 | N/A | N/A | N/A | 30,000 | 0.9480 | 0.4735 | 48.8727 | 114.1672 |
| TIHHSA [132] | 5959.86 | N/A | N/A | N/A | N/A | 0.814181 | 0.403799 | 42.1533 | 176.032 |
| *Discrete/continuous variables* |  |  |  |  |  |  |  |  |  |
| SHGWJA (Present) | 6059.714 | 6059.714 | 6059.714 | 0 | 9712 ± 2730 ♣♣ | 0.8125 | 0.4375 | 42.0984 | 176.6372 |
| EO [30] | 6059.714 | 6668.114 | 7544.493 | 566.24 | 15,000 | 0.8125 | 0.4375 | 42.0984 | 176.6372 |
| mEO [92] | 6059.714 | N/A | N/A | N/A | 15,000 | 0.8125 | 0.4375 | 42.0984 | 176.6372 |
| SaDN [94] | 6059.714 | 6128.732 | 6416.353 | 46.482 | 15,000 | 0.8125 | 0.4375 | 42.0984 | 176.6372 |
| FSO [121] | 6059.714 | 6059.717 | 6060.184 | 0.2469 | 7500 | 0.8125 | 0.4375 | 42.0984 | 176.6366 |
| En(L)SHADE [126] ♦ | 6.06·10³ | 6.06·10³ | 6.09·10³ | 11.5 | 100,000 | N/A | N/A | N/A | N/A |
| QSA (best alg. [127]) | 6059.714 | 6078.319 | 6370.780 | 61.563 | 25,000 | 0.8125 | 0.4375 | 42.09844 | 176.6365 |
| SASS [130,131] | N/A | 6059.714 | N/A | 3.71·10⁻¹² | 20,000 | N/A | N/A | N/A | N/A |

♣ The fastest optimization run of SHGWJA in the continuous problem required only 6732 function evaluations. ♣♣ The fastest optimization run of SHGWJA in the mixed variable problem required only 7604 function evaluations. * Determined as the product between the population size and the limit number of optimization iterations. ** Maximum number of function evaluations indicated in the convergence curve plot. ♦ The number of significant digits given in Ref. [126] was only 3.
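The pressure vessel costs in Table 7 follow from the classical four-variable cost function (shell thickness X1, head thickness X2, inner radius X3, shell length X4, all in inches). A minimal sketch, assuming the standard expression and omitting the four design constraints:

```python
# Classical pressure-vessel cost function (standard formulation, assumed to
# match Table 7); the thickness, volume and length constraints are omitted.

def vessel_cost(x1, x2, x3, x4):
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

# SHGWJA best designs from Table 7
print(vessel_cost(0.778168, 0.384649, 40.31962, 200.0))  # ~5885.33 (continuous variables)
print(vessel_cost(0.8125, 0.4375, 42.0984, 176.6372))    # ~6059.71 (X1, X2 discrete)
```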
Table 8. Optimization results for the industrial refrigeration system.
| Algorithm | Best | Average | Worst | ST Dev | NFE |
|---|---|---|---|---|---|
| SHGWJA (Present) | 0.032213 | 0.032215 | 0.032219 | 3.671·10⁻⁶ | 8517 ± 2370 ♣ |
| Standard GWO | 0.033740 | 0.034672 | 0.036780 | 1.292·10⁻³ | 50,000 |
| Standard JAYA | 0.032213 | 0.032215 | 0.032233 | 6.583·10⁻⁶ | 20,255 |
| Improved JAYA [93] | 0.032213 | 0.032213 | 0.032213 | 1.10·10⁻¹⁰ | 8672 |
| SMO [74] | 0.032213 | N/A | N/A | N/A | N/A |
| Hybrid HS [93] | 0.032214 | 0.032214 | 0.032215 | 4.714·10⁻⁷ | 4975 |
| Hybrid BBBC [93] | 0.032215 | 0.032218 | 0.032220 | 2.055·10⁻⁶ | 5784 |
| HFSA [93] | 0.032213 | 0.032218 | 0.032221 | 2.966·10⁻⁶ | 5371 |
| mAHS (der. from [122]) | 0.032239 | 0.032263 | 0.032278 | 1.61·10⁻⁵ | 7990 |
| mBBBC-UBS (der. from [123]) | 0.032220 | 0.032224 | 0.032230 | 4.113·10⁻⁶ | 8243 |
| MsinDE (der. from [124]) | 0.032218 | 0.032222 | 0.032227 | 2.39·10⁻⁵ | 8468 |
| En(L)SHADE [126] ♦ | 3.22·10⁻² | 3.22·10⁻² | 3.22·10⁻² | 0 | 100,000 |
| SASS [130,131] | 0.032213 | 0.032213 | 0.032213 | 1.42·10⁻¹⁷ | 20,000 |
| SDDS-SABC [131] | N/A | 0.033414 | N/A | 0.2644 | 20,000 |
| CS-DP1 [134] | 0.032213 | 0.040964 | N/A | 3.274·10⁻² | 10,000 |

♣ The fastest optimization run of SHGWJA converging to the target optimum required only 5165 function evaluations. ♦ The number of significant digits given in Ref. [126] was only 3.
Table 9. Optimization results for the 2-D path planning problem.
| Algorithm | Best (mm) | Average (mm) | Worst (mm) | ST Dev (mm) | NFE |
|---|---|---|---|---|---|
| SHGWJA (Present) | 41.057 | 41.066 | 41.081 | 7.445·10⁻³ | 3541 ± 271 ♣ |
| Standard GWO | 41.116 | 41.128 | 41.322 | 8.896·10⁻² | 50,000 |
| Standard JAYA | 41.083 | 41.095 | 41.147 | 2.635·10⁻² | 15,191 |
| Improved JAYA [93] | 41.087 | 41.093 | 41.097 | 4.722·10⁻³ | 3613 |
| Hybrid HS [93] | 41.092 | 41.099 | 41.113 | 9.723·10⁻³ | 3122 |
| Hybrid BBBC [93] | 41.083 | 41.095 | 41.119 | 1.756·10⁻³ | 3482 |
| HFSA [93] | 41.093 | 41.101 | 41.119 | 1.531·10⁻² | 3615 ± 346 |
| mAHS (der. from [122]) | 41.090 | 41.114 | 41.126 | 3.205·10⁻² | 5260 |
| mBBBC-UBS (der. from [123]) | 41.087 | 41.103 | 41.115 | 2.066·10⁻² | 3934 |
| MsinDE (der. from [124]) | 41.104 | 41.110 | 41.118 | 3.354·10⁻³ | 5378 |
| DDAO [136] | 42.653 | 45.861 | 47.421 | 1.4902 | 10,000 |

♣ The fastest optimization run of SHGWJA converging to the best solution required only 3416 function evaluations.
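The objective values of Table 9 are path lengths in millimetres. The sketch below illustrates the generic objective of such 2-D path planning benchmarks, namely the total Euclidean length of the polyline through the waypoints placed by the optimizer; the obstacle model of this specific test case is not reproduced here, and infeasible paths are typically penalized or rejected. The waypoint coordinates shown are hypothetical.

```python
import math

# Generic 2-D path-length objective: total Euclidean length of the polyline
# through the waypoints (start and goal included). Obstacle-avoidance
# constraints of the actual benchmark are not modelled in this sketch.

def path_length(waypoints):
    """waypoints: ordered list of (x, y) points from start to goal."""
    return sum(math.dist(p, q) for p, q in zip(waypoints, waypoints[1:]))

# hypothetical path between a start at (0, 0) and a goal at (40, 0)
print(path_length([(0, 0), (12, 4), (25, 6), (33, 3), (40, 0)]))
```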
Table 10. Results of the axially compressed flat composite panel identification problem.
| Algorithm | EL (MPa) | ET (MPa) | GLT (MPa) | νLT | θ0 | θ90 | θ45 | Error (%) on parameters (Aver/Max) | Error (%) on buckling mode (Aver/Max) | FEM analyses |
|---|---|---|---|---|---|---|---|---|---|---|
| SHGWJA (Present) | 148,405 | 7448 | 4138 | 0.01511 | 0.000581 | 90.011 | 45.004 | 0.03888 / 0.07080 | 0.674 / 1.035 | 384 ± 39 (366) |
| Standard GWO | 148,635 | 7468 | 4317 | 0.01516 | 0.007469 | 89.125 | 45.357 | 1.151 / 4.275 | 3.581 / 4.421 | 5000 |
| Standard JAYA | 148,562 | 7474 | 4308 | 0.01517 | 0.005758 | 89.214 | 45.121 | 1.027 / 4.058 | 3.269 / 4.074 | 2000 |
| Impr. JAYA [3] | 149,703 | 7451 | 4999 | 0.01525 | 0.01586 | 85.102 | 46.171 | 5.124 / 20.749 | 4.058 / 4.903 | 975 |
| HFHS [3] | 147,928 | 7491 | 4145 | 0.01510 | 0.003492 | 89.904 | 45.113 | 0.213 / 0.550 | 1.680 / 2.504 | 478 |
| HFBBBC [3] | 148,343 | 7499 | 4160 | 0.01509 | 0.003243 | 90.274 | 44.975 | 0.266 / 0.658 | 1.889 / 2.735 | 431 |
| HFSA [3] | 148,708 | 7479 | 4140 | 0.01508 | 0.001021 | 90.223 | 44.938 | 0.197 / 0.389 | 1.554 / 2.281 | 596 |
| mAHS (der. from [122]) | 148,593 | 7429 | 4109 | 0.01513 | 0.007373 | 89.306 | 45.279 | 0.470 / 0.771 | 2.075 / 2.993 | 1044 |
| mBBBC-UBS (der. from [123]) | 147,793 | 7437 | 4108 | 0.01523 | 0.008162 | 89.426 | 45.314 | 0.581 / 0.861 | 2.146 / 3.087 | 819 |
| MsinDE (der. from [124]) | 148,164 | 7483 | 4139 | 0.01528 | 0.005173 | 89.468 | 45.329 | 0.402 / 0.731 | 1.996 / 3.006 | 968 |
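The error columns of Table 10 compare the identified properties with their reference values. A minimal sketch of the percentage-error metric, using hypothetical reference values for illustration (the study's true target properties are not restated here):

```python
# Average and maximum percentage errors of identified parameters with respect
# to reference values. The reference values below are hypothetical
# placeholders, not the study's target properties.

def parameter_errors(identified, reference):
    errors = [100.0 * abs(p - r) / abs(r) for p, r in zip(identified, reference)]
    return sum(errors) / len(errors), max(errors)

identified = [148405.0, 7448.0, 4138.0, 0.01511]  # EL, ET, GLT, nu_LT from Table 10
reference  = [148400.0, 7450.0, 4140.0, 0.01510]  # hypothetical reference values
avg_err, max_err = parameter_errors(identified, reference)
print(f"average: {avg_err:.5f}%  maximum: {max_err:.5f}%")
```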
Table 11. SHGWJA ranks and its total score in the seven engineering problems.
| Design example | BEST | AVERAGE | WORST | STD | NFE | TOTSCO |
|---|---|---|---|---|---|---|
| Dam | 1 | 1 | 1 | 1 | 1 | 5 |
| Tension/compression spring | 14 | 8 | 6 | 7 | 2 | 37 |
| Welded beam (variant 1) | 1 | 1 | 1 | 1 | 1 | 5 |
| Welded beam (variant 2) | 1 | 1 | 1 | 1 | 2 | 6 |
| Welded beam (variant 3) | 1 | 1 | 1 | 1 | 1 | 5 |
| Pressure vessel (continuous) | 1 | 1 | 1 | 1 | 1 | 5 |
| Pressure vessel (mixed) | 1 | 1 | 1 | 1 | 1 | 5 |
| Refrigeration system | 1 | 5 | 5 | 7 | 2 | 20 |
| 2D path planning | 1 | 1 | 1 | 4 | 2 | 9 |
| Panel identification | 1 | 1 | 1 | 1 | 1 | 5 |
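The TOTSCO values in Tables 11-13 are consistent with summing the per-metric ranks (1 = best) over the five performance columns; minor deviations in a few rows of the comparison tables may stem from tie handling. A minimal sketch of this aggregation:

```python
# TOTSCO as the sum of per-metric ranks (BEST, AVERAGE, WORST, STD, NFE);
# rank 1 denotes the best algorithm on a given metric.

def total_score(ranks):
    return sum(ranks)

# SHGWJA rows of Table 11
print(total_score([1, 1, 1, 1, 1]))   # dam: TOTSCO = 5
print(total_score([14, 8, 6, 7, 2]))  # spring: TOTSCO = 37
```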
Table 12. Comparison of ranks and scores achieved in the tension/compression spring problem.
| Algorithm | BEST | AVERAGE | WORST | STD | NFE | TOTSCO | TOTSCO − BEST |
|---|---|---|---|---|---|---|---|
| SHGWJA (Present) | 14 | 8 | 6 | 7 | 2 | 37 | 23 |
| GGWO [105] | 1 | 10 | 7 | 1 | 12 | 31 | 30 |
| MGWO-III [39] | 13 | 7 | 10 | 8 | 14 | 52 | 39 |
| EHRJAYA [98] | 1 | 13 | 11 | 10 | 1 | 36 | 23 |
| SAMP-JAYA [111] | 1 | 12 | 13 | 11 | 4 | 41 | 29 |
| EJAYA [112] | 1 | 6 | 4 | 7 | 4 | 22 | 21 |
| MPA [80] | 1 | 1 | 1 | 3 | 9 | 15 | 14 |
| EGTO [102] | 2 | 3 | 3 | 4 | 9 | 21 | 19 |
| SaDN [94] | 1 | 7 | 10 | 4 | 4 | 26 | 25 |
| En(L)SHADE [126] | 1 | 1 | 1 | 1 | 11 | 15 | 14 |
| QSA (best alg. [127]) | 2 | 4 | 4 | 5 | 10 | 23 | 21 |
Table 4. Optimized designs for the concrete gravity dam problem.
| Variables (m) | SHGWJA | GWO | MsinDE (der. from [124]) | Hybrid HS [93] | Hybrid BBBC [93] | HFSA [93] | Impr. JAYA [93] | MCEO/FSO [120,121] | Existing design |
|---|---|---|---|---|---|---|---|---|---|
| X1 | 5.0000 | 5.0070 | 5.0100 | 5.0531 | 5.1417 | 5.0606 | 5.0000 | 5.1678 | 4 |
| X2 | 50.000 | 50.000 | 50.000 | 47.093 | 49.188 | 48.448 | 47.992 | 25.1978 | 42 |
| X3 | 1.0000 | 1.1671 | 1.1471 | 4.0802 | 4.2062 | 4.0008 | 4.0000 | 4.9556 | 4.2 |
| X4 | 69.799 | 69.808 | 69.801 | 70.771 | 68.177 | 66.920 | 72.007 | 55.7811 | 50 |
| X5 | 18.179 | 17.955 | 17.945 | 16.226 | 15.521 | 12.392 | 19.939 | 16.0874 | 12.5 |
| X6 | 30.000 | 30.001 | 30.001 | 32.136 | 32.627 | 34.860 | 30.000 | 21.0009 | 58 |
| X7 | 40.000 | 40.000 | 40.000 | 39.772 | 39.659 | 39.943 | 37.427 | 29.9036 | 23.2 |
| X8 | 99.787 | 99.799 | 99.811 | 103.56 | 102.76 | 100.48 | 102.01 | 120.169 | 140 |
| X9 | 80.000 | 80.018 | 80.015 | 81.147 | 82.235 | 87.756 | 80.000 | 85.382 | 105 |
Table 13. Comparison of ranks and scores achieved in the welded beam (first value in each cell) and pressure vessel (second value in each cell) design problems by SHGWJA and its best competitors, along with cumulative ranks for the spring/welded beam/vessel problems.
| Algorithm | BEST | AVERAGE | WORST | STD | NFE | TOTSCO | Cumulative total score |
|---|---|---|---|---|---|---|---|
| SHGWJA (Present) | 1 / 1 | 1 / 1 | 1 / 1 | 1 / 1 | 1.3 / 1 | 5.3 / 5 | 47.3 |
| GGWO [105] | 1 / 1 | 1 / 1 | 1 / 1 | 1 / 1 | 3 / 3 | 7 / 7 | 45 |
| EHRJAYA [98] | 1 / Infeasible | 2 / --- | 2 / --- | 3 / --- | 1 / --- | 9 / --- | 45 |
| EJAYA [112] | 1 / 1 | 1 / 4 | 1 / 4 | 1 / 9 | 8 / 5 | 12 / 23 | 57 |
| MPA [80] | 1 / N/A | 1 / N/A | 8 / N/A | 8 / N/A | 11 / N/A | 29 / N/A | 44 |
| EGTO [102] | 1 / 1 | 1 / 1 | 1 / 1 | 1 / 2 | 11 / 8 | 16 / 13 | 50 |
| SaDN [94] | 17 / 1 | 14 / 6 | 13 / 5 | 8 / 5 | 5 / 3 | 57 / 20 | 103 |
| En(L)SHADE [126] | 1 / 1 | 1 / 1 | 1 / 3 | 1 / 4 | 4 / 8 | 8 / 17 | 40 |
| QSA (best alg. [127]) | 1 / 1 | 1 / 5 | 1 / 4 | 1 / 2 | 6 / 7 | 10 / 19 | 52 |