An Enhanced Hunger Games Search Optimization with Application to Constrained Engineering Optimization Problems

The Hunger Games Search (HGS) is an innovative, population-based optimizer that operates without relying on gradients. It draws inspiration from the collaborative foraging activities observed in social animals in their natural habitats. However, despite its notable strengths, HGS is subject to limitations, including inadequate diversity, premature convergence, and susceptibility to local optima. To overcome these challenges, this study introduces two adapted strategies to enhance the original HGS algorithm. The first strategy combines the Logarithmic Spiral (LS) technique with Opposition-based Learning (OBL), resulting in the LS-OBL approach. This strategy plays a pivotal role in reducing the search space and maintaining population diversity within HGS, effectively augmenting the algorithm's exploration capabilities. The second strategy, the dynamic Rosenbrock Method (RM), contributes to HGS by adjusting the search direction and step size. This adjustment enables HGS to escape from suboptimal solutions and enhances its convergence accuracy. Combined, these two strategies form the improved algorithm proposed in this study, referred to as RLHGS. To assess the efficacy of the introduced strategies, specific experiments are designed to evaluate the impact of LS-OBL and RM on HGS performance. The experimental results demonstrate that integrating these two strategies significantly enhances the capabilities of HGS. Furthermore, RLHGS is compared against eight state-of-the-art algorithms using 23 well-established benchmark functions and the CEC2020 test suite. The experimental results consistently indicate that RLHGS outperforms the other algorithms, securing the top rank in both test suites. This evidence substantiates the superior performance of RLHGS compared to its counterparts.
Moreover, RLHGS is applied to address four constrained real-world engineering optimization problems. The final results underscore the effectiveness of RLHGS in tackling such problems, further supporting its value as an efficient optimization method.


Introduction
Over the past few years, there have been remarkable advancements in optimization algorithms fueled by the rapid expansion of the engineering and artificial intelligence sectors [1,2]. These algorithms have broadened their capabilities to address intricate problem areas that conventional methods struggle with [3][4][5], including single-objective, multiobjective, and many-objective optimization problems [6,7]. They achieve this by utilizing evolutionary algorithms, swarm intelligence, and machine learning techniques. Integrating these algorithms with engineering and artificial intelligence has paved the way for hybrid approaches that harness the strengths of both disciplines. These advancements offer immense potential to tackle real-world challenges across diverse industries, enhance decision-making processes, optimize resource allocation, and improve overall system performance by delivering top-notch solutions for previously unsolvable problems [8,9].
Meta-heuristic Algorithms (MAs) have emerged as a crucial element in the field of artificial intelligence (AI), attracting substantial scholarly interest in the last two decades. This attention is primarily driven by their remarkable efficacy in addressing diverse practical problems, such as those encountered in economic management [10], power load forecasting [11], and data reduction [12][13][14][15]. The real-world optimization problems mentioned above generally share two common characteristics: high nonlinearity and inter-relation of decision variables during the solving process [16,17]. However, these characteristics inevitably give rise to large problem spaces, which make the optimization process prone to failure [18,19]. Therefore, it is crucial to consider factors such as time, risk, efficiency, and quality during the optimization process [20]. The emergence of MAs has provided valuable inspiration to researchers in related fields. MAs offer reliability, robustness, and effective approaches for escaping local optima by mimicking various phenomena to seek optimal solutions [21]. However, these metaphors do not change the underlying mathematics; it is each optimizer's core model that determines its performance. The majority of MAs derive their inspiration from natural phenomena and can be comprehensively classified into four distinct categories: evolutionary-based algorithms, swarm intelligence-based algorithms, human behavior-based algorithms, and physics-based algorithms. Evolutionary algorithms are meticulously devised based on the observable principles of evolution in nature. Swarm Intelligence (SI) algorithms draw inspiration from evolutionary theory and collective behavior, centering on the intricate behaviors displayed by systems comprising uncomplicated agents. Human behavior-based algorithms encompass diverse facets of human behavior, including teaching, social interactions, learning, emotions, and management. Physics-based algorithms find inspiration in the laws of physics and mathematics. Table 1 provides some examples of related algorithm types.

Table 1. Examples of related algorithm types (type, MAs, year published, and brief introduction).

Evolutionary-based:
Genetic Algorithm (GA) [22], 1975: An adaptive probabilistic optimization algorithm derived from biological, genetic, and evolutionary mechanisms.
Differential Evolution (DE) [23], 1995: Based on the theory of biological evolution; it imitates the process of cooperation and competition among individuals.
Biogeography-Based Optimization (BBO) [24], 2008: Based on the geographical distribution of biological organisms.

Swarm intelligence-based:
Particle Swarm Optimization (PSO) [25], 1995: Inspired by the collective behavior of social organisms, particularly the flocking and swarming behavior observed in birds, fish, and insects.
Grey Wolf Optimization (GWO) [26], 2014: Inspired by observing the leadership hierarchy and hunting behaviors of grey wolves in nature.
Harris Hawk Optimization (HHO) [27], 2019: Draws upon the cooperative hunting behavior of Harris hawks in nature.
Slime Mould Algorithm (SMA) [28], 2020: Its principle is based on the oscillation mode of slime moulds in nature.

Human behavior-based:
Teaching-Learning-Based Optimization (TLBO) [29], 2011: Inspired by the idea of how teachers guide students toward better learning outcomes.
Social-Based Algorithm (SBA) [30], 2013: In the light of the evolutionary algorithm and the socio-political, process-based Imperialist Competitive Algorithm (ICA) [31].

Physics-based:
Simulated Annealing (SA) [32], 1983: Proposed based on the principle of solid-state high-temperature annealing.
Gravitational Search Algorithm (GSA) [33], 2009: Traces back to the law of gravity and mass interactions.
Multi-Verse Optimizer (MVO) [34], 2015: Based on three cosmology concepts: the white hole, the black hole, and the wormhole.
RUNge Kutta Optimizer (RUN) [35], 2021: Combines elements of the classical Runge-Kutta numerical integration method with optimization techniques.
weIghted meaN oF vectOrs (INFO) [36], 2022: Stems from the weighted mean method; an enhanced optimizer for solving optimization problems.
Numerical optimization refers to finding the maximum or minimum objective value of a given problem within a defined search space. Population-based and derivative-free Swarm Intelligence (SI) optimization algorithms are widely regarded as effective solvers for complex numerical problems [37]. These algorithms utilize iterative methods to continually update individuals within a population, enhancing their adaptability to the environment and yielding acceptable optimal solutions within a reasonable timeframe. The study of SI optimization algorithms has received considerable attention in recent decades. Moreover, due to their superior performance, SI optimization algorithms have also been used to deal with multiobjective problems [38][39][40], constrained optimization problems [41][42][43][44], image segmentation [45][46][47][48], medical disease diagnosis [49][50][51][52][53], parameter estimation of solar photovoltaic models [54][55][56][57][58], intelligent traffic management [59], etc. Although not all the solutions obtained by these SI algorithms are optimal, what can be guaranteed is that high-quality solutions can be acquired in a reasonable time.
Hunger Games Search (HGS) [60] is a novel algorithm proposed in 2021, designed based on the foraging behavior of natural animals. Since its introduction, HGS has received extensive attention from scholars. AbuShanab et al. [61] used the HGS optimizer to optimize a random vector functional link (RVFL) model, successfully finding the optimal internal parameters of RVFL that boost the model's accuracy. Nguyen et al. [62] combined HGS with an Artificial Neural Network (ANN), named HGS-ANN, for predicting ground vibration intensity. The experimental results verify that HGS-ANN performs better than other models of the same type. Like other SI algorithms, exploration and exploitation are two fundamental phases in HGS. The major work of the exploration phase is to search for solution locations, and the evaluation of promising solutions is completed within the exploitation stage. Through comparative research, certain deficiencies can be found in the performance of HGS in these two stages, which lead to premature convergence and make HGS prone to getting stuck in local optima. To overcome the mentioned drawbacks, several authors have enhanced HGS with efficient strategies. Xu et al. [63] improved HGS by including the quantum rotation gate approach and the Nelder-Mead simplex process, which discovered the locality of the best result in the feature space. Because HGS is easily trapped in local optima and converges slowly while solving intricate problems, Ma et al. [64] introduced chaotic mappings, greedy selection, and a vertical crossover strategy into the standard HGS. The introduced strategies were conducive to accelerating the convergence velocity and enhancing the search capability. A. Fathy et al. [65] introduced a non-homogeneous mutation operator to HGS, which proved effective in identifying the optimal settings for a Fractional-Order Proportional Integral Derivative (FOPID) based Load Frequency Controller (LFC). Emam et al. [66] modified the base HGS with a local escaping procedure with Brownian motion to mitigate performance shortcomings. A. M. Nassef et al. [67] proposed a variant of HGS with a binary tau-based crossover plan. Zhang et al. [68] revealed an improved HGS (IHGS) with cube mapping and refracted opposition-based learning policies. Besides, to recognize the most important genes and handle high-dimensional genetic data, Z. Chen et al. [69] invented a novel wrapper gene selection method, the artificial bee bare-bone Hunger Games Search (ABHGS). This variant combines HGS with an idea based on artificial bee motions and a Gaussian bare-bone idea.
However, according to the No Free Lunch (NFL) theorem [70], no single algorithm can universally solve all types of problems. In light of this, a new variant of the HGS algorithm is proposed in this study after identifying the limitations of the original HGS. To enhance the capabilities of exploration and exploitation, two adapted strategies are incorporated: the adapted Logarithmic Spiral strategy (LS-OBL) [71] and the adapted Rosenbrock Method (RM) [71]. The LS-OBL strategy is key to reducing the search space and maintaining solution diversity. By incorporating LS-OBL into HGS, the algorithm becomes more efficient at exploring different regions of the problem space and increasing the variety of solutions. On the other hand, the adapted RM strategy aids in overcoming local optima. It assists HGS in bypassing suboptimal solutions and improves its ability to converge toward better solutions.
The main contributions of this paper can be summarized as follows:
1. The introduced strategies enhance the exploration and exploitation processes of the ordinary HGS algorithm when solving optimization problems.
2. To evaluate the efficacy of the proposed approach, RLHGS is compared with eight other state-of-the-art algorithms on 23 classical benchmark functions and 10 benchmark functions from CEC2020. The comparative evaluation of these experiments demonstrates the superiority of RLHGS in terms of optimization performance.
3. The proposed RLHGS algorithm addresses four constrained real-world problems, showcasing its practical applicability and effectiveness in tackling complex engineering challenges.
4. The experimental results of RLHGS indicate excellent accuracy and reliable performance.
The remainder of this paper is organized as follows: Section 2 describes the standard HGS algorithm and the embedded strategies used in this study. Section 3 elaborates on the structure of the proposed RLHGS algorithm and displays its flowchart. Section 4 clarifies the experiment settings. Section 5 conducts a qualitative analysis and three experiments to demonstrate the improvement achieved by the embedded strategies, compare the performance of RLHGS with eight competing algorithms, and showcase its ability to handle practical engineering applications. Section 6 provides the conclusion and discusses prospects for future work.

Description of Hunger Games Search
Hunger serves as one of the most immediate homeostatic motivations in the lives of animals, influencing their behavioral decisions and actions. This fundamental motivation can even surpass and impact other competing drive states, such as thirst, feelings of insecurity, or fear of predators [72]. According to the literature [73], it can be concluded that as hunger increases, animals' food cravings also increase. In situations where food sources are limited, a logical game emerges among hungry animals, in which participants strive to secure victory and gain access to food sources for better chances of survival [74]. Building upon these premises, Hunger Games Search (HGS) was proposed.

Approach Food
In nature, animals often engage in cooperative foraging behaviors, although this is not always the case [75]. There are instances where individuals choose to act alone. Based on studies on animal predatory behavior, Equation (1) introduces three position-updating modes that simulate the behavior of animals when they are in close proximity to food sources.

$$\vec{X}(t+1)=\begin{cases}\vec{X}(t)\cdot\big(1+randn(1)\big), & r_1<l\\[4pt] \vec{W}_1\cdot\vec{X}_b+\vec{R}\cdot\vec{W}_2\cdot\big|\vec{X}_b-\vec{X}(t)\big|, & r_1>l,\ r_2>E\\[4pt] \vec{W}_1\cdot\vec{X}_b-\vec{R}\cdot\vec{W}_2\cdot\big|\vec{X}_b-\vec{X}(t)\big|, & r_1>l,\ r_2<E\end{cases}\tag{1}$$

In the above formula, t means the current iteration; $\vec{X}(t)$ is the current position vector and $\vec{X}_b$ is the best position found in the current iteration; $\vec{W}_1$ and $\vec{W}_2$ are the hunger weights defined later in Equations (6) and (7); r1 and r2 are random numbers in [0, 1]; l is a significant control parameter of the HGS, which can influence its overall performance. E is a variation control for all positions, and its mathematical formula is as follows:

$$E=sech\big(|F(i)-BF|\big)\tag{2}$$

where F(i) records the individual's fitness value and BF means the best fitness acquired in the current iteration. What's more, the specific expression of the hyperbolic function sech in this study is as follows:

$$sech(x)=\frac{2}{e^{x}+e^{-x}}\tag{3}$$

The formulas of $\vec{R}$ and its relative parameters are as follows:

$$\vec{R}=2a\cdot rand-a\tag{4}$$

$$a=2\cdot\Big(1-\frac{t}{T}\Big)\tag{5}$$

where rand represents a random value limited within [0, 1], and T is the maximum number of iterations.
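The approach-food update can be sketched concretely. This is a minimal single-dimension reading of Equations (1)-(5); the default l = 0.08 and the scalar form are assumptions of this sketch, not values fixed by this text:

```python
import math
import random

def sech(x):
    # Hyperbolic secant, Equation (3): sech(x) = 2 / (e^x + e^(-x))
    return 2.0 / (math.exp(x) + math.exp(-x))

def approach_food(x, xb, w1, w2, fit, bf, t, T, l=0.08):
    """One-dimensional position update following Equation (1).
    x and xb are the current and best positions in a single dimension;
    names mirror the paper's symbols. l = 0.08 is an assumed default."""
    E = sech(abs(fit - bf))              # variation control, Equation (2)
    a = 2.0 * (1.0 - t / T)              # shrink factor, Equation (5)
    R = 2.0 * a * random.random() - a    # ranging controller, Equation (4)
    r1, r2 = random.random(), random.random()
    if r1 < l:                           # game 1: random self-perturbation
        return x * (1.0 + random.gauss(0.0, 1.0))
    if r2 > E:                           # game 2: approach the best individual
        return w1 * xb + R * w2 * abs(xb - x)
    return w1 * xb - R * w2 * abs(xb - x)  # game 3: move away from it
```

Per-dimension application over the whole population recovers the vector form of Equation (1).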

Hunger Role
The starvation characteristic of individuals is the core content of the HGS algorithm. This subsection presents the mathematical model of this characteristic.

$$\vec{W}_1(i)=\begin{cases}hungry(i)\cdot\dfrac{N}{Sum\_hungry}\times r_4, & r_3<l\\[4pt] 1, & r_3>l\end{cases}\tag{6}$$

$$\vec{W}_2(i)=\Big(1-e^{-|hungry(i)-Sum\_hungry|}\Big)\times r_5\times 2\tag{7}$$

Equations (6) and (7) show how the weight value is calculated. In the formulas, hungry indicates the hunger condition of individuals, and Sum_hungry stands for the sum of the hungry values; N is the total number of individuals; r3, r4, and r5 are random numbers limited in [0, 1]. The detailed expression of hungry(i) is as follows:

$$hungry(i)=\begin{cases}0, & AllFitness(i)=BF\\ hungry(i)+H, & AllFitness(i)\neq BF\end{cases}\tag{8}$$

where AllFitness stores all individuals' fitness values generated in the iteration and AllFitness(i) indicates the fitness of each individual in the current iteration. Notably, when the best fitness is found, the corresponding hungry(i) value is assigned to 0; if not, a new hunger increment H is added to the actual hungry value. Equation (9) denotes the composition of H:

$$H=\begin{cases}LH\times(1+r), & TH<LH\\ TH, & TH\geq LH\end{cases}\tag{9}$$

$$TH=\frac{F(i)-BF}{WF-BF}\times r_6\times 2\times (UB-LB)\tag{10}$$

where the sensation of hunger [76] H in Equation (9) is restricted by a lower bound LH, which represents the lower limit of hunger; LH is set to 100 in this study, consistent with the settings in the literature [69,77]; r is a random number in [0, 1]. BF and WF in Equation (10) mean the best and worst fitness values acquired in the current iteration, respectively; F(i) stands for the fitness of each individual; F(i) − BF denotes the threshold of food consumption necessary for an individual to attain a state of complete satiation; WF − BF denotes the maximal foraging ability of an individual in the current iteration, so (F(i) − BF)/(WF − BF) stands for the hunger ratio; r6 is a random value limited within [0, 1]; UB and LB are the upper and lower limits of the dimensions, respectively. Algorithm 1 displays the pseudo-code of the HGS algorithm.

Algorithm 1. Pseudo-code of HGS.
Initialize the parameters N, T, l, D, Sum_hungry
Initialize the population Xi (i = 1, 2, . . ., n)
While (t ≤ T)
  Calculate the fitness of all individuals
  Update BF, WF, and Xb
  Calculate hungry by using Equation (8)
  Calculate W1 and W2 by using Equations (6) and (7), respectively
  For i = 1 to N
    Calculate E by using Equation (2)
    Update R by using Equations (4) and (5)
    If (rand < l)
      Update the position by the first case of Equation (1)
    Else
      Update the position by the second or third case of Equation (1), according to E
    End If
  End For
  t = t + 1
End While
Return BF and Xb
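The hunger bookkeeping of Equations (6)-(10) can be sketched as follows. The tiny 1e-300 guards against division by zero and is an implementation detail of this sketch, not part of the paper's model:

```python
import math
import random

def hunger_update(fits, hungry, lb, ub, LH=100.0):
    """Update hunger values per Equations (8)-(10); names follow the paper."""
    BF, WF = min(fits), max(fits)
    for i, f in enumerate(fits):
        if f == BF:
            hungry[i] = 0.0                     # the best individual is sated
        else:
            r6 = random.random()
            # temporary hunger TH, Equation (10)
            TH = (f - BF) / (WF - BF + 1e-300) * r6 * 2.0 * (ub - lb)
            H = LH * (1.0 + random.random()) if TH < LH else TH  # Equation (9)
            hungry[i] += H                      # Equation (8)
    return hungry

def weights(hungry, l=0.08):
    """Compute W1 and W2 per Equations (6) and (7); l = 0.08 is assumed."""
    N = len(hungry)
    S = sum(hungry)
    W1, W2 = [], []
    for h in hungry:
        r3, r4, r5 = random.random(), random.random(), random.random()
        W1.append(h * N / (S + 1e-300) * r4 if r3 < l else 1.0)  # Equation (6)
        W2.append((1.0 - math.exp(-abs(h - S))) * r5 * 2.0)      # Equation (7)
    return W1, W2
```

Note how the best individual's hunger resets to zero each iteration, while worse individuals accumulate hunger proportional to their hunger ratio.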

The Adapted Logarithmic Spiral Strategy
The Logarithmic Spiral (LS) [78] strategy, an effective search method in the exploration phase, is inspired by spiral phenomena existing in nature [79]. The LS strategy can be found in the famous Whale Optimization Algorithm (WOA) and the Moth Flame Optimization (MFO) algorithm, both of which use Equation (11) to mimic a logarithmic spiral trajectory in their algorithm logic.

$$\vec{X}(t+1)=\big|\vec{X}_b-\vec{X}(t)\big|\cdot e^{b\cdot r_7}\cdot\cos(2\pi r_7)+\vec{X}_b\tag{11}$$

In Equation (11), $\vec{X}_b$ stands for the best solution location obtained in the current iteration; $\vec{X}(t)$ and $\vec{X}(t+1)$ mean the position vectors of the t-th and (t + 1)-th iterations, respectively; b is a constant used to define the logarithmic spiral shape, and the value of b is set to 1; r7 is a random value in the range [−1, 1].
An adapted LS strategy was proposed in the literature [71] to achieve a wider and more plausible range of exploration. This novel method combines the original LS strategy with Opposition-based Learning (OBL) [80] and is named the LS-OBL strategy. The idea of this modified strategy is to incorporate the LS spatial trajectory between iteration-based and opposition-based solutions to boost the algorithm's optimal efficiency.
The implementation of the LS-OBL strategy is mainly divided into three parts.
Firstly, the OBL algorithm generates an opposite solution $\vec{X}_{op}$ based on the $\vec{X}_b$ that the original HGS algorithm obtained in the current iteration. The mathematical model of this part is as follows:

$$\vec{X}_{op}=\vec{LB}+\vec{UB}-\vec{X}_b\tag{12}$$

where $\vec{X}_{op}$ is the opposite position vector of $\vec{X}_b$. Then, the dynamic logarithmic spiral space between $\vec{X}_b$ and $\vec{X}_{op}$ is formed in each iteration; the relative mathematical model is given in Equation (13).
Lastly, the search agent achieves random exploration throughout the whole logarithmic spiral space according to the parameter s, which is defined in Equation (14).
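A minimal sketch of LS-OBL candidate generation: the opposition point follows Equation (12), while the spiral form reuses the shape of Equation (11) between X_b and X_op; reusing that form as a stand-in for Equations (13)-(14), which are not restated in this text, is an assumption of this sketch:

```python
import math
import random

def ls_obl_candidate(xb, lb, ub, b=1.0):
    """One-dimensional LS-OBL candidate.
    xb: best position in this dimension; lb, ub: dimension bounds.
    The spiral between X_b and the opposition point is assumed to take the
    log-spiral form of Equation (11); b = 1 follows the text."""
    x_op = lb + ub - xb                   # opposition-based point, Equation (12)
    s = random.uniform(-1.0, 1.0)         # spiral parameter, cf. r7
    # point on the logarithmic spiral arc anchored at the best solution
    return abs(x_op - xb) * math.exp(b * s) * math.cos(2.0 * math.pi * s) + xb
```

When X_b already sits at the center of the search interval, the opposition point coincides with it and the candidate degenerates to X_b itself.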

The Adapted Rosenbrock Method Strategy
The Rosenbrock Method (RM) [81] strategy is a reliable local search method proposed by Rosenbrock. Owing to the poor performance of the basic RM strategy on multimodal problems [82], an enhanced variant based on the RM strategy was proposed by Li et al. [71] in 2021, which makes targeted improvements to the basic RM's initial step size σi (i = 1, 2, . . ., n) and termination conditions. Different from σi in the traditional RM, which is a constant value, σi in the adapted RM strategy changes dynamically with the iterations. The detailed description of σi is as follows:

$$\sigma_i=\sqrt{\frac{1}{n}\sum_{k=1}^{n}\big(X_{ki}-\overline{X}_i\big)^2}+\varepsilon_1\tag{15}$$

$$\overline{X}_i=\frac{1}{n}\sum_{k=1}^{n}X_{ki}\tag{16}$$

where n represents the total number of candidate individuals and d represents the dimension of the population. X_ki stands for the k-th individual in the i-th dimension, and $\overline{X}_i$ stands for the average candidate individual in the i-th dimension, as explained in Equation (16). ε1 is a constant equal to 1.0e−150, preventing the algorithm's initial step size from being 0.
What's more, the two-loop termination condition in the original RM strategy is changed, increasing the participation of ε1, ε2, k1, and k2. Specifically, ε1 and ε2 are two parameters controlling the internal and external loops, respectively. The parameter setting of ε1 is mentioned above, and ε2 is set to 1.0e−4; these values are consistent with the settings in the literature [82]. Meanwhile, k1 and k2 play the role of loop counters. Algorithm 2 presents the pseudo-code of the improved RM strategy.

Algorithm 2. Pseudo-code of the adapted RM strategy.
Input the search agents' positions Xi (i = 1, 2, . . ., n), the population position X, D
Initialize the orthonormal basis di (i = 1, 2, . . ., n), the step size adjustment values α and β, the ending instructions ε1, ε2, N, k2 = 0
Initialize the step size δi (i = 1, 2, . . ., n) by using Equation (15)
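Reading Equations (15)-(16) as the per-dimension spread of the current population, the dynamic initial step sizes can be sketched as follows; treating σi as a population standard deviation plus ε1 is this sketch's interpretation:

```python
import math

def initial_step_sizes(X, eps1=1e-150):
    """Dynamic initial step sizes for the adapted RM.
    X is the population as a list of n individuals, each a list of d values.
    For each dimension i, sigma_i is the population spread around the mean
    of Equation (16), plus eps1 so no step size starts at exactly zero."""
    n, d = len(X), len(X[0])
    sigma = []
    for i in range(d):
        mean_i = sum(X[k][i] for k in range(n)) / n               # Equation (16)
        spread = math.sqrt(sum((X[k][i] - mean_i) ** 2 for k in range(n)) / n)
        sigma.append(spread + eps1)                               # Equation (15)
    return sigma
```

A tightly clustered population thus starts RM with tiny steps, while a dispersed one starts with larger ones, which is what makes the step size "dynamic with iterations".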

Motivation for This Work
Conventional swarm intelligence algorithms often suffer from issues such as premature convergence, slow convergence, and easy trapping in local optima. The design of combined or hybrid algorithms can mitigate these problems to a certain extent. By maintaining the diversity of the population and extending the flexibility of the algorithm to speed up convergence, hybridizing specific optimization strategies can strike a good balance between the two phases of exploration and exploitation.
There is no denying that Hunger Games Search (HGS) is a good population-based optimizer. However, when dealing with challenging optimization problems, the classic HGS sometimes shows premature convergence and stagnation. Therefore, finding approaches that enhance solution diversity and exploitation capabilities is crucial. This study incorporates two effective strategies into HGS: adaptations based on the Logarithmic Spiral (LS-OBL) and the Rosenbrock Method (RM). On the one hand, LS-OBL is an exploration method based on the ideas of the Logarithmic Spiral and Opposition-based Learning. The idea behind this strategy is to generate a batch of new solutions through the OBL strategy and then construct a logarithmic spiral spatial trajectory between the current solution and the OBL-based solution. LS-OBL effectively alleviates the defects of the classic HGS in exploration by properly narrowing the space and increasing the solutions' diversity. On the other hand, the adapted RM method is employed to optimize the exploitation process. By adjusting the search direction and step size, RM helps the search agent avoid getting trapped in local optima, ensuring stronger convergence towards globally optimal results.

Flowchart and Pseudo-Code of RLHGS
The flowchart and pseudo-code are shown in Figure 1 and Algorithm 3, respectively. Notably, the execution of the RM strategy is conditional. Owing to the high computational time of RM, its execution is limited by a parameter prob, which balances RM performance against time consumption. The description of this parameter is as follows: where N means the size of the population; P_no means the number of individuals that are weaker than the optimal solution in the current iteration; rand is a random number obtained from (0, 1). When prob_i is higher than 0.8, the RM strategy is invoked to search further for the best solution; otherwise, the exploitation strategy of standard HGS is performed.
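The conditional invocation described above can be sketched as a simple gate. The exact expression producing prob from N, P_no, and rand is not restated in this text, so prob is taken here as a precomputed input:

```python
def choose_update(prob, threshold=0.8):
    """Gating rule for the costly RM refinement: invoke RM only when prob
    exceeds 0.8; otherwise fall back to the standard HGS exploitation step.
    prob is assumed to be computed elsewhere from N, P_no, and rand, per
    the paper's (not restated) formula."""
    return "RM" if prob > threshold else "HGS_exploit"
```

The 0.8 threshold is the value stated in the text; raising it trades refinement quality for runtime.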

Algorithm 3. Pseudo-code of RLHGS.
Initialize the parameters N, T, l, D, Sum_hungry
Initialize the population Xi (i = 1, 2, . . ., n)
While (t ≤ T)
  Calculate the initial fitness of all populations
  Update BF, WF, and Xb
  Calculate hungry by using Equation (8)
  Calculate W1 and W2 by using Equations (6) and (7)

Computational Complexity Analysis
Computational complexity is a measure of the time and resources required by an algorithm to execute. In the original HGS, the computational complexity mainly depends on these aspects: population initialization, fitness value calculation, sorting, and population updating. In these processes, N, D, and T represent the scale of the population, the scale of the dimension, and the maximum number of iterations, respectively. Specifically, in the initial stage, the computational complexity of population initialization is O(N), whereas the computational complexity of fitness value calculation is O(T × N). In the worst case, the computational complexity of sorting is O(T × N × logN). The population updating includes hunger updating, weight updating, and location updating, with computational complexities O(T × N), O(T × N × D), and O(T × N × D), respectively. Thus, the overall computational complexity of the original HGS is O(T × N × (logN + D)). The adapted algorithm RLHGS is also compounded from the above aspects, but owing to the addition of the LS-OBL and RM operators, it differs in the process of updating population positions. However, due to the random operations of the RM strategy, it is hard to assess the exact computational complexity of RLHGS. Hence, evaluating the algorithm's computational cost necessitates taking into account the running time of the code. This study evaluates the actual computational cost of RLHGS and the other comparative algorithms by recording their average time cost on the 23 classic benchmark functions. The average running cost comparison is listed in Section 5.3.1, and the unit is seconds.
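Summing the stages listed above, the dominant terms combine into a single bound (a routine simplification of the stated costs, with no new assumptions):

```latex
O(N) + O(TN) + O(TN\log N) + O(TN) + O(TND) + O(TND)
  = O\big(TN(\log N + D)\big)
```

The initialization term O(N) and the hunger-update term O(TN) are absorbed by the sorting and location-updating terms, which dominate for any nontrivial N and D.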

Details of Benchmark Functions
To mitigate the impact of randomness in algorithms, it is necessary to employ appropriate and comprehensive test functions and case studies. This ensures that superior results are not merely coincidental but consistently achieved. Thus, a sufficient evaluation is conducted using 23 classic benchmarks [83] and the CEC2020 benchmark test suite [84]. These benchmarks serve as crucial tools for testing algorithm performance. The twenty-three classical benchmark functions consist of unimodal, multimodal, and fixed-dimensional multimodal functions. Specifically, F1-F13 represent high-dimensional problems, including unimodal functions (F1-F5), a step function with one minimum value (F6), a noisy quartic function (F7), and multimodal functions with multiple local optima (F8-F13). Besides, F14-F23 are low-dimensional functions with only a few local minima, which enables the assessment of the algorithm's effectiveness in searching for near-global optima. Detailed information about these benchmark functions can be found in Table 2. What's more, to ascertain RLHGS's efficacy, it is also tested on the CEC2020 benchmark test suite, which includes one unimodal function, three multimodal functions, three hybrid functions, and three composition functions. Table 3 provides further details about the CEC2020 benchmark functions. Notably, in both Tables 2 and 3, D means the dimensions of the functions, R means the domain of the functions, and f_min means the optimum solution of the functions.
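For illustration, two conventional members of such suites are shown below; identifying F1 with the Sphere function and citing Rastrigin as a typical multimodal case (e.g., F9) follows common convention and is not detailed in this text:

```python
import math

def sphere(x):
    # Conventionally F1 in the classical suite: unimodal, minimum 0 at the origin
    return sum(v * v for v in x)

def rastrigin(x):
    # A classical multimodal benchmark with many local optima (cf. F9)
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v)
                               for v in x)
```

Both attain their global minimum of 0 at the origin, but Rastrigin's cosine term creates a dense grid of local minima that penalizes purely greedy exploitation.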

Configuration of Experiment Environment
In this study, two kinds of experiments on benchmark test suites are conducted: one aims to demonstrate the effectiveness of the added strategies, and the other focuses on showcasing the superiority of RLHGS through a comparison with other powerful algorithms. To maintain fair comparisons, we have adhered to the suggested principles outlined in previous AI studies [85][86][87], which emphasize the importance of employing uniform conditions during the assessment of different methodologies [88,89]. Hence, the population size for these two experiments is set to 30, and the functions are evaluated for D = 30. The maximum iteration count T is 300,000. To minimize random error, all involved algorithms are run 30 times independently on all benchmark functions.
Besides, including the constrained real-world engineering experiment, all experiments are accomplished on a PC with Win11, a 64-bit operating system. The CPU is an Intel (R) Core (TM) i5-9400, the main frequency is 2.90 GHz, the memory is 8.00 GB, and the software is MATLAB R2018b.


Statistical Analysis Methods
Evaluating the progress made by a newly proposed algorithm compared to existing techniques is a specific challenge in experimental investigations. In recent years, researchers have recognized the importance of statistical analysis in assessing the performance of novel algorithms. In this study, the effectiveness of RLHGS is evaluated through several evaluation criteria.
Firstly, the average value (Avg) and standard deviation (Std) of the optimal function value are used to evaluate the performance of the algorithms. Among them, Avg is applied to evaluate the global search ability and the quality of the solution, while Std is devoted to evaluating the robustness of the algorithm. The ranking of each algorithm based on Avg is provided to reflect its performance on each function. In addition, the Friedman test and the Wilcoxon rank test are also used as evaluation criteria. In 1937, Milton Friedman first developed the concept of the Friedman test [90], which was later used to assess the performance of several algorithms on different kinds of test functions. After its effectiveness was proven in various literature, the Friedman test has been regarded as an accepted method for model performance evaluation. The Wilcoxon signed-rank test [91] was first proposed by the U.S. statistician Frank Wilcoxon. As a hypothesis-testing method, the Wilcoxon rank test has been widely used to verify an algorithm's statistical consistency. The principle of this method is to judge which algorithm is better by comparing the significant differences between two samples. Moreover, it can judge whether one algorithm is superior to another by calculating the p-value; notably, the significance level for the p-value is set to 0.05. The sign "+/−/=" in the tables indicates whether the compared algorithm's execution is statistically better than, worse than, or comparable to that of RLHGS.
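The Avg/Std/ranking bookkeeping can be sketched as follows; this is a minimal stdlib version, while SciPy's `scipy.stats.wilcoxon` and `scipy.stats.friedmanchisquare` provide the corresponding hypothesis tests:

```python
import statistics

def summarize(runs_by_alg):
    """Compute Avg and Std of the best fitness over independent runs per
    algorithm, then rank algorithms by Avg (rank 1 = best, minimization),
    as used in the comparison tables."""
    summary = {alg: (statistics.mean(vals), statistics.pstdev(vals))
               for alg, vals in runs_by_alg.items()}
    order = sorted(summary, key=lambda alg: summary[alg][0])
    ranks = {alg: i + 1 for i, alg in enumerate(order)}
    return summary, ranks
```

A usage example: `summarize({"RLHGS": run_values, "HGS": other_values})` returns the (Avg, Std) pairs and the Avg-based ranks that populate a results table row.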

Qualitative Analysis
Exploration and exploitation are two crucial processes in SI algorithms. The exploration process primarily occurs in the early stages of algorithm execution, with the main aim of conducting a comprehensive search across the feasible domain to identify promising regions and optimal solutions. During this phase, the algorithm places significant emphasis on global search, expanding the search scope and diversifying search strategies to discover new areas that may contain better solutions. Exploration ventures into uncharted territories and acts as a foundation for the subsequent exploitation process. The goal of exploitation is to delve into known solution spaces and enhance the quality of candidate solutions through thorough searches. This stage focuses on specific areas considered to have a higher probability of yielding better solutions. By using more refined search strategies and utilizing prior knowledge, the exploitation phase performs local optimization around the current solution to obtain higher-quality solutions. Its major focus is enhancing optimization performance to achieve swift convergence towards the optimal or near-optimal solution.
To evaluate the exploration and exploitation processes of RLHGS, this subsection conducts a qualitative analysis. Figure 3 presents the qualitative results of RLHGS on several functions from the CEC2020 benchmark test suite, encompassing three types: a unimodal function (F1), multimodal functions (F2 and F3), and a composite function (F8). In Figure 3, column (a) presents the 3-D position distribution, showcasing the nature of the four functions. Column (b) illustrates the 2-D spatial distribution of the search-history trajectories, providing insight into the position and dispersion of the population throughout the iteration process; the red dot represents the global optimal value. Observing the graphs in this column, it is apparent that the population's search trajectory revolves almost entirely around the red dot, suggesting that the search range of RLHGS is both reasonable and effective. Column (c) showcases the motion trajectory of the first individual in the first dimension: it fluctuates during the initial stage of the search but ultimately converges towards the optimal value in later stages. This behavior can be attributed to the algorithm's continuous pursuit of higher-quality solutions during the exploration phase, underscoring RLHGS's adaptability and exceptional exploration capability. However, although the graphs in columns (b) and (c) indicate a tendency for individuals of RLHGS to explore promising areas throughout the search space and ultimately exploit the best solution, they do not by themselves establish convergence. Column (d) records the convergence curves of RLHGS, revealing the trend of the optimal fitness value and verifying RLHGS's capability to obtain a near-optimal solution over the whole iteration process.
However, when the processes of exploration and exploitation are not balanced, the optimization performance may not meet expectations. For instance, if the algorithm only possesses strong exploratory capabilities, it may yield high-quality solutions but at a slower convergence speed. On the other hand, if the algorithm leans towards exploitation, the convergence speed may improve, yet there is a higher risk of getting trapped in local optima. Hence, achieving a delicate balance between the exploration and exploitation stages is crucial for enhancing algorithm performance.
To further examine the impact of LS-OBL and RM on the exploration and exploitation processes, this study conducts a balance analysis and a comprehensive discussion of these two processes for RLHGS and HGS. The results are shown in Figure 4.
Notably, the %EPL and %EPT indicators in column (a) represent the proportions of the algorithm's exploration and exploitation processes throughout the entire execution, calculated by Equations (18)-(20). In these equations, Div refers to individual diversity, Div_max indicates the maximum individual diversity, Div_j represents the j-th dimensional diversity of an individual, and n denotes the total number of individuals in the population. D represents the function's dimension, X_ij represents the j-th dimension of the i-th individual, and median(X_j) signifies the median value of the j-th dimension across all individuals.

\[
\%EPL = \frac{Div}{Div_{max}} \times 100\%, \qquad (18)
\]
\[
\%EPT = \frac{\left| Div - Div_{max} \right|}{Div_{max}} \times 100\%, \qquad (19)
\]
\[
Div = \frac{1}{D}\sum_{j=1}^{D} Div_j, \qquad Div_j = \frac{1}{n}\sum_{i=1}^{n} \bigl| \mathrm{median}(X_j) - X_{ij} \bigr|. \qquad (20)
\]
As shown in Figure 4, columns (a) and (b) illustrate the balance diagrams of the exploration and exploitation stages throughout the execution process, showing trend curves for the two stages. In addition to these two curves, an incremental-decremental curve is included to reflect the algorithm's level of exploration: if global search outweighs local development during execution, the curve trends upward; conversely, if local development dominates, the curve trends downward. Upon analyzing the graphs in columns (a) and (b), it becomes apparent that the exploration and exploitation process of RLHGS shows significant improvement compared to HGS. On the F1 function, %EPL increased from 8.1273% to 16.7709%, an improvement of 8.6436%. On the F2 function, %EPL increased from 8.6423% to 24.202%, a boost of 15.5597%. On the F3 function, %EPL rose from 3.1199% to 15.4554%, an increase of 12.3355%. Similarly, on the F8 function, %EPL surged from 6.125% to 22.7202%, an increase of 16.5952%. These changes in %EPL and %EPT, together with the convergence curves of RLHGS and HGS in column (c), demonstrate that the inclusion of the LS-OBL and RM strategies introduces a better balance between the exploration and exploitation stages.
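The diversity-based %EPL/%EPT measurement of Equations (18)-(20) can be sketched as follows. The population history here is synthetic (a population whose spread shrinks each iteration), not produced by RLHGS itself:

```python
# A minimal sketch of the %EPL/%EPT computation of Equations (18)-(20),
# applied to a synthetic (hypothetical) population history.
import numpy as np

def diversity(X):
    """Div: mean over dimensions j and individuals i of |median(X_j) - X_ij|."""
    return float(np.mean(np.abs(np.median(X, axis=0) - X)))

rng = np.random.default_rng(1)
# 50 iterations of a 30-individual, 10-dimensional population whose spread shrinks
history = [rng.random((30, 10)) * 0.9**t for t in range(50)]

divs = np.array([diversity(X) for X in history])
div_max = divs.max()
epl = divs / div_max * 100.0                      # %EPL, exploration share
ept = np.abs(divs - div_max) / div_max * 100.0    # %EPT, exploitation share
print(f"%EPL at start = {epl[0]:.2f}, at end = {epl[-1]:.2f}")
```

Note that, by construction, %EPL and %EPT sum to 100% at every iteration, which is why the two curves in Figure 4 mirror each other.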

Inspection of Improvement Effect
Even if the aforementioned results provide evidence of RLHGS's high performance, more specific information is needed to confirm whether the added mechanisms effectively promote the performance of HGS. In this experiment, four algorithms participate in the comparison. Table 4 gives an intuitive description of the compared algorithms, where '1' indicates that a strategy is embedded and '0' indicates that it is not adopted. All the algorithms are applied to ten classical benchmark functions from CEC2020. Table 5 presents the average value (Avg) and standard deviation (Std) of the optimal function value for RLHGS, RHGS, LHGS, and HGS. Observing the rankings in Table 5, it is evident that RLHGS obtains the highest number of optimal values, achieving the best performance on approximately 80% of the 10 test functions. Specifically, the outstanding performance of RLHGS is mainly reflected in the unimodal function (F1), multimodal functions (F2-F4), hybrid functions (F5-F7), and composition function (F8). Though slightly behind RHGS and HGS on F9 and F10, RLHGS still ranks first and obtains the lowest average value in the Friedman test shown in Table 6. This result indicates not only that RLHGS has good global search ability and higher-quality solutions but also that its robustness is better than that of the compared algorithms. Moreover, the original HGS places only third, lagging behind RLHGS in the ranking, which directly verifies that the embedded strategies promote the optimization capability of HGS. A more intuitive comparison can be observed in Figure 5.
Looking at the convergence curves of F1, F2, F4, and F6, it can be concluded that neither the LS-OBL strategy nor the adapted RM strategy alone always improves the HGS method effectively, but when the two are introduced simultaneously, the improvement is obvious. The excellent performance of RLHGS can also be seen in the convergence curves of F3, F5, F7, and F8. The Wilcoxon signed-rank results in Table 7 also support this conclusion.
In summary, the LS-OBL and adapted RM strategies work synergistically to overcome the limitations of the original algorithm, improving its overall performance in solving complex optimization problems.

Comparison with Eight Superior Algorithms
This subsection introduces the comparison experiment between RLHGS and eight state-of-the-art algorithms. Table 8 shows the parameter configurations of the algorithms involved in the comparison, and brief introductions of these algorithms are given below:
• CS [92]: Cuckoo search algorithm, a powerful algorithm presented by Gandomi et al. in 2013; its internal logic is based on the brood parasitism of cuckoo species.

Benchmark Function Set I: 23 Classic Test Functions
To validate the feasibility of RLHGS, it is compared with CS, MFO, HHO, SSA, JADE, ALCPSO, SCGWO, and RDWOA on 23 classical numerical optimization problems in this subsection.
The comparison results are presented in Table 9.
According to the Avg and Std values, RLHGS achieves the highest number of optimal values across various functions, including F1, F2, F9, F12, F13, F14, F16, F17, F19, F20, F21, F22, and F23. In contrast, algorithms such as CS, MFO, SSA, JADE, and ALCPSO perform well only on fixed-dimension multimodal functions, while HHO, SCGWO, and RDWOA show proficiency mainly on unimodal and multimodal functions.
Only RLHGS consistently achieves ideal values across all function types, which indicates its versatility and robustness. Furthermore, Table 10 and Figure 6 provide the Friedman mean level and overall rank of all compared algorithms. Comparing these results, it is clear that RLHGS attains the highest rank, further solidifying its position as a powerful stochastic optimization algorithm. The Wilcoxon signed-rank results of RLHGS and the eight other superior algorithms on the 23 benchmark functions are shown in Table 11; a p-value below 0.05 indicates a significant difference between RLHGS and the compared algorithm, with RLHGS performing better. Table 12 lists the average running time of RLHGS and the other eight algorithms on the 23 benchmark functions; although RLHGS is not the fastest, its time cost is reasonable on most functions. Figure 7 portrays the convergence curves of this experiment. When dealing with test functions F1 and F2, RLHGS obtains the optimum with the fastest optimization speed. Moreover, on multimodal functions such as F12 and F13, RLHGS far surpasses the other compared algorithms in searching for global or near-optimal solutions; the convergence curves show that it does not immediately fall into local optima like its competitors, proving that RLHGS can escape local optima. Furthermore, for most test functions, RLHGS performs much better than the compared methods in the early search stage, and the final values obtained in the late search stage are reached faster or lie much closer to the optimal value.

Benchmark Function Set II: CEC2020 Test Suite
The evaluation configurations of this experiment are consistent with those in Section 5.3.1. Analyzing the comparison results in Table 13, RLHGS outperforms all compared methods, achieving five optimal values on the ten test functions; no other algorithm exceeds it. In detail, RLHGS outperforms all of its competitors on the multimodal functions (F2-F3), the hybrid function (F6), and the composition function (F8), which strongly reflects RLHGS's advantages in exploration and local-optima avoidance. Meanwhile, according to the statistical standard, the number of functions on which RLHGS is superior to CS, MFO, ALCPSO, HHO, JADE, SCGWO, RDWOA, and SSA is 10, 10, 9, 7, 7, 7, 7, and 6, respectively, showing that RLHGS is a competitive algorithm. Table 14 shows the Friedman test result, where RLHGS secures the first position with a rank of 2.3, followed by JADE, SSA, HHO, RDWOA, and the others; a more intuitive ranking can be obtained from Figure 8. Table 15 shows the p-values of the Wilcoxon signed-rank test; RLHGS exhibits significant differences compared to the other algorithms and exceeds them in performance. Figure 9 shows the convergence curves of the nine methods on F2, F3, F4, and F6, from which it can be seen that RLHGS successfully exceeds its strong opponents and reaches a better solution.

Although the results and analyses in Sections 5.3.1 and 5.3.2 verify that RLHGS is capable of determining the global optimum of the test functions to a certain degree, some differences remain between actual problems and standard function problems. For example, the global optimal value of commonly used test functions is known, whereas the global optimal value of actual problems remains unknown. Moreover, equality and inequality constraints are attached to practical problems. Therefore, in addition to the performance on benchmark functions, it is necessary to test the algorithm's performance on practical problems. In the next subsection, RLHGS is applied to solve four practical problems.

Four Real-World Constrained Benchmark Problems
In this subsection, the proposed RLHGS is used to settle four classical constrained benchmark problems: tension/compression spring design, welded beam design, pressure vessel design, and three-bar truss design. Because their constraints arise from different conditions, finding a method that can effectively solve all of these problems is particularly significant. Researchers have recently proposed many methods for combining constraint handling with swarm intelligence algorithms. According to their different processing approaches [99], penalty functions are mainly divided into five categories: co-evolutionary, static, adaptive, dynamic, and death penalty functions. Considering the characteristics of the proposed algorithm, the method used in this study to handle the four constrained benchmark problems is the death penalty function, which is the simplest way to construct the objective value of a mathematical model.
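The death-penalty idea described above can be sketched in a few lines: any candidate that violates a constraint receives an "infinite" objective value, so the optimizer simply discards it. The problem functions below are hypothetical toy examples, not the paper's engineering problems:

```python
# Minimal sketch of death-penalty constraint handling: infeasible points
# evaluate to +inf and are thus never accepted by a minimizer.
import math

def death_penalty(objective, constraints):
    """Wrap an objective so that infeasible points evaluate to +inf.

    Each constraint g is assumed to be feasible when g(x) <= 0.
    """
    def penalized(x):
        if any(g(x) > 0 for g in constraints):
            return math.inf
        return objective(x)
    return penalized

# Toy example: minimize x0^2 + x1^2 subject to x0 + x1 >= 1.
f = death_penalty(lambda x: x[0]**2 + x[1]**2,
                  [lambda x: 1 - x[0] - x[1]])
print(f([0.5, 0.5]), f([0.1, 0.1]))  # feasible -> 0.5, infeasible -> inf
```

This handling requires no penalty coefficients, which matches the text's point that it is the simplest way to construct the objective value.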

Tension/Compression Spring Problem
The tension/compression spring problem seeks the optimal parameters that minimize the spring's weight. The problem has three variables: wire diameter (d), mean coil diameter (D), and the number of active coils (N). Solving it also requires attention to four constraint functions. Its structure is shown in Figure 10, and its mathematical description comprises the objective function, the constraints, and the variable ranges.
The comparison results of RLHGS with eight other advanced optimization algorithms on the tension/compression spring design problem are shown in Table 16. Observing the data in the table, RLHGS obtains the lowest value, 0.0126653 (in bold), closely followed by INFO, IHS, PSO, and GSA with 0.012666, 0.0126706, 0.0126747, and 0.0126763, respectively. The results verify that RLHGS has good capability in optimizing this engineering problem.

Welded Beam Design Problem
The key to the welded beam design problem is to acquire the optimal parameters that minimize the cost of the welded beam. In this problem, shear stress (τ), bending stress (θ), buckling load (P_c), and deflection (δ) are the four constraints to be satisfied, and the thickness of the welding seam (h), length of the welding joint (l), width of the beam (t), and thickness of the bar (b) are the four variables to be considered. The shape of the welded beam design problem is shown in Figure 11.
Table 17 shows the optimization results of the welded beam design problem, comparing RLHGS with HGS, GSA, CDE, HS, GWO, BA, IHS, and RO. The optimal value, shown in bold, is obtained by RLHGS: when the variables are set to 0.2015, 3.3345, 9.03662391, and 0.20572964, the optimum cost reaches 1.699986, lower than all compared algorithms. Thus, RLHGS performs best in solving the welded beam design problem.

Pressure Vessel Design Problem
The pressure vessel design problem is a conundrum in the engineering field; solving it focuses on minimizing the cost of welding, material, and forming of a vessel. The problem has four variables and four constraints: x1 to x4 indicate the thickness of the shell (T_s), the thickness of the head (T_h), the internal radius (R), and the vessel length excluding the head (L), respectively. Figure 12 displays the components of this problem.
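For reference, the pressure vessel problem is commonly formulated in the literature as follows; this is the standard statement of the benchmark, reproduced here as context rather than taken from this paper's own equations:

```latex
\min\ f(\vec{x}) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^2
                 + 3.1661\,x_1^2 x_4 + 19.84\,x_1^2 x_3
\quad \text{subject to} \quad
\begin{aligned}
g_1(\vec{x}) &= -x_1 + 0.0193\,x_3 \le 0,\\
g_2(\vec{x}) &= -x_2 + 0.00954\,x_3 \le 0,\\
g_3(\vec{x}) &= -\pi x_3^2 x_4 - \tfrac{4}{3}\pi x_3^3 + 1{,}296{,}000 \le 0,\\
g_4(\vec{x}) &= x_4 - 240 \le 0.
\end{aligned}
```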

In solving the pressure vessel design problem, RLHGS is compared with ES, PSO, GA, G-QPSO, SMA, Branch-and-bound, IHS, GA3, and CPSO. The results in Table 18 show that when T_s, T_h, R, and L are set to 0.8125, 0.4375, 42.0984456, and 176.6365958, respectively, RLHGS attains the optimal cost of 6059.714335. This result demonstrates that the algorithm proposed in this paper is superior to the other algorithms for solving this kind of mechanical engineering problem.

Three-Bar Truss Design Problem
The three-bar truss design problem is a well-known constrained space problem derived from civil engineering; Figure 13 presents its components. Solving it means obtaining the minimum weight of the bar structures, with the stress constraints of each bar forming the basis of the problem's constraints.
Table 19 lists the results of seven optimization algorithms in solving the three-bar truss design problem. In the table, the optimum cost indicates the weight of the bar structures, and RLHGS ranks first among all algorithms with the minimum value of 263.89584338. BWOA and MVO rank second and third with 263.8958435 and 263.8958499, respectively. Though the gap between the values is narrow, this result still supports that RLHGS can provide powerful assistance in dealing with the three-bar truss design. The corresponding parameter values and optimum costs of the other algorithms in Table 19 are:

Algorithm     x1            x2            Optimum cost
[93]          0.788244771   0.409466958   263.8959797
BWOA [98]     0.788666327   0.408273202   263.8958435
GOA [111]     0.788897556   0.40761957    263.8958815
MBA [112]     0.7885650     0.4085597     263.8958522
MVO [34]      0.78860276    0.408453070   263.8958499
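For reference, the three-bar truss problem is commonly formulated in the literature as follows, with the standard parameter values l = 100 cm, P = 2 kN/cm², and σ = 2 kN/cm²; this is the benchmark's usual statement, not necessarily the exact variant used in this study:

```latex
\min\ f(\vec{x}) = \left( 2\sqrt{2}\,x_1 + x_2 \right) l
\quad \text{subject to} \quad
\begin{aligned}
g_1(\vec{x}) &= \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\,P - \sigma \le 0,\\
g_2(\vec{x}) &= \frac{x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\,P - \sigma \le 0,\\
g_3(\vec{x}) &= \frac{1}{x_1 + \sqrt{2}\,x_2}\,P - \sigma \le 0,
\end{aligned}
\qquad 0 \le x_1, x_2 \le 1 .
```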

Conclusions and Future Work
HGS is a novel heuristic algorithm that has gained attention in recent years. Building upon the original literature, it is evident that HGS demonstrates remarkable optimization capabilities, surpassing numerous robust algorithms. However, the original HGS does have some limitations, including premature convergence, susceptibility to local optima, and slow convergence speed. These shortcomings indicate room for improvement within HGS. This study introduces the RLHGS algorithm, which incorporates an adapted LS-OBL mechanism and an adapted RM mechanism into the original HGS. These additions aim to enhance the algorithm's exploration and exploitation abilities, respectively.
To assess the efficiency of these introduced mechanisms and the superiority of RLHGS over other powerful algorithms, several evaluations are conducted using the 23 classic benchmark functions and the CEC2020 test suite, encompassing various function types. The first experiment analyzes the effectiveness of the embedded strategies, yielding the conclusion that RLHGS performs exceptionally well when the LS-OBL strategy and the adapted RM strategy work in tandem. This finding validates the effectiveness of the added mechanisms in overcoming HGS's drawbacks. In the second comparison experiment, RLHGS is compared with CS, MFO, HHO, SSA, JADE, ALCPSO, SCGWO, and RDWOA. According to the experimental results, the performance of RLHGS not only surpasses well-established classic algorithms such as CS, SSA, JADE, RDWOA, and ALCPSO but also outperforms exceptional state-of-the-art algorithms such as MFO, HHO, and SCGWO. Furthermore, RLHGS is applied to optimize parameters in four engineering design problems. Comparative analysis with other algorithms reveals that the proposed method achieves superior results. Thus, RLHGS exhibits promise in tackling complex real-world optimization problems and could serve as a valuable auxiliary method for a broader range of global optimization problems. Overall, the integration of LS-OBL and RM into HGS, resulting in RLHGS, proves to be a valuable improvement, showcasing enhanced performance and robustness in various evaluation scenarios and real-world engineering optimization challenges. Additionally, this study only touches on one of RLHGS's potential applications. In the future, RLHGS can find utility in numerous other fields beyond engineering optimization, such as image segmentation, machine learning model optimization, and others.

X(t) represents the position of each individual; X_b indicates the position of the best individual in the current iteration; randn(1) is a value satisfying the normal distribution; W_1 and W_2 are the hunger weights; r_1 and r_2 are two random values within the range [0, 1].

Algorithm 2 :
Pseudo-code of the adapted RM strategy.
For i = 1 to N
    If (rand < 0.3)
        Update the position of the current search agent by using the adapted LS-OBL strategy
    Else
        Calculate E by using Equation (2)
        Update R by using Equation (4)
        Update the position of the current search agent by Equation (1)
        If (prob > 0.8)
            Update the position of the current search agent by using the adapted RM strategy
        End If
    End If
End For
t = t + 1
End While
Return BF and X_b
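As a rough illustration of the control flow above, the following Python skeleton sketches the RLHGS main loop. The LS-OBL, HGS, and RM update rules are placeholder stand-ins (the paper's Equations (1), (2), and (4) and the strategy internals are not reproduced here), so this is a structural sketch only:

```python
# Structural sketch of the RLHGS main loop; all update rules are placeholders,
# not the paper's actual equations.
import numpy as np

def rlhgs(fitness, dim, n=30, max_iter=100, lb=-100.0, ub=100.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))            # population positions
    fit = np.apply_along_axis(fitness, 1, X)
    best = X[fit.argmin()].copy()                # X_b, best individual so far
    bf = float(fit.min())                        # BF, best fitness so far
    for t in range(max_iter):
        for i in range(n):
            if rng.random() < 0.3:
                # placeholder for the adapted LS-OBL update (opposition-like)
                X[i] = (lb + ub) - X[i] + rng.normal(0.0, 0.1, dim)
            else:
                # placeholder for the standard HGS update toward X_b
                X[i] = best + rng.normal(0.0, 1.0, dim) * (X[i] - best)
                if rng.random() > 0.8:
                    # placeholder for the adapted RM local refinement
                    X[i] = best + 0.5 * (X[i] - best)
            X[i] = np.clip(X[i], lb, ub)
            f = fitness(X[i])
            if f < bf:
                bf, best = float(f), X[i].copy()
    return bf, best

bf, xb = rlhgs(lambda x: float(np.sum(x**2)), dim=5)
print("best fitness:", bf)
```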

Figure 2 presents the 3-D maps of some of the 23 classic benchmark functions.

Figure 2 .
Figure 2. 3-D map of some 23 classic benchmark functions. Different colors represent different solutions of the function; the marks "F1", "F2", etc. in the figure refer to the corresponding functions.


Figure 3 .
Figure 3. (a) 3-D distributions of the test functions, (b) 2-D position distributions of RLHGS, (c) RLHGS trajectories in the first dimension, (d) convergence curves of RLHGS. The lines are generated by the projection of the three-dimensional figures; different colors represent different solutions of the functions.


Figure 4 .
Figure 4. (a) balance analysis of the RLHGS, (b) balance analysis of the HGS, (c) convergence curves of RLHGS and HGS.


Figure 7 .
Figure 7. Convergence curves of compared algorithms on six classic benchmark functions.

Figure 9 .
Figure 9. Convergence curves of compared algorithms on four CEC2020 functions.

Figure 10 .
Figure 10. Structure of the tension/compression spring problem.


Figure 11 .
Figure 11. Shape of the welded beam design problem.

Figure 12 .
Figure 12. Components of the pressure vessel design problem.


Figure 13 .
Figure 13. Components of the 3-bar truss design problem.

Y.L. and A.A.H.; Writing-review & editing, S.W., H.C. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Table 1 .
Review of several classic optimization algorithms.

Table 6 .
The result of the Friedman test.

• MFO [93]: Moth-flame optimization algorithm, a novel nature-inspired heuristic paradigm proposed by Mirjalili in 2015. Its inspiration originates from the navigation method of moths in nature called transverse orientation.
• HHO [27]: Harris hawks optimization algorithm, first proposed by Heidari et al.

Table 8 .
The parameter setting of the algorithms involved in the comparison.

Table 9 .
Experiment results of RLHGS and other eight superior algorithms on 23 classic benchmark functions.

Table 10 .
Friedman test results on 23 classic benchmark functions.

Figure 6. Friedman test results on 23 classic benchmark functions.

Table 11 .
Wilcoxon signed-rank results on 23 benchmark functions.


Table 12 .
Execution time of RLHGS and other eight superior algorithms on 23 benchmark functions.

Table 13 .
Experiment results of RLHGS and other eight superior algorithms on CEC2020.

Table 14 .
Friedman test results on CEC2020.



Table 16 .
Comparison results of nine algorithms on tension/compression spring design problem.

Table 17 .
Comparison results of ten algorithms on welded beam design problem.


Table 18 .
Comparison results of ten algorithms on pressure vessel design problem.


Table 19 .
Comparison results of seven algorithms on three-bar truss design problem.