Article

An Enhanced Hunger Games Search Optimization with Application to Constrained Engineering Optimization Problems

1 Department of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou 325035, China
2 School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
* Authors to whom correspondence should be addressed.
Biomimetics 2023, 8(5), 441; https://doi.org/10.3390/biomimetics8050441
Submission received: 10 July 2023 / Revised: 7 September 2023 / Accepted: 12 September 2023 / Published: 20 September 2023

Abstract:
The Hunger Games Search (HGS) is an innovative optimizer that operates without relying on gradients and utilizes a population-based approach. It draws inspiration from the collaborative foraging activities observed in social animals in their natural habitats. However, despite its notable strengths, HGS is subject to limitations, including inadequate diversity, premature convergence, and susceptibility to local optima. To overcome these challenges, this study introduces two adjusted strategies to enhance the original HGS algorithm. The first adaptive strategy combines the Logarithmic Spiral (LS) technique with Opposition-based Learning (OBL), resulting in the LS-OBL approach. This strategy plays a pivotal role in reducing the search space and maintaining population diversity within HGS, effectively augmenting the algorithm’s exploration capabilities. The second adaptive strategy, the dynamic Rosenbrock Method (RM), contributes to HGS by adjusting the search direction and step size. This adjustment enables HGS to escape from suboptimal solutions and enhances its convergence accuracy. Combined, these two strategies form the improved algorithm proposed in this study, referred to as RLHGS. To assess the efficacy of the introduced strategies, specific experiments are designed to evaluate the impact of LS-OBL and RM on enhancing HGS performance. The experimental results unequivocally demonstrate that integrating these two strategies significantly enhances the capabilities of HGS. Furthermore, RLHGS is compared against eight state-of-the-art algorithms using 23 well-established benchmark functions and the CEC2020 test suite. The experimental results consistently indicate that RLHGS outperforms the other algorithms, securing the top rank in both test suites. This compelling evidence substantiates the superior functionality and performance of RLHGS compared to its counterparts. Moreover, RLHGS is applied to address four constrained real-world engineering optimization problems. The final results underscore the effectiveness of RLHGS in tackling such problems, further supporting its value as an efficient optimization method.

1. Introduction

Over the past few years, there have been remarkable advancements in optimization algorithms fueled by the rapid expansion of the engineering and artificial intelligence sectors [1,2]. These algorithms have broadened their capabilities to address intricate problem areas that conventional methods struggle with [3,4,5], including single-objective, multi-objective, and many-objective optimization problems [6,7]. They achieve this by utilizing evolutionary algorithms, swarm intelligence, and machine learning techniques. Integrating these algorithms with engineering and artificial intelligence has paved the way for hybrid approaches that harness the strengths of both disciplines. These advancements offer immense potential to tackle real-world challenges across diverse industries, enhance decision-making processes, optimize resource allocation, and improve overall system performance by delivering top-notch solutions for previously unsolvable problems [8,9].
Meta-heuristic Algorithms (MAs) have emerged as a crucial element in the field of artificial intelligence (AI), attracting substantial scholarly interest over the last two decades. This attention is primarily driven by their remarkable efficacy in addressing diverse practical problems, such as those encountered in economic dispatch [10], power load forecasting [11], and data reduction [12,13,14,15]. The real-world optimization problems mentioned above generally share two common characteristics: high nonlinearity and inter-relation of decision variables during the solving process [16,17]. However, these characteristics inevitably give rise to large problem spaces, which make the optimization process prone to failure [18,19]. Therefore, it is crucial to consider factors such as time, risk, efficiency, and quality during the optimization process [20]. The emergence of MAs has provided valuable inspiration to researchers in related fields. MAs offer reliability, robustness, and effective approaches for escaping local optima by mimicking various phenomena to seek optimal solutions [21]. However, such metaphors do not change the underlying mathematics; it is each optimizer's core model that determines its performance. The majority of MAs derive their inspiration from natural phenomena and can be comprehensively classified into four distinct categories: evolutionary-based algorithms, swarm intelligence-based algorithms, human behavior-based algorithms, and physics-based algorithms. Evolutionary algorithms are devised based on the observable principles of evolution in nature. Swarm Intelligence (SI) algorithms draw inspiration from evolutionary theory and collective behavior, centering on the intricate behaviors displayed by systems comprising uncomplicated agents. Human behavior-based algorithms encompass diverse facets of human behavior, including teaching, social interactions, learning, emotions, and management. Physics-based algorithms find inspiration in the laws of physics and mathematics. Table 1 provides some examples of related algorithm types.
Numerical optimization refers to finding the maximum or minimum objective value of a given problem within a defined search space. Population-based and derivative-free Swarm Intelligence (SI) optimization algorithms are widely regarded as effective solvers for complex numerical problems [37]. These algorithms utilize iterative methods to continually update individuals within a population, enhancing their adaptability to the environment and yielding acceptable optimal solutions within a reasonable timeframe. The study of SI optimization algorithms has received considerable attention in recent decades. Moreover, owing to their superior performance, SI optimization algorithms have also been used to deal with multi-objective problems [38,39,40], constrained optimization problems [41,42,43,44], image segmentation [45,46,47,48], medical disease diagnosis [49,50,51,52,53], parameter estimation of solar photovoltaic models [54,55,56,57,58], intelligent traffic management [59], etc. Although not all the solutions obtained by these SI algorithms are optimal, it can be guaranteed that high-quality solutions are acquired in a reasonable time.
Hunger Games Search (HGS) [60] is a novel algorithm proposed in 2021, designed based on the foraging behavior of animals in nature. Since its introduction, HGS has received extensive attention from scholars. AbuShanab et al. [61] used the HGS optimizer to optimize a random vector functional link (RVFL) model, successfully finding the optimal internal parameters of RVFL and boosting the model's accuracy. Nguyen et al. [62] combined HGS with an Artificial Neural Network (ANN), named HGS-ANN, for predicting ground vibration intensity. The experimental results verify that HGS-ANN performs better than other models of the same type. As in other SI algorithms, exploration and exploitation are two fundamental phases in HGS. The major work of the exploration phase is to search for promising solution locations, while the evaluation of promising solutions is completed within the exploitation stage. Comparative research shows that there are certain deficiencies in the performance of HGS in these two stages, which lead to premature convergence and make HGS prone to getting stuck in local optima. To overcome the mentioned drawbacks, several authors have enhanced HGS with efficient strategies. Xu et al. [63] improved HGS by incorporating the quantum rotation gate strategy and the Nelder-Mead simplex method, which explores the neighborhood of the optimal solution in the decision space more practically. Because HGS is easily trapped in local optima and converges slowly when solving intricate problems, Ma et al. [64] introduced chaotic mappings, greedy selection, and a vertical crossover strategy into the standard HGS; the introduced strategies were conducive to accelerating convergence velocity and enhancing search capability. A. Fathy et al. [65] introduced a non-homogeneous mutation operator into HGS, which proved effective in identifying the optimal settings for a Fractional-Order Proportional Integral Derivative (FOPID)-based Load Frequency Controller (LFC). Emam et al. [66] modified the base HGS with a local escaping procedure with Brownian motion to mitigate its performance shortcomings. A. M. Nassef et al. [67] proposed a variant of HGS with a binary tau-based crossover plan. Zhang et al. [68] presented an improved HGS (IHGS) with cube mapping and refracted opposition-based learning policies. Besides, to identify the most important genes and handle high-dimensional genetic data, Z. Chen et al. [69] proposed a novel wrapper gene selection method, the artificial bee bare-bone Hunger Games Search (ABHGS). This variant combines HGS with an idea based on artificial bee motions and a Gaussian bare-bone mechanism.
However, based on the No Free Lunch (NFL) theorem [70], no single algorithm can universally solve all types of problems. In light of this, a new variant of the HGS algorithm is proposed in this study after identifying the limitations of the original HGS. To enhance the capability of exploration and exploitation, two adapted strategies are incorporated: the adapted Logarithmic Spiral strategy combined with Opposition-based Learning (LS-OBL) [71] and the adapted Rosenbrock Method (RM) [71]. The LS-OBL strategy is key to reducing the search space and maintaining solution diversity. By incorporating LS-OBL into HGS, the algorithm becomes more efficient at exploring different regions of the problem space and increasing the variety of solutions. On the other hand, the adapted RM strategy aids in overcoming local optima; it assists HGS in bypassing suboptimal solutions and improves its ability to converge toward better solutions.
The main contributions of this paper can be summarized as follows:
  • The introduced strategies enhance the exploration and exploitation processes of the original HGS algorithm when solving optimization problems.
  • To evaluate the efficacy of the proposed approach, RLHGS is compared with eight other state-of-the-art algorithms on 23 classical benchmark functions and 10 benchmark functions from CEC2020. The comparative evaluation of these experiments demonstrates the superiority of RLHGS in terms of optimization performance.
  • The proposed RLHGS algorithm addresses four constrained real-world problems, showcasing its practical applicability and effectiveness in tackling complex engineering challenges.
  • The experiment results of RLHGS indicate excellent accuracy and reliable performance.
The remainder of this paper is organized as follows: Section 2 describes the standard HGS algorithm and the embedded strategies used in this study. Section 3 elaborates on the structure of the proposed RLHGS algorithm and displays its flowchart. Section 4 clarifies the experiment settings. Section 5 conducts a qualitative analysis and three experiments to demonstrate the improvement achieved by the embedded strategies, compare the performance of RLHGS with eight competing algorithms, and showcase its ability to handle practical engineering applications. Section 6 provides the conclusion and discusses prospects for future work.

2. Preliminaries

2.1. Description of Hunger Games Search

Hunger serves as one of the most immediate homeostatic motivations in the lives of animals, influencing their behavioral decisions and actions. This fundamental motivation can even surpass and impact other competing drive states, such as thirst, feelings of insecurity, or fear of predators [72]. According to the literature [73], it can be concluded that as hunger increases, an animal's food cravings also increase. In situations where food sources are limited, a logical game emerges among hungry animals, where participants strive to secure victory and gain access to food sources for better chances of survival [74]. Building upon these premises, Hunger Games Search (HGS) was proposed.

2.1.1. Approach Food

In nature, animals often engage in cooperative foraging behaviors, although this is not always the case [75]. There are instances where individuals choose to act alone. Based on studies on animal predatory behavior, Equation (1) introduces three position-updating modes that simulate the behavior of animals when they are in close proximity to food sources.
$$X(t+1) = \begin{cases} \text{Game 1: } X(t)\cdot\left(1 + randn(1)\right), & r_1 < l \\ \text{Game 2: } W_1 \cdot X_b + R \cdot W_2 \cdot \left|X_b - X(t)\right|, & r_1 > l,\ r_2 > E \\ \text{Game 3: } W_1 \cdot X_b - R \cdot W_2 \cdot \left|X_b - X(t)\right|, & r_1 > l,\ r_2 < E \end{cases} \quad (1)$$
In the above formula, $t$ denotes the current iteration and $X(t)$ represents the position of each individual; $X_b$ indicates the position of the best individual in the current iteration; $randn(1)$ is a value drawn from the normal distribution; $W_1$ and $W_2$ are hunger weights; $r_1$ and $r_2$ are two random values within the range [0, 1]; $l$ is a significant control parameter of HGS, which influences its overall performance. $E$ is a variation control for all positions, defined as follows:
$$E = \operatorname{sech}\left(\left|F_i - BF\right|\right) \quad (2)$$
where $F_i$ records the fitness value of the $i$-th individual and $BF$ denotes the best fitness acquired in the current iteration. The hyperbolic secant function $\operatorname{sech}$ used in this study is defined as:
$$\operatorname{sech}(x) = \frac{2}{e^{x} + e^{-x}} \quad (3)$$
The formulas for $R$ and its related parameters are as follows:
$$R = 2 \times shrink \times rand - shrink \quad (4)$$
$$shrink = 2 \times \left(1 - \frac{t}{T}\right) \quad (5)$$
where $rand$ represents a random value within [0, 1] and $T$ is the maximum number of iterations.
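For readers who prefer code to notation, the short Python sketch below mirrors the position-update logic of Equations (1)–(5) for a single agent. It is a minimal illustration assuming a minimization setting; the function and variable names (e.g., approach_food) are ours rather than part of any official HGS implementation.

```python
import numpy as np

def sech(x):
    """Hyperbolic secant of Equation (3)."""
    return 2.0 / (np.exp(x) + np.exp(-x))

def approach_food(X, Xb, W1, W2, Fi, BF, l, t, T, rng=np.random.default_rng()):
    """One position update of Equation (1); a sketch, not the reference code.

    X  : (D,) current position of one individual
    Xb : (D,) best position found in the current iteration
    W1, W2 : (D,) hunger weights from Equations (6) and (7)
    Fi, BF : fitness of this individual and best fitness of the iteration
    l  : control parameter of HGS; t, T : current and maximum iteration
    """
    E = sech(abs(Fi - BF))                       # Equation (2): variation control
    shrink = 2.0 * (1.0 - t / T)                 # Equation (5)
    R = 2.0 * shrink * rng.random() - shrink     # Equation (4): R in [-shrink, shrink]
    r1, r2 = rng.random(), rng.random()

    if r1 < l:                                   # Game 1: random walk around itself
        return X * (1.0 + rng.standard_normal())
    if r2 > E:                                   # Game 2: move toward the best agent
        return W1 * Xb + R * W2 * np.abs(Xb - X)
    return W1 * Xb - R * W2 * np.abs(Xb - X)     # Game 3: move away from the best agent
```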

2.1.2. Hunger Role

The starvation characteristic of individuals is the core content of the HGS algorithm. This subsection plans to present the mathematical model of this characteristic.
$$W_1(i) = \begin{cases} hungry(i) \cdot \dfrac{N}{Sum\_hungry} \times r_4, & r_3 < l \\ 1, & r_3 > l \end{cases} \quad (6)$$
$$W_2(i) = \left(1 - \exp\left(-\dfrac{\left|hungry(i)\right|}{Sum\_hungry}\right)\right) \times r_5 \times 2 \quad (7)$$
Equations (6) and (7) show how the weight values are calculated. In the formulas, $hungry$ indicates the hunger condition of each individual, and $Sum\_hungry$ stands for the sum of $hungry$ over the population; $N$ is the total number of individuals; $r_3$, $r_4$, and $r_5$ are random numbers within [0, 1]. The detailed expression of $hungry(i)$ is as follows:
$$hungry(i) = \begin{cases} 0, & AllFitness(i) = BF \\ hungry(i) + H, & AllFitness(i) \neq BF \end{cases} \quad (8)$$
where $AllFitness$ stores the fitness values of all individuals generated in the current iteration and $AllFitness(i)$ indicates the fitness of each individual. Notably, when an individual attains the best fitness, its corresponding $hungry(i)$ value is reset to 0; otherwise, a new hunger increment $H$ is added to the current hunger value. $H$ is defined in Equation (9):
$$H = \begin{cases} LH \times (1 + r), & TH < LH \\ TH, & TH \geq LH \end{cases} \quad (9)$$
$$TH = \frac{F(i) - BF}{WF - BF} \times r_6 \times 2 \times (UB - LB) \quad (10)$$
where the sensation of hunger [76] $H$ in Equation (9) is restricted by a lower bound $LH$ representing the lower limit of hunger; $LH$ is set to 100 in this study, consistent with the settings in the literature [69,77]. $BF$ and $WF$ in Equation (10) denote the best and worst fitness values acquired in the current iteration, respectively; $F(i)$ stands for the fitness of each individual; $F(i) - BF$ denotes the threshold of food consumption necessary for an individual to attain a state of complete satiation; $WF - BF$ denotes the maximal foraging ability of an individual in the current iteration, so $\frac{F(i) - BF}{WF - BF}$ stands for the hunger ratio; $r_6$ is a random value within [0, 1]; $UB$ and $LB$ are the upper and lower limits of the dimensions, respectively. Algorithm 1 displays the pseudo-code of the HGS algorithm.
Algorithm 1: Pseudo-code of HGS.
Initialize the parameters N, T, l, D, Sum_hungry
Initialize the population $X_i$ ($i = 1, 2, \ldots, n$)
While ($t \leq T$)
 Calculate the initial fitness of all populations
 Update BF, WF, and Xb
 Calculate hungry by using Equation (8)
 Calculate W1 and W2 by using Equations (6) and (7), respectively
For i = 1 to N
    If (rand < 0.3)
     Update the position of the current search agent by using Equation (1)
    Else
     Calculate E by using Equation (2)
     Update R using Equation (4)
      Update the position of the current search agent by Equation (1)
   End if
End For
t = t + 1
End While
Return BF and Xb
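As a companion to Algorithm 1, the following Python sketch shows how the hunger bookkeeping of Equations (6)–(10) could be implemented for a whole population. The epsilon guards against division by zero and the vectorized random draws are our assumptions, not details prescribed by the original HGS.

```python
import numpy as np

def update_hunger_and_weights(fitness, hungry, UB, LB, l, LH=100.0,
                              rng=np.random.default_rng()):
    """Hunger and weight update of Equations (6)-(10); an illustrative sketch.

    fitness : (N,) fitness values of all individuals in the current iteration
    hungry  : (N,) running hunger values, updated per Equation (8)
    UB, LB  : scalar upper and lower bounds of the search space
    """
    N = fitness.size
    BF, WF = fitness.min(), fitness.max()        # best / worst fitness this iteration
    eps = 1e-300                                 # guard against division by zero (assumption)

    for i in range(N):
        if fitness[i] == BF:
            hungry[i] = 0.0                      # Equation (8): the best agent feels no hunger
        else:
            TH = (fitness[i] - BF) / (WF - BF + eps) * rng.random() * 2.0 * (UB - LB)  # Eq. (10)
            H = LH * (1.0 + rng.random()) if TH < LH else TH                           # Eq. (9)
            hungry[i] += H

    sum_hungry = hungry.sum() + eps
    W1 = np.where(rng.random(N) < l,
                  hungry * N / sum_hungry * rng.random(N), 1.0)                        # Eq. (6)
    W2 = (1.0 - np.exp(-np.abs(hungry) / sum_hungry)) * rng.random(N) * 2.0            # Eq. (7)
    return hungry, W1, W2
```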

2.2. The Adapted Logarithmic Spiral Strategy

The Logarithmic Spiral (LS) [78] strategy, an effective search method in the exploration phase, is inspired by the spiral phenomena existing in nature [79]. The LS strategy can be observed in the well-known Whale Optimization Algorithm (WOA) and Moth-Flame Optimization (MFO) algorithm, both of which use Equation (11) to mimic a logarithmic spiral trajectory in their algorithmic logic.
$$X(t+1) = \left|X_b - X(t)\right| \times e^{b r_7} \times \cos\left(2\pi r_7\right) + X_b \quad (11)$$
In Equation (11), $X_b$ stands for the best solution location obtained in the current iteration; $X(t)$ and $X(t+1)$ denote the position vectors at the $t$-th and $(t+1)$-th iterations, respectively; $b$ is a constant used to define the shape of the logarithmic spiral and is set to 1; $r_7$ is a random value in the range [−1, 1].
An adapted LS strategy was proposed in the literature [71] to achieve a wider and more plausible range of exploration. This novel method combines the original LS strategy with Opposition-based Learning (OBL) [80], named LS-OBL strategy. The idea of this modified strategy is to incorporate the LS spatial trajectory between iteration-based and opposite-based solutions to boost the algorithm’s optimal efficiency.
The implementation of the LS-OBL strategy is mainly divided into three parts. Firstly, OBL generates an opposite solution $X_{op}$ based on the $X_b$ obtained by the original HGS in the current iteration. The mathematical model of this part is as follows:
$$X_{op} = rand \times (UB + LB) - X_b \quad (12)$$
where $X_{op}$ is the opposite position vector of $X_b$.
Then, a dynamic logarithmic spiral space between $X_b$ and $X_{op}$ is formed in each iteration, as modeled in Equation (13):
$$X(t+1) = \left|X_b - X_{op}\right| \times e^{b r_7} \times \cos\left(2\pi r_7\right) + X_b \quad (13)$$
Lastly, the search agent achieves random exploration throughout the whole logarithmic spiral space according to the parameter $s$, which is defined in Equation (14):
$$s = 2 \times rand(0, 1) - 1 \quad (14)$$
where $rand(0, 1)$ denotes a random number generated from (0, 1).
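A compact Python sketch of the LS-OBL step is given below, assuming scalar bounds and a minimization context; the paper defines the scatter parameter s in Equation (14) but does not spell out exactly how it is coupled to Equation (13), so its use as a multiplicative factor here is our assumption.

```python
import numpy as np

def ls_obl_update(Xb, UB, LB, b=1.0, rng=np.random.default_rng()):
    """Adapted LS-OBL exploration step of Equations (12)-(14); an illustrative sketch.

    Xb : (D,) best position of the current iteration
    UB, LB : scalar (or broadcastable) bounds of the search space
    b  : spiral shape constant, set to 1 as in the paper
    """
    X_op = rng.random(Xb.shape) * (UB + LB) - Xb        # Equation (12): opposite solution
    r7 = rng.uniform(-1.0, 1.0)                         # random value in [-1, 1]
    s = 2.0 * rng.random() - 1.0                        # Equation (14): random scatter factor
    # Equation (13): logarithmic spiral between Xb and its opposite; scaling the spiral
    # term by s (an assumption) lets the agent land anywhere inside the spiral space.
    return s * np.abs(Xb - X_op) * np.exp(b * r7) * np.cos(2.0 * np.pi * r7) + Xb
```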

2.3. The Adapted Rosenbrock Method Strategy

The Rosenbrock Method (RM) [81] is a reliable local search method proposed by Rosenbrock. Because the basic RM strategy performs poorly on multimodal problems [82], an enhanced variant based on the RM strategy was proposed by Li et al. [71] in 2021, which makes targeted improvements to the basic RM's initial step size $\delta_i$ ($i = 1, 2, \ldots, n$) and termination conditions. Unlike the constant $\delta_i$ of the traditional RM, the $\delta_i$ in the adapted RM strategy changes dynamically with the iterations. The detailed description of $\delta_i$ is as follows:
$$\delta_i = \sqrt{\frac{\sum_{k=1}^{n}\left(X_k^i - \bar{X}^i\right)^2}{n}} + \varepsilon_1, \quad i = 1, 2, \ldots, d \quad (15)$$
$$\bar{X}^i = \frac{\sum_{k=1}^{n} X_k^i}{n} \quad (16)$$
where $n$ represents the total number of candidate individuals and $d$ represents the dimension of the population. $X_k^i$ stands for the $i$-th dimension of the $k$-th individual, and $\bar{X}^i$ stands for the average of the candidate individuals in the $i$-th dimension, as defined in Equation (16). $\varepsilon_1$ is a constant equal to $1.0 \times 10^{-150}$, which prevents the initial step size from being 0.
What's more, the two-loop termination condition of the original RM strategy is changed, introducing the parameters $\varepsilon_1$, $\varepsilon_2$, $k_1$, and $k_2$. Specifically, $\varepsilon_1$ and $\varepsilon_2$ are two parameters controlling the internal and external loops, respectively; the setting of $\varepsilon_1$ is mentioned above, and $\varepsilon_2$ is set to $1.0 \times 10^{-4}$, consistent with the settings in the literature [82], while $k_1$ and $k_2$ play the role of loop counters. Algorithm 2 presents the pseudo-code of the improved RM strategy.
Algorithm 2: Pseudo-code of the adapted RM strategy.
Input the search agent's position $X_i$ ($i = 1, 2, \ldots, n$), the population position $X$, $D$.
Initialize the orthonormal basis $d_i$ ($i = 1, 2, \ldots, n$), the step size adjustment values $\alpha$ and $\beta$, the ending instructions $\varepsilon_1$, $\varepsilon_2$, $N$, $k_2 = 0$.
Initialize the step size $\delta_i$ ($i = 1, 2, \ldots, n$) by using Equation (15), set $X_k = X_i$.
While (($\delta_{min} \geq \varepsilon_1$) or ($k_2 < 2N$))
 Set $X = X_k$, $k_1 = 0$, $Z = X$
 While ($k_1 < N$)
  For $i$ = 1 to $N$
   $Y = X + d_i \delta_i$
   If ($f(Y) < f(X)$)
    $X = Y$
    $\delta_i = \alpha \delta_i$ ($\alpha > 1$)
   Else
    $\delta_i = \beta \delta_i$ ($-1 < \beta < 0$)
   End If
  End For
  If ($\left|f(Z) - f(X)\right| / \left(\left|f(X)\right| + \varepsilon_1\right) < \varepsilon_2$)
   $k_2 = k_2 + 1$
  Else
   $k_2 = 0$
  End If
  $k_1 = k_1 + 1$
 End While
 If ($f(X) < f(X_k)$)
  $X_k = X$
  Update the orthonormal basis $d_i$.
 End If
End While
Return $X_b$
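To complement the pseudo-code of Algorithm 2, the sketch below re-expresses the adapted RM in Python. The step-size adjustment values (alpha = 3, beta = -0.5), the choice N = D, and the outer-iteration cap are our assumptions for a runnable example; the paper fixes only the ranges alpha > 1 and -1 < beta < 0, and the re-orthonormalization of the basis is omitted here for brevity.

```python
import numpy as np

def adapted_rm(f, x0, X_pop, alpha=3.0, beta=-0.5,
               eps1=1e-150, eps2=1e-4, max_outer=100):
    """Adapted Rosenbrock Method local search (Algorithm 2); an illustrative sketch.

    f     : objective function to minimize
    x0    : (D,) starting point, typically a search agent's position
    X_pop : (n, D) current population, used for the adaptive step size of Eqs. (15)-(16)
    """
    n, D = X_pop.shape
    N = D                                               # assumed inner-loop length
    basis = np.eye(D)                                   # orthonormal search directions d_i
    delta = np.sqrt(((X_pop - X_pop.mean(axis=0)) ** 2).mean(axis=0)) + eps1   # Eq. (15)
    xk, k2 = x0.copy(), 0

    for _ in range(max_outer):                          # practical cap, not in the paper
        if not (delta.min() >= eps1 or k2 < 2 * N):
            break
        x, z, k1 = xk.copy(), xk.copy(), 0
        while k1 < N:
            for i in range(D):
                y = x + basis[i] * delta[i]
                if f(y) < f(x):
                    x = y
                    delta[i] *= alpha                   # success: enlarge the step
                else:
                    delta[i] *= beta                    # failure: shrink and reverse direction
            if abs(f(z) - f(x)) / (abs(f(x)) + eps1) < eps2:
                k2 += 1                                 # stagnation counter
            else:
                k2 = 0
            k1 += 1
        if f(x) < f(xk):
            xk = x                                      # keep the improved point
            # a full implementation would rebuild the orthonormal basis here
    return xk
```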

3. Description of Proposed RLHGS

3.1. Motivation for This Work

Conventional swarm intelligence algorithms often suffer from issues such as premature convergence, slow convergence, and easy trapping in local optima. The design of combined or hybrid algorithms can mitigate these problems to a certain extent. By maintaining population diversity and extending the flexibility of the algorithm to speed up convergence, hybridizing specific optimization strategies can strike a good balance between the exploration and exploitation phases.
There is no denying that Hunger Games Search (HGS) is a good population-based optimizer. However, when dealing with challenging optimization problems, the classic HGS sometimes shows premature convergence and stagnation. Therefore, finding approaches that enhance solution diversity and exploitation capability is crucial. This study incorporates two effective strategies into HGS: adaptations based on the Logarithmic Spiral with Opposition-based Learning (LS-OBL) and the Rosenbrock Method (RM). On the one hand, LS-OBL is an exploration method built on the ideas of the Logarithmic Spiral and Opposition-based Learning. The idea behind this strategy is to generate a batch of new solutions through OBL and then construct a logarithmic spiral trajectory between the current solution and the OBL-based solution. LS-OBL effectively alleviates the defects of the classic HGS in exploration by properly narrowing the search space and increasing the diversity of solutions. On the other hand, the adapted RM method is employed to optimize the exploitation process. By adjusting the search direction and step size, RM helps the search agent avoid getting trapped in local optima, ensuring stronger convergence towards globally optimal results.

3.2. Flowchart and Pseudo-Code of RLHGS

The flowchart and pseudo-code are shown in Figure 1 and Algorithm 3, respectively. Notably, the execution of the RM strategy is conditional: because of the high computational cost of RM, its execution is gated by a parameter $prob$, which balances RM performance against time consumption. This parameter is defined as follows:
$$prob_i = \frac{P_{no}}{N} \times rand(0, 1), \quad i = 1, 2, \ldots, n \quad (17)$$
where $N$ denotes the size of the population; $P_{no}$ denotes the number of individuals that are weaker than the optimal solution in the current iteration; $rand$ is a random number drawn from (0, 1). When $prob_i$ is higher than 0.8, the RM strategy is invoked to further refine the search for the best solution; otherwise, the exploitation strategy of the standard HGS is performed.
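The gating rule of Equation (17) can be written in a few lines; the snippet below assumes a minimization setting, and the helper name rm_trigger is purely illustrative.

```python
import numpy as np

def rm_trigger(fitness, best_fitness, rng=np.random.default_rng()):
    """Equation (17): decide whether the costly RM refinement is invoked for an agent."""
    N = fitness.size
    P_no = np.count_nonzero(fitness > best_fitness)   # agents weaker than the current best
    prob = P_no / N * rng.random()                     # Equation (17)
    return prob > 0.8                                  # invoke RM only above the 0.8 threshold
```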
Algorithm 3: Pseudo-code of RLHGS.
Initialize the parameters $N$, $T$, $l$, $D$, $Sum\_hungry$
Initialize the population $X_i$ ($i = 1, 2, \ldots, n$)
While ($t \leq T$)
 Calculate the initial fitness of all populations
 Update $BF$, $WF$, and $X_b$
 Calculate $hungry$ by using Equation (8)
 Calculate $W_1$ and $W_2$ by using Equations (6) and (7), respectively
 For $i$ = 1 to $N$
  If ($rand < 0.3$)
   Update the position of the current search agent by using the adapted LS-OBL strategy
  Else
   Calculate $E$ by using Equation (2)
   Update $R$ using Equation (4)
   Update the position of the current search agent by Equation (1)
   If ($prob > 0.8$)
    Update the position of the current search agent by using the adapted RM strategy
   End If
  End If
 End For
 $t = t + 1$
End While
Return $BF$ and $X_b$

3.3. Computational Complexity Analysis

Computational complexity is a measure of the time and resources required by an algorithm to execute. In the original HGS, the computational complexity mainly depends on the following aspects: population initialization, fitness value calculation, sorting, and population updating. In these processes, $N$, $D$, and $T$ represent the population size, the dimension, and the maximum number of iterations, respectively. Specifically, in the initial stage, the computational complexity of population initialization is $O(N)$, whereas the computational complexity of fitness value calculation is $O(T \times N)$. In the worst case, the computational complexity of sorting is $O(T \times N \times \log N)$. Population updating includes hunger updating, weight updating, and location updating, with computational complexities of $O(T \times N)$, $O(T \times N \times D)$, and $O(T \times N \times D)$, respectively. Thus, the computational complexity of the original HGS is $O(N \times (1 + T \times (2 + \log N + 2 \times D)))$. The adapted algorithm RLHGS is compounded from the same aspects, but owing to the addition of the LS-OBL and RM operators, it differs in the process of updating the population positions. However, due to the random operations of the RM strategy, it is hard to assess the exact computational complexity of RLHGS. Hence, evaluating the algorithm's computational cost necessitates taking the running time of the code into account. This study evaluates the actual computational cost of RLHGS and the other comparative algorithms by recording their average time cost on the 23 classic benchmark functions. The average running cost comparison is listed in Section 5.3.1, and the unit is seconds.

4. Designs for Experiments

4.1. Details of Benchmark Functions

To mitigate the impact of randomness in algorithms, it is necessary to employ appropriate and comprehensive test functions and case studies. This ensures that superior results are not merely coincidental but consistently achieved. Thus, a sufficient evaluation is conducted using 23 classic benchmarks [83] and the CEC2020 benchmark test suite [84]. These benchmarks serve as crucial tools for testing algorithm performance. The 23 classic benchmark functions consist of unimodal, multimodal, and fixed-dimensional multimodal functions. Specifically, F1–F13 represent high-dimensional problems, including unimodal functions (F1–F5), a step function with one minimum value (F6), a noisy quartic function (F7), and multimodal functions with multiple local optima (F8–F13). Besides, F14–F23 are low-dimensional functions with only a few local minima, which enables the assessment of the algorithm's effectiveness in searching for near-global optima. Detailed information about these benchmark functions can be found in Table 2. What's more, to ascertain RLHGS's efficacy, it is also tested on the CEC2020 benchmark test suite, which includes one unimodal function, three multimodal functions, three hybrid functions, and three composition functions. Table 3 provides further details about the CEC2020 benchmark functions. Notably, in both Table 2 and Table 3, $D$ means the dimension of the function, $R$ means the domain of the function, and $f_{min}$ means the optimum solution of the function. Figure 2 presents 3-D maps of some of the 23 classic benchmark functions.

4.2. Configuration of Experiment Environment

In this study, two kinds of experiments on the benchmark test suites are conducted: one aims to demonstrate the effectiveness of the added strategies, and the other focuses on showcasing the superiority of RLHGS through a comparison with other powerful algorithms. To maintain fair comparisons, we have adhered to the principles suggested in previous AI studies [85,86,87], which emphasize the importance of employing uniform conditions during the assessment of different methodologies [88,89]. Hence, the population size in these two experiments is set to 30, and the functions are evaluated with dimension D = 30. The maximum number of iterations T is 300,000. To minimize random error, all involved algorithms are run 30 times independently on all benchmark functions.
All experiments, including the constrained real-world engineering experiments, are conducted on a PC running the 64-bit Windows 11 operating system, with an Intel(R) Core(TM) i5-9400 CPU at a main frequency of 2.90 GHz and 8.00 GB of memory; the software is MATLAB R2018b.

4.3. Statistical Analysis Methods

Evaluating the progress made by a new proposed algorithm compared to existing techniques is a specific challenge in experimental investigations. In recent years, researchers have recognized the importance of statistical analysis in assessing the performance of novel algorithms. In this study, the effectiveness of RLHGS is evaluated through several evaluation criteria.
Firstly, the average value (Avg) and standard deviation (Std) of the optimal function value are used to evaluate the performance of the algorithms. Avg is applied to evaluate the global search ability and the quality of the solution, while Std is devoted to evaluating the robustness of the algorithm. The ranking of each algorithm based on Avg is provided to reflect its performance on each function. In addition, the Friedman test and the Wilcoxon signed-rank test are also used as evaluation criteria. In 1937, Milton Friedman first developed the concept of the Friedman test [90], which was later used to assess the performance of several different algorithms on different kinds of test functions. After its effectiveness was proven in various studies, the Friedman test came to be regarded as a suitable method for model performance evaluation. The Wilcoxon signed-rank test [91] was first proposed by the U.S. statistician Frank Wilcoxon. As a hypothesis-testing method, the Wilcoxon signed-rank test has been widely used to verify an algorithm's statistical consistency since it was put forward. The principle of this method is to judge which algorithm is better by comparing the significant differences between two samples; it indicates whether one algorithm is superior to another by calculating the $p$-value, with the significance level set to 0.05. The sign "+/−/=" in the tables is utilized to indicate whether the compared algorithm's performance is statistically better than, worse than, or comparable to that of RLHGS.
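Both tests are available in standard statistical libraries; the snippet below is a hedged illustration of how the reported p-values and mean ranks could be reproduced in Python from 30 independent runs (the arrays here are random placeholders, not the paper's data, and the direction of superiority is still read from the Avg values as described above).

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
# Placeholder results: 30 independent best values per algorithm on one function.
results = {
    "RLHGS": rng.random(30),
    "HGS":   rng.random(30) + 0.1,
    "CS":    rng.random(30) + 0.2,
}

# Friedman test over all algorithms (a lower mean rank indicates better overall performance).
stat, p_friedman = friedmanchisquare(*results.values())
print(f"Friedman statistic = {stat:.3f}, p = {p_friedman:.4f}")

# Pairwise Wilcoxon signed-rank tests of RLHGS against each competitor at the 0.05 level.
for name in ("HGS", "CS"):
    _, p = wilcoxon(results["RLHGS"], results[name])
    print(f"RLHGS vs {name}: p = {p:.4f} "
          f"({'significant difference' if p < 0.05 else 'comparable'})")
```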

5. Result and Discussion

5.1. Qualitative Analysis

Exploration and exploitation are two crucial processes in SI algorithms. The exploration process primarily occurs in the early stages of algorithm execution, with the main aim of conducting a comprehensive search across the feasible domain space to identify potential regions and optimal solutions. During this phase, the algorithm would place significant emphasis on global search, which expands the search scope and diversifies search strategies to discover new areas that may contain better solutions. Exploration presents an opportunity to venture into uncharted territories and acts as a foundation for the subsequent exploitation process. The goal of the exploitation is to delve into known solution spaces and enhance the quality of candidate solutions through thorough searches. This stage focuses more on specific areas that are considered to have a higher probability of obtaining better solutions. By using more refined search strategies and utilizing prior knowledge, the exploitation phase can perform local optimizations around the current solution to obtain higher-quality solutions. Its major focus revolves around enhancing optimization performance to achieve swift convergence towards the optimal or approximate optimal solution.
To evaluate the exploration and exploitation processes of RLHGS, this subsection conducts a qualitative analysis. Figure 3 presents the qualitative results of RLHGS on several functions from the CEC2020 benchmark test suite, encompassing three types: unimodal function (F1), multimodal functions (F2 and F3), and composite function (F8). In Figure 3, column (a) presents the 3-D position distribution, showcasing the nature of four functions. Column (b) illustrates the 2-D spatial distribution of search history trajectories, providing insights into the position and dispersion of the population throughout the iteration process. The red dot in the image represents the global optimal value. Upon observing the graphs in this column, it becomes apparent that the population’s search trajectory almost revolves around the red dot. This observation suggests that the search range of RLHGS is both reasonable and effective. Column (c) showcases the motion trajectory of the first individual in the first dimension. It exhibits fluctuations during the initial stage of the search but ultimately converges towards the optimal value in later stages. This behavior can be attributed to the algorithm’s continuous pursuit of higher-quality solutions during the exploration phase, underscoring RLHGS’s adaptability and exceptional exploration capability. However, although the graphs in columns (b) and (c) indicate a trend for individuals of RLHGS to explore promising areas throughout the search space and ultimately utilize the best solution, the convergence curve has not been observed or proved. Column (d) records the convergence curves of RLHGS, revealing the trend of changes in the optimal fitness value and verifying the capability of RLHGS in obtaining a near-optimal solution throughout the whole iteration.
However, when the processes of exploration and exploitation are not balanced, the optimization performance may not meet expectations. For instance, if the algorithm only possesses strong exploratory capabilities, it may yield high-quality solutions but at a slower convergence speed. On the other hand, if the algorithm leans towards exploitation, the convergence speed may improve, yet there is a higher risk of getting trapped in local optima. Hence, achieving a delicate balance between the exploration and exploitation stages becomes crucial for enhancing algorithm performance.
To further examine the impact of LS-OBL and RM on the exploration and exploitation processes, this study conducts a balance analysis and a comprehensive discussion of these two processes in RLHGS and HGS. The corresponding results are shown in Figure 4. Notably, the %EPL and %EPT indicators in column (a) represent the proportions of the exploration and exploitation processes throughout the entire execution, calculated by Equations (18)–(21). In the equations, $Div$ refers to individual diversity, $Div_{max}$ indicates the maximum individual diversity, $Div_j$ represents the $j$-th dimensional diversity of an individual, and $n$ denotes the total number of individuals in the population. $D$ represents the function's dimension, $x_i^j$ represents the $j$-th dimension of the $i$-th individual, and $\operatorname{median}(x^j)$ signifies the median value of the $j$-th dimension across all individuals.
$$\%EPL = \frac{Div}{Div_{max}} \times 100\% \quad (18)$$
$$\%EPT = \frac{\left|Div - Div_{max}\right|}{Div_{max}} \times 100\% \quad (19)$$
$$Div_j = \frac{1}{n}\sum_{i=1}^{n}\left|\operatorname{median}\left(x^j\right) - x_i^j\right| \quad (20)$$
$$Div = \frac{1}{D}\sum_{j=1}^{D} Div_j \quad (21)$$
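The sketch below shows one way to compute these indicators in Python; Div_max is taken as the maximum diversity observed over the run, which matches the definitions above, while the array shapes and function names are our assumptions.

```python
import numpy as np

def population_diversity(X):
    """Equations (20)-(21): dimension-wise and overall population diversity.

    X : (n, D) array of the population's positions at one iteration.
    """
    div_j = np.abs(np.median(X, axis=0) - X).mean(axis=0)   # Equation (20), per dimension
    return div_j.mean()                                       # Equation (21): Div

def exploration_exploitation(div_history):
    """Equations (18)-(19): %EPL and %EPT from the per-iteration diversity values."""
    div_history = np.asarray(div_history)
    div_max = div_history.max()
    epl = div_history / div_max * 100.0                       # exploration percentage
    ept = np.abs(div_history - div_max) / div_max * 100.0     # exploitation percentage
    return epl, ept
```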
As shown in Figure 4, columns (a) and (b) illustrate the exploration and exploitation balance diagrams throughout the execution process, showcasing trend curves representing the exploration and exploitation stages. In addition to the change curves of the two processes, an incremental–decremental curve is added to reflect the algorithm's level of exploration. If global search outweighs local development during algorithm execution, the curve exhibits an upward trend; conversely, if local development dominates, the curve demonstrates a downward trend. In detail, upon analyzing the graphs in columns (a) and (b), it becomes apparent that the exploration and exploitation process in RLHGS has shown significant improvement compared to HGS. On the F1 function, %EPL increased from 8.1273% to 16.7709%, an improvement of 8.6436 percentage points. On the F2 function, %EPL increased from 8.6423% to 24.202%, representing a boost of 15.5597 percentage points. On the F3 function, %EPL rose from 3.1199% to 15.4554%, an increase of 12.3355 percentage points. Similarly, on the F8 function, %EPL surged from 6.125% to 22.7202%, marking an increase of 16.5952 percentage points. These numerical changes in %EPL and %EPT, together with the convergence curves generated by RLHGS and HGS in column (c), demonstrate that the inclusion of the LS-OBL and RM strategies has introduced a certain degree of balance between the exploration and exploitation stages.

5.2. Inspection of Improvement Effect

Although the aforementioned results provide evidence and validation of RLHGS's high performance, more specific information is needed to confidently confirm whether the added mechanisms effectively improve the performance of HGS. In this experiment, four algorithms participate in the comparison. Table 4 lists an intuitive description of the compared algorithms, where '1' indicates that a strategy is embedded and '0' indicates that it is not adopted. All the algorithms are used to handle the ten benchmark functions from CEC2020.
Table 5 presents the average value (Avg) and standard deviation (Std) of the optimal function values of RLHGS, RHGS, LHGS, and HGS. Upon observing the rankings in Table 5, it is evident that RLHGS obtains the highest number of optimal values. Also, RLHGS exhibits a remarkable probability of approximately 80% of achieving the best performance across the 10 test functions. Specifically, the outstanding performance of RLHGS is mainly reflected in the unimodal function (F1), multimodal functions (F2–F4), hybrid functions (F5–F7), and composition function (F8). Though slightly behind RHGS and HGS on F9 and F10, RLHGS still ranks first and obtains the lowest average value in the Friedman test shown in Table 6. Such a result not only indicates that RLHGS has good global search ability and higher-quality solutions but also that the robustness of the algorithm is better than that of the compared algorithms. What's more, it can be noticed that the original HGS is only in third place, lagging behind RLHGS in the ranking, which directly verifies that the embedded strategies have a good effect on promoting the optimization capability of HGS.
A more intuitive comparison can be observed in Figure 5. Looking at the convergence curves of F1, F2, F4, and F6, it can be concluded that neither the LS-OBL strategy nor the adapted RM strategy alone always improves the HGS method effectively, but when the two are introduced simultaneously, the improvement is obvious. What's more, the excellent performance of RLHGS can also be seen from the convergence curves of F3, F5, F7, and F8. The Wilcoxon signed-rank results in Table 7 also support the above conclusion.
In summary, the LS-OBL and adapted RM strategies work synergistically to overcome the limitations of the original algorithm, improving its overall performance in solving complex optimization problems.

5.3. Comparison with Eight Superior Algorithms

This subsection mainly introduces the experiment of RLHGS with eight state-of-the-art algorithms. Table 8 shows the parameter configuration of the algorithms involved in comparison and the brief introductions of these algorithms are as follows:
  • CS [92]: Cuckoo search algorithm, a powerful algorithm that was presented by Gandomi et al. in 2013, the internal logic of the algorithm is based on the brood parasitism of cuckoo species.
  • MFO [93]: Moth-flame optimization algorithm, a novel nature-inspired heuristic paradigm proposed by Mirjalili in 2015. The inspiration for this algorithm originates from the navigation method of moths in nature called transverse orientation.
  • HHO [27]: Harris Hawks optimization algorithm was first proposed by Heidari et al. in 2019, simulating Harris hawks’ hunting behavior.
  • SSA [94]: Salp Swarm Algorithm is a bio-inspired optimization algorithm that was developed by Mirjalili et al. in 2017. The idea is based on the swarming mechanism of salps.
  • JADE [95]: An adaptive differential evolution algorithm designed by Zhang et al. in 2009, which implements a new mutation strategy, "DE/current-to-pbest", with an optional external archive and adaptively updated control parameters in the standard differential evolution algorithm.
  • ALCPSO [96]: An enhanced version of particle swarm optimization proposed by Chen et al. in 2013, combined with an aging leader and challengers mechanism.
  • SCGWO [97]: A variant of the grey wolf optimization algorithm proposed by Hu et al. in 2021, which introduces improved spread and chaotic local search strategies into the standard grey wolf optimization.
  • RDWOA [98]: An improved meta-heuristic algorithm based on the original whale optimization algorithm developed in 2019, which is equipped with a random spare strategy and double adaptive weight.

5.3.1. Benchmark Function Set I: 23 Classic Test Functions

To validate the feasibility of RLHGS, RLHGS together with CS, MFO, HHO, SSA, JADE, ALCPSO, SCGWO, and RDWOA is arranged to handle the 23 classical numerical optimization problems in this subsection. The comparison results are presented in Table 9. According to the Avg and Std, it can be found that RLHGS achieves the highest number of optimal values across various functions, including F1, F2, F9, F12, F13, F14, F16, F17, F19, F20, F21, F22, and F23. In contrast, other algorithms such as CS, MFO, SSA, JADE, and ALCPSO perform well only on fixed-dimensional multimodal functions, while HHO, SCGWO, and RDWOA show proficiency mainly on unimodal and multimodal functions. Only RLHGS consistently achieves ideal values across all function types, which indicates its versatility and robustness. Furthermore, Table 10 and Figure 6 provide the Friedman mean level and overall rank of all compared algorithms. Comparing these results, it is clear that RLHGS attains the highest rank, further solidifying its position as a powerful stochastic optimization algorithm. The Wilcoxon signed-rank results of RLHGS and the other eight superior algorithms on the 23 benchmark functions are shown in Table 11. In the table, a $p$-value of less than 0.05 indicates a significant difference between RLHGS and the compared algorithm, with RLHGS performing better. Table 12 lists the average running time of RLHGS and the other eight superior algorithms on the 23 benchmark functions. Although RLHGS does not achieve the shortest running time, it can be seen that the algorithm has a relatively reasonable time cost on most functions.
Figure 7 portrays the convergence curves of this experiment. Looking at Figure 7, when dealing with test functions F1 and F2, RLHGS obtains the optimum result with the fastest optimization speed. Moreover, when dealing with multimodal functions like F12 and F13, RLHGS far exceeds the other compared algorithms in searching for global or near-optimal solutions; it can be intuitively seen from the convergence curves that the algorithm does not immediately fall into local optima like the other competing algorithms, which also proves that RLHGS can jump out of local optima. What's more, for most test functions, the performance of RLHGS is much better than that of the other compared methods in the early search stage. Meanwhile, the final values obtained in the late search stage are reached faster or are much closer to the optimal value.

5.3.2. Benchmark Function Set II: CEC2020 Test Functions

The evaluation configurations of this experiment are consistent with those in Section 5.3.1. Analyzing the comparison results in Table 13, RLHGS outperforms all compared methods, achieving five optimal values on the ten test functions; no other algorithm exceeds it. In detail, it can be observed that RLHGS outperforms all of the competitors on the multimodal functions (F2–F3), hybrid function (F6), and composition function (F8); this result strongly reflects that RLHGS has advantages in exploration and local-optima avoidance. Meanwhile, according to the statistical standard, the number of functions on which RLHGS is superior to CS, MFO, ALCPSO, HHO, JADE, SCGWO, RDWOA, and SSA is 10, 10, 9, 7, 7, 7, 7, and 6, respectively, showing that RLHGS is a competitive algorithm. Table 14 shows the result of the Friedman test, where RLHGS secures the first position with a rank of 2.3, followed by JADE, SSA, HHO, RDWOA, and others. A more intuitive ranking can be obtained from Figure 8. Table 15 shows the $p$-value results of the Wilcoxon signed-rank test. Upon observation, it is clear that RLHGS exhibits significant differences compared to the other algorithms and exceeds them in terms of performance. Figure 9 shows the convergence curves of the nine methods on four functions in this experiment, namely F2, F3, F4, and F6. The information that can be obtained from this figure is that RLHGS successfully exceeds the other strong opponents and reaches a better solution.
Although the results and analyses in Section 5.3.1 and Section 5.3.2 verify that RLHGS is capable of determining the global optimum of the test functions to a certain degree, there are still some differences between actual problems and standard function problems. For example, the global optimal value of commonly used test functions is provided, whereas the global optimal value of actual problems remains unknown. What's more, some equality and inequality constraints are also attached to practical problems. Therefore, in addition to the performance on the benchmark functions, it is necessary to test the performance of the algorithm on practical problems. In the next subsection, RLHGS is applied to solve four practical problems.

5.4. Four Real-World Constrained Benchmark Problems

In this subsection, the proposed RLHGS is used to solve four classical constrained benchmark problems, namely tension/compression spring design, welded beam design, pressure vessel design, and three-bar truss design. Because their constraints are based on different conditions, finding a method that can effectively solve all of these problems is particularly significant. Researchers have recently proposed a number of methods combining constraint handling with swarm intelligence algorithms. According to the different processing approaches of these methods [99], penalty functions are mainly divided into five categories: co-evolutionary, static, adaptive, dynamic, and death penalty functions. Considering the characteristics of the proposed algorithm, the method used in this study to handle the four constrained benchmark problems is the death penalty function, which is the simplest one for constructing the objective value of a mathematical model.
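In code, the death penalty scheme amounts to discarding any candidate that violates a constraint by assigning it an arbitrarily bad objective value. The sketch below illustrates this idea; the sentinel value 1e20 and the function names are our choices, not settings prescribed in the paper.

```python
def death_penalty(objective, constraints, x, penalty=1e20):
    """Death-penalty constraint handling; an illustrative sketch.

    objective   : callable returning the raw objective value f(x)
    constraints : iterable of callables g(x) that must satisfy g(x) <= 0
    penalty     : large value assigned to infeasible candidates (assumed choice)
    """
    if any(g(x) > 0 for g in constraints):
        return penalty          # infeasible: the candidate is effectively discarded
    return objective(x)         # feasible: return the true objective value
```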

5.4.1. Tension/Compression Spring Problem

The way to solve the tension/compression spring problem is to obtain the optimal parameters that minimize the spring's weight. This problem has three variables: the wire diameter ($d$), the mean coil diameter ($D$), and the number of active coils ($N$). Meanwhile, solving this problem requires attention to four constraint functions $h_1(x)$, $h_2(x)$, $h_3(x)$, and $h_4(x)$. Its structure is shown in Figure 10. The mathematical description of this problem is as follows:
Consider:
$$x = [x_1, x_2, x_3] = [d, D, N]$$
Objective function:
$$f(x)_{min} = (x_3 + 2)\, x_2\, x_1^2$$
Subject to:
$$h_1(x) = 1 - \frac{x_2^3 x_3}{71785\, x_1^4} \leq 0,$$
$$h_2(x) = \frac{4 x_2^2 - x_1 x_2}{12566\left(x_2 x_1^3 - x_1^4\right)} + \frac{1}{5108\, x_1^2} - 1 \leq 0,$$
$$h_3(x) = 1 - \frac{140.45\, x_1}{x_2^2 x_3} \leq 0,$$
$$h_4(x) = \frac{x_1 + x_2}{1.5} - 1 \leq 0$$
Variable ranges:
$$0.05 \leq x_1 \leq 2.00, \quad 0.25 \leq x_2 \leq 1.30, \quad 2.00 \leq x_3 \leq 15.0$$
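As a concrete illustration, the formulation above translates directly into Python as shown below (the other three design problems follow the same objective-plus-constraints pattern). The design vector used in the commented usage line is illustrative only and is not a result reported in this paper.

```python
import numpy as np

def spring_weight(x):
    """Tension/compression spring objective: minimize the weight (x3 + 2) * x2 * x1^2."""
    return (x[2] + 2.0) * x[1] * x[0] ** 2

spring_constraints = [
    lambda x: 1.0 - (x[1] ** 3 * x[2]) / (71785.0 * x[0] ** 4),                    # h1
    lambda x: (4.0 * x[1] ** 2 - x[0] * x[1])
              / (12566.0 * (x[1] * x[0] ** 3 - x[0] ** 4))
              + 1.0 / (5108.0 * x[0] ** 2) - 1.0,                                   # h2
    lambda x: 1.0 - 140.45 * x[0] / (x[1] ** 2 * x[2]),                             # h3
    lambda x: (x[0] + x[1]) / 1.5 - 1.0,                                            # h4
]

# Usage with the death_penalty helper sketched in Section 5.4 (illustrative point only):
# x = np.array([0.05, 0.35, 11.0])
# value = death_penalty(spring_weight, spring_constraints, x)
```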
The comparison results of RLHGS with eight other advanced optimization algorithms on the tension/compression spring design problem are shown in Table 16. Observing the data in the table, it can be found that RLHGS obtains the lowest value, 0.0126653, shown in bold, followed closely by INFO, IHS, PSO, and GSA with 0.012666, 0.0126706, 0.0126747, and 0.0126763, respectively. The results verify that RLHGS has good capability in optimizing this engineering problem.

5.4.2. Welded Beam Design Problem

The key to solving the welded beam design problem is to acquire the optimal parameters that minimize the cost of the welded beam. In this problem, the shear stress ($\tau$), bending stress ($\theta$), buckling load ($P_c$), and deflection ($\delta$) are four constraints that need to be satisfied, and the thickness of the welding seam ($h$), the length of the welding joint ($l$), the width of the beam ($t$), and the thickness of the bar ($b$) are four variables that need to be considered. The shape of the welded beam design problem is shown in Figure 11. The following mathematical description details this problem and its constraints:
Consider:
$$x = [x_1, x_2, x_3, x_4] = [h, l, t, b]$$
Minimize:
$$f(x) = 1.10471\, x_1^2 x_2 + 0.04811\, x_3 x_4 (14.0 + x_2)$$
Subject to:
$$g_1(x) = \tau(x) - \tau_{max} \leq 0$$
$$g_2(x) = \sigma(x) - \sigma_{max} \leq 0$$
$$g_3(x) = \delta(x) - \delta_{max} \leq 0$$
$$g_4(x) = x_1 - x_4 \leq 0$$
$$g_5(x) = P - P_C(x) \leq 0$$
$$g_6(x) = 0.125 - x_1 \leq 0$$
$$g_7(x) = 1.10471\, x_1^2 + 0.04811\, x_3 x_4 (14.0 + x_2) - 5.0 \leq 0$$
Variable range:
$$0.1 \leq x_1 \leq 2, \quad 0.1 \leq x_2 \leq 10, \quad 0.1 \leq x_3 \leq 10, \quad 0.1 \leq x_4 \leq 2$$
where:
$$\tau(x) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{x_2}{2R} + (\tau'')^2}, \quad \tau' = \frac{P}{\sqrt{2}\, x_1 x_2}, \quad \tau'' = \frac{MR}{J}, \quad M = P\left(L + \frac{x_2}{2}\right)$$
$$R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2}$$
$$J = 2\left\{\sqrt{2}\, x_1 x_2\left[\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2\right]\right\}$$
$$\sigma(x) = \frac{6PL}{x_4 x_3^2}, \quad \delta(x) = \frac{6PL^3}{E x_3^2 x_4}$$
$$P_C(x) = \frac{4.013\, E \sqrt{x_3^2 x_4^6/36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)$$
$$P = 6000 \text{ lb}, \quad L = 14 \text{ in}, \quad \delta_{max} = 0.25 \text{ in},$$
$$E = 30 \times 10^6 \text{ psi}, \quad G = 12 \times 10^6 \text{ psi},$$
$$\tau_{max} = 13{,}600 \text{ psi}, \quad \sigma_{max} = 30{,}000 \text{ psi}$$
Table 17 shows the optimization results of the welded beam design problem. For this problem, RLHGS is compared with HGS, GSA, CDE, HS, GWO, BA, IHS, and RO. According to the optimum cost data, the optimal value, shown in bold, is obtained by RLHGS. Thus, it is easy to conclude that RLHGS performs best in solving the welded beam design problem. When the variables are set to 0.2015, 3.3345, 9.03662391, and 0.20572964, the optimum cost reaches 1.699986, lower than that of all compared algorithms.

5.4.3. Pressure Vessel Design Problem

The pressure vessel design problem is a conundrum in the engineering field. Solving this problem focuses on minimizing the total cost of the welding, material, and forming of a vessel. The mathematical description of its four variables and four constraints is shown in the following equations. Among these variables, $x_1$ to $x_4$ indicate the thickness of the shell ($T_s$), the thickness of the head ($T_h$), the internal radius ($R$), and the vessel length excluding the head ($L$), respectively. Figure 12 displays the components of this problem.
Consider:
$$X = [x_1, x_2, x_3, x_4] = [T_s, T_h, R, L]$$
Minimize:
$$f(x) = 0.6224\, x_1 x_3 x_4 + 1.7781\, x_2 x_3^2 + 3.1661\, x_1^2 x_4 + 19.84\, x_1^2 x_3$$
Subject to:
$$g_1(X) = -x_1 + 0.0193\, x_3 \leq 0$$
$$g_2(X) = -x_2 + 0.00954\, x_3 \leq 0$$
$$g_3(X) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1{,}296{,}000 \leq 0$$
$$g_4(X) = x_4 - 240 \leq 0$$
Range of variables:
$$0 \leq x_1 \leq 99, \quad 0 \leq x_2 \leq 99, \quad 10 \leq x_3 \leq 200, \quad 10 \leq x_4 \leq 200$$
In solving the pressure vessel design problem, RLHGS is compared with ES, PSO, GA, G-QPSO, SMA, Branch-and-bound, HIS, GA3, and CPSO. The results in Table 18 show that when $T_s$, $T_h$, $R$, and $L$ are set to 0.8125, 0.4375, 42.0984456, and 176.6365958, respectively, RLHGS obtains the value 6059.714335, an optimal cost. This result demonstrates that the algorithm proposed in this paper is superior to the other algorithms for solving this kind of mechanical engineering problem.

5.4.4. Three-Bar Truss Design Problem

The three-bar truss design problem is a well-known constrained-space problem derived from civil engineering. Figure 13 presents its components. The goal of this problem is to minimize the weight of the bar structure. The stress constraints on each bar form the basis of the constraints of this problem. The problem is expressed mathematically in the following way:
Consider:
$$f(x) = \left(2\sqrt{2}\, x_1 + x_2\right) \times l$$
Subject to:
$$g_1(x) = \frac{\sqrt{2}\, x_1 + x_2}{\sqrt{2}\, x_1^2 + 2 x_1 x_2}\, P - \sigma \leq 0$$
$$g_2(x) = \frac{x_2}{\sqrt{2}\, x_1^2 + 2 x_1 x_2}\, P - \sigma \leq 0$$
$$g_3(x) = \frac{1}{\sqrt{2}\, x_2 + x_1}\, P - \sigma \leq 0$$
where:
$$0 \leq x_i \leq 1 \quad (i = 1, 2),$$
$$L = 100 \text{ cm}, \quad P = 2 \text{ kN/cm}^2, \quad \sigma = 2 \text{ kN/cm}^2$$
Table 19 lists the results of seven optimization algorithms in solving the three-bar truss design problem. In the table, the optimum cost indicates the weight of the bar structure, and it is not hard to notice that RLHGS ranks first among all algorithms with the minimum value of 263.89584338. BWOA and MVO rank second and third with 263.8958435 and 263.8958499, respectively. Though the gap between these values is narrow, the result still supports that RLHGS can provide powerful assistance in dealing with the three-bar truss design.

6. Conclusions and Future Work

HGS is a novel heuristic algorithm that has gained attention in recent years. Building upon the original literature, it is evident that HGS demonstrates remarkable optimization capabilities, surpassing numerous robust algorithms. However, the original HGS does have some limitations, including premature convergence, susceptibility to local optima, and slow convergence speed. These shortcomings indicate room for improvement within HGS. This study introduces the RLHGS algorithm, which incorporates an adapted LS-OBL mechanism and an adapted RM mechanism into the original HGS. These additions aim to enhance the algorithm’s exploration and exploitation abilities, respectively.
To assess the efficiency of the introduced mechanisms and the superiority of RLHGS over other powerful algorithms, several evaluations are conducted using the 23 classic benchmark functions and the CEC2020 test suite, encompassing various function types. The first experiment analyzes the effectiveness of the embedded strategies, yielding the conclusion that RLHGS performs exceptionally well when the LS-OBL strategy and the adapted RM strategy work in tandem. This finding validates the effectiveness of the added mechanisms in overcoming HGS's drawbacks. In the second comparison experiment, RLHGS is compared with CS, MFO, HHO, SSA, JADE, ALCPSO, SCGWO, and RDWOA. According to the experimental results, the performance of RLHGS not only surpasses well-established classic algorithms like CS, SSA, JADE, RDWOA, and ALCPSO but also outperforms exceptional state-of-the-art algorithms such as MFO, HHO, and SCGWO. Furthermore, RLHGS is applied to optimize the parameters of four engineering design problems. Comparative analysis with other algorithms reveals that the proposed method achieves superior results. Thus, RLHGS exhibits promise in tackling complex real-world optimization problems and could serve as a valuable auxiliary method for a broader range of global optimization problems. Overall, the integration of LS-OBL and RM into HGS, resulting in RLHGS, proves to be a valuable improvement, showcasing enhanced performance and robustness across various evaluation scenarios and real-world engineering optimization challenges. Additionally, this study only scratches the surface of RLHGS's potential applications. In the future, RLHGS can find utility in numerous other fields beyond engineering optimization, such as image segmentation, machine learning model optimization, and others.

Author Contributions

Conceptualization, Y.L.; Formal analysis, H.C. and Y.Z.; Funding acquisition, S.W.; Investigation, A.A.H.; Methodology, A.A.H., S.W. and Y.Z.; Project administration, Y.Z.; Resources, H.C.; Software, Y.L. and H.C.; Validation, Y.L., A.A.H. and S.W.; Writing—original draft, Y.L. and A.A.H.; Writing—review & editing, S.W., H.C. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This paper is partially supported by MRC (MC_PC_17171); Royal Society (RP202G0230); BHF (AA/18/3/34220); Hope Foundation for Cancer Research (RM60G0680); GCRF (P202PF11); Sino-UK Industrial Fund (RP202G0289); LIAS (P202ED10, P202RE969); Data Science Enhancement Fund (P202RE237); Fight for Sight (24NN201); Sino-UK Education Fund (OP202006); BBSRC (RM32G0178B8).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The relevant experimental data are within this paper.

Acknowledgments

During the preparation of this work the author(s) used ChatGPT for Grammar Enhancement, Proofreading, and Paraphrasing. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lu, Z.; Cheng, R.; Jin, Y.; Tan, K.C.; Deb, K. Neural Architecture Search as Multiobjective Optimization Benchmarks: Problem Formulation and Performance Assessment. IEEE Trans. Evol. Comput. 2022, 1. [Google Scholar] [CrossRef]
  2. Wang, Z.; Zhao, D.; Guan, Y. Flexible-constrained time-variant hybrid reliability-based design optimization. Struct. Multidiscip. Optim. 2023, 66, 89. [Google Scholar] [CrossRef]
  3. Bai, X.; Huang, M.; Xu, M.; Liu, J. Reconfiguration Optimization of Relative Motion Between Elliptical Orbits Using Lyapunov-Floquet Transformation. IEEE Trans. Aerosp. Electron. Syst. 2022, 59, 923–936. [Google Scholar] [CrossRef]
  4. Lu, C.; Zheng, J.; Yin, L.; Wang, R. An improved iterated greedy algorithm for the distributed hybrid flowshop scheduling problem. Eng. Optim. 2023, 1–19. [Google Scholar] [CrossRef]
  5. Zhang, K.; Wang, Z.; Chen, G.; Zhang, L.; Yang, Y.; Yao, C.; Wang, J.; Yao, J. Training effective deep reinforcement learning agents for real-time life-cycle production optimization. J. Pet. Sci. Eng. 2022, 208, 109766. [Google Scholar] [CrossRef]
  6. Li, B.; Tan, Y.; Wu, A.-G.; Duan, G.-R. A distributionally robust optimization based method for stochastic model predictive control. IEEE Trans. Autom. Control 2021, 67, 5762–5776. [Google Scholar] [CrossRef]
  7. Cao, B.; Zhao, J.; Yang, P.; Gu, Y.; Muhammad, K.; Rodrigues, J.J.P.C.; de Albuquerque, V.H.C. Multiobjective 3-D Topology Optimization of Next-Generation Wireless Data Center Network. IEEE Trans. Ind. Inform. 2019, 16, 3597–3605. [Google Scholar] [CrossRef]
  8. Cao, B.; Zhao, J.; Gu, Y.; Ling, Y.; Ma, X. Applying graph-based differential grouping for multiobjective large-scale optimization. Swarm Evol. Comput. 2020, 53, 100626. [Google Scholar] [CrossRef]
  9. Lv, Z.; Wu, J.; Li, Y.; Song, H. Cross-layer optimization for industrial Internet of Things in real scene digital twins. IEEE Internet Things J. 2022, 9, 15618–15629. [Google Scholar] [CrossRef]
  10. Selvakumar, A.I.; Thanushkodi, K. A new particle swarm optimization solution to nonconvex economic dispatch problems. IEEE Trans. Power Syst. 2007, 22, 42–51. [Google Scholar] [CrossRef]
  11. Li, H.Z.; Guo, S.; Li, C.J.; Sun, J.Q. A hybrid annual power load forecasting model based on generalized regression neural network with fruit fly optimization algorithm. Knowl.-Based Syst. 2013, 37, 378–387. [Google Scholar] [CrossRef]
  12. Kashef, S.; Nezamabadi-pour, H. An advanced ACO algorithm for feature subset selection. Neurocomputing 2015, 147, 271–279. [Google Scholar] [CrossRef]
  13. Mafarja, M.; Aljarah, I.; Heidari, A.A.; Faris, H.; Fournier-Viger, P.; Li, X.; Mirjalili, S. Binary dragonfly optimization for feature selection using time-varying transfer functions. Knowl.-Based Syst. 2018, 161, 185–204. [Google Scholar] [CrossRef]
  14. Li, J.D.; Liu, H. Challenges of Feature Selection for Big Data Analytics. Ieee Intell. Syst. 2017, 32, 9–15. [Google Scholar] [CrossRef]
  15. Li, Q.; Chen, H.; Huang, H.; Zhao, X.; Cai, Z.; Tong, C.; Liu, W.; Tian, X. An Enhanced Grey Wolf Optimization Based Feature Selection Wrapped Kernel Extreme Learning Machine for Medical Diagnosis. Comput. Math. Methods Med. 2017, 2017, 9512741. [Google Scholar] [CrossRef] [PubMed]
  16. Cao, B.; Li, M.; Liu, X.; Zhao, J.; Cao, W.; Lv, Z. Many-Objective Deployment Optimization for a Drone-Assisted Camera Network. IEEE Trans. Netw. Sci. Eng. 2021, 8, 2756–2764. [Google Scholar] [CrossRef]
  17. Cao, B.; Zhao, J.; Lv, Z.; Yang, P. Diversified personalized recommendation optimization based on mobile data. IEEE Trans. Intell. Transp. Syst. 2020, 22, 2133–2139. [Google Scholar] [CrossRef]
  18. Cao, B.; Fan, S.; Zhao, J.; Tian, S.; Zheng, Z.; Yan, Y.; Yang, P. Large-scale many-objective deployment optimization of edge servers. IEEE Trans. Intell. Transp. Syst. 2021, 22, 3841–3849. [Google Scholar] [CrossRef]
  19. Zhang, J.; Tang, Y.; Wang, H.; Xu, K. ASRO-DIO: Active subspace random optimization based depth inertial odometry. IEEE Trans. Robot. 2022, 39, 1496–1508. [Google Scholar] [CrossRef]
  20. Duan, Y.; Zhao, Y.; Hu, J. An initialization-free distributed algorithm for dynamic economic dispatch problems in microgrid: Modeling, optimization and analysis. Sustain. Energy Grids Netw. 2023, 34, 101004. [Google Scholar] [CrossRef]
  21. Li, R.; Wu, X.; Tian, H.; Yu, N.; Wang, C. Hybrid Memetic Pretrained Factor Analysis-Based Deep Belief Networks for Transient Electromagnetic Inversion. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–20. [Google Scholar] [CrossRef]
  22. Holland, J. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Application to Biology; University of Michigan Press: Ann Arbor, MI, USA, 1975. [Google Scholar]
  23. Storn, R.; Price, K. Differential Evolution: A Simple and Efficient Adaptive Scheme for Global Optimization Over Continuous Spaces. J. Glob. Optim. 1995, 23, 341–359. [Google Scholar]
  24. Simon, D. Biogeography-Based Optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar]
  25. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  26. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar]
  27. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H.L. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst.-Int. J. Escience 2019, 97, 849–872. [Google Scholar] [CrossRef]
  28. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst.-Int. J. Escience 2020, 111, 300–323. [Google Scholar] [CrossRef]
  29. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1–15. [Google Scholar] [CrossRef]
  30. Ramezani, F.; Lotfi, S. Social-Based Algorithm (SBA). Appl. Soft Comput. 2013, 13, 2837–2856. [Google Scholar] [CrossRef]
  31. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4661–4667. [Google Scholar] [CrossRef]
  32. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  33. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  34. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2015, 27, 495–513. [Google Scholar] [CrossRef]
  35. Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  36. Ahmadianfar, I.; Heidari, A.A.; Noshadian, S.; Chen, H.L.; Gandomi, A.H. INFO: An efficient optimization algorithm based on weighted mean of vectors. Expert Syst. Appl. 2022, 195, 116516. [Google Scholar] [CrossRef]
  37. Cao, B.; Gu, Y.; Lv, Z.; Yang, S.; Zhao, J.; Li, Y. RFID reader anticollision based on distributed parallel particle swarm optimization. IEEE Internet Things J. 2020, 8, 3099–3107. [Google Scholar] [CrossRef]
  38. Aljarah, I.; Habib, M.; Faris, H.; Al-Madi, N.; Heidari, A.A.; Mafarja, M.; Elaziz, M.A.; Mirjalili, S. A dynamic locality multi-objective salp swarm algorithm for feature selection. Comput. Ind. Eng. 2020, 147, 106628. [Google Scholar] [CrossRef]
  39. Houssein, E.H.; Mahdy, M.A.; Shebl, D.; Manzoor, A.; Sarkar, R.; Mohamed, W.M. An efficient slime mould algorithm for solving multi-objective optimization problems. Expert Syst. Appl. 2022, 187, 115870. [Google Scholar] [CrossRef]
  40. Khunkitti, S.; Siritaratiwat, A.; Premrudeepreechacharn, S. Multi-Objective Optimal Power Flow Problems Based on Slime Mould Algorithm. Sustainability 2021, 13, 7448. [Google Scholar] [CrossRef]
  41. Huang, F.Z.; Wang, L.; He, Q. An effective co-evolutionary differential evolution for constrained optimization. Appl. Math. Comput. 2007, 186, 340–356. [Google Scholar] [CrossRef]
  42. Zhang, H.; Liu, T.; Ye, X.; Heidari, A.A.; Liang, G.; Chen, H.; Pan, Z. Differential evolution-assisted salp swarm algorithm with chaotic structure for real-world problems. Eng. Comput. 2023, 39, 1735–1769. [Google Scholar] [CrossRef]
  43. Ji, Y.; Tu, J.; Zhou, H.; Gui, W.; Liang, G.; Chen, H.; Wang, M. An Adaptive Chaotic Sine Cosine Algorithm for Constrained and Unconstrained Optimization. Complexity 2020, 2020, 6084917. [Google Scholar] [CrossRef]
  44. Yang, X.; Wang, R.; Zhao, D.; Yu, F.; Huang, C.; Heidari, A.A.; Cai, Z.; Bourouis, S.; Algarni, A.D.; Chen, H. An Adaptive Quadratic Interpolation and Rounding Mechanism Sine Cosine Algorithm with Application to Constrained Engineering Optimization Problems. Expert Syst. Appl. 2020, 213, 119041. [Google Scholar] [CrossRef]
  45. Liu, L.; Zhao, D.; Yu, F.; Heidari, A.A.; Li, C.; Ouyang, J.; Chen, H.; Mafarja, M.; Turabieh, H.; Pan, J. Ant colony optimization with Cauchy and greedy Levy mutations for multilevel COVID 19 X-ray image segmentation. Comput. Biol. Med. 2021, 136, 104609. [Google Scholar] [CrossRef] [PubMed]
  46. Zhao, D.; Liu, L.; Yu, F.; Heidari, A.A.; Wang, M.; Oliva, D.; Muhammad, K.; Chen, H. Ant colony optimization with horizontal and vertical crossover search: Fundamental visions for multi-threshold image segmentation. Expert Syst. Appl. 2021, 167, 114122. [Google Scholar] [CrossRef]
  47. Hussien, A.G.; Heidari, A.A.; Ye, X.; Liang, G.; Chen, H.; Pan, Z. Boosting whale optimization with evolution strategy and Gaussian random walks: An image segmentation method. Eng. Comput. 2023, 39, 1935–1979. [Google Scholar] [CrossRef]
  48. Dutta, T.; Dey, S.; Bhattacharyya, S.; Mukhopadhyay, S. Quantum fractional order Darwinian particle swarm optimization for hyperspectral multi-level image thresholding. Appl. Soft Comput. 2021, 113, 107976. [Google Scholar] [CrossRef]
  49. Wang, M.; Liang, Y.; Hu, Z.; Chen, S.; Shi, B.; Heidari, A.A.; Zhang, Q.; Chen, H.; Chen, X. Lupus nephritis diagnosis using enhanced moth flame algorithm with support vector machines. Comput. Biol. Med. 2022, 145, 105435. [Google Scholar] [CrossRef] [PubMed]
  50. Yu, H.; Liu, J.; Chen, C.; Heidari, A.A.; Zhang, Q.; Chen, H.; Mafarja, M.; Turabieh, H. Corn Leaf Diseases Diagnosis Based on K-Means Clustering and Deep Learning. IEEE Access 2021, 9, 143824–143835. [Google Scholar] [CrossRef]
  51. Xia, J.; Zhang, H.; Li, R.; Chen, H.; Turabieh, H.; Mafarja, M.; Pan, Z. Generalized Oppositional Moth Flame Optimization with Crossover Strategy: An Approach for Medical Diagnosis. J. Bionic Eng. 2021, 18, 991–1010. [Google Scholar] [CrossRef]
  52. Liu, J.; Wei, J.; Heidari, A.A.; Kuang, F.; Zhang, S.; Gui, W.; Chen, H.; Pan, Z. Chaotic simulated annealing multi-verse optimization enhanced kernel extreme learning machine for medical diagnosis. Comput. Biol. Med. 2022, 144, 105356. [Google Scholar] [CrossRef]
  53. Xia, J.; Zhang, H.; Li, R.; Wang, Z.; Cai, Z.; Gu, Z.; Chen, H.; Pan, Z. Adaptive Barebones Salp Swarm Algorithm with Quasi-oppositional Learning for Medical Diagnosis Systems: A Comprehensive Analysis. J. Bionic Eng. 2022, 19, 240–256. [Google Scholar] [CrossRef]
  54. Yu, S.; Heidari, A.A.; He, C.; Cai, Z.; Althobaiti, M.M.; Mansour, R.F.; Liang, G.; Chen, H. Parameter estimation of static solar photovoltaic models using Laplacian Nelder-Mead hunger games search. Solar Energy 2022, 242, 79–104. [Google Scholar] [CrossRef]
  55. Weng, X.; Heidari, A.A.; Liang, G.; Chen, H.; Ma, X. An evolutionary Nelder–Mead slime mould algorithm with random learning for efficient design of photovoltaic models. Energy Rep. 2021, 7, 8784–8804. [Google Scholar] [CrossRef]
  56. Liu, Y.; Heidari, A.A.; Ye, X.; Liang, G.; Chen, H.; He, C. Boosting slime mould algorithm for parameter identification of photovoltaic models. Energy 2021, 234, 121164. [Google Scholar] [CrossRef]
  57. Fan, Y.; Wang, P.; Heidari, A.A.; Zhao, X.; Turabieh, H.; Chen, H. Delayed dynamic step shuffling frog-leaping algorithm for optimal design of photovoltaic models. Energy Rep. 2021, 7, 228–246. [Google Scholar] [CrossRef]
  58. Liu, Y.; Chong, G.; Heidari, A.A.; Chen, H.; Liang, G.; Ye, X.; Cai, Z.; Wang, M. Horizontal and vertical crossover of Harris hawk optimizer with Nelder-Mead simplex for parameter estimation of photovoltaic models. Energy Convers. Manag. 2020, 223, 113211. [Google Scholar] [CrossRef]
  59. Liu, Y.X.; Yang, C.N.; Sun, Q.D. Thresholds Based Image Extraction Schemes in Big Data Environment in Intelligent Traffic Management. IEEE Trans. Intell. Transp. Syst. 2021, 22, 3952–3960. [Google Scholar] [CrossRef]
  60. Yang, Y.T.; Chen, H.L.; Heidari, A.A.; Gandomi, A.H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 2021, 177, 114864. [Google Scholar] [CrossRef]
  61. AbuShanab, W.S.; Elaziz, M.A.; Ghandourah, E.I.; Moustafa, E.B.; Elsheikh, A.H. A new fine-tuned random vector functional link model using Hunger games search optimizer for modeling friction stir welding process of polymeric materials. J. Mater. Res. Technol.-JMR T 2021, 14, 1482–1493. [Google Scholar] [CrossRef]
  62. Nguyen, H.; Bui, X.N. A Novel Hunger Games Search Optimization-Based Artificial Neural Network for Predicting Ground Vibration Intensity Induced by Mine Blasting. Nat. Resour. Res. 2021, 30, 3865–3880. [Google Scholar] [CrossRef]
  63. Xu, B.Y.; Heidari, A.A.; Kuang, F.J.; Zhang, S.Y.; Chen, H.L.; Cai, Z.N. Quantum Nelder-Mead Hunger Games Search for optimizing photovoltaic solar cells. Int. J. Energy Res. 2022, 46, 12417–12466. [Google Scholar] [CrossRef]
  64. Ma, B.J.; Liu, S.; Heidari, A.A. Multi-strategy ensemble binary hunger games search for feature selection. Knowl.-Based Syst. 2022, 248, 108787. [Google Scholar] [CrossRef]
  65. Fathy, A.; Yousri, D.; Rezk, H.; Thanikanti, S.B.; Hasanien, H.M. A Robust Fractional-Order PID Controller Based Load Frequency Control Using Modified Hunger Games Search Optimizer. Energies 2022, 15, 361. [Google Scholar] [CrossRef]
  66. Emam, M.M.; Samee, N.A.; Jamjoom, M.M.; Houssein, E.H. Optimized deep learning architecture for brain tumor classification using improved Hunger Games Search Algorithm. Comput. Biol. Med. 2023, 160, 106966. [Google Scholar] [CrossRef] [PubMed]
  67. Nassef, A.M.; Houssein, E.H.; Rezk, H.; Fathy, A. Optimal Allocation of Biomass Distributed Generators Using Modified Hunger Games Search to Reduce CO2 Emissions. J. Mar. Sci. Eng. 2023, 11, 308. [Google Scholar] [CrossRef]
  68. Zhang, Y.; Guo, Y.; Xiao, Y.; Tang, W.; Zhang, H.; Li, J. A novel hybrid improved hunger games search optimizer with extreme learning machine for predicting shrinkage of SLS parts. J. Intell. Fuzzy Syst. 2022, 43, 5643–5659. [Google Scholar] [CrossRef]
  69. Chen, Z.; Xuan, P.; Heidari, A.A.; Liu, L.; Wu, C.; Chen, H.; Escorcia-Gutierrez, J.; Mansour, R.F. An artificial bee bare-bone hunger games search for global optimization and high-dimensional feature selection. Iscience 2023, 26, 106679. [Google Scholar] [CrossRef] [PubMed]
  70. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  71. Li, C.; Li, J.; Chen, H.; Jin, M.; Ren, H. Enhanced Harris hawks optimization with multi-strategy for global optimization tasks. Expert Syst. Appl. 2021, 185, 115499. [Google Scholar] [CrossRef]
  72. Real, L.A. Animal Choice Behavior and the Evolution of cognitive Architecture. Science 1991, 253, 980–986. [Google Scholar] [CrossRef]
  73. Burnett, C.J.; Li, C.; Webber, E.; Tsaousidou, E.; Xue, S.Y.; Brüning, J.C.; Krashes, M.J. Hunger-Driven Motivational State Competition. Neuron 2016, 92, 187–201. [Google Scholar] [CrossRef]
  74. O’Brien, W.J.; Browman, H.I.; Evans, B.I. Search strategies of foraging animals. Am. Sci. 1990, 78, 152–160. [Google Scholar]
  75. Clutton-Brock, T. Cooperation between non-kin in animal societies. Nature 2009, 462, 51–57. [Google Scholar] [CrossRef] [PubMed]
  76. Friedman, M.I.; Ulrich, P.; Mattes, R.D. A figurative measure of subjective hunger sensations. Appetite 1999, 32, 395–404. [Google Scholar] [CrossRef] [PubMed]
  77. Zhou, X.; Gui, W.; Heidari, A.A.; Cai, Z.; Elmannai, H.; Hamdi, M.; Liang, G.; Chen, H. Advanced orthogonal learning and Gaussian barebone hunger games for engineering design. J. Comput. Des. Eng. 2022, 9, 1699–1736. [Google Scholar] [CrossRef]
  78. Tamura, K.; Yasuda, K. Primary Study of Spiral Dynamics Inspired Optimization. IEEJ Trans. Electr. Electron. Eng. 2011, 6, S98–S100. [Google Scholar] [CrossRef]
  79. Kawaguchi, Y. A morphological study of the form of nature. ACM Siggraph Comput. Graph. 1982, 16, 223–232. [Google Scholar] [CrossRef]
  80. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Opposition-based differential evolution. IEEE Trans. Evol. Comput. 2008, 12, 64–79. [Google Scholar] [CrossRef]
  81. Rosenbrock, H.H. An Automatic Method for Finding the Greatest or Least Value of a Function. Comput. J. 1960, 3, 175–184. [Google Scholar] [CrossRef]
  82. Kang, F.; Li, J.J.; Ma, Z.Y. Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions. Inf. Sci. 2011, 181, 3508–3531. [Google Scholar] [CrossRef]
  83. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar] [CrossRef]
  84. Viktorin, A.; Senkerik, R.; Pluhacek, M.; Kadavy, T.; Zamuda, A. DISH-XX Solving CEC2020 Single Objective Bound Constrained Numerical optimization Benchmark. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar]
  85. Wang, S.; Hu, X.; Sun, J.; Liu, J. Hyperspectral anomaly detection using ensemble and robust collaborative representation. Inf. Sci. 2023, 624, 748–760. [Google Scholar] [CrossRef]
  86. Cheng, L.; Yin, F.; Theodoridis, S.; Chatzis, S.; Chang, T.-H. Rethinking Bayesian learning for data analysis: The art of prior and inference in sparsity-aware modeling. IEEE Signal Process. Mag. 2022, 39, 18–52. [Google Scholar] [CrossRef]
  87. Zhang, X.; Wen, S.; Yan, L.; Feng, J.; Xia, Y. A Hybrid-Convolution Spatial–Temporal Recurrent Network For Traffic Flow Prediction. Comput. J. 2022, bxac171. [Google Scholar] [CrossRef]
  88. Xu, J.; Mu, B.; Zhang, L.; Chai, R.; He, Y.; Zhang, X. Fabrication and optimization of passive flexible ammonia sensor for aquatic supply chain monitoring based on adaptive parameter adjustment artificial neural network (APA-ANN). Comput. Electron. Agric. 2023, 212, 108082. [Google Scholar] [CrossRef]
  89. Liu, H.; Yue, Y.; Liu, C.; Jr, B.S.; Cui, J. Automatic recognition and localization of underground pipelines in GPR B-scans using a deep learning model. Tunn. Undergr. Space Technol. 2023, 134, 104861. [Google Scholar] [CrossRef]
  90. Friedman, M. The Use of Ranks to Avoid the Assumption of Normality Implicit in the Analysis of Variance. J. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
  91. Garcia, S.; Fernandez, A.; Luengo, J.; Herrera, F. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Inf. Sci. 2010, 180, 2044–2064. [Google Scholar] [CrossRef]
  92. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35. [Google Scholar] [CrossRef]
  93. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  94. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  95. Zhang, J.; Sanderson, A.C. JADE: Adaptive Differential Evolution With Optional External Archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  96. Chen, W.-N.; Zhang, J.; Lin, Y.; Chen, N.; Zhan, Z.-H.; Chung, H.S.-H.; Li, Y.; Shi, Y.-H. Particle Swarm Optimization with an Aging Leader and Challengers. IEEE Trans. Evol. Comput. 2013, 17, 241–258. [Google Scholar] [CrossRef]
  97. Hu, J.; Heidari, A.A.; Zhang, L.; Xue, X.; Gui, W.; Chen, H.; Pan, Z. Chaotic diffusion-limited aggregation enhanced grey wolf optimizer: Insights, analysis, binarization, and feature selection. Int. J. Intell. Syst. 2021, 37, 4864–4927. [Google Scholar] [CrossRef]
  98. Chen, H.; Xu, Y.; Wang, M.; Zhao, X. A balanced whale optimization algorithm for constrained engineering design problems. Appl. Math. Model. 2019, 71, 45–59. [Google Scholar] [CrossRef]
  99. Mezura-Montes, E.; Coello, C.A.C. Constraint-handling in nature-inspired numerical optimization: Past, present and future. Swarm Evol. Comput. 2011, 1, 173–194. [Google Scholar] [CrossRef]
  100. Mahdavi, M.; Fesanghary, M.; Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 2007, 188, 1567–1579. [Google Scholar] [CrossRef]
  101. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  102. Chen, C.; Wang, X.; Yu, H.; Wang, M.; Chen, H. Dealing with multi-modality using synthesis of Moth-flame optimizer with sine cosine mechanisms. Math. Comput. Simul. 2021, 188, 291–318. [Google Scholar] [CrossRef]
  103. Lee, K.S.; Geem, Z.W. A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Comput. Methods Appl. Mech. Eng. 2005, 194, 3902–3933. [Google Scholar] [CrossRef]
  104. Gandomi, A.H.; Yang, X.S.; Alavi, A.H.; Talatahari, S. Bat algorithm for constrained optimization tasks. Neural Comput. Appl. 2013, 22, 1239–1255. [Google Scholar] [CrossRef]
  105. Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray Optimization. Comput. Struct. 2012, 112, 283–294. [Google Scholar] [CrossRef]
  106. Ragsdell, K.M.; Phillips, D.T. Optimal Design of a Class of Welded Structures Using Geometric Programming. J. Eng. Ind. 1976, 98, 1021–1025. [Google Scholar] [CrossRef]
  107. Mezura-Montes, E.; Coello, C.A.C. An empirical study about the usefulness of evolution strategies to solve constrained optimization problems. Int. J. Gen. Syst. 2008, 37, 443–473. [Google Scholar] [CrossRef]
  108. Coelho, L.D. Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems. Expert Syst. Appl. 2010, 37, 1676–1683. [Google Scholar] [CrossRef]
  109. Sandgren, E. Nonlinear Integer and Discrete Programming in Mechanical Design Optimization. J. Mech. Des. 1990, 112, 223–229. [Google Scholar] [CrossRef]
  110. He, Q.; Wang, L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 2007, 20, 89–99. [Google Scholar] [CrossRef]
  111. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef]
  112. Sadollah, A.; Bahreininejad, A.; Eskandar, H.; Hamdi, M. Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Appl. Soft Comput. 2013, 13, 2592–2612. [Google Scholar] [CrossRef]
Figure 1. Flowchart of RLHGS.
Figure 2. 3-D maps of some of the 23 classic benchmark functions. Different colors represent different solution values of each function, and the labels "F1", "F2", etc. in the figure refer to the corresponding functions.
Figure 3. (a) 3-D distributions of the test functions, (b) 2-D position distribution of RLHGS, (c) RLHGS trajectories in the first dimension, (d) convergence curve of RLHGS. The line is generated by projecting the three-dimensional figure; different colors represent different solution values of the function.
Figure 4. (a) balance analysis of the RLHGS, (b) balance analysis of the HGS, (c) convergence curves of RLHGS and HGS.
Figure 5. Convergence curves of RLHGS, RHGS, LHGS, and HGS on eight CEC2020 functions.
Figure 6. Friedman test results on 23 classic benchmark functions.
Figure 7. Convergence curves of compared algorithms on six classic benchmark functions.
Figure 8. Friedman test results on CEC2020.
Figure 9. Convergence curves of compared algorithms on four CEC2020 functions.
Figure 10. Structure of the tension/compression spring problem.
Figure 11. Shape of the welded beam design problem.
Figure 12. Components of the pressure vessel design problem.
Figure 13. Components of the 3-bar truss design problem.
Table 1. Review of several classic optimization algorithms.
Type | MAs | Published | Brief Introduction
Evolutionary-based | Genetic Algorithm (GA) [22] | 1975 | It is an adaptive probabilistic optimization algorithm derived from biological genetics and evolutionary mechanisms.
| Differential Evolution (DE) [23] | 1995 | It is based on the theory of biological evolution and imitates the processes of cooperation and competition among individuals.
| Biogeography-Based Optimization (BBO) [24] | 2008 | It is based on the geographical distribution of biological organisms.
Swarm intelligence-based | Particle Swarm Optimization (PSO) [25] | 1995 | It is inspired by the collective behavior of social organisms, particularly the flocking and swarming behavior observed in birds, fish, and insects.
| Grey Wolf Optimization (GWO) [26] | 2014 | It is inspired by the leadership hierarchy and hunting behavior of grey wolves in nature.
| Harris Hawk Optimization (HHO) [27] | 2019 | It draws upon the cooperative hunting behavior (surprise pounce) of Harris' hawks in nature.
| Slime Mould Algorithm (SMA) [28] | 2020 | Its principle is based on the oscillation mode of slime moulds in nature.
Human behavior-based | Teaching-Learning-Based Optimization (TLBO) [29] | 2011 | It is inspired by the way teachers guide students toward better learning outcomes.
| Social-Based Algorithm (SBA) [30] | 2013 | It builds on evolutionary algorithms and the socio-political process underlying the Imperialist Competitive Algorithm (ICA) [31].
Physics-based | Simulated Annealing (SA) [32] | 1983 | It is proposed based on the principle of solid-state high-temperature annealing.
| Gravitational Search Algorithm (GSA) [33] | 2009 | It traces back to the law of gravity and mass interactions.
| Multi-Verse Optimizer (MVO) [34] | 2015 | It is based on three cosmology concepts: the white hole, black hole, and wormhole.
| RUNge Kutta Optimizer (RUN) [35] | 2021 | It combines elements of the classical Runge-Kutta numerical integration method with optimization techniques.
| weIghted meaN oF vectOrs (INFO) [36] | 2022 | It stems from the weighted mean method and is an enhanced optimizer for solving optimization problems.
Table 2. 23 classic benchmark functions.
Function | D | R | f_min
F_1(x) = \sum_{i=1}^{n} x_i^2 | 30 | [−100, 100] | 0
F_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i| | 30 | [−10, 10] | 0
F_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2 | 30 | [−100, 100] | 0
F_4(x) = \max_i \{ |x_i|, 1 \le i \le n \} | 30 | [−100, 100] | 0
F_5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right] | 30 | [−30, 30] | 0
F_6(x) = \sum_{i=1}^{n} ([x_i + 0.5])^2 | 30 | [−100, 100] | 0
F_7(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1) | 30 | [−1.28, 1.28] | 0
F_8(x) = \sum_{i=1}^{n} -x_i \sin\left( \sqrt{|x_i|} \right) | 30 | [−500, 500] | −418.9829 × 30
F_9(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2 \pi x_i) + 10 \right] | 30 | [−5.12, 5.12] | 0
F_{10}(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2 \pi x_i) \right) + 20 + e | 30 | [−32, 32] | 0
F_{11}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1 | 30 | [−600, 600] | 0
F_{12}(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4), with y_i = 1 + \frac{x_i + 1}{4} and u(x_i, a, k, m) = k (x_i - a)^m if x_i > a; 0 if −a \le x_i \le a; k (−x_i - a)^m if x_i < −a | 30 | [−50, 50] | 0
F_{13}(x) = 0.1 \left\{ \sin^2(3 \pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 \left[ 1 + \sin^2(3 \pi x_{i+1}) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2 \pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4) | 30 | [−50, 50] | 0
F_{14}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1} | 2 | [−65, 65] | 1
F_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2 | 4 | [−5, 5] | 0.00030
F_{16}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4 | 2 | [−5, 5] | −1.0316
F_{17}(x) = \left( x_2 - \frac{5.1}{4 \pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8 \pi} \right) \cos x_1 + 10 | 2 | [−5, 5] | 0.398
F_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \right] | 2 | [−2, 2] | 3
F_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right) | 3 | [1, 3] | −3.86
F_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right) | 6 | [0, 1] | −3.32
F_{21}(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1} | 4 | [0, 10] | −10.1532
F_{22}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1} | 4 | [0, 10] | −10.4028
F_{23}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1} | 4 | [0, 10] | −10.5363
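For readers who wish to re-implement part of the suite, the short Python sketch below codes three representative entries of Table 2, the unimodal F1 and the multimodal F9 and F10, following the standard definitions listed above; it is an illustrative fragment rather than the full test harness used in this study.

```python
import numpy as np

def f1_sphere(x):
    """F1: sum of squares; f_min = 0 at x = 0."""
    return np.sum(x**2)

def f9_rastrigin(x):
    """F9: sum(x_i^2 - 10*cos(2*pi*x_i) + 10); f_min = 0 at x = 0."""
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def f10_ackley(x):
    """F10: Ackley function; f_min = 0 at x = 0."""
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

x = np.zeros(30)  # D = 30; the global optimum of all three functions
print(f1_sphere(x), f9_rastrigin(x), f10_ackley(x))  # all ~0
```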
Table 3. CEC2020 benchmark functions.
No. | Function | f_min
F1 | Shifted and Rotated Bent Cigar Function | 100
F2 | Shifted and Rotated Schwefel's Function | 1100
F3 | Shifted and Rotated Lunacek bi-Rastrigin Function | 700
F4 | Expanded Rosenbrock's plus Griewangk's Function | 1900
F5 | Hybrid Function 1 (N = 3) | 1700
F6 | Hybrid Function 2 (N = 4) | 1600
F7 | Hybrid Function 3 (N = 5) | 2100
F8 | Composition Function 1 (N = 3) | 2200
F9 | Composition Function 2 (N = 4) | 2400
F10 | Composition Function 3 (N = 5) | 2500
Table 4. Design of RLHGS, RHGS, LHGS and HGS.
Algorithm | LS-OBL Strategy | Adapted RM Strategy
RLHGS | 1 | 1
RHGS | 0 | 1
LHGS | 1 | 0
HGS | 0 | 0
Table 5. Experiment results of RLHGS, RHGS, LHGS, and HGS on CEC2020.
F1 F2 F3
AvgStdRankAvgStdRankAvgStdRank
RLHGS1.1721 × 1023.8415 × 10111.7057 × 1032.1073 × 10217.3244 × 1021.0536 × 1011
RHGS9.1127 × 1091.3465 × 101044.2311 × 1036.1760 × 10241.2333 × 1031.6875 × 1024
LHGS1.1411 × 1072.5083 × 10723.6214 × 1035.0565 × 10228.7070 × 1025.5928 × 1012
HGS1.2428 × 1073.4529 × 10733.6354 × 1034.8399 × 10238.9153 × 1024.6783 × 1013
F4 F5 F6
AvgStdRankAvgStdRankAvgStdRank
RLHGS1.8836 × 1038.9986 × 10117.4880 × 1044.3342 × 10412.0053 × 1031.5192 × 1021
RHGS2.2603 × 1031.5553 × 10242.0131 × 1067.3692 × 10642.9175 × 1033.4464 × 1024
LHGS2.1305 × 1031.9996 × 10222.9555 × 1052.0583 × 10522.7490 × 1032.8624 × 1023
HGS2.1426 × 1031.5782 × 10233.7466 × 1052.8119 × 10532.6231 × 1032.8102 × 1022
F7 F8 F9
AvgStdRankAvgStdRankAvgStdRank
RLHGS4.8111 × 1043.1322 × 10412.2069 × 1032.2097 × 10013.1351 × 1033.5632 × 1023
RHGS1.4287 × 1051.2640 × 10522.3809 × 1032.9696 × 10142.6000 × 1037.2642 × 10−131
LHGS1.8992 × 1051.3637 × 10532.3053 × 1033.2974 × 10123.1590 × 1034.0319 × 1024
HGS2.2062 × 1051.7583 × 10542.3246 × 1033.3913 × 10132.6000 × 1030.0000 × 1001
F10
AvgStdRank+/−/=
RLHGS2.8647 × 1037.4926 × 1014~
RHGS2.7000 × 1034.3058 × 10−1318/2/0
LHGS2.7797 × 1031.3715 × 10238/1/1
HGS2.7000 × 1030.0000 × 10018/2/0
Table 6. The result of the Friedman test.
 | RLHGS | RHGS | LHGS | HGS
Average rank | 1.5 | 3.2 | 2.5 | 2.6
Overall rank | 1 | 4 | 2 | 3
Table 7. Wilcoxon signed-rank results of RLHGS, RHGS, LHGS, and HGS on CEC 2020.
RHGSLHGSHGS
F11.7344 × 10−61.7344 × 10−61.7344 × 10−6
F21.7344 × 10−61.7344 × 10−61.7344 × 10−6
F31.7344 × 10−61.7344 × 10−61.7344 × 10−6
F42.3534 × 10−63.7243 × 10−55.2165 × 10−6
F51.6046 × 10−41.9729 × 10−52.3534 × 10−6
F61.9209 × 10−61.7344 × 10−63.5152 × 10−6
F73.3173 × 10−49.3157 × 10−67.6909 × 10−6
F81.7344 × 10−61.7344 × 10−61.7344 × 10−6
F94.6072 × 10−51.5140 × 10−15.9493 × 10−5
F101.2290 × 10−52.0297 × 10−31.2290 × 10−5
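The Friedman ranks and Wilcoxon signed-rank p-values reported in Tables 6 and 7 (and later in Tables 10, 11, 14, and 15) are standard nonparametric comparisons. The SciPy sketch below shows how such statistics are typically obtained; the per-run results here are synthetic placeholders, not the recorded data from the experiments.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder: best fitness of four algorithms over 30 independent runs on one function.
runs = {name: rng.normal(loc=mu, scale=1.0, size=30)
        for name, mu in [("RLHGS", 0.0), ("RHGS", 0.8), ("LHGS", 0.5), ("HGS", 0.6)]}

# Friedman test: ranks the algorithms jointly across the repeated measurements.
stat, p = stats.friedmanchisquare(*runs.values())
print(f"Friedman chi-square = {stat:.3f}, p = {p:.3g}")

# Pairwise Wilcoxon signed-rank tests of RLHGS against each variant, as in Table 7.
for other in ("RHGS", "LHGS", "HGS"):
    w, p = stats.wilcoxon(runs["RLHGS"], runs[other])
    print(f"RLHGS vs {other}: p = {p:.3g}")
```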
Table 8. The parameter setting of the algorithms involved in the comparison.
AlgorithmParameterValue
RLHGS l 0.03
L H 100
α 50
β −0.5
HGS l 0.03
L H 100
CS N _ i t e r 0
p a 0.25
MFO b 1
t [−2, 1]
HHO b e t a 1.5
SSA c 2 [0, 1]
c 3 [0, 1]
JADE c 0.1
p 0.05
C R m 0.5
F m 0.5
ALCPSO w 0.4
c 1 2
c 2 2
l i f e s p a n 60
T 2
p r o 1 / D
SCGWO a [2, 0]
q 2
RDWOA a 1 [2, 0]
a 2 [−2, −1]
b 1
s 0
Table 9. Experiment results of RLHGS and other eight superior algorithms on 23 classic benchmark functions.
F1 F2 F3
AvgStdRankAvgStdRankAvgStdRank
RLHGS0.0000 × 1000.0000 × 10010.0000 × 1000.0000 × 10014.8140 × 10−12.6367 × 1004
CS1.0136 × 10−67.1003 × 10−781.0000 × 10100.0000 × 10092.7749 × 1033.6176 × 1027
MFO1.9684 × 1041.1885 × 10491.4177 × 1023.8959 × 10181.2467 × 1057.6037 × 1049
HHO0.0000 × 1000.0000 × 10010.0000 × 1000.0000 × 10010.0000 × 1000.0000 × 1001
SSA7.0715 × 10−86.9886 × 10−976.0408 × 1002.7973 × 10071.8429 × 1035.7033 × 1026
JADE3.7971 × 10−542.0335 × 10−5357.8126 × 10−214.2783 × 10−2055.1594 × 10−11.7506 × 1007
ALCPSO5.5275 × 10−83.0275 × 10−762.8912 × 10−13.9540 × 10−164.3973 × 1044.9839 × 1048
SCGWO0.0000 × 1000.0000 × 10011.6093 × 10−3060.0000 × 10040.0000 × 1000.0000 × 1001
RDWOA0.0000 × 1000.0000 × 10010.0000 × 1000.0000 × 10010.0000 × 1000.0000 × 1001
F4 F5 F6
AvgStdRankAvgStdRankAvgStdRank
RLHGS1.3092 × 1003.6868 × 10044.9487 × 1013.4040 × 10134.6288 × 10−111.3740 × 10−102
CS2.1938 × 1012.6548 × 10052.8015 × 1028.1279 × 10189.4781 × 10−77.5517 × 10−74
MFO9.3119 × 1012.7711 × 10093.2477 × 1074.4852 × 10792.2485 × 1041.4372 × 1049
HHO0.0000 × 1000.0000 × 10012.9834 × 10−44.0764 × 10−422.4328 × 10−63.4989 × 10−65
SSA2.3199 × 1013.5560 × 10061.5513 × 1021.0766 × 10266.9376 × 10−86.2853 × 10−93
JADE3.0183 × 1012.5466 × 10076.2184 × 1014.8003 × 10144.1087 × 10−322.7731 × 10−321
ALCPSO4.6584 × 1015.0435 × 10081.7682 × 1025.6304 × 10172.5553 × 10−51.3996 × 10−48
SCGWO0.0000 × 1000.0000 × 10016.7320 × 10−52.1095 × 10−414.6217 × 10−66.9114 × 10−66
RDWOA0.0000 × 1000.0000 × 10019.0644 × 1014.7346 × 10−152.1185 × 10−51.1602 × 10−47
F7 F8 F9
AvgStdRankAvgStdRankAvgStdRank
RLHGS8.4687 × 10−21.1647 × 10−15−3.7931 × 1047.4989 × 10350.0000 × 1000.0000 × 1001
CS4.2060 × 10−19.4582 × 10−27−2.6888 × 1048.2358 × 10272.0495 × 1023.0866 × 1016
MFO1.6836 × 1021.2589 × 1029−2.4513 × 1042.3311 × 10396.5040 × 1029.4359 × 1019
HHO1.3552 × 10−51.2912 × 10−51−4.1898 × 1041.5437 × 10−220.0000 × 1000.0000 × 1001
SSA1.4539 × 10−13.3744 × 10−26−2.4613 × 1041.5061 × 10382.1037 × 1024.3408 × 1017
JADE7.7322 × 10−22.2043 × 10−24−4.0706 × 1043.5996 × 10241.3266 × 10−13.4400 × 10−15
ALCPSO9.5572 × 10−14.4139 × 10−18−3.2131 × 1041.4817 × 10363.5699 × 1025.1277 × 1018
SCGWO1.6531 × 10−51.7151 × 10−52−4.1898 × 1047.3117 × 10−610.0000 × 1000.0000 × 1001
RDWOA1.6720 × 10−51.9081 × 10−53−4.1681 × 1041.1514 × 10330.0000 × 1000.0000 × 1001
F10 F11 F12
AvgStdRankAvgStdRankAvgStdRank
RLHGS1.5987 × 10−153.8918 × 10−1541.1433 × 1024.1889 × 10288.6139 × 10−142.7494 × 10−131
CS3.6675 × 1006.8688 × 10−181.4035 × 10−33.7653 × 10−342.6560 × 1008.6266 × 10−17
MFO1.9796 × 1013.0301 × 10−191.4780 × 1021.5074 × 10291.1987 × 1081.6068 × 1089
HHO8.8818 × 10−160.0000 × 10010.0000 × 1000.0000 × 10011.4939 × 10−82.4035 × 10−84
SSA3.5158 × 1008.7325 × 10−172.9551 × 10−35.9380 × 10−351.1052 × 1012.8571 × 1008
JADE3.0915 × 1007.0554 × 10−166.6576 × 10−22.2311 × 10−164.9293 × 10−18.7992 × 10−15
ALCPSO3.0853 × 1001.0339 × 10051.4067 × 10−11.9612 × 10−171.1087 × 1001.4219 × 1006
SCGWO8.8818 × 10−160.0000 × 10010.0000 × 1000.0000 × 10013.5795 × 10−96.2373 × 10−93
RDWOA8.8818 × 10−160.0000 × 10010.0000 × 1000.0000 × 10013.7469 × 10−101.1328 × 10−102
F13 F14 F15
AvgStdRankAvgStdRankAvgStdRank
RLHGS3.7509 × 10−111.8833 × 10−1019.9800 × 10−10.0000 × 10013.3801 × 10−41.6718 × 10−45
CS8.1878 × 1011.7632 × 10179.9800 × 10−10.0000 × 10013.0749 × 10−41.5595 × 10−191
MFO1.9189 × 1083.1803 × 10891.7906 × 1001.2289 × 10091.1968 × 10−31.4423 × 10−39
HHO1.3671 × 10−61.7078 × 10−639.9800 × 10−12.5569 × 10−1283.1053 × 10−42.9635 × 10−64
SSA1.2276 × 1022.8909 × 10189.9800 × 10−11.8895 × 10−1617.0929 × 10−44.3532 × 10−47
JADE1.1451 × 1001.8571 × 10059.9800 × 10−10.0000 × 10011.0676 × 10−33.6550 × 10−38
ALCPSO3.6431 × 1006.1741 × 10069.9800 × 10−11.0100 × 10−1613.6853 × 10−42.3232 × 10−46
SCGWO3.0464 × 10−76.2419 × 10−729.9800 × 10−11.3287 × 10−1363.1019 × 10−42.6873 × 10−63
RDWOA8.2257 × 10−31.1120 × 10−249.9800 × 10−16.2046 × 10−1273.0749 × 10−44.6780 × 10−162
F16 F17 F18
AvgStdRankAvgStdRankAvgStdRank
RLHGS−1.0316 × 1006.7752 × 10−1613.9789 × 10−10.0000 × 10013.0000 × 1002.0099 × 10−152
CS−1.0316 × 1006.7752 × 10−1613.9789 × 10−10.0000 × 10013.0000 × 1006.9974 × 10−161
MFO−1.0316 × 1006.7752 × 10−1613.9789 × 10−10.0000 × 10013.0000 × 1001.6941 × 10−154
HHO−1.0316 × 1002.8301 × 10−1573.9789 × 10−12.8584 × 10−1173.0000 × 1002.1313 × 10−128
SSA−1.0316 × 1005.4546 × 10−1663.9789 × 10−16.1435 × 10−1663.0000 × 1001.3515 × 10−147
JADE−1.0316 × 1006.7752 × 10−1613.9789 × 10−10.0000 × 10013.0000 × 1001.9039 × 10−152
ALCPSO−1.0316 × 1005.9752 × 10−1613.9789 × 10−10.0000 × 10013.0000 × 1001.8011 × 10−156
SCGWO−1.0316 × 1001.9287 × 10−693.9796 × 10−18.3481 × 10−593.0000 × 1003.7102 × 10−69
RDWOA−1.0316 × 1006.2844 × 10−1083.9789 × 10−12.8285 × 10−683.0000 × 1002.0813 × 10−155
F19 F20 F21
AvgStdRankAvgStdRankAvgStdRank
RLHGS−3.8628 × 1002.7101 × 10−151−3.3220 × 1001.3424 × 10−151−1.0153 × 1017.2269 × 10−151
CS−3.8628 × 1002.7101 × 10−151−3.3220 × 1001.2506 × 10−151−1.0153 × 1017.2269 × 10−151
MFO−3.8628 × 1002.7101 × 10−151−3.2319 × 1007.0470 × 10−27−7.7258 × 1003.1212 × 1008
HHO−3.8628 × 1001.5442 × 10−57−3.2245 × 1007.8815 × 10−28−5.2251 × 1009.3075 × 10−19
SSA−3.8628 × 1001.5668 × 10−156−3.2190 × 1004.1107 × 10−29−9.3111 × 1001.9151 × 1005
JADE−3.8628 × 1002.7101 × 10−151−3.2903 × 1005.3475 × 10−23−8.8937 × 1002.3590 × 1006
ALCPSO−3.8628 × 1002.5243 × 10−151−3.2744 × 1005.9241 × 10−26−8.7207 × 1002.4518 × 1007
SCGWO−3.8606 × 1003.6749 × 10−39−3.2902 × 1001.1989 × 10−14−1.0153 × 1011.8808 × 10−74
RDWOA−3.8625 × 1001.4390 × 10−38−3.2840 × 1006.0187 × 10−28−1.0153 × 1014.5944 × 10−151
F22 F23
AvgStdRankAvgStdRank+/−/=
RLHGS−1.0403 × 1011.7140 × 10−151−1.0536 × 1011.6820 × 10−151~
CS−1.0403 × 1011.8067 × 10−151−1.0536 × 1011.7455 × 10−15112/3/8
MFO−8.5564 × 1003.1683 × 1008−7.4807 × 1003.6232 × 100820/0/3
HHO−5.4420 × 1001.3483 × 1009−5.4890 × 1001.3720 × 100912/6/5
SSA−1.0227 × 1019.6292 × 10−15−1.0358 × 1019.7874 × 10−1519/2/2
JADE−9.7180 × 1002.1204 × 1006−9.7872 × 1002.2938 × 100710/2/11
ALCPSO−9.6985 × 1001.8230 × 1007−1.0326 × 1019.9088 × 10−1615/1/7
SCGWO−1.0403 × 1019.5393 × 10−84−1.0536 × 1011.6328 × 10−7412/6/5
RDWOA−1.0403 × 1017.6950 × 10−63−1.0536 × 1011.3526 × 10−537/5/11
Table 10. Friedman test results on 23 classic benchmark functions.
 | RLHGS | CS | MFO | HHO | SSA | JADE | ALCPSO | SCGWO | RDWOA
Average rank | 2.39 | 4.22 | 7.48 | 4.35 | 5.91 | 4.26 | 5.70 | 3.74 | 3.52
Overall rank | 1 | 4 | 9 | 6 | 8 | 5 | 7 | 3 | 2
Table 11. Wilcoxon signed-rank results on 23 benchmark functions.
CSMFOHHOSSAJADEALCPSOSCGWORDWOA
F11.7344 × 10−61.7333 × 10−61.0000 × 1001.7333 × 10−61.7344 × 10−61.7344 × 10−61.0000 × 1001.0000 × 100
F24.3205 × 10−81.7344 × 10−61.0000 × 1001.7344 × 10−61.7344 × 10−61.7344 × 10−61.2500 × 10−11.0000 × 100
F31.7344 × 10−61.7344 × 10−63.9063 × 10−31.7344 × 10−63.1123 × 10−51.7344 × 10−63.9063 × 10−33.9063 × 10−3
F41.7344 × 10−61.7344 × 10−63.7896 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−63.7896 × 10−63.7896 × 10−6
F51.7344 × 10−61.7344 × 10−61.9209 × 10−61.9209 × 10−63.0861 × 10−11.7344 × 10−61.7344 × 10−69.3157 × 10−6
F61.7344 × 10−61.7333 × 10−61.7344 × 10−61.7344 × 10−61.7333 × 10−63.7243 × 10−51.7344 × 10−61.7344 × 10−6
F72.3534 × 10−61.7344 × 10−61.7344 × 10−61.2453 × 10−24.5281 × 10−11.7344 × 10−61.7344 × 10−61.9209 × 10−6
F83.1123 × 10−51.9729 × 10−54.4919 × 10−23.1123 × 10−51.0201 × 10−11.0570 × 10−45.7064 × 10−46.0350 × 10−3
F91.7344 × 10−61.7344 × 10−61.0000 × 1001.7344 × 10−67.8125 × 10−31.7344 × 10−61.0000 × 1001.0000 × 100
F101.7344 × 10−61.7344 × 10−61.0000 × 1001.7344 × 10−61.7344 × 10−61.7344 × 10−61.0000 × 1001.0000 × 100
F114.9498 × 10−23.5876 × 10−46.2500 × 10−24.0702 × 10−23.6811 × 10−21.4793 × 10−26.2500 × 10−26.2500 × 10−2
F121.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.1499 × 10−41.7344 × 10−61.7344 × 10−61.7344 × 10−6
F131.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−62.0589 × 10−11.9209 × 10−61.9209 × 10−61.7344 × 10−6
F141.0000 × 1004.8828 × 10−41.7344 × 10−61.0000 × 1001.0000 × 1001.0000 × 1001.7213 × 10−61.5625 × 10−2
F158.4303 × 10−61.7257 × 10−63.1123 × 10−51.0246 × 10−52.2513 × 10−21.7372 × 10−13.1123 × 10−53.1123 × 10−5
F161.0000 × 1001.0000 × 1004.8828 × 10−42.2090 × 10−51.0000 × 1001.0000 × 1001.7344 × 10−61.0000 × 100
F171.0000 × 1001.0000 × 1004.0100 × 10−51.2500 × 10−11.0000 × 1001.0000 × 1001.7344 × 10−64.8828 × 10−4
F185.7330 × 10−71.6244 × 10−41.6837 × 10−61.5871 × 10−63.4375 × 10−16.1035 × 10−51.7344 × 10−61.4307 × 10−1
F191.0000 × 1001.0000 × 1001.7344 × 10−61.2207 × 10−41.0000 × 1001.0000 × 1001.7344 × 10−61.2500 × 10−1
F201.0000 × 1004.3895 × 10−51.7344 × 10−61.7322 × 10−67.8125 × 10−34.8828 × 10−41.7344 × 10−69.7656 × 10−4
F211.0000 × 1004.8828 × 10−41.7344 × 10−61.7333 × 10−61.5625 × 10−27.8125 × 10−31.7344 × 10−65.0000 × 10−1
F221.0000 × 1007.8125 × 10−31.7344 × 10−61.7344 × 10−62.5000 × 10−16.2500 × 10−21.7344 × 10−61.2500 × 10−1
F231.0000 × 1002.4414 × 10−41.7344 × 10−61.7322 × 10−62.5000 × 10−12.5000 × 10−11.7344 × 10−61.2500 × 10−1
Table 12. Execution time of RLHGS and other eight superior algorithms on 23 benchmark functions.
RLHGSCS MFOHHOSSAJADEALCPSOSCGWORDWOA
F18.4691 × 1022.0354 × 1001.4498 × 1001.4526 × 1001.2668 × 1001.6710 × 1018.3820 × 10−11.9997 × 1001.0154 × 100
F23.0396 × 1022.0626 × 1001.5539 × 1001.2214 × 1001.1745 × 1001.6865 × 1017.9440 × 10−11.1329 × 1018.6280 × 10−1
F32.0991 × 1034.2184 × 1003.6022 × 1003.8757 × 1003.6416 × 1001.6792 × 1012.9570 × 1004.5626 × 1003.4932 × 100
F47.0378 × 1001.9587 × 1001.4081 × 1001.1868 × 1001.1103 × 1001.6182 × 1017.3280 × 10−11.9553 × 1009.5240 × 10−1
F54.4681 × 1002.2221 × 1001.6676 × 1001.5862 × 1001.4441 × 1001.2135 × 1011.0096 × 1002.1779 × 1001.0915 × 100
F69.6883 × 1001.9612 × 1001.3927 × 1001.2491 × 1001.1359 × 1001.3613 × 1017.6700 × 10−11.8489 × 1007.8840 × 10−1
F78.3248 × 1003.1648 × 1002.6076 × 1002.4620 × 1002.4082 × 1001.3690 × 1011.9893 × 1003.1472 × 1002.0583 × 100
F81.5842 × 1012.4678 × 1001.6926 × 1001.6836 × 1001.4959 × 1001.2785 × 1011.0662 × 1002.2461 × 1001.1192 × 100
F99.9402 × 1002.1947 × 1001.6374 × 1001.4256 × 1001.3468 × 1001.3356 × 1019.2490 × 10−11.9687 × 1008.8810 × 10−1
F101.9390 × 1032.1445 × 1001.5712 × 1001.4695 × 1001.3655 × 1001.3987 × 1011.0166 × 1001.9999 × 1008.9220 × 10−1
F112.0717 × 1012.2916 × 1001.9069 × 1001.6695 × 1001.6033 × 1001.4202 × 1011.2030 × 1002.2250 × 1001.1195 × 100
F129.4747 × 1015.3738 × 1004.9777 × 1005.0041 × 1004.8790 × 1001.3993 × 1014.3605 × 1005.5637 × 1004.4780 × 100
F139.8841 × 1015.3597 × 1004.8539 × 1004.9606 × 1004.9572 × 1001.4223 × 1014.2401 × 1005.5618 × 1004.4716 × 100
F141.5509 × 1017.4750 × 1007.0009 × 1007.9889 × 1007.4952 × 1001.5530 × 1017.0055 × 1007.4837 × 1007.2453 × 100
F151.0824 × 1001.3539 × 1007.2870 × 10−19.9650 × 10−17.6850 × 10−11.4078 × 1015.9570 × 10−17.4800 × 10−15.1670 × 10−1
F169.1510 × 10−11.2614 × 1006.7380 × 10−19.9940 × 10−17.5030 × 10−11.4111 × 1015.6660 × 10−16.8380 × 10−14.9330 × 10−1
F177.1150 × 10−11.2121 × 1006.0240 × 10−19.1370 × 10−11.0718 × 1001.4722 × 1014.7700 × 10−16.1810 × 10−14.1870 × 10−1
F186.8490 × 10−11.1509 × 1005.5690 × 10−18.8080 × 10−16.1910 × 10−11.4674 × 1014.6230 × 10−15.6180 × 10−13.8660 × 10−1
F191.4690 × 1001.3812 × 1007.9660 × 10−11.1445 × 1008.4850 × 10−11.4617 × 1016.9460 × 10−18.3090 × 10−16.0620 × 10−1
F202.2828 × 1001.4905 × 1008.9780 × 10−11.1964 × 1008.6360 × 10−11.4959 × 1017.2180 × 10−19.7150 × 10−16.4140 × 10−1
F212.2604 × 1001.6074 × 1001.0257 × 1001.4272 × 1001.1057 × 1001.4296 × 1019.0980 × 10−11.0835 × 1008.2640 × 10−1
F223.3694 × 1001.7528 × 1001.1738 × 1001.5229 × 1001.2353 × 1001.4679 × 1011.0392 × 1001.2358 × 1009.6330 × 10−1
F233.9384 × 1001.9808 × 1001.3808 × 1001.6951 × 1001.4924 × 1001.4746 × 1011.2949 × 1001.4418 × 1001.1654 × 100
Table 13. Experiment results of RLHGS and other eight superior algorithms on CEC2020.
F1 F2 F3
AvgStdRankAvgStdRankAvgStdRank
RLHGS2.8842 × 1043.1972 × 10428.4538 × 1036.5144 × 10211.0688 × 1033.3354 × 1011
CS1.0000 × 10100.0000 × 10072.0238 × 1044.8058 × 10272.5278 × 1031.8830 × 1025
MFO1.5398 × 10115.0361 × 101091.7701 × 1042.1140 × 10355.2734 × 1031.2197 × 1039
HHO4.2402 × 1085.1004 × 10751.9801 × 1041.6613 × 10364.2440 × 1032.2379 × 1028
SSA3.3528 × 1043.0298 × 10431.6415 × 1041.7661 × 10341.8667 × 1031.7702 × 1023
JADE3.5454 × 1036.0412 × 10311.2316 × 1046.0211 × 10221.3513 × 1031.0439 × 1022
ALCPSO7.0851 × 1052.5719 × 10641.5905 × 1041.8514 × 10332.0011 × 1032.5319 × 1024
SCGWO5.2861 × 10109.9659 × 10982.4465 × 1042.7615 × 10392.9184 × 1032.4402 × 1026
RDWOA2.5547 × 1092.4393 × 10962.0553 × 1042.6338 × 10383.4061 × 1032.5938 × 1027
F4 F5 F6
AvgStdRankAvgStdRankAvgStdRank
RLHGS3.1514 × 1033.3690 × 10211.9036 × 1067.1206 × 10523.6308 × 1033.5935 × 1021
CS4.9160 × 1031.8578 × 10245.6825 × 1061.1166 × 10647.0747 × 1032.9911 × 1025
MFO5.9846 × 1036.5043 × 10274.8669 × 1074.5316 × 10789.3841 × 1031.9076 × 1038
HHO6.1115 × 1036.6172 × 10281.8198 × 1075.6157 × 10668.7679 × 1039.6870 × 1027
SSA4.9150 × 1034.9846 × 10231.9370 × 1066.8365 × 10536.6963 × 1038.9749 × 1024
JADE3.9526 × 1033.3609 × 10221.0424 × 1057.2950 × 10414.8052 × 1033.6056 × 1022
ALCPSO5.1273 × 1035.7647 × 10252.0526 × 1071.3196 × 10775.9489 × 1036.3801 × 1023
SCGWO5.6881 × 1038.3109 × 10268.5044 × 1073.3393 × 10791.0681 × 1049.6922 × 1029
RDWOA6.2967 × 1038.4208 × 10291.7386 × 1071.0132 × 10758.6443 × 1031.0865 × 1036
F7 F8 F9
AvgStdRankAvgStdRankAvgStdRank
RLHGS1.3633 × 1067.1889 × 10522.3500 × 1032.0959 × 10−1242.6883 × 1034.8347 × 1025
CS2.8415 × 1065.9766 × 10542.3500 × 1037.6080 × 10−973.5166 × 1031.1805 × 1037
MFO2.8668 × 1073.3622 × 10792.3539 × 1032.6837 × 10096.2667 × 1031.7959 × 1029
HHO8.5364 × 1062.7549 × 10672.3500 × 1031.8501 × 10−1212.6000 × 1030.0000 × 1001
SSA1.7033 × 1067.0658 × 10532.3500 × 1035.2391 × 10−1062.6006 × 1031.8723 × 1004
JADE3.4821 × 1041.4681 × 10412.3500 × 1032.5461 × 10−1152.7754 × 1036.7185 × 1026
ALCPSO6.4144 × 1065.2462 × 10652.3500 × 1039.2761 × 10−0785.9265 × 1037.2027 × 1028
SCGWO2.5290 × 1071.0245 × 10782.3500 × 1031.8501 × 10−1212.6000 × 1030.0000 × 1001
RDWOA6.7465 × 1063.3014 × 10662.3500 × 1031.8501 × 10−1212.6000 × 1030.0000 × 1001
F10
AvgStdRank+/−/=
RLHGS3.0507 × 1031.6231 × 1024~
CS3.3320 × 1034.8669 × 101610/0/0
MFO1.1700 × 1045.5656 × 103910/0/0
HHO2.7000 × 1030.0000 × 10017/1/2
SSA3.3086 × 1037.1568 × 10156/1/3
JADE3.3464 × 1037.4316 × 10177/3/0
ALCPSO3.4605 × 1031.3361 × 10289/0/1
SCGWO2.7000 × 1030.0000 × 10017/1/2
RDWOA2.7000 × 1030.0000 × 10017/1/2
Table 14. Friedman test results on CEC2020.
 | RLHGS | CS | MFO | HHO | SSA | JADE | ALCPSO | SCGWO | RDWOA
Average rank | 2.3 | 5.6 | 8.2 | 5.0 | 3.8 | 2.9 | 5.5 | 5.8 | 5.0
Overall rank | 1 | 7 | 9 | 4 | 3 | 2 | 6 | 8 | 4
Table 15. Wilcoxon signed-rank results on CEC2020.
CSMFOHHOSSAJADEALCPSOSCGWORDWOA
F11.7344 × 10−61.7344 × 10−61.7344 × 10−65.3044 × 10−12.5967 × 10−52.6230 × 10−11.7344 × 10−61.7344 × 10−6
F21.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−6
F31.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−6
F41.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−63.8822 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−6
F51.7344 × 10−61.7344 × 10−61.7344 × 10−67.9710 × 10−11.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−6
F61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−6
F73.1817 × 10−61.7344 × 10−61.7344 × 10−61.2044 × 10−11.7344 × 10−62.3534 × 10−61.7344 × 10−61.7344 × 10−6
F81.7344 × 10−61.7344 × 10−66.2500 × 10−21.7344 × 10−62.6114 × 10−71.1123 × 10−66.2500 × 10−26.2500 × 10−2
F91.9729 × 10−51.7344 × 10−61.0000 × 1003.1123 × 10−52.6770 × 10−51.7344 × 10−61.0000 × 1001.0000 × 100
F101.7344 × 10−61.7344 × 10−61.2290 × 10−51.7344 × 10−61.7344 × 10−61.7344 × 10−61.2290 × 10−51.2290 × 10−5
Table 16. Comparison results of nine algorithms on tension/compression spring design problem.
MAsOptimal Values of ParametersOptimum Cost
d D N
RLHGS0.0517499790.35818502611.203458920.0126653
IHS [100]0.0511540.34987112.0764320.0126706
MFO [93]0.0530640.3907189.5424370.012699
PSO [25]0.0157280.35764411.2445430.0126747
WOA [101]0.0504510.32767513.2193410.012694
GSA [33]0.0502760.34521513.525410.0126763
INFO [36]0.0515550.35349911.480340.012666
SMA [28]0.058470.5234204866.951662210.0160198
SMFO [102]0.06573 0.328695152.6295612020.0138029
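For context, the tension/compression spring benchmark is commonly stated as minimizing the spring weight (N + 2)Dd^2 subject to four constraints on shear stress, surge frequency, and deflection. The Python sketch below follows this widely used formulation (an assumption here, since the full statement is not restated in this section) and checks the RLHGS solution from Table 16.

```python
import numpy as np

def spring_cost(x):
    """Weight (N + 2) * D * d^2 for x = [d, D, N] (wire diameter, mean coil diameter, active coils)."""
    d, D, N = x
    return (N + 2.0) * D * d**2

def spring_constraints(x):
    """g1..g4 of the commonly used formulation; feasible when every value is <= 0."""
    d, D, N = x
    g1 = 1.0 - (D**3 * N) / (71785.0 * d**4)
    g2 = (4.0 * D**2 - d * D) / (12566.0 * (D * d**3 - d**4)) + 1.0 / (5108.0 * d**2) - 1.0
    g3 = 1.0 - 140.45 * d / (D**2 * N)
    g4 = (d + D) / 1.5 - 1.0
    return np.array([g1, g2, g3, g4])

x_rlhgs = np.array([0.051749979, 0.358185026, 11.20345892])  # Table 16 solution
print(spring_cost(x_rlhgs))          # ~0.0126653
print(spring_constraints(x_rlhgs))   # all <= 0; g1 and g2 are nearly active
```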
Table 17. Comparison results of ten algorithms on welded beam design problem.
MAsOptimal Values of ParametersOptimum Cost
h l t b
RLHGS0.20153.33459.036623910.205729641.699986
HGS [60]0.265.10258.039610.262.302076
GSA [33]0.1821293.856979100.2023761.879952
CDE [41]0.2031373.5429989.0334980.2061791.733462
HS [103]0.24426.22318.29150.24432.3807
GWO [26]0.2056763.4783779.036810.2057781.72624000
BA [104]20.1000003.17430321.8181
IHS [100]0.2057303.4704909.0366200.205731.7248
RO [105]0.2036873.5284679.0042330.2072411.735344
SIMPLEX [106]0.2792 5.62567.75120.27962.5307
Table 18. Comparison results of ten algorithms on pressure vessel design problem.
MAsOptimal Values of ParametersOptimum Cost
T s T h R L
RLHGS0.81250.437542.0984456176.63659586059.714335
ES [107]0.81250.437542.098087176.6405186059.7456
PSO [25]0.81250.437542.091266176.74656061.0777
GA [22]0.93750.548.329112.6796410.3811
G-QPSO [108]0.81250.437542.0984176.63726059.7208
SMA [28]0.7550.312541.17193.0016772.7333
Branch-and-bound [109]1.1250.62547.7117.718129.1036
IHS [100]1.1250.62558.2901543.692687197.73
GA3 [81]0.812500 0.43750042.0974176.65406059.9463
CPSO [110]0.812500 0.43750042.091266176.7465006061.0777
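Similarly, the pressure vessel problem is usually posed with four design variables (shell thickness Ts, head thickness Th, inner radius R, and length L), a material-and-fabrication cost objective, and four constraints. The Python sketch below uses this widely adopted formulation (again an assumption, since the statement is not repeated here) and checks the RLHGS solution from Table 18.

```python
import numpy as np

def vessel_cost(x):
    """Total fabrication cost for x = [Ts, Th, R, L]."""
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)

def vessel_constraints(x):
    """g1..g4 of the widely used formulation; feasible when every value is <= 0."""
    Ts, Th, R, L = x
    g1 = -Ts + 0.0193 * R                                            # shell thickness vs. radius
    g2 = -Th + 0.00954 * R                                           # head thickness vs. radius
    g3 = -np.pi * R**2 * L - (4.0 / 3.0) * np.pi * R**3 + 1296000.0  # minimum enclosed volume
    g4 = L - 240.0                                                   # length limit
    return np.array([g1, g2, g3, g4])

x_rlhgs = np.array([0.8125, 0.4375, 42.0984456, 176.6365958])  # Table 18 solution
print(vessel_cost(x_rlhgs))          # ~6059.71
print(vessel_constraints(x_rlhgs))   # all <= 0 (up to rounding); g1 and g3 are nearly active
```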
Table 19. Comparison results of seven algorithms on three-bar truss design problem.
MAs | x_1 | x_2 | Optimum Cost
RLHGS | 0.788673486 | 0.408252954 | 263.89584338
CS [92] | 0.78867 | 0.40902 | 263.9716
MFO [93] | 0.788244771 | 0.409466958 | 263.8959797
BWOA [98] | 0.788666327 | 0.408273202 | 263.8958435
GOA [111] | 0.788897556 | 0.40761957 | 263.8958815
MBA [112] | 0.7885650 | 0.4085597 | 263.8958522
MVO [34] | 0.78860276 | 0.408453070 | 263.8958499
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
