Article

Automated Generation of Hybrid Metaheuristics Using Learning-to-Rank

School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China
*
Author to whom correspondence should be addressed.
Algorithms 2025, 18(6), 316; https://doi.org/10.3390/a18060316
Submission received: 15 April 2025 / Revised: 23 May 2025 / Accepted: 23 May 2025 / Published: 27 May 2025
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

Abstract

Metaheuristic algorithms, owing to their strong global exploration capabilities and broad applicability, have become critical tools for complex optimization tasks. However, these algorithms commonly depend on expert knowledge to configure parameters and design strategies, and they often lack mechanisms for automatically adjusting their behavior to changing problem features or dynamic search phases, which limits their adaptability, search efficiency, and solution quality. To address these limitations, this paper proposes an automated hybrid metaheuristic algorithm generation method based on Learning to Rank (LTR-MHA). The LTR-MHA achieves adaptive optimization of algorithm combination strategies by dynamically fusing the search behaviors of the Whale Optimization Algorithm (WOA), Harris Hawks Optimization (HHO), and the Genetic Algorithm (GA). At its core, the LTR-MHA uses Learning-to-Rank techniques to model the mapping between problem features and algorithmic behaviors, to assess the potential of candidate solutions in real time, and to guide the algorithm toward better decisions during the search, thereby achieving a well-adjusted balance between the exploration and exploitation stages. The effectiveness and efficiency of the LTR-MHA method are evaluated using the CEC2017 benchmark functions. The experiments confirm the effectiveness of the proposed method: it delivers superior results compared with individual metaheuristic algorithms and random combinatorial strategies, with notable improvements in average fitness, solution precision, and overall stability. Our approach offers a promising direction for efficient search capabilities and adaptive mechanisms in automated algorithm design.

1. Introduction

Metaheuristic algorithms have attracted considerable attention due to their ability to obtain high-quality solutions at reasonable computational cost [1]. By simulating behaviors from nature or evolutionary mechanisms, these algorithms can effectively explore complex solution spaces and, to some extent, avoid the local optima that traditional methods are prone to [2]. However, despite demonstrating strong robustness and adaptability, metaheuristic algorithms still face critical limitations, such as their reliance on human expertise and insufficient flexibility when applied to diverse problem scenarios, which remain a major bottleneck to their broader application.
Metaheuristic algorithms are usually designed to handle two primary tasks: exploration and exploitation. Maintaining a balanced transition between these tasks is essential and plays a key role in determining an algorithm’s success. As stated by the “No Free Lunch” theorem [3], no single metaheuristic can guarantee optimal results for every optimization task. Moreover, studies have shown [4] that a single algorithm struggles to handle high-dimensional and non-convex optimization problems effectively, making algorithm hybridization one of the key strategies for performance enhancement. This opens up broad avenues for research and application in hybrid algorithm strategies and machine learning-driven improvements to metaheuristic algorithms.
Recent research has increasingly emphasized fused and hybrid algorithms to boost efficiency and adaptability. Currently, common approaches involve integrating and optimizing classical algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE) [4]. The combination of genetic algorithms with simulated annealing has been applied to optimization tasks within complex search spaces, successfully achieving heuristic approximations of near-optimal solutions [5]. The Whale Optimization Algorithm (WOA) faces challenges due to the imbalance between exploration and exploitation. To mitigate this, the enhanced ESWOA [6], which embeds the hawk strategy, has been designed. Furthermore, research on the emerging “Dwarf Mongoose Optimization Algorithm” (DMOA) has demonstrated that algorithm hybridization strategies can effectively improve search efficiency and global convergence [7]. Bian et al. [8] recently proposed an Improved Snow Geese Algorithm (ISGA) that integrates adaptive migration and clustering strategies to enhance exploration in engineering applications and clustering optimization.
With the advancement of research on algorithm automation, hyper-heuristic algorithms and automated algorithm portfolio frameworks have emerged as new trends driving the development of intelligent algorithms [9]. By employing higher-level heuristic search and learning mechanisms, the dynamic selection and combination of multiple underlying heuristic algorithms have become effective approaches to enhancing algorithm generalization capabilities. Existing studies have proposed a unified classification system based on search components, promoting the deep integration of machine learning and metaheuristic algorithms [10]. Reinforcement learning methods have been applied to algorithmic behavior selection, utilizing historical search information to guide search strategies and thus improve solution efficiency [11]. To better balance algorithm convergence and diversity, approaches based on interactive learning frameworks and Learning-to-Rank models that significantly enhance solution performance by periodically collecting feedback and dynamically optimizing the search direction have been introduced [12]. Similarly, researchers have merged reinforcement learning with proximal policy optimization approaches to provide a general algorithm search framework, automating the design of metaheuristic algorithms across varied circumstances [1]. In addition, Seyyedabbasi [13] introduced an RL-based metaheuristic (RLWOA) that leverages a deep Q-learning agent to adjust WOA parameters online, resulting in marked improvements in convergence rate and solution quality on a suite of global optimization problems.
Drawing on these insights, developing metaheuristic algorithms can be regarded as tackling a combinatorial optimization task, with the solution space including factors such as algorithm parameters, algorithm composition, and algorithmic components. Related research can be divided into three categories: automated algorithm configuration, selection, and composition [1]. Algorithm configuration focuses on optimizing parameters for specific algorithms, while algorithm selection involves choosing algorithms based on problem characteristics. Although both configuration and selection offer distinct advantages, they require a certain degree of prior knowledge or accurate identification of problem features. In contrast, algorithm composition provides greater flexibility by generating new algorithms through the combination of basic components, thereby eliminating dependence on original algorithm structures. Furthermore, Learning-to-Rank (LTR) techniques, owing to their lightweight nature and strong interpretability, have demonstrated significant potential in automated algorithm composition and generation. LTR can directly model the mapping between candidate update behaviors and solution quality through supervised or weakly supervised learning, enabling dynamic algorithm selection and the optimization of combinatorial strategies.
This research attempts to address the limitations of existing algorithm designs by investigating an automated algorithm generation strategy that incorporates Learning-to-Rank techniques. The goal is to achieve dynamic decision-making and adaptive adjustment within the algorithm generation process, thereby enhancing the generalization capability and solution efficiency of algorithms for various complex problems. By dynamically evaluating and ranking candidate solutions, LTR not only optimizes the algorithm’s search strategies but also effectively guides the algorithm in avoiding local optima and accelerating global convergence. Consequently, it offers a solution that is both theoretically innovative and practically valuable. This work focuses on validating the feasibility of the multi-algorithm generation strategy in enhancing single metaheuristic methods, rather than aiming to identify an optimal combination. To this end, three exemplary classical metaheuristic algorithms (the GA, WOA, and HHO) are selected for the study. The key contributions of this work are the following:
  • An automated hybrid metaheuristic algorithm design method based on Learning to Rank, called the LTR-MHA, is proposed. The method uses LTR to integrate different metaheuristic algorithms, such as the WOA, HHO, and the GA, allowing for dynamic algorithm selection and collaborative optimization. The LTR-MHA can flexibly integrate the update strategies of different algorithms based on problem features and dynamic feedback during the search process, considerably improving search capability and solution quality. This approach offers a novel and effective pathway for automated algorithm design and optimization.
  • The LTR-MHA constructs a training dataset by extracting key feature information from historical search processes and trains a predictive model to capture the mapping between candidate update behaviors and solution quality. During the search process, the feature vectors of search agents are fed into the trained model in real time to dynamically evaluate and rank candidate update behaviors. This enables the algorithm to prioritize more promising update actions, thereby enhancing the specificity and scientific basis of its search decisions.
  • The LTR-MHA method is empirically evaluated on four representative function categories from the CEC2017 benchmark. The results indicate that the approach achieves significant improvements in efficiency, solution precision, and overall stability when compared with individual metaheuristic methods and random selection strategies. In particular, when addressing multimodal functions and high-dimensional tasks, the LTR-MHA achieves faster convergence and greater solution quality, demonstrating the extensive applicability and prospects of LTR-based automated algorithm combinatorial approaches in metaheuristic algorithm development.
The following sections of this work are structured as follows. Section 2 briefly introduces related work, with a particular focus on advancements in metaheuristic algorithms and automated algorithm generation techniques. Section 3 presents the background knowledge of the technologies involved. Section 4 details the proposed LTR-MHA approach. Section 5 introduces the empirical analysis, including experiment settings, experimental results, and discussion. Finally, Section 6 summarizes the work and suggests potential avenues for future research.

2. Related Work

2.1. Metaheuristic Algorithms and Optimization

Metaheuristics have been widely applied to address complex problems across diverse domains, such as engineering design, combinatorial tasks, and multi-objective scenarios [4]. Although these algorithms possess strong search capabilities, as the scale and complexity of problems continue to grow, traditional single algorithms increasingly exhibit issues such as vulnerability to local optima, sluggish convergence, and high sensitivity to parameter tuning. Consequently, improving metaheuristic algorithms, particularly through algorithm integration and hybrid strategies to enhance their performance, has become a prominent research focus.
To overcome the limitations of traditional approaches, researchers have proposed a variety of improvement strategies. Kareem et al. conducted a systematic review of current mainstream metaheuristic algorithms and their variants, highlighting that single algorithms often struggle with high-dimensional and non-convex optimization problems and that algorithm hybridization has become one of the effective means to enhance algorithm performance [4]. Talbi [14] indicated that hybrid metaheuristics, as an important future direction for optimization algorithms, are capable of enhancing solution diversity while simultaneously lowering the likelihood of local extrema. Wang et al. [5] combined a Genetic Algorithm with a simulated annealing algorithm to propose a search-based fault localization method. By transforming the fault localization problem into a search problem for optimal modeling combinations, they achieved heuristic approximations of near-optimal solutions. Jin et al. [15] proposed a Quality-of-Service (QoS) characterization model for cloud manufacturing services, as well as a QoS-aware service creation approach using Genetic Algorithms. They presented a new approach [16] that combines the hawk strategy with an enhanced Whale Optimization Algorithm to improve the QoS composition solutions. To address the limitations of the WOA, Gavvala et al. [6] introduced the ESWOA to reconcile exploration and exploitation. Empirical findings demonstrated that the ESWOA achieves superior performance in locating globally optimal solutions. Furthermore, the EGolden-SWOA [17] enhances population diversity via an elite opposition-based learning technique and uses the golden ratio to optimize the search process, successfully balancing global exploration and local exploitation capabilities. Abraham and Ngadi conducted a systematic review of the emerging Dwarf Mongoose Optimization Algorithm (DMOA), discussing its latest variants, hybrid strategies, and application trends over the past three years, and emphasized that algorithm hybridization can effectively improve search efficiency and global convergence [7].
Overall, through continuous algorithmic optimization and diversified integration strategies, metaheuristic algorithms have achieved remarkable progress in addressing complicated optimization issues. Whether by dynamically combining multiple algorithms or harmonizing exploration and exploitation, these approaches have effectively enhanced the performance of individual metaheuristic algorithms and broadened their application boundaries. However, despite these advancements, most current algorithm composition methods still rely on empirical design and static configurations, lacking flexible adaptive regulation and intelligent combination capabilities. As problem scales continue to expand, strategies that rely solely on manual design are increasingly insufficient to fully unleash the potential of these algorithms.
Against this backdrop, hyper-heuristic approaches and automated algorithm composition frameworks have emerged as important research directions for achieving intelligent scheduling and automatic evolution of algorithms. At the same time, automated algorithm composition methods that integrate machine learning with evolutionary mechanisms can dynamically select and combine multiple algorithms based on problem characteristics, enabling more targeted and efficient optimization outcomes.

2.2. Hyper-Heuristics and Automated Algorithm Generation Based on Machine Learning

Hyper-heuristics and automated algorithm generation and composition are becoming important directions in the intelligent development of algorithm design. The core idea lies in employing higher-level heuristic search and learning mechanisms to dynamically select and combine multiple underlying heuristic algorithms, thereby aiming to solve complex problems [9].
In recent years, applying machine learning to the design of efficient and robust metaheuristic algorithms has become a research hotspot, with many machine learning-enhanced metaheuristic algorithms demonstrating superior performance. Talbi [10] investigated the integration of machine learning and metaheuristic algorithms and established a unified classification system based on search components, including optimization objectives and high- and low-level components of metaheuristic algorithms, with the aim of encouraging researchers in the optimization field to explore synergistic mechanisms between machine learning and metaheuristics.
To address the challenge of ranking massive volumes of documents in information retrieval systems, a Learning-to-Rank method combining an improved Genetic Algorithm with the Nelder–Mead approach was proposed [18]. By optimizing the ranking weight coefficients, this method significantly improved ranking metrics such as NDCG. Becerra-Rozas et al. [11] enhanced the behavior selection mechanism of the WOA using SARSA, thereby improving the algorithm’s solution performance. Li et al. [12] developed a preference-driven interactive learning framework aimed at enhancing both the convergence rate and diversity of traditional multi-objective methods. By periodically collecting user feedback and dynamically optimizing the search direction using an LTR model, this approach effectively guides the heuristic algorithm toward rapid solution discovery. Oliva et al. [19] proposed a Bayesian model-guided hyper-heuristic framework, which introduces a heuristic selection strategy assisted by probabilistic graphical models for single-objective continuous optimization problems, thereby achieving automation in algorithm selection and scheduling. Yi et al. [1] developed a general search framework that employs DQN and proximal policy optimization methods to realize automated design of metaheuristic algorithms. Similarly, Kallestad et al. [20] utilized the general search framework as a foundation for analyzing algorithm components in the automated design of hyper-heuristics. Within this framework, novel metaheuristic algorithms were automatically designed for CVRPTW problems.
Hyper-heuristic algorithms and automated algorithm generation frameworks are gradually moving away from heavy reliance on expert knowledge, leveraging machine learning techniques like reinforcement learning and Learning to Rank to drive the optimization process toward greater automation and intelligence. This development trend lays a solid theoretical and technological foundation for building efficient and adaptive optimization algorithm systems. In addition, reinforcement learning, with its advantage of dynamic interaction, has demonstrated great potential in exploring complex state spaces. However, its strong dependence on reward mechanisms and the high cost of training limit its applicability in certain ranking tasks. In contrast, Learning to Rank, with its ranking-oriented nature, directly models the relationship between features and ranking objectives through supervised or weakly supervised learning. This not only enables algorithm generation and composition strategies but also offers strong interpretability and lightweight model characteristics. This indicates that Learning to Rank holds significant potential in algorithm combinatorial optimization, warranting further exploration of its effectiveness in enhancing algorithm adaptability and performance.
Moreover, recent studies on metaheuristics highlight two major types of learning mechanisms: (1) strategy-level learning mechanisms and (2) parameter control. Hsieh et al. [21,22] proposed online strategy ranking methods for ridesharing problems, where search strategies are dynamically evaluated and reordered during evolution. In contrast, Xu and Chen [23] and Norat et al. [24] employed learning techniques to adapt algorithmic parameters, such as inertia weights or crossover/mutation rates. These studies represent two distinct directions in learning-enhanced metaheuristics. Unlike these online learning mechanisms or parameter-tuning approaches, our method introduces an offline Learning-to-Rank framework for strategy selection. While online updates can be responsive, they often rely on short-term performance and may suffer from instability in complex or high-dimensional problems. Our offline LTR model is trained on broader search history data, enabling more stable, generalizable, and feature-oriented decision-making. To the best of our knowledge, this study is the first to employ LTR for the development of hybrid metaheuristic frameworks, offering a novel and theoretically grounded pathway in the landscape of automated algorithm design.
While our work shares the core goal of algorithm automation with hyperheuristics, it differs in modeling granularity and learning objectives. Traditional hyperheuristics typically focus on selecting or generating operators based on heuristic performance, often relying on manually designed rules. In contrast, our approach does not target operator selection but instead learns the mapping between candidate solution features and algorithm behaviors. This enables strategy-level integration of multiple metaheuristics.

3. Preliminaries

3.1. Problem Formulation

In this study, we focus on solving a real-parameter continuous optimization problem, formally defined as:
$$\min_{\mathbf{x} \in \mathbb{R}^D} f(\mathbf{x}), \quad \text{subject to} \quad x_i \in [lb_i, ub_i], \; i = 1, 2, \ldots, D, \tag{1}$$
where $\mathbf{x} = [x_1, x_2, \ldots, x_D] \in \mathbb{R}^D$ is the decision vector, and $f(\mathbf{x})$ is the objective function to be minimized. In this work, $f(\mathbf{x})$ is selected from the CEC2017 benchmark suite [25], which contains a diverse set of optimization problems. Unless otherwise specified, the default search space for all variables is $[-100, 100]$. The objective is to find the global minimum of $f(\mathbf{x})$ within this domain.
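To make the setup concrete, the following minimal Python sketch instantiates the bound-constrained formulation above. The sphere function is used as a stand-in objective, which is an assumption for illustration only; the actual CEC2017 functions are shifted and rotated.

```python
import numpy as np

D = 10                      # problem dimension
lb, ub = -100.0, 100.0      # default CEC2017 search range

def f(x):
    """Placeholder objective: the sphere function stands in for a CEC2017
    benchmark here (the real functions are shifted and rotated)."""
    return float(np.sum(x ** 2))

def clip_to_bounds(x):
    """Keep every variable x_i inside [lb_i, ub_i]."""
    return np.clip(x, lb, ub)

x0 = np.random.uniform(lb, ub, D)   # a random candidate solution
print(f(clip_to_bounds(x0)))
```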

3.2. Metaheuristic Algorithms

3.2.1. Whale Optimization Algorithm

The Whale Optimization Algorithm [26] is modeled after the hunting strategies of humpback whales. During the exploration, the agents perform a global search to locate promising solutions, while the exploitation stage focuses on localized refinement. The agents exhibit three core hunting strategies: encircling prey, bubble-net attacking, and random search. The mathematical models for each behavior are described in detail below.
(1) Encircling Prey
The WOA assumes that the prey nearest to the optimum serves as the best candidate solution. After identifying the leading agent, the rest adjust their positions to approach this optimal candidate. The equation below is used to mathematically explain the encircling mechanism in the WOA.
$$\mathbf{X}(t+1) = \mathbf{X}^*(t) - \mathbf{A} \cdot \mathbf{Dis} \tag{2}$$
where $\mathbf{Dis} = \left| \mathbf{C} \cdot \mathbf{X}^*(t) - \mathbf{X}(t) \right|$ represents the distance between the current individual and the best individual identified so far. Here, $\mathbf{X}(t)$ and $\mathbf{X}^*(t)$ denote the position vectors of the current individual and the optimal solution at iteration $t$, respectively. Note that in these equations, “$|\cdot|$” denotes the absolute value applied element-wise to vectors, and “$\cdot$” represents element-by-element multiplication [26]. $\mathbf{A}$ and $\mathbf{C}$ are coefficient vectors, computed using the following equations:
$$\mathbf{A} = 2\mathbf{a} \cdot \mathbf{r} - \mathbf{a}, \quad \mathbf{C} = 2 \cdot \mathbf{r} \tag{3}$$
where $\mathbf{a}$ is linearly decreased from 2 to 0 over the course of iterations, and $\mathbf{r}$ is a random vector in the range $[0, 1]$.
The switching mechanism between exploration and exploitation in the WOA is controlled by $\mathbf{A}$, enabling the algorithm to balance global and local search effectively. When the absolute value of every component of $\mathbf{A}$ is less than 1 ($|\mathbf{A}| < 1$), i.e., $|A_i| < 1$ for all $i = 1, \ldots, D$, the agent moves toward the best-known solution (exploitation). Conversely, if any component satisfies $|A_i| \geq 1$, the agent performs exploration by moving toward a randomly selected individual.
(2) Bubble-Net Attacking
In the exploitation phase (when $|\mathbf{A}| < 1$), bubble-net attacking utilizes two strategies: shrinking encircling and spiral updating. The spiral updating mechanism is expressed as follows:
$$\mathbf{X}(t+1) = \mathbf{Dis}' \cdot e^{bl} \cdot \cos(2\pi l) + \mathbf{X}^*(t) \tag{4}$$
where $\mathbf{Dis}' = \left| \mathbf{X}^*(t) - \mathbf{X}(t) \right|$ represents the distance between the current whale and the best whale found so far, $b$ is a constant, and $l$ is a random number in the range $[-1, 1]$. During the prey hunting phase, the shrinking encircling mechanism and the spiral position update are performed simultaneously:
$$\mathbf{X}(t+1) = \begin{cases} \mathbf{X}^*(t) - \mathbf{A} \cdot \mathbf{Dis}, & \text{if } p < 0.5 \\ \mathbf{Dis}' \cdot e^{bl} \cdot \cos(2\pi l) + \mathbf{X}^*(t), & \text{if } p \geq 0.5 \end{cases} \tag{5}$$
where $p$ is a random number in the range $[0, 1]$. The two mechanisms are selected with equal probability, and the value of $p$ determines which mechanism is applied in each iteration.
(3) Random Search
In the WOA, a certain degree of exploration is necessary to enhance global search capability. When $|\mathbf{A}| \geq 1$, the WOA randomly selects a whale from the population to update the position, as described by the following equation:
$$\mathbf{X}(t+1) = \mathbf{X}_{\text{rand}} - \mathbf{A} \cdot \mathbf{Dis} \tag{6}$$
where $\mathbf{Dis} = \left| \mathbf{C} \cdot \mathbf{X}_{\text{rand}} - \mathbf{X}(t) \right|$ is the distance vector, and $\mathbf{X}_{\text{rand}}$ represents the position vector of a whale randomly selected from the current population. As in the encircling mechanism, both the absolute value and multiplication operators are applied element-wise to vectors.
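The three update behaviors above can be summarized in a single position-update routine. The following Python sketch is a simplified illustration of Equations (2)–(6); the per-agent scalar draws for $p$ and $l$ and the spiral constant $b = 1$ are common but assumed choices, not values fixed by the text.

```python
import numpy as np

def woa_update(X, X_best, X_rand, t, T, b=1.0, rng=np.random):
    """One WOA position update combining Equations (2)-(6); a simplified sketch."""
    a = 2.0 * (1.0 - t / T)                              # a decreases linearly from 2 to 0
    A = 2.0 * a * rng.uniform(0.0, 1.0, X.shape) - a     # Equation (3)
    C = 2.0 * rng.uniform(0.0, 1.0, X.shape)
    if rng.uniform() < 0.5:
        if np.all(np.abs(A) < 1.0):                      # exploitation: shrinking encircling
            return X_best - A * np.abs(C * X_best - X)   # Equation (2)
        return X_rand - A * np.abs(C * X_rand - X)       # exploration: Equation (6)
    l = rng.uniform(-1.0, 1.0)                           # spiral updating, Equation (4)
    return np.abs(X_best - X) * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best
```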

3.2.2. Harris Hawks Optimization Algorithm

The Harris Hawks Optimization algorithm [27] imitates the collaboration and chase methods of Harris hawks during the hunting phase, and the algorithm strikes a balance between exploration and exploitation. HHO switches between exploration and exploitation via the escape energy $E$: when $|E| \geq 1$, the algorithm is in the exploration stage, and when $|E| < 1$, it shifts to the exploitation stage. The energy is computed as follows:
$$E = 2 E_0 \left( 1 - \frac{t}{T} \right) \tag{7}$$
where $E_0$ is the initial energy of the prey, $t$ is the current iteration number, and $T$ is the maximum number of iterations.
(1) Exploration Phase
In exploration, search agents conduct random searches for the location of their prey. The mathematical model is expressed as:
$$\mathbf{X}(t+1) = \begin{cases} \mathbf{X}_{\text{rand}}(t) - r_1 \left| \mathbf{X}_{\text{rand}}(t) - 2 r_2 \mathbf{X}(t) \right|, & q \geq 0.5 \\ \left( \mathbf{X}_{\text{rabbit}}(t) - \mathbf{X}_m(t) \right) - r_3 \left( lb + r_4 (ub - lb) \right), & q < 0.5 \end{cases} \tag{8}$$
$$\mathbf{X}_m(t) = \frac{1}{N} \sum_{i=1}^{N} \mathbf{X}_i(t) \tag{9}$$
where $\mathbf{X}(t)$ represents the current position of an individual, $\mathbf{X}_{\text{rand}}(t)$ is the position of a randomly selected individual, and $\mathbf{X}_{\text{rabbit}}(t)$ is the position of the prey. $r_1, r_2, r_3, r_4, q \in (0, 1)$ are random numbers, while $ub$ and $lb$ denote the upper and lower bounds of the search space. $\mathbf{X}_m(t)$ represents the mean position of all individuals in the current population. Note that in Equation (8), “$|\cdot|$” denotes the absolute value applied element-wise to vectors, and “$\cdot$” represents element-wise multiplication between vectors.
Assuming an equal chance $q$ for each perching strategy, individuals either perch based on the positions of other members when $q < 0.5$ or choose random tall trees when $q \geq 0.5$, as described in Equation (8).
(2) Exploitation Phase
In this phase, the algorithm updates positions using four different strategies based on the escape energy $E$ and the escape probability $r \in (0, 1)$.
Soft besiege: When $0.5 \leq |E| < 1$ and $r \geq 0.5$, the position is updated as follows:
$$\mathbf{X}(t+1) = \Delta\mathbf{X}(t) - E \left| J \cdot \mathbf{X}_{\text{rabbit}}(t) - \mathbf{X}(t) \right| \tag{10}$$
where $\Delta\mathbf{X}(t) = \mathbf{X}_{\text{rabbit}}(t) - \mathbf{X}(t)$ represents the distance between the individual and the prey, and $J = 2(1 - r_5)$ denotes the random jump strength of the prey, with $r_5$ being a random number in the range $(0, 1)$. As in the exploration phase, “$|\cdot|$” and “$\cdot$” in this formula are applied element-wise to vectors.
Hard besiege: When $|E| < 0.5$ and $r \geq 0.5$, the position is updated as follows (parameters are the same as above):
$$\mathbf{X}(t+1) = \mathbf{X}_{\text{rabbit}}(t) - E \left| \Delta\mathbf{X}(t) \right| \tag{11}$$
The absolute value and multiplication operators here also follow the element-wise rule defined above.
Soft besiege with progressive rapid dives: When $0.5 \leq |E| < 1$ and $r < 0.5$, the position is updated as follows:
$$\mathbf{X}(t+1) = \begin{cases} \mathbf{Y} = \mathbf{X}_{\text{rabbit}}(t) - E \left| J \cdot \mathbf{X}_{\text{rabbit}}(t) - \mathbf{X}(t) \right|, & \text{if } F(\mathbf{Y}) < F(\mathbf{X}(t)) \\ \mathbf{Z} = \mathbf{Y} + \mathbf{S} \times LF(x), & \text{if } F(\mathbf{Z}) < F(\mathbf{X}(t)) \end{cases} \tag{12}$$
where $\mathbf{S}$ is a random vector of the same dimension as the problem space, with each component typically drawn uniformly from $[0, 1]$, used to control the perturbation of the Lévy step size. The Lévy flight function $LF(x)$ generates a Lévy-distributed random vector of the same dimension as the problem, with components defined as:
$$LF(x) = 0.01 \times \frac{u \times \sigma}{|v|^{1/\beta}}, \quad \sigma = \left( \frac{\Gamma(1+\beta) \times \sin\!\left(\frac{\pi\beta}{2}\right)}{\Gamma\!\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\frac{\beta-1}{2}}} \right)^{1/\beta} \tag{13}$$
where $u$ and $v$ follow a normal distribution, and $\beta = 1.5$. If neither condition $F(\mathbf{Y}) < F(\mathbf{X}(t))$ nor $F(\mathbf{Z}) < F(\mathbf{X}(t))$ is satisfied, the current position remains unchanged in that iteration.
Hard besiege with progressive rapid dives: When $|E| < 0.5$ and $r < 0.5$, the position is updated as follows (with the same parameter definitions as above):
$$\mathbf{X}(t+1) = \begin{cases} \mathbf{Y} = \mathbf{X}_{\text{rabbit}}(t) - E \left| J \cdot \mathbf{X}_{\text{rabbit}}(t) - \mathbf{X}_m(t) \right|, & \text{if } F(\mathbf{Y}) < F(\mathbf{X}(t)) \\ \mathbf{Z} = \mathbf{Y} + \mathbf{S} \times LF(x), & \text{if } F(\mathbf{Z}) < F(\mathbf{X}(t)) \end{cases} \tag{14}$$
Similarly, if both $F(\mathbf{Y}) < F(\mathbf{X}(t))$ and $F(\mathbf{Z}) < F(\mathbf{X}(t))$ fail to hold, no update is performed, and the position remains at $\mathbf{X}(t)$ for that iteration.
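The phase-switching logic of HHO reduces to thresholding the escape energy $E$ and the escape probability $r$. The Python sketch below illustrates Equations (7) and (13) and the resulting strategy selection; the function names and the dispatch-by-string design are illustrative assumptions, not part of the original algorithm.

```python
import numpy as np
from math import gamma, sin, pi

def escape_energy(E0, t, T):
    """Escape energy E of Equation (7); |E| >= 1 -> exploration, else exploitation."""
    return 2.0 * E0 * (1.0 - t / T)

def select_strategy(E, r):
    """Map (E, r) to one of the five HHO behaviors described above."""
    if abs(E) >= 1.0:
        return "exploration"                       # Equation (8)
    if abs(E) >= 0.5:
        return "soft besiege" if r >= 0.5 else "soft besiege with rapid dives"
    return "hard besiege" if r >= 0.5 else "hard besiege with rapid dives"

def levy_flight(D, beta=1.5, rng=np.random):
    """Levy-distributed random vector LF(x) of Equation (13)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, D)
    v = rng.normal(0.0, 1.0, D)
    return 0.01 * u / np.abs(v) ** (1 / beta)
```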

3.2.3. Genetic Algorithm

The Genetic Algorithm [28] is a stochastic optimization method based on evolutionary mechanisms. The GA iteratively evolves a population to gradually approach the optimal solution of a problem. Its core procedure includes the following steps:
(1) Population Initialization: Randomly construct an initial population, with each $\mathbf{X}_i$ representing a possible solution to the problem. Individuals are often encoded using binary strings, real-valued vectors, or other formats.
(2) Fitness Evaluation: Compute the fitness of every individual, where the fitness function $F(\mathbf{X}_i)$ assesses the quality of each corresponding solution.
(3) Selection: Select individuals for the next generation according to their fitness. Roulette wheel and tournament selection are two common selection procedures. Individuals with high fitness are more likely to be selected.
(4) Crossover Operation: With a crossover probability ($croRate$), combine the genes of two parent individuals to generate new offspring, simulating genetic recombination in biological reproduction. In this study, we adopt single-point crossover for real-valued encoding. One parent is the current individual $\mathbf{X}_p$, and another, $\mathbf{X}_q$, is randomly selected from the population. For these two parent individuals $\mathbf{X}_p = (x_{p1}, x_{p2}, \ldots, x_{pD})$ and $\mathbf{X}_q = (x_{q1}, x_{q2}, \ldots, x_{qD})$, we randomly select a crossover point $c \in [1, D-1]$ and generate offspring as $\mathbf{X}_{\text{offspring}} = (x_{p1}, \ldots, x_{pc}, x_{q(c+1)}, \ldots, x_{qD})$, where the first part is taken from parent $\mathbf{X}_p$ and the second part from parent $\mathbf{X}_q$. The resulting individual replaces the current one (a code sketch of this operator, together with mutation, follows this list).
(5) Mutation Operation: With a mutation probability ($mutRate$), randomly modify the genes of individuals to enhance population diversity and reduce the risk of premature convergence. In our implementation, mutation randomly selects several dimensions of an individual and replaces their values with new random values drawn uniformly from the allowed range of each dimension. This introduces diversity by directly altering certain genes within their feasible bounds.
(6) Population Update: Replace the current population with newly generated individuals to form the next generation. Individuals that are not selected for crossover or mutation are retained unchanged (i.e., the Replication Operation). This replication mechanism helps preserve high-quality individuals and accelerates convergence; however, excessive replication may reduce population diversity and increase the risk of premature convergence to local optima.
(7) Termination Condition: The method ends when the maximum number of iterations is reached or the fitness fulfills a predefined condition, at which point the best solution is output.
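As referenced in steps (4) and (5), the following Python sketch shows one possible implementation of the single-point crossover and uniform mutation operators for real-valued encoding; the function names and the number of mutated genes are assumptions made for illustration.

```python
import numpy as np

def single_point_crossover(X_p, X_q, rng=np.random):
    """Single-point crossover for real-valued encoding, as in step (4)."""
    D = X_p.size
    c = rng.randint(1, D)                 # crossover point c in [1, D-1]
    return np.concatenate([X_p[:c], X_q[c:]])

def uniform_mutation(X, lb, ub, n_genes=1, rng=np.random):
    """Replace a few randomly chosen genes with uniform draws, as in step (5)."""
    X = X.copy()
    dims = rng.choice(X.size, size=n_genes, replace=False)
    X[dims] = rng.uniform(lb, ub, size=n_genes)
    return X

# Example usage on two 10-dimensional parents in [-100, 100].
p, q = np.random.uniform(-100, 100, 10), np.random.uniform(-100, 100, 10)
child = uniform_mutation(single_point_crossover(p, q), -100, 100)
```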

3.3. Learning to Rank

Learning to Rank (LTR) is a machine learning technique developed to address ranking problems, with the core objective of learning a ranking model that ensures the ranking results align as closely as possible with the true relevance labels [29]. Specifically, given a set of queries $Q$ and their corresponding document set $D$, the goal of LTR is to learn a ranking function $f(q, d)$ such that, for each query $q \in Q$, the ranking of documents $d \in D$ reflects their relevance. The relevance labels can be binary (relevant/irrelevant) or multi-level (e.g., grades from 0 to 4).
LTR algorithms can be broadly categorized into three types: pointwise, pairwise, and listwise methods. Pointwise methods treat each sample as an independent input and compute the loss function based on individual samples; typical algorithms include McRank [30] and Prank [31]. Pairwise methods divide the data into sample pairs, such as $(x_i, x_j)$, and perform ranking by comparing the relative order of these pairs, for example, determining whether $x_i$ should be ranked higher than $x_j$; classic pairwise methods include RankNet [29] and RankBoost [32]. Listwise methods, on the other hand, take all samples corresponding to a query as a whole input and directly optimize the ranking error of the entire list; a representative method is ListNet [33]. These approaches generally fall under fully supervised learning and require a large amount of labeled data.
In this study, our method employs the Random Forest algorithm. The implementation process includes the following steps:
(1) Dataset construction: Historical search data are collected from a selected subset of benchmark problem instances. Feature vectors are extracted for each candidate update behavior, and fitness-based weight values are assigned as labels.
(2) Model training: A Random Forest model is trained to learn the mapping from feature vectors to predicted performance scores.
(3) Prediction: During the hybrid metaheuristic process, multiple candidate behaviors are generated for each search agent. Their feature vectors are computed, and the trained model predicts a performance score for each. The behavior with the highest predicted score is selected for execution.
This framework enables real-time ranking and selection of update strategies based on dynamic features, enhancing the adaptability of the algorithm. For full implementation details, please refer to Section 4.
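A condensed sketch of these three steps is given below, using scikit-learn's RandomForestRegressor as stated in Section 4; the feature dimensions, label values, and random placeholder data are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Step (1): dataset construction. Real feature vectors follow Equation (19);
# random placeholders are used here instead.
rng = np.random.default_rng(0)
F_train = rng.random((1000, 10))          # 1000 samples, 10 features
labels = rng.integers(0, 4, 1000)         # rank labels in {0, 1, 2, 3}

# Step (2): model training.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(F_train, labels)

# Step (3): prediction. Score the 12 candidate behaviors of one search agent
# and execute the one with the highest predicted score.
F_candidates = rng.random((12, 10))       # one feature vector per action
scores = model.predict(F_candidates)
best_action = int(np.argmax(scores)) + 1  # actions are coded 1..12
```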

4. Methodology

4.1. Method Overview

The core concept of the LTR-MHA method is to leverage the LTR framework to extract knowledge from historical execution data of multiple metaheuristic algorithms. The trained prediction model dynamically predicts the scores of candidate update behaviors during the search process, thereby intelligently selecting the update behavior most likely to improve the current solution. The overall methodological flow is illustrated in Figure 1.
First, information is extracted from the update behaviors during the solving process of metaheuristic algorithms to construct a feature set and the corresponding label set. Next, a prediction model is trained to learn the relationship between different update behaviors and solution quality, which is used to predict the effectiveness of each candidate update behavior. Once the prediction model is trained, the relevant feature vectors of all candidate behaviors for the search agent to be updated are input into the model. By evaluating the feature vector of the current search agent, the model predicts a corresponding score, which reflects the potential solution quality after applying the given update behavior. Finally, the behavior with the highest predicted score is chosen to update agents.
In this way, the search process of the algorithm can be guided more intelligently, avoiding blind exploration and thereby improving convergence speed and solution quality. The innovation of the LTR-MHA lies in applying LTR to learn the ranking of metaheuristic algorithm behaviors, enabling the automated generation of new algorithms. This approach breaks through the fixed search framework of traditional metaheuristic algorithms and offers a novel perspective for tackling optimization tasks. Notably, the primary objective of this study is not to identify the optimal algorithm combination but to verify the feasibility of multi-algorithm integration strategies in boosting the performance of individual algorithms. Based on this objective, three representative classical metaheuristic algorithms (GA, WOA, and HHO) are selected as the subjects of investigation in this work.

4.2. Feature Selection

The training feature set can be represented as a matrix $FS \in \mathbb{R}^{N \times M}$, where each row corresponds to a feature vector, and $F_{ij}$ denotes the value of the j-th component of the i-th feature vector. There are three sorts of features: search-dependent, solution-dependent, and instance-dependent [1]. Search-dependent features relate to the search procedure, such as the overall improvement over the initial solution. Solution-dependent features are linked to the solution encoding scheme; for example, in the TSP, the whole path encoding can be defined explicitly as a feature. Instance-dependent features record the specific properties of a problem instance, such as the number of vehicles and vehicle capacity. It is worth mentioning that when search-dependent or instance-dependent features are used, the knowledge gained can be transferred to other instances of comparable problems or even used to solve different problems. However, solution-dependent features are frequently tied to specific problems, making it challenging to create generalizable approaches.
Therefore, in this method, search-dependent features are used to construct the feature set. Furthermore, the two main coefficient vectors of the WOA, $\mathbf{A}$ and $\mathbf{C}$; the escape energy $E$ in HHO; and the crossover and mutation rates in the GA are also included as components of the feature set. These parameters help to more comprehensively reflect the performance characteristics of the algorithm under different update behaviors.
Moreover, quantitatively assessing the levels of exploration and exploitation in algorithms remains an open scientific question [34]. To address this issue, a feasible approach is to monitor the diversity of the population during the search process [35,36]. As an important metric, population diversity can provide a certain level of assessment of the exploration and exploitation in metaheuristics. Population diversity describes the extent of dispersion or clustering of search agents during the iterative search process. It can be calculated with the following formula:
$$Div_j = \frac{1}{n} \sum_{i=1}^{n} \left| x_{ij} - \text{median}(x^j) \right|, \quad Div = \frac{1}{m} \sum_{j=1}^{m} Div_j \tag{15}$$
where $\text{median}(x^j)$ represents the median value of the j-th variable across all individuals in the population, and $x_{ij}$ denotes the value of the j-th variable for the i-th search agent. The total number of agents in the current iteration is given by $n$, while $m$ represents the number of variables in the potential solution to the optimization problem. The term $Div_j$ quantifies the average distance between each individual’s value in the j-th dimension and the median of that dimension, thus representing the diversity of the population along that specific variable. $Div$, in turn, is the average of all $Div_j$ values across all dimensions, reflecting the overall diversity of the population. It is important to note that population diversity is recalculated at each iteration. Table 1 presents the list of feature variables used in this study, along with their definitions.
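A direct implementation of Equation (15) is straightforward; the following sketch assumes the population is stored as an (n, m) NumPy array.

```python
import numpy as np

def population_diversity(P):
    """Population diversity Div of Equation (15).

    P is an (n, m) array: n search agents, m decision variables. Div_j is the
    mean |x_ij - median(x^j)| per dimension; Div averages Div_j over dimensions.
    """
    med = np.median(P, axis=0)                  # median of each variable x^j
    div_j = np.mean(np.abs(P - med), axis=0)    # per-dimension diversity Div_j
    return float(np.mean(div_j))                # overall diversity Div
```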

4.3. Feature Extraction and Model Training

The core of the LTR-MHA lies in extracting key features from each iteration update of the metaheuristic algorithms. Specifically, the target problem is solved individually using the selected metaheuristic algorithms (WOA, HHO, and GA). These algorithms optimize solutions through a series of update behaviors, such as shrinking encircling, spiral updating, and random search in the WOA. To systematically represent these diverse update behaviors for the purpose of Learning-to-Rank modeling, each behavior is numerically encoded as a distinct action. Table 2 presents the update behaviors of the three metaheuristic algorithms, along with their corresponding numerical encodings for the convenience of subsequent experiments.
Based on the information in Table 2, the action space of the LTR-MHA can be defined as follows:
$$Actions = \{\text{RS}, \text{SE}, \text{SU}, \text{RM}, \text{RT}, \text{HB}, \text{SB}, \text{SRD}, \text{HRD}, \text{CRO}, \text{MUT}, \text{REP}\} \tag{16}$$
In continuous function optimization problems, the objective of metaheuristic algorithms is to search for the best solution within the continuous space that minimizes the objective function value. Let the algorithm set be $Alg = \{\text{WOA}, \text{HHO}, \text{GA}\}$, where the population of algorithm $alg_i \in Alg$ is denoted as $P = (\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_i, \ldots, \mathbf{X}_m)$, consisting of $m$ search agents (individuals). Each search agent $\mathbf{X}_i$ represents a candidate solution. A solution $\mathbf{X}_i$ in D-dimensional space is encoded as:
$$\mathbf{X}_i = (x_{i1}, x_{i2}, \ldots, x_{ij}, \ldots, x_{iD}) \tag{17}$$
where $D$ represents the dimension, and $x_{ij}$ denotes the j-th variable in the solution vector. Each variable $x_{ij}$ satisfies predefined boundary constraints. By continuously updating the positions of search agents within the population, the algorithm drives the objective function value $f(\mathbf{X}_i)$ progressively closer to the global optimum.
At each iteration of the algorithm, the search agent $\mathbf{X}_i$ selects one action from multiple candidate update behaviors to update its position and constructs the corresponding feature vector $F_i$. According to Table 1 and Table 2, suppose the update behavior selected by $\mathbf{X}_i$ in the current iteration is $act_i \in Actions$, where the value range of $act_i$ is $[1, 12]$. To ensure that all feature values fall within the range $[0, 1]$, normalization is applied to each feature individually. The normalization formula for $act_i$ is as follows:
$$act_i^{\text{norm}} = \frac{act_i - lb}{ub - lb} \tag{18}$$
where $ub$ and $lb$ represent the upper and lower bounds, which are 12 and 1, respectively. Similarly, $incre$ and $diff$ are normalized in the same way. The values of $Div$, $croRate$, and $mutRate$ are originally within the range $[0, 1]$. Meanwhile, $\mathbf{A}$ and $\mathbf{C}$ are D-dimensional vectors with each component of $\mathbf{A}$ in $[-2, 2]$ and each component of $\mathbf{C}$ in $[0, 2]$, while $E$ is a scalar in $[-2, 2]$. After normalization, these three features are also scaled to $[0, 1]$. Thus, the feature vector can be expressed as follows:
$$F_i = \left( stage_i,\ act_i^{\text{norm}},\ Div_i,\ incre_i^{\text{norm}},\ diff_i^{\text{norm}},\ \frac{1}{D}\sum_{j=1}^{D}\frac{|A_{ij}|}{2},\ \frac{1}{D}\sum_{j=1}^{D}\frac{|C_{ij}|}{2},\ \frac{|E_i|}{2},\ croRate_i,\ mutRate_i \right) \tag{19}$$
Here, $A_{ij}$ and $C_{ij}$ denote the j-th component of the coefficient vectors $\mathbf{A}$ and $\mathbf{C}$ corresponding to the i-th individual in the population. To convert the vector-valued parameters into scalar features, we compute the mean of the normalized absolute components of $\mathbf{A}$ and $\mathbf{C}$, i.e., $\frac{1}{D}\sum_{j=1}^{D}\frac{|A_{ij}|}{2}$ and $\frac{1}{D}\sum_{j=1}^{D}\frac{|C_{ij}|}{2}$.
It is worth noting that the information from unselected candidate actions is also crucial for model training. Therefore, corresponding feature samples need to be constructed for these actions as well. To prevent any impact on the optimization process, a replica of the current search agent is created to simulate the updates of the other candidate actions and record their feature data. In addition, when constructing the feature set, the core parameter fields of the other algorithms are retained in the feature vector but set to zero. For example, when executing the WOA, the parameter $E$ of HHO, as well as the crossover and mutation rates of the GA, are all kept in the feature vector but uniformly set to zero. This design ensures the completeness of the feature vector.
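The following sketch assembles the feature vector of Equation (19) for a WOA update, including the zero-padding of the HHO and GA parameter fields described above; the concrete feature values and the helper's signature are illustrative assumptions.

```python
import numpy as np

def build_feature_vector(stage, act, Div, incre, diff, A, C, E,
                         cro_rate, mut_rate):
    """Assemble the normalized feature vector of Equation (19); a sketch.

    `stage`, `incre`, and `diff` are assumed to be pre-normalized to [0, 1].
    """
    act_norm = (act - 1) / (12 - 1)       # Equation (18) with lb = 1, ub = 12
    a_mean = np.mean(np.abs(A) / 2.0)     # (1/D) * sum_j |A_ij| / 2
    c_mean = np.mean(np.abs(C) / 2.0)     # (1/D) * sum_j |C_ij| / 2
    return np.array([stage, act_norm, Div, incre, diff,
                     a_mean, c_mean, abs(E) / 2.0, cro_rate, mut_rate])

# Example: a WOA shrinking-encircling update (act = 2), so HHO's E and the
# GA's rates are zero-padded, mirroring the description above.
F_i = build_feature_vector(stage=0.2, act=2, Div=0.35, incre=0.6, diff=0.1,
                           A=np.random.uniform(-2, 2, 10),
                           C=np.random.uniform(0, 2, 10),
                           E=0.0, cro_rate=0.0, mut_rate=0.0)
```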
Figure 2 illustrates the flowchart of training set construction and model training. To reduce the complexity of the figure, only a subset of feature vectors ( F 1 F 4 ) is retained as examples.
For the selected action, the $Label$ is assigned as “1”, while for the unselected candidate actions, the $Label$ is set to “0”. To assess the performance of the three algorithms, we rank their best fitness values $fitness^*$ in descending order (where lower values indicate better solutions) and assign the corresponding ranking numbers as the new $Label$ values. For example, as shown in Figure 2, if the GA achieves the best fitness value ($fitness^* = 0.3$) among the three algorithms, then the $Label$ for the selected actions within the GA is adjusted from “1” to “3”. Similarly, the $Label$s for the WOA and HHO are adjusted to “2” and “1”, respectively. This strategy more accurately reflects each algorithm’s relative performance during the optimization process and provides more discriminative annotations for training the predictive model. After completing the optimization phase of each algorithm, the feature sets corresponding to the optimal solutions provided by the three algorithms are merged to form the training feature set for the prediction model $LTR_{MHA}$. The detailed implementation of the training set construction for the LTR-MHA is shown in Algorithm 1, with the specific feature extraction and construction process of the metaheuristic algorithm exemplified by the WOA in Algorithm 2.
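The label reassignment can be expressed compactly as a descending sort over the algorithms' best fitness values. In the sketch below, the GA value of 0.3 follows the Figure 2 example, while the WOA and HHO values are assumed for illustration.

```python
# Best fitness per algorithm; GA = 0.3 follows the Figure 2 example, while
# the WOA and HHO values are assumed here for illustration.
best_fitness = {"WOA": 0.9, "HHO": 1.2, "GA": 0.3}

# Descending order of best fitness: the smallest (best) value ranks last and
# thus receives the highest label, matching lines 26-27 of Algorithm 1.
order = sorted(best_fitness, key=best_fitness.get, reverse=True)
labels = {alg: rank + 1 for rank, alg in enumerate(order)}
# labels == {"HHO": 1, "WOA": 2, "GA": 3}
```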
Algorithm 1 The procedure of training set construction of LTR-MHA
Require: Metaheuristics set $Alg = \{\text{WOA}, \text{GA}, \text{HHO}\}$, update behaviors set $Actions$, dimension $D$, maximum iterations $I_{\max}$, population size $m$, bounds $lb$, $ub$, fitness function $func$;
Ensure: Training feature set, label set;
 1: Initialize search population $P = \{\mathbf{X}_i \mid i = 1, 2, \ldots, m;\ |\mathbf{X}_i| = D\}$;
 2: for algorithm $alg_j \in Alg$ do    ▹ See detailed WOA update steps in Algorithm 2
 3:   Compute fitness values for all search agents via $fitness_i = func(\mathbf{X}_i(0))$;
 4:   Select the best search agent $\mathbf{X}^*$;
 5:   while $t < I_{\max}$ do
 6:     for search agent $\mathbf{X}_i \in P$ do
 7:       Execute strategy of $alg_j$;    ▹ Update strategy follows Equation (16)
 8:       Record selected action $act_{\text{selected}}$;
 9:       Construct feature vector $F_{\text{selected}}$ according to Equation (19);
10:       for candidate action $act_k \in Actions$ do
11:         if $act_k = act_{\text{selected}}$ then
12:           $Label = 1$;
13:         else
14:           $Label = 0$;
15:           Construct feature vector $F_{\text{unselected}}$ for unselected actions via Equation (19);
16:         end if
17:       end for
18:     end for
19:     Adjust search agents exceeding search space boundaries via $rand(lb, ub)$;
20:     Recalculate fitness values for all search agents via $fitness_i = func(\mathbf{X}_i(t+1))$;
21:     Update $\mathbf{X}^*$ when a better solution is available;
22:     $t = t + 1$;
23:   end while
24:   Record best fitness value $fitness_j^*$ for algorithm $alg_j$;
25: end for
26: Sort algorithms by $fitness_j^*$ values;
27: Reassign $Label_j = \text{rank}(fitness_j^*)$, where rank is the descending order;
28: return Training feature set, label set;
Algorithm 2 The procedure of feature extraction and construction of WOA
Require: Initialized search population $P$, max iterations $I_{\max}$, fitness function $func$, WOA parameters.
Ensure: For each agent: constructed feature vectors and $Label$s for both the selected and unselected actions.
 1: Evaluate fitness $fitness_i = func(\mathbf{X}_i(0))$ for all search agents;
 2: Select the best search agent $\mathbf{X}^*$;
 3: while $t < I_{\max}$ do
 4:   for each whale $\mathbf{X}_i \in P$ do
 5:     Update $a$, $\mathbf{A}$, $\mathbf{C}$, $l$, and $p$;
 6:     if $p < 0.5$ then
 7:       if $|\mathbf{A}| < 1$ then    ▹ Shrinking Encircling (SE)
 8:         Compute new position via Equation (2);
 9:         $act_i = 2$;    // The numerical coding of update behaviors is shown in Table 2
10:         $Label_{i2} = 1$;
11:       else    ▹ Random Search (RS)
12:         Select random leader $\mathbf{X}_{\text{rand}}$;
13:         Compute new position via Equation (6);
14:         $act_i = 1$;
15:         $Label_{i1} = 1$;
16:       end if
17:     else    ▹ Spiral Updating (SU)
18:       Compute new position via Equation (4);
19:       $act_i = 3$;
20:       $Label_{i3} = 1$;
21:     end if
22:     Construct feature vector $F_{\text{selected}}$ according to Equation (19);
23:     for candidate action $act_k \in [1, 2, 3]$ do
24:       if $act_k = 1$ then    ▹ selected Random Search
25:         $Label_{i2} = 0$;
26:         Construct feature vector $F_{\text{unselected}}$ for unselected action (SE) via Equation (19);
27:         $Label_{i3} = 0$;
28:         Construct feature vector $F_{\text{unselected}}$ for unselected action (SU) via Equation (19);
29:       else if $act_k = 2$ then    ▹ selected Shrinking Encircling
30:         Similarly, construct the feature vectors for the unselected actions (RS, SU);
31:       else    ▹ selected Spiral Updating
32:         Similarly, construct the feature vectors for the unselected actions (SE, RS);
33:       end if
34:     end for
35:   end for
36:   Adjust search agents exceeding search space boundaries via $rand(lb, ub)$;
37:   Recalculate fitness values for all search agents via $fitness_i = func(\mathbf{X}_i(t+1))$;
38:   Update $\mathbf{X}^*$ when a better solution is available;
39:   $t = t + 1$;
40: end while
41: return The best $fitness^*$ of WOA;
In Algorithm 1, to ensure fairness in the comparison among the algorithms, a unified initial population is used. Subsequently, the algorithms in the given algorithm set are executed sequentially, and the initial fitness values are calculated (lines 1–4). Within the iteration budget $I_{\max}$, the search agents are updated according to the strategies of the metaheuristic algorithms, while the selected update action $act_{\text{selected}}$ is recorded and the corresponding feature vector $F_{\text{selected}}$ is constructed (lines 6–9). The update behaviors of each metaheuristic algorithm (WOA, HHO, GA) are the original, unmodified versions. The specific procedure for feature extraction and construction is illustrated using the WOA as an example, as shown in Algorithm 2. At the same time, the information of unselected candidate actions is equally crucial for model training; therefore, feature vectors $F_{\text{unselected}}$ for these actions are also constructed (lines 11–15). In each iteration, the algorithm checks whether the population exceeds the search space boundaries and performs the necessary adjustments, followed by recalculating the fitness values of each solution. Whenever a better agent is available, the best agent $\mathbf{X}^*$ is updated (lines 19–21). At the end of the algorithm, the best fitness of each algorithm is recorded (line 24). Finally, the algorithms are ranked according to their best fitness values $fitness^*$, and their corresponding labels are reassigned based on this ranking. The ranking is in descending order of best fitness, with the resulting position used as the new $Label$ (lines 26–27). Thus, among the three algorithms, the one with the smallest best fitness value receives the highest label (i.e., $Label = 3$). The feature vector set and its corresponding label set together form the training dataset.
During the training stage of the predictive model $LTR_{MHA}$, the constructed feature set, along with its corresponding label set, is used as the training set. Through supervised learning, the $LTR_{MHA}$ model is able to predict the potential quality of the solutions generated by the update actions performed by the current search agent and produce a prediction score. This score guides the algorithm in making decisions during the search process, steering the exploration toward more promising regions of the solution space. The $LTR_{MHA}$ model is implemented using the standard RandomForestRegressor from scikit-learn, without modification to its core training procedure. The full set of hyperparameter configurations is provided in the experimental settings.

4.4. Algorithm Implementation of LTR-MHA

Once the predictive model is trained, it can be employed to guide the search process. During each iteration, when a search agent requires a position update, the feature vectors for all candidate update actions of the current search agent are first constructed and fed into the predictive model $LTR_{MHA}$. The model evaluates these feature vectors and outputs a set of predicted scores. Based on these scores, the algorithm selects the candidate action with the highest predicted score for execution. As illustrated in Figure 1, the model predicts that the action $act_3$ corresponds to the solution with the highest predicted score (0.78), indicating that this action is most likely to guide the search toward the best solution. Therefore, in this iteration, the search agent selects $act_3$. The implementation of the LTR-MHA is detailed in Algorithm 3.
Algorithm 3 Pseudocode of LTR-MHA
Require: Predictive model $LTR_{MHA}$, update behaviors set $Actions$, dimension $D$, maximum iterations $I_{\max}$, population size $m$, bounds $lb$, $ub$, fitness function $func$;
Ensure: The best solution $\mathbf{X}^*$, the best fitness;
 1: Initialize population $P = \{\mathbf{X}_i \mid i = 1, 2, \ldots, m;\ |\mathbf{X}_i| = D\}$;
 2: Compute fitness values for all search agents via $fitness_i = func(\mathbf{X}_i(0))$;
 3: Select the best agent as $\mathbf{X}^*$;
 4: while $t < I_{\max}$ do
 5:   for each agent $\mathbf{X}_i \in P$ do
 6:     for candidate behavior $act_k \in Actions$ do
 7:       Construct feature vector $F_{\text{test}}^k$ for $act_k$ according to Equation (19);
 8:     end for
 9:     Feed the feature set to $LTR_{MHA}$ and obtain prediction scores $PS$;
10:     Determine the optimal behavior: $act_{\text{predicted}} = \arg\max_k PS_k$;
11:     Update $\mathbf{X}_i$ using $act_{\text{predicted}}$;    ▹ Update strategy follows Equation (16)
12:   end for
13:   Adjust search agents exceeding search space boundaries via $rand(lb, ub)$;
14:   Recalculate fitness values via $fitness_i = func(\mathbf{X}_i(t+1))$;
15:   Update $\mathbf{X}^*$ if a better solution is available;
16:   $t = t + 1$;
17: end while
18: return $\mathbf{X}^*$, the best fitness;
First, the population of search agents $P$ is initialized, and the fitness value of each search agent is calculated. The best search agent is selected as the initial optimal solution $\mathbf{X}^*$ (lines 1–3). Then, within the iteration budget $I_{\max}$, for each search agent and candidate action, the feature vectors to be evaluated are constructed. Since this method involves 12 update actions, the feature set to be predicted contains 12 feature vectors (lines 5–8). These feature vectors are then fed into the $LTR_{MHA}$ predictive model, which outputs a set of predicted scores. The algorithm selects the update action corresponding to the highest predicted score for execution (lines 9–11). Subsequently, the algorithm checks whether any search agent has exceeded the search space boundaries, adjusts their positions accordingly, and recalculates the fitness values of all search agents (lines 13–14). If a better solution is found, $\mathbf{X}^*$ is updated (line 15). After the loop ends, the final optimal solution $\mathbf{X}^*$ is returned (line 18).
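The per-agent decision step (lines 6–11) amounts to scoring all 12 candidate feature vectors and executing the argmax. Below is a minimal sketch, assuming a trained scikit-learn model and a user-supplied apply_action callback; both names are illustrative assumptions.

```python
import numpy as np

def select_and_apply(agent, model, candidate_features, apply_action):
    """Core decision step of Algorithm 3 (lines 6-11): score all candidate
    behaviors and execute the argmax. `candidate_features` is a
    (12, n_features) array; `apply_action` is a user-supplied callback."""
    scores = model.predict(candidate_features)   # prediction scores PS
    act_predicted = int(np.argmax(scores)) + 1   # actions coded 1..12
    return apply_action(agent, act_predicted)
```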

5. Experiment Evaluation

5.1. Experimental Setup

In this study, the performance of the LTR-MHA algorithm is evaluated using the CEC2017 benchmark function set [25]. These functions fall into four categories: unimodal functions (F1–F3), simple multimodal functions (F4–F10), hybrid functions (F11–F20), and composition functions (F21–F30). However, during the actual experiments, errors occurred when invoking functions F2, F16, F17, F20, and F29. Upon investigation, these issues were primarily caused by numerical instabilities or boundary calculation anomalies in the implementation of these functions on specific platforms, including overflows, division by zero, or undefined logarithmic operations resulting from extreme inputs. Additionally, certain functions exhibited compatibility issues during the porting process. To ensure the stability and fairness of the experimental results, these functions were excluded to prevent technical problems from interfering with the overall evaluation. The range of all functions is $[-100, 100]$, with problem dimensions set to D = 10, 30, 50, and 100. Table 3 includes information about the CEC2017 functions.
The objective of the experiments is to confirm the effectiveness of the LTR-MHA method, focusing on three aspects: convergence speed, solution precision, and computational efficiency. The LTR-MHA is compared against individual metaheuristic algorithms, namely the WOA, HHO, and the GA. Under the same algorithmic framework, a comparison is conducted with the Rand-MHA method, which adopts a random update strategy: the update operation is randomly selected at each position update step, without relying on any learning process, so that each update operation has an equal probability of being chosen. Additionally, our approach is compared with two recent improved algorithms, namely the ISGA [8] and the RLWOA [13].
In all comparative experiments, the algorithm parameters are uniformly configured to ensure the fairness and reproducibility of the experiments. Specifically, during the training phase, one function from each of the four categories in the CEC2017 benchmark suite is selected to form the training function set. In this study, functions F1, F4, F11, and F21 are chosen as the training set. Therefore, these four functions are excluded from the testing phase to avoid evaluation bias, as they have been used to train the Learning-to-Rank model. This ensures a fair performance comparison on unseen benchmark functions. For each function in the benchmark set, algorithms are each run 15 times, with the population size and iterations set to 30 and 500, respectively. Other settings are summarized in Table 4 for reproducibility.
To further eliminate the influence of initial randomness, all algorithms use the same initial population configuration, ensuring that performance differences arise primarily from the algorithmic mechanisms rather than stochastic factors. Subsequently, the historical information of the optimal solutions is used as training data to train the predictive model based on a Random Forest ranking algorithm. During the testing phase, the trained predictive model is applied to the test functions other than F1, F4, F11, and F21.
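As a minimal sketch of this training step, the snippet below fits the Random Forest with the hyperparameters reported in Table 4; the pointwise reduction of ranking to score regression and the synthetic placeholder data (standing in for the recorded search histories) are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
# Placeholders: each row stands for the 10-component feature vector of one
# recorded update action (Equation (19)); y holds its graded benefit score.
X_train = rng.random((1000, 10))
y_train = rng.random(1000)

# Hyperparameters as listed in Table 4.
model = RandomForestRegressor(n_estimators=50, max_depth=10,
                              min_samples_split=10, min_samples_leaf=5)
model.fit(X_train, y_train)  # pointwise Learning-to-Rank via score regression
```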

5.2. Experimental Results and Discussion

Table 5, Table 6 and Table 7 present a comparison of the results between the LTR-MHA and the baseline algorithms, focusing on the mean, standard deviation, and best solution values; for each function and dimension, the lowest mean marks the best-performing algorithm. The statistics demonstrate that our proposed method outperforms the others on most test functions across different dimensions.
Specifically, in solving unimodal and simple multimodal functions (as shown in Table 5), the LTR-MHA consistently achieves the lowest average values across most functions, demonstrating a remarkable advantage. Additionally, the LTR-MHA maintains the lowest or second-lowest standard deviation in nearly all functions and dimensions, indicating a more stable convergence process. In certain cases, such as functions F3, F7, and F9, the GA occasionally attains the lowest average value, suggesting that traditional heuristic algorithms still remain competitive when dealing with relatively simple problem structures. Furthermore, the RLWOA occasionally obtains the best average value among all compared methods, highlighting the effectiveness of parameter adaptation based on reinforcement learning. Overall, the RLWOA ranks directly after the LTR-MHA and Rand-MHA in solution precision. Moreover, although the Rand-MHA performs reasonably well in terms of solution accuracy, its generally higher standard deviations reveal a lack of stability in the algorithm’s performance.
Table 6 shows that the performance gap between algorithms increases when evaluated on hybrid functions. The LTR-MHA achieves the lowest average values on F14 and F18 and secures the best solutions across nearly all dimensions while also maintaining the lowest standard deviations. This demonstrates its ability to obtain high-precision solutions and maintain excellent convergence consistency in high-dimensional search spaces. The GA and Rand-MHA, on the other hand, approach the LTR-MHA’s performance on certain functions. In particular, the Rand-MHA and the GA outperform the LTR-MHA in terms of average performance on F13 and F15. This indicates that the LTR-MHA’s learning model still has room for improvement in terms of stability and accuracy when handling certain complex hybrid structures. By contrast, the performance of the WOA and HHO degrades significantly in high-dimensional environments, with their convergence curves exhibiting considerable fluctuations. The ISGA and RLWOA exhibit comparable results to the baseline methods but do not surpass the LTR-MHA on the tested hybrid functions, indicating limitations when addressing complex hybrid functions.
When tackling the composition functions in the CEC2017 benchmark (Table 7), the LTR-MHA demonstrates a significant advantage across most functions: it consistently achieves the lowest average values (e.g., F22, F23, F24, and F30), as well as the best solutions, while maintaining relatively low standard deviations. The GA performs notably well on certain high-dimensional problems (e.g., F25). The RLWOA obtains the best average value on F28, ranking after the LTR-MHA in overall composition function performance. Although the Rand-MHA approaches the performance of the LTR-MHA in some lower-dimensional scenarios, its results exhibit considerable fluctuations in high-dimensional settings. Overall, the WOA and HHO show relatively poor performance, indicating limited adaptability to complex problems. In summary, the LTR-MHA delivers the best overall performance in composition function optimization, particularly excelling in high-dimensional problems by balancing both accuracy and robustness, while traditional algorithms remain competitive only in specific, simpler cases.
To balance problem complexity with an effective evaluation of algorithmic performance, the convergence speed and runtime analyses are conducted using the 30-dimensional test functions. On one hand, 10-dimensional problems are relatively simple and may not adequately demonstrate the advantages of the algorithm in complex search spaces. On the other hand, 100-dimensional problems can be overly complex, potentially leading to inefficient searches and making algorithm performance highly susceptible to random factors. The 30-dimensional setting, which is widely adopted in current academic research, strikes an appropriate balance between computational cost and evaluation depth, thereby providing a reliable basis for validating the algorithm’s performance [37,38].
Figure 3 shows the average fitness convergence curves of the seven algorithms on nine typical functions (F3, F6, F9, F12, F14, F19, F22, F24, and F30) from the CEC2017 benchmark, with the dimension set to D = 30. Overall, the proposed LTR-MHA attains the best final fitness values on most test functions, with its advantages being particularly evident on multimodal and composition functions (e.g., F6, F22, and F24), where it approaches the optimal region within roughly 100 iterations, whereas the other algorithms generally converge more slowly and are more prone to local optima. Although the Rand-MHA, with its random update strategy, surpasses the traditional metaheuristic algorithms on certain functions (e.g., F3, F9, F14, and F24), its overall performance still lags behind the LTR-MHA and exhibits greater variability. Among the three conventional algorithms, the GA performs the best. The ISGA yields convergence behavior similar to the GA across most functions. The RLWOA converges rapidly on a majority of the tested functions, achieving final fitness values that are comparable to, though slightly worse than, those of the LTR-MHA. HHO, despite occasionally achieving a rapid decline in the early iterations, suffers from premature convergence and stagnates in the later stages, resulting in noticeably poorer final solutions. These findings demonstrate that the multi-strategy fusion update mechanism learned by the LTR-MHA enables stable and efficient optimization across various types of complex search spaces.
Table 8 presents the average runtime of the seven algorithms on the CEC2017 test functions. The results indicate that the WOA consistently exhibits the lowest computational overhead across all test functions, with an average runtime ranging from 0.4 to 0.97 s. The GA follows closely, requiring approximately 0.8 to 1.4 s. Due to its more complex internal search strategies, HHO incurs a longer runtime of 1.5 to 8.9 s. The Rand-MHA, which randomly selects and executes one of the twelve update operations at each update step, experiences a further increase in average runtime. The ISGA takes about 4.8 to 7.0 s per run, reflecting its additional clustering and adaptive migration techniques. The RLWOA carries the highest overhead among the baseline methods, with runtimes of roughly 55 to 59 s, due to its online reinforcement learning-based parameter adjustment. The LTR-MHA, which requires feature generation and online prediction via the model at each iteration, introduces a comparable overhead, with runtimes of roughly 53 to 70 s that place it, together with the RLWOA, far above the remaining algorithms. This difference arises from the algorithmic design of the LTR-MHA. In each iteration, for every individual in the population, the LTR-MHA constructs m = 12 candidate update strategies, each represented by a feature vector composed of 10 components (as shown in Equation (19)). These m feature vectors are then passed into the trained predictive model to compute scores and select the most promising strategy. As a result, the per-iteration time complexity of the LTR-MHA is approximately O(n · m · (k + T_pred)), where n is the population size, k = 10 is the feature dimension, and T_pred is the model's prediction time. This is significantly higher than that of metaheuristics that apply a single, fixed rule per individual per iteration. In conclusion, while the LTR-MHA clearly outperforms in convergence speed and solution precision, its increased computational cost requires a trade-off with time efficiency in practical applications.
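For a concrete sense of this cost under the settings of Section 5.1 (population size n = 30, m = 12 candidate strategies per agent, and I_max = 500 iterations), the number of candidate feature vectors scored by the model in a single run follows directly:

```latex
n \cdot m \cdot I_{\max} = 30 \times 12 \times 500 = 180{,}000 \quad \text{scored candidates per run}
```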
Notably, the development time of automated methods and manually designed approaches is generally difficult to compare directly, as such information is typically not disclosed in published studies. Furthermore, although the proposed algorithm incurs additional computational time in both the training and testing steps, the primary goal of this research is not to discover the optimal approach but rather to explore an approach capable of automatically generating highly generalizable and state-of-the-art search strategies. In the long term, this additional computational overhead is justified by the ability to efficiently solve a broad range of problem instances.
The Wilcoxon rank-sum test was applied at a significance level of 0.05 to determine whether the LTR-MHA's optimization results significantly outperformed those of the compared algorithms. Table 9 presents the statistical analysis results of the LTR-MHA against the other algorithms under the 30-dimension setting. In the table, the symbols '+', '=', and '−' indicate that the LTR-MHA performs significantly better than, statistically similarly to, or significantly worse than the compared algorithm, respectively. As shown in Table 9, the LTR-MHA predominantly achieves the '+' symbol, indicating that its performance on the CEC2017 benchmark optimization tasks is significantly superior to that of the other algorithms in most cases.
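As a brief sketch of this comparison on a single function, assuming two arrays that hold the 15 final fitness values of the LTR-MHA and of one competitor (synthetic placeholders below), scipy's ranksums implements the two-sided Wilcoxon rank-sum test used here:

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder result vectors: 15 independent final fitness values per
# algorithm on one benchmark function (real values come from the 15 runs).
rng = np.random.default_rng(1)
ltr_mha_results = rng.normal(600.0, 5.0, size=15)
baseline_results = rng.normal(640.0, 20.0, size=15)

stat, p_value = ranksums(ltr_mha_results, baseline_results)
if p_value < 0.05:                     # significance level of 0.05
    # Minimization: lower mean fitness means the LTR-MHA performs better.
    symbol = "+" if ltr_mha_results.mean() < baseline_results.mean() else "-"
else:
    symbol = "="
print(f"p = {p_value:.3e} -> {symbol}")
```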
In summary, the LTR-MHA shows significant advantages in solution precision, convergence speed, and statistical significance. Across tests on unimodal, simple multimodal, hybrid, and composite functions, the LTR-MHA consistently yields the lowest average fitness values in most scenarios while maintaining the lowest or second-lowest standard deviations. Moreover, the LTR-MHA is able to rapidly approach the optimal regions, particularly in simple multimodal and composite functions. Although the model prediction process leads to longer runtimes, this additional computational cost is offset in the long term by the algorithm’s ability to efficiently solve a wide range of problem instances. In contrast, traditional metaheuristic algorithms exhibit acceptable performance in specific functions or lower-dimensional problems but suffer significant degradation in high-dimensional, complex environments. Therefore, by leveraging a multi-strategy fusion mechanism, the LTR-MHA effectively addresses complex optimization problems, though the trade-off between computational cost and performance should be carefully considered in practical applications based on specific problem requirements.

6. Conclusions

To address the limitations of flexibility and adaptability in traditional metaheuristic algorithms when dealing with diverse optimization problems, as well as their tendency to get trapped in local optima, this paper proposes the LTR-MHA, an automated algorithm generation method based on Learning to Rank. This approach analyzes the historical search processes of the GA, WOA, and HHO to extract key features, including the iteration stage, core algorithm parameters, and diversity metrics, for constructing the training dataset. A predictive model is then trained using the Random Forest algorithm. During the optimization process, the LTR-MHA dynamically evaluates the potential benefit scores of candidate update actions and intelligently selects the optimal update strategy. This allows the method to significantly improve search efficiency, minimize premature convergence, and improve the quality of the solution.
Experimental validation via the CEC2017 benchmark test suite demonstrates that the LTR-MHA exhibits clear advantages across unimodal, simple multimodal, hybrid, and composition function tests. Compared with single algorithms and random combination strategies, the LTR-MHA consistently yields the lowest average and the best fitness values in nearly all test scenarios while maintaining strong stability. In particular, for multimodal and high-dimensional problems, the LTR-MHA effectively accelerates convergence through its multi-strategy integration mechanism. Although the model prediction introduces additional computational overhead, considering the overall balance between algorithm performance and computational cost, the LTR-MHA still delivers a favorable cost/performance ratio in practical applications.
Future research will further explore the search characteristics of different metaheuristic algorithms, investigate the optimal number of algorithm combinations, and develop more rational combination strategies. Additionally, we will incorporate other artificial intelligence techniques or optimization algorithms to improve model prediction efficiency and reduce computational overhead, ultimately achieving a more efficient real-time algorithm selection strategy. The LTR-MHA methodology will also be compared with additional state-of-the-art approaches and applied to more challenging problem settings to verify its generalizability and practical relevance.

Author Contributions

X.X.: Data Curation, Software, Writing—Original Draft. T.S.: Methodology, Conceptualization, Investigation, Writing—Review and Editing. J.X.: Resources, Validation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LY22F020019), the Public-Welfare Technology Application Research of Zhejiang Province in China (Grant No. LGG22F020032), the Zhejiang Science and Technology Plan Project (Grant No. 61972359), and the National Natural Science Foundation of China (Grant No. 62132014).

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Yi, W.; Qu, R.; Jiao, L.; Niu, B. Automated design of metaheuristics using reinforcement learning within a novel general search framework. IEEE Trans. Evol. Comput. 2022, 27, 1072–1084.
2. Yi, W.; Qu, R. Automated design of search algorithms based on reinforcement learning. Inf. Sci. 2023, 649, 119639.
3. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
4. Kareem, S.W.; Ali, K.W.; Askar, S.; Xoshaba, F.S.; Hawezi, R. Metaheuristic algorithms in optimization and its application: A review. JAREE 2022, 6, 7–12.
5. Wang, S.; Lo, D.; Jiang, L.; Lau, H.C. Search-based fault localization. In Proceedings of the 26th IEEE/ACM International Conference on Automated Software Engineering, Lawrence, KS, USA, 6–10 November 2011; pp. 556–559.
6. Gavvala, S.K.; Jatoth, C.; Gangadharan, G.R.; Buyya, R. QoS-aware cloud service composition using eagle strategy. Future Gener. Comput. Syst. 2019, 90, 273–290.
7. Abraham, O.L.; Ngadi, M.A. A comprehensive review of dwarf mongoose optimization algorithm with emerging trends and future research directions. Decis. Anal. J. 2025, 14, 100551.
8. Bian, H.; Li, C.; Liu, Y.; Tong, Y.; Bing, S.; Chen, J.; Zhang, Z. Improved snow geese algorithm for engineering applications and clustering optimization. Sci. Rep. 2025, 15, 4506.
9. Ryser-Welch, P.; Miller, J.F. A review of hyper-heuristic frameworks. In Proceedings of the Evo20 Workshop, AISB, London, UK, 1–4 April 2014.
10. Talbi, E.-G. Machine learning into metaheuristics: A survey and taxonomy. ACM Comput. Surv. 2021, 54, 1–32.
11. Becerra-Rozas, M.; Lemus-Romani, J.; Crawford, B.; Soto, R.; Cisternas-Caneo, F.; Embry, A.T.; Molina, M.A.; Tapia, D.; Castillo, M.; Misra, S. Reinforcement learning based whale optimizer. In Proceedings of the International Conference on Computational Science and Its Applications, Cagliari, Italy, 13–16 September 2021; pp. 205–219.
12. Li, K.; Lai, G.; Yao, X. Interactive evolutionary multiobjective optimization via learning to rank. IEEE Trans. Evol. Comput. 2023, 27, 749–763.
13. Seyyedabbasi, A. A reinforcement learning-based metaheuristic algorithm for solving global optimization problems. Adv. Eng. Softw. 2023, 178, 103411.
14. Talbi, E.-G. A unified taxonomy of hybrid metaheuristics with mathematical programming, constraint programming and machine learning. In Hybrid Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2013; pp. 3–76.
15. Jin, H.; Yao, X.; Chen, Y. Correlation-aware QoS modeling and manufacturing cloud service composition. J. Intell. Manuf. 2017, 28, 1947–1960.
16. Jin, H.; Lv, S.; Yang, Z.; Liu, Y. Eagle strategy using uniform mutation and modified whale optimization algorithm for QoS-aware cloud service composition. Appl. Soft Comput. 2022, 114, 108053.
17. Lu, Y.; Yi, C.; Li, J.; Li, W. An Enhanced Opposition-Based Golden-Sine Whale Optimization Algorithm. In Proceedings of the International Conference on Cognitive Computing, Shenzhen, China, 17–18 December 2023; pp. 60–74.
18. Semenikhin, S.V.; Denisova, L.A. Learning to rank based on modified genetic algorithm. In Proceedings of the 2016 Dynamics of Systems, Mechanisms and Machines (Dynamics), Omsk, Russia, 15–17 November 2016; pp. 1–5.
19. Oliva, D.; Martins, M.S.R.; Hinojosa, S.; Elaziz, M.A.; dos Santos, P.V.; da Cruz, G.; Mousavirad, S.J. A hyper-heuristic guided by a probabilistic graphical model for single-objective real-parameter optimization. Int. J. Mach. Learn. Cybern. 2022, 13, 3743–3772.
20. Kallestad, J.; Hasibi, R.; Hemmati, A.; Sørensen, K. A general deep reinforcement learning hyperheuristic framework for solving combinatorial optimization problems. Eur. J. Oper. Res. 2023, 309, 446–468.
21. Hsieh, F.-S. Creating Effective Self-Adaptive Differential Evolution Algorithms to Solve the Discount-Guaranteed Ridesharing Problem Based on a Saying. Appl. Sci. 2025, 15, 3144.
22. Hsieh, F.-S. Applying "Two Heads Are Better Than One" Human Intelligence to Develop Self-Adaptive Algorithms for Ridesharing Recommendation Systems. Electronics 2024, 13, 2241.
23. Xu, T.; Chen, C. DBO-AWOA: An Adaptive Whale Optimization Algorithm for Global Optimization and UAV 3D Path Planning. Sensors 2025, 25, 2336.
24. Norat, R.; Wu, A.S.; Liu, X. Genetic Algorithms with Self-Adaptation for Predictive Classification of Medicare Standardized Payments for Physical Therapists. Expert Syst. Appl. 2023, 218, 119529.
25. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; Technical Report; Nanyang Technological University: Singapore, 2017; Available online: https://www.researchgate.net/publication/317228117_Problem_Definitions_and_Evaluation_Criteria_for_the_CEC_2017_Competition_and_Special_Session_on_Constrained_Single_Objective_Real-Parameter_Optimization (accessed on 26 May 2025).
26. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
27. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
28. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73.
29. Burges, C.; Shaked, T.; Renshaw, E.; Lazier, A.; Deeds, M.; Hamilton, N.; Hullender, G. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning, Bonn, Germany, 7–11 August 2005; pp. 89–96.
30. Li, P.; Wu, Q.; Burges, C. McRank: Learning-to-rank using multiple classification and gradient boosting. In Proceedings of the 20th International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 3–6 December 2007; p. 904.
31. Crammer, K.; Singer, Y. Pranking with ranking. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2001; Volume 14.
32. Freund, Y.; Iyer, R.; Schapire, R.E.; Singer, Y. An efficient boosting algorithm for combining preferences. J. Mach. Learn. Res. 2003, 4, 933–969.
33. Cao, Z.; Qin, T.; Liu, T.-Y.; Tsai, M.-F.; Li, H. Learning to rank: From pairwise approach to listwise approach. In Proceedings of the 24th International Conference on Machine Learning, Corvallis, OR, USA, 20–24 June 2007; pp. 129–136.
34. Morales-Castañeda, B.; Zaldivar, D.; Cuevas, E.; Fausto, F.; Rodríguez, A. A better balance in metaheuristic algorithms: Does it exist? Swarm Evol. Comput. 2020, 54, 100671.
35. Fausto, F.; Reyna-Orta, A.; Cuevas, E.; Andrade, Á.G.; Perez-Cisneros, M. From ants to whales: Metaheuristics for all tastes. Artif. Intell. Rev. 2020, 53, 753–810.
36. Kriegel, H.-P.; Schubert, E.; Zimek, A. The (black) art of runtime evaluation: Are we comparing algorithms or implementations? Knowl. Inf. Syst. 2017, 52, 341–378.
37. Plevris, V.; Solorzano, G. A collection of 30 multidimensional functions for global optimization benchmarking. Data 2022, 7, 46.
38. Piotrowski, A.P.; Napiorkowski, J.J.; Piotrowska, A.E. Choice of benchmark optimization problems does matter. Swarm Evol. Comput. 2023, 83, 101378.
Figure 1. Overall flowchart of the LTR-MHA.
Figure 2. Flowchart of training set construction and model training of the LTR-MHA.
Figure 3. Convergence curves for the functions with D = 30.
Table 1. Feature variables and definitions.

Type | Feature | Description
Algorithm Parameters | A | Core coefficient vector of WOA
Algorithm Parameters | C | Core coefficient vector of WOA
Algorithm Parameters | E | Escape energy, core parameter of HHO
Algorithm Parameters | croRate | Crossover rate, core parameter of GA
Algorithm Parameters | mutRate | Mutation rate, core parameter of GA
Search-Dependent Features | stage | Current algorithm stage, calculated as t/I_max, where t is the current iteration and I_max is the maximum number of iterations
Search-Dependent Features | Div | Population diversity, calculated according to Equation (15)
Search-Dependent Features | incre | Increase in the fitness value of the current agent compared to its initial fitness value
Search-Dependent Features | diff | Difference between the current individual's fitness and the best fitness value
Table 2. Update behaviors of the metaheuristic algorithms.

Metaheuristic | Action Encoding | Description
WOA | 1 | RS: Random Search, Equation (6)
WOA | 2 | SE: Shrinking Encircling, Equation (2)
WOA | 3 | SU: Spiral Updating, Equation (4)
HHO | 4 | RM: Perch relying on the locations of other random members, Equation (8)
HHO | 5 | RT: Perch on random tall trees, Equation (8)
HHO | 6 | HB: Hard Besiege, Equation (11)
HHO | 7 | SB: Soft Besiege, Equation (10)
HHO | 8 | SRD: Soft Besiege with progressive rapid dives, Equation (12)
HHO | 9 | HRD: Hard Besiege with progressive rapid dives, Equation (14)
GA | 10 | CRO: Single-Point Crossover Operation
GA | 11 | MUT: Mutation Operation
GA | 12 | REP: Replication Operation
Table 3. CEC2017 test suite. All functions are evaluated at Dim = 10, 30, 50, 100 over the search range [−100, 100].

Type | No. | Function | Optimum
Unimodal Function | F1 | Shifted and Rotated Bent Cigar Function | 100
Unimodal Function | F3 | Shifted and Rotated Zakharov Function | 300
Simple Multimodal Function | F4 | Shifted and Rotated Rosenbrock's Function | 400
Simple Multimodal Function | F5 | Shifted and Rotated Rastrigin's Function | 500
Simple Multimodal Function | F6 | Shifted and Rotated Expanded Schaffer's F6 Function | 600
Simple Multimodal Function | F7 | Shifted and Rotated Lunacek Bi_Rastrigin Function | 700
Simple Multimodal Function | F8 | Shifted and Rotated Non-Continuous Rastrigin's Function | 800
Simple Multimodal Function | F9 | Shifted and Rotated Levy Function | 900
Simple Multimodal Function | F10 | Shifted and Rotated Schwefel's Function | 1000
Hybrid Function | F11 | Hybrid Function 1 (N = 3) | 1100
Hybrid Function | F12 | Hybrid Function 2 (N = 3) | 1200
Hybrid Function | F13 | Hybrid Function 3 (N = 3) | 1300
Hybrid Function | F14 | Hybrid Function 4 (N = 4) | 1400
Hybrid Function | F15 | Hybrid Function 5 (N = 4) | 1500
Hybrid Function | F18 | Hybrid Function 8 (N = 5) | 1800
Hybrid Function | F19 | Hybrid Function 9 (N = 5) | 1900
Composition Function | F21 | Composition Function 1 (N = 3) | 2100
Composition Function | F22 | Composition Function 2 (N = 3) | 2200
Composition Function | F23 | Composition Function 3 (N = 4) | 2300
Composition Function | F24 | Composition Function 4 (N = 4) | 2400
Composition Function | F25 | Composition Function 5 (N = 5) | 2500
Composition Function | F26 | Composition Function 6 (N = 5) | 2600
Composition Function | F27 | Composition Function 7 (N = 6) | 2700
Composition Function | F28 | Composition Function 8 (N = 6) | 2800
Composition Function | F30 | Composition Function 10 (N = 3) | 3000
Table 4. Parameter configurations for all algorithms used in the experiments.

Algorithm | Parameter | Value
WOA | a | Linearly decreased from 2 to 0
WOA | b | 1
HHO | Lévy flight β | 1.5
GA | Crossover rate croRate | 0.3
GA | Mutation rate mutRate | 0.08
GA | Selection operation | Tournament (size = 3)
GA | Crossover operation | Single-point crossover
GA | Mutation operation | Randomly replace selected genes within the feasible range
ISGA | Weighting parameter w | Linearly decreasing from 0.9 to 0.4
ISGA | Perturbation factor | Random number rand ∈ [−1, 1] multiplied by a linearly decreasing factor (1 − t/max_iter)
RLWOA | a | Linearly decreased from 2 to 0
RLWOA | b | 1
RLWOA | Weighting parameter σ | 0.6
RLWOA | Learning rate | Linearly decreased from 0.9 to 0.1
RLWOA | Discount factor | 0.8
LTR-MHA | Model | Random Forest (scikit-learn)
LTR-MHA | RF parameters | n_estimators = 50, max_depth = 10, min_samples_split = 10, min_samples_leaf = 5
Table 5. Comparison of unimodal and simple multimodal functions (each cell reports Mean / Std / Best).

Funcs | Algorithm | Dim = 10 | Dim = 30 | Dim = 50 | Dim = 100
F3 | WOA | 2.09×10^4 / 8.40×10^2 / 2.01×10^4 | 3.20×10^11 / 2.59×10^10 / 2.94×10^11 | 3.82×10^5 / 2.79×10^3 / 3.79×10^5 | 7.64×10^5 / 1.66×10^5 / 5.98×10^5
F3 | HHO | 1.45×10^4 / 2.45×10^3 / 1.20×10^4 | 4.74×10^11 / 1.36×10^11 / 3.38×10^11 | 1.97×10^5 / 2.48×10^3 / 1.95×10^5 | 3.73×10^5 / 6.72×10^3 / 3.66×10^5
F3 | GA | 2.63×10^4 / 1.17×10^4 / 1.46×10^4 | 4.50×10^10 / 1.20×10^10 / 3.00×10^10 | 4.10×10^5 / 7.47×10^4 / 3.35×10^5 | 8.30×10^5 / 4.82×10^4 / 7.82×10^5
F3 | ISGA | 2.20×10^4 / 9.50×10^3 / 1.50×10^4 | 2.80×10^9 / 8.10×10^8 / 1.90×10^9 | 4.30×10^5 / 7.80×10^4 / 3.50×10^5 | 8.50×10^5 / 5.00×10^4 / 8.00×10^5
F3 | RLWOA | 1.35×10^4 / 3.80×10^3 / 1.25×10^4 | 2.37×10^9 / 7.23×10^8 / 1.64×10^9 | 1.90×10^5 / 1.70×10^4 / 1.70×10^5 | 3.70×10^5 / 6.00×10^3 / 3.60×10^5
F3 | Rand-MHA | 1.30×10^4 / 3.60×10^3 / 1.24×10^4 | 5.63×10^9 / 6.23×10^8 / 4.06×10^9 | 1.87×10^5 / 2.36×10^4 / 1.85×10^5 | 3.60×10^5 / 5.97×10^3 / 3.55×10^5
F3 | LTR-MHA | 5.89×10^3 / 2.17×10^2 / 5.67×10^3 | 4.04×10^10 / 1.19×10^10 / 2.85×10^10 | 1.85×10^5 / 1.62×10^4 / 1.69×10^5 | 8.90×10^5 / 8.88×10^2 / 8.90×10^5
F5 | WOA | 5.67×10^2 / 1.47×10^0 / 5.66×10^2 | 9.28×10^2 / 2.99×10^1 / 8.98×10^2 | 1.18×10^3 / 1.03×10^1 / 1.17×10^3 | 2.08×10^3 / 9.71×10^0 / 2.07×10^3
F5 | HHO | 5.59×10^2 / 2.67×10^0 / 5.56×10^2 | 8.67×10^2 / 8.78×10^1 / 8.66×10^2 | 1.16×10^3 / 3.72×10^1 / 1.12×10^3 | 2.02×10^3 / 1.33×10^1 / 2.01×10^3
F5 | GA | 5.32×10^2 / 1.82×10^1 / 5.14×10^2 | 6.94×10^2 / 4.28×10^1 / 6.52×10^2 | 1.03×10^3 / 5.21×10^1 / 9.76×10^2 | 1.95×10^3 / 1.55×10^2 / 1.79×10^3
F5 | ISGA | 5.30×10^2 / 1.80×10^1 / 5.12×10^2 | 7.00×10^2 / 4.30×10^1 / 6.60×10^2 | 1.04×10^3 / 5.20×10^1 / 9.80×10^2 | 1.96×10^3 / 1.50×10^2 / 1.80×10^3
F5 | RLWOA | 5.35×10^2 / 8.50×10^0 / 5.27×10^2 | 7.30×10^2 / 2.50×10^1 / 6.90×10^2 | 9.00×10^2 / 4.30×10^1 / 8.80×10^2 | 1.66×10^3 / 1.80×10^1 / 1.64×10^3
F5 | Rand-MHA | 5.36×10^2 / 8.46×10^0 / 5.28×10^2 | 7.28×10^2 / 2.53×10^1 / 6.88×10^2 | 8.95×10^2 / 4.31×10^1 / 8.78×10^2 | 1.65×10^3 / 1.82×10^1 / 1.63×10^3
F5 | LTR-MHA | 5.28×10^2 / 3.88×10^0 / 5.24×10^2 | 6.32×10^2 / 2.13×10^1 / 6.11×10^2 | 1.04×10^3 / 3.47×10^1 / 1.01×10^3 | 1.48×10^3 / 1.14×10^1 / 1.37×10^3
F6 | WOA | 6.51×10^2 / 1.39×10^0 / 6.50×10^2 | 6.88×10^2 / 7.33×10^1 / 6.88×10^2 | 7.36×10^2 / 4.07×10^0 / 7.32×10^2 | 7.28×10^2 / 1.29×10^0 / 7.26×10^2
F6 | HHO | 6.49×10^2 / 1.16×10^1 / 6.37×10^2 | 6.89×10^2 / 1.88×10^1 / 6.71×10^2 | 7.27×10^2 / 4.59×10^0 / 7.23×10^2 | 7.18×10^2 / 3.54×10^0 / 7.14×10^2
F6 | GA | 6.15×10^2 / 6.78×10^0 / 6.08×10^2 | 6.66×10^2 / 3.35×10^0 / 6.62×10^2 | 6.99×10^2 / 1.26×10^1 / 6.86×10^2 | 7.39×10^2 / 8.36×10^0 / 7.31×10^2
F6 | ISGA | 6.20×10^2 / 8.50×10^0 / 6.12×10^2 | 6.68×10^2 / 3.40×10^0 / 6.64×10^2 | 7.00×10^2 / 1.25×10^1 / 6.87×10^2 | 7.40×10^2 / 8.40×10^0 / 7.32×10^2
F6 | RLWOA | 6.22×10^2 / 8.50×10^0 / 6.15×10^2 | 6.63×10^2 / 2.40×10^0 / 6.60×10^2 | 7.00×10^2 / 1.45×10^1 / 6.93×10^2 | 6.98×10^2 / 4.70×10^0 / 6.93×10^2
F6 | Rand-MHA | 6.21×10^2 / 8.48×10^0 / 6.14×10^2 | 6.62×10^2 / 2.35×10^0 / 6.59×10^2 | 6.99×10^2 / 1.45×10^1 / 6.92×10^2 | 6.97×10^2 / 4.72×10^0 / 6.92×10^2
F6 | LTR-MHA | 6.12×10^2 / 1.84×10^0 / 6.10×10^2 | 6.44×10^2 / 3.85×10^1 / 6.43×10^2 | 6.80×10^2 / 1.94×10^0 / 6.78×10^2 | 6.73×10^2 / 2.78×10^0 / 6.51×10^2
F7 | WOA | 8.18×10^2 / 6.94×10^0 / 8.11×10^2 | 1.46×10^3 / 3.60×10^1 / 1.42×10^3 | 2.06×10^3 / 1.33×10^1 / 2.05×10^3 | 3.98×10^3 / 7.42×10^1 / 3.91×10^3
F7 | HHO | 8.25×10^2 / 6.11×10^0 / 8.18×10^2 | 1.39×10^3 / 1.25×10^1 / 1.38×10^3 | 1.95×10^3 / 1.24×10^1 / 1.94×10^3 | 3.83×10^3 / 6.42×10^1 / 3.76×10^3
F7 | GA | 7.42×10^2 / 5.01×10^0 / 7.37×10^2 | 9.03×10^2 / 1.32×10^0 / 9.02×10^2 | 1.22×10^3 / 9.69×10^1 / 1.13×10^3 | 2.45×10^3 / 1.12×10^2 / 2.33×10^3
F7 | ISGA | 7.45×10^2 / 7.50×10^0 / 7.36×10^2 | 9.05×10^2 / 1.35×10^0 / 9.03×10^2 | 1.23×10^3 / 9.70×10^1 / 1.14×10^3 | 2.46×10^3 / 1.10×10^2 / 2.34×10^3
F7 | RLWOA | 7.50×10^2 / 7.50×10^0 / 7.40×10^2 | 1.09×10^3 / 1.80×10^1 / 1.05×10^3 | 1.63×10^3 / 1.10×10^2 / 1.52×10^3 | 3.56×10^3 / 1.40×10^2 / 3.42×10^3
F7 | Rand-MHA | 7.48×10^2 / 7.45×10^0 / 7.39×10^2 | 1.08×10^3 / 1.75×10^1 / 1.04×10^3 | 1.62×10^3 / 1.05×10^2 / 1.51×10^3 | 3.55×10^3 / 1.38×10^2 / 3.41×10^3
F7 | LTR-MHA | 7.65×10^2 / 2.38×10^0 / 7.63×10^2 | 1.20×10^3 / 2.59×10^1 / 1.20×10^3 | 1.16×10^3 / 4.66×10^0 / 1.12×10^3 | 4.00×10^3 / 3.31×10^1 / 3.96×10^3
F8 | WOA | 8.44×10^2 / 5.13×10^0 / 8.39×10^2 | 1.13×10^3 / 1.71×10^1 / 1.12×10^3 | 1.47×10^3 / 2.29×10^1 / 1.45×10^3 | 2.51×10^3 / 2.72×10^1 / 2.48×10^3
F8 | HHO | 8.53×10^2 / 2.41×10^1 / 8.29×10^2 | 1.11×10^3 / 2.59×10^1 / 1.08×10^3 | 1.45×10^3 / 7.93×10^0 / 1.45×10^3 | 2.50×10^3 / 3.09×10^1 / 2.47×10^3
F8 | GA | 8.25×10^2 / 6.27×10^0 / 8.19×10^2 | 9.80×10^2 / 2.82×10^1 / 9.51×10^2 | 1.23×10^3 / 7.96×10^1 / 1.15×10^3 | 2.24×10^3 / 1.04×10^1 / 2.23×10^3
F8 | ISGA | 8.27×10^2 / 6.30×10^0 / 8.21×10^2 | 9.82×10^2 / 2.80×10^1 / 9.53×10^2 | 1.24×10^3 / 7.90×10^1 / 1.16×10^3 | 2.25×10^3 / 1.00×10^1 / 2.24×10^3
F8 | RLWOA | 8.22×10^2 / 3.50×10^0 / 8.15×10^2 | 9.60×10^2 / 3.80×10^1 / 9.25×10^2 | 1.22×10^3 / 8.60×10^0 / 1.22×10^3 | 2.15×10^3 / 4.60×10^0 / 2.14×10^3
F8 | Rand-MHA | 8.21×10^2 / 3.47×10^0 / 8.14×10^2 | 9.59×10^2 / 3.82×10^1 / 9.22×10^2 | 1.21×10^3 / 8.54×10^0 / 1.21×10^3 | 2.14×10^3 / 4.56×10^0 / 2.13×10^3
F8 | LTR-MHA | 8.20×10^2 / 1.79×10^0 / 7.26×10^2 | 9.04×10^3 / 2.57×10^1 / 1.04×10^3 | 1.31×10^3 / 1.92×10^1 / 1.29×10^3 | 2.21×10^3 / 3.55×10^1 / 2.17×10^3
F9 | WOA | 1.34×10^3 / 1.98×10^1 / 1.32×10^3 | 1.06×10^4 / 1.61×10^3 / 9.04×10^3 | 2.95×10^4 / 5.80×10^2 / 2.89×10^4 | 7.83×10^4 / 2.42×10^3 / 7.59×10^4
F9 | HHO | 1.41×10^3 / 9.73×10^1 / 1.32×10^3 | 7.79×10^3 / 1.32×10^3 / 6.47×10^3 | 2.87×10^4 / 1.33×10^3 / 2.74×10^4 | 7.19×10^4 / 3.96×10^3 / 6.79×10^4
F9 | GA | 9.03×10^2 / 2.18×10^0 / 9.01×10^2 | 8.37×10^3 / 6.70×10^3 / 1.67×10^3 | 2.18×10^4 / 1.16×10^3 / 2.07×10^4 | 9.74×10^4 / 1.39×10^4 / 8.35×10^4
F9 | ISGA | 9.65×10^2 / 1.15×10^1 / 9.41×10^2 | 8.40×10^3 / 6.70×10^3 / 1.70×10^3 | 2.19×10^4 / 1.15×10^3 / 2.08×10^4 | 9.75×10^4 / 1.38×10^4 / 8.36×10^4
F9 | RLWOA | 9.67×10^2 / 1.15×10^1 / 9.43×10^2 | 7.38×10^3 / 3.72×10^3 / 6.79×10^3 | 2.18×10^4 / 4.55×10^3 / 1.93×10^4 | 6.13×10^4 / 8.70×10^2 / 6.04×10^4
F9 | Rand-MHA | 9.66×10^2 / 1.13×10^1 / 9.42×10^2 | 7.36×10^3 / 3.70×10^3 / 6.77×10^3 | 2.17×10^4 / 4.52×10^3 / 1.92×10^4 | 6.12×10^4 / 8.64×10^2 / 6.03×10^4
F9 | LTR-MHA | 9.38×10^2 / 8.71×10^0 / 9.29×10^2 | 5.04×10^3 / 1.45×10^2 / 4.90×10^3 | 1.79×10^4 / 3.79×10^3 / 1.41×10^4 | 6.14×10^4 / 5.47×10^1 / 6.03×10^4
F10 | WOA | 2.39×10^3 / 2.82×10^2 / 2.10×10^3 | 7.94×10^3 / 1.28×10^3 / 6.66×10^3 | 1.44×10^4 / 2.31×10^2 / 1.42×10^4 | 3.28×10^4 / 5.53×10^1 / 3.28×10^4
F10 | HHO | 2.26×10^3 / 1.26×10^2 / 2.13×10^3 | 7.78×10^3 / 5.25×10^2 / 7.26×10^3 | 1.41×10^4 / 6.04×10^1 / 1.40×10^4 | 3.23×10^4 / 4.39×10^2 / 3.18×10^4
F10 | GA | 1.68×10^3 / 5.76×10^1 / 1.62×10^3 | 5.45×10^3 / 5.02×10^2 / 4.55×10^3 | 9.96×10^3 / 2.98×10^2 / 9.66×10^3 | 2.62×10^4 / 8.42×10^1 / 2.61×10^4
F10 | ISGA | 1.70×10^3 / 5.80×10^1 / 1.64×10^3 | 5.50×10^3 / 5.00×10^2 / 4.60×10^3 | 1.00×10^4 / 3.00×10^2 / 9.70×10^3 | 2.63×10^4 / 8.40×10^1 / 2.62×10^4
F10 | RLWOA | 1.78×10^3 / 7.80×10^1 / 1.77×10^3 | 6.36×10^3 / 5.65×10^2 / 6.28×10^3 | 1.09×10^4 / 8.70×10^2 / 1.03×10^4 | 2.48×10^4 / 7.05×10^2 / 2.41×10^4
F10 | Rand-MHA | 1.77×10^3 / 7.76×10^1 / 1.76×10^3 | 6.35×10^3 / 5.62×10^2 / 6.27×10^3 | 1.08×10^4 / 8.66×10^2 / 1.02×10^4 | 2.47×10^4 / 7.02×10^2 / 2.40×10^4
F10 | LTR-MHA | 1.65×10^3 / 7.02×10^1 / 1.58×10^3 | 5.17×10^3 / 1.80×10^2 / 4.79×10^3 | 9.31×10^3 / 1.03×10^3 / 9.21×10^3 | 2.15×10^4 / 5.88×10^2 / 2.09×10^4
Table 6. Comparison of hybrid functions (each cell reports Mean / Std / Best).

Funcs | Algorithm | Dim = 10 | Dim = 30 | Dim = 50 | Dim = 100
F12 | WOA | 8.44×10^7 / 1.91×10^7 / 6.53×10^7 | 3.06×10^10 / 1.06×10^10 / 2.00×10^10 | 2.77×10^11 / 2.54×10^10 / 2.51×10^11 | 1.28×10^12 / 1.72×10^11 / 1.11×10^12
F12 | HHO | 8.65×10^8 / 8.25×10^8 / 4.03×10^7 | 1.10×10^11 / 2.37×10^10 / 8.67×10^10 | 4.61×10^11 / 2.68×10^11 / 1.93×10^11 | 1.51×10^12 / 1.74×10^11 / 1.34×10^12
F12 | GA | 7.45×10^6 / 2.00×10^6 / 5.45×10^6 | 5.10×10^7 / 2.96×10^6 / 4.80×10^7 | 5.57×10^8 / 1.09×10^7 / 5.46×10^8 | 1.81×10^10 / 3.79×10^9 / 1.43×10^10
F12 | ISGA | 7.00×10^6 / 1.80×10^6 / 5.10×10^6 | 4.50×10^7 / 2.50×10^6 / 4.20×10^7 | 4.80×10^8 / 9.50×10^6 / 4.70×10^8 | 1.50×10^10 / 3.20×10^9 / 1.20×10^10
F12 | RLWOA | 4.50×10^6 / 1.20×10^5 / 4.40×10^6 | 1.80×10^7 / 6.50×10^7 / 1.50×10^9 | 2.60×10^10 / 1.70×10^9 / 2.45×10^10 | 1.20×10^10 / 3.20×10^9 / 1.15×10^10
F12 | Rand-MHA | 6.13×10^6 / 2.24×10^5 / 1.20×10^7 | 1.23×10^8 / 4.60×10^7 / 1.09×10^8 | 2.81×10^9 / 1.35×10^9 / 2.71×10^9 | 3.33×10^10 / 9.40×10^9 / 2.39×10^10
F12 | LTR-MHA | 3.82×10^6 / 5.24×10^4 / 3.77×10^6 | 1.33×10^7 / 5.89×10^7 / 1.27×10^9 | 2.78×10^10 / 1.86×10^9 / 2.59×10^10 | 1.11×10^10 / 3.05×10^9 / 1.08×10^10
F13 | WOA | 2.13×10^4 / 1.07×10^4 / 1.06×10^4 | 1.04×10^10 / 2.97×10^9 / 7.40×10^9 | 1.33×10^11 / 6.98×10^10 / 6.32×10^10 | 3.30×10^11 / 8.39×10^10 / 2.46×10^11
F13 | HHO | 1.66×10^4 / 5.48×10^3 / 1.11×10^4 | 3.86×10^10 / 3.01×10^10 / 8.58×10^9 | 3.95×10^11 / 4.45×10^10 / 3.51×10^11 | 3.92×10^11 / 1.87×10^10 / 3.73×10^11
F13 | GA | 1.28×10^6 / 4.16×10^5 / 8.62×10^5 | 6.44×10^7 / 1.34×10^7 / 5.10×10^7 | 2.45×10^8 / 7.02×10^7 / 1.75×10^8 | 1.22×10^9 / 1.51×10^8 / 1.07×10^9
F13 | ISGA | 1.10×10^6 / 3.80×10^5 / 7.50×10^5 | 5.80×10^7 / 1.20×10^7 / 4.60×10^7 | 2.20×10^8 / 6.50×10^7 / 1.60×10^8 | 1.10×10^9 / 1.40×10^8 / 9.80×10^8
F13 | RLWOA | 1.10×10^4 / 6.00×10^3 / 9.00×10^3 | 3.00×10^7 / 1.50×10^7 / 1.50×10^7 | 1.30×10^8 / 7.80×10^9 / 7.80×10^7 | 2.30×10^9 / 2.70×10^8 / 2.00×10^9
F13 | Rand-MHA | 1.04×10^4 / 5.63×10^3 / 8.69×10^3 | 2.84×10^7 / 1.36×10^7 / 1.42×10^7 | 1.20×10^8 / 7.55×10^9 / 7.37×10^7 | 2.22×10^9 / 2.60×10^8 / 1.96×10^9
F13 | LTR-MHA | 9.98×10^4 / 6.89×10^3 / 9.29×10^4 | 3.17×10^8 / 2.12×10^7 / 2.96×10^8 | 6.79×10^9 / 2.05×10^9 / 4.74×10^9 | 2.57×10^10 / 2.31×10^9 / 2.33×10^10
F14 | WOA | 5.69×10^3 / 5.14×10^1 / 5.63×10^3 | 4.73×10^6 / 1.18×10^6 / 3.56×10^6 | 9.11×10^6 / 3.86×10^6 / 5.25×10^6 | 9.83×10^7 / 5.74×10^7 / 4.09×10^7
F14 | HHO | 3.24×10^3 / 2.52×10^2 / 1.98×10^3 | 4.13×10^6 / 1.97×10^6 / 2.16×10^6 | 1.00×10^8 / 6.73×10^7 / 3.30×10^7 | 5.51×10^7 / 1.57×10^7 / 3.94×10^7
F14 | GA | 4.33×10^4 / 4.94×10^3 / 8.32×10^3 | 1.35×10^6 / 1.19×10^6 / 1.60×10^5 | 8.33×10^6 / 1.22×10^6 / 7.11×10^6 | 5.33×10^7 / 3.33×10^7 / 2.00×10^7
F14 | ISGA | 4.00×10^4 / 4.50×10^3 / 7.80×10^3 | 1.20×10^6 / 1.10×10^6 / 1.40×10^5 | 7.80×10^6 / 1.10×10^6 / 6.80×10^6 | 5.00×10^7 / 3.10×10^7 / 1.90×10^7
F14 | RLWOA | 3.20×10^3 / 1.30×10^3 / 3.10×10^3 | 2.50×10^5 / 1.60×10^5 / 8.00×10^4 | 3.30×10^6 / 4.20×10^5 / 3.25×10^6 | 1.20×10^7 / 1.30×10^7 / 1.05×10^7
F14 | Rand-MHA | 3.38×10^3 / 4.34×10^3 / 2.10×10^3 | 3.19×10^5 / 7.45×10^5 / 1.63×10^5 | 3.22×10^6 / 4.67×10^5 / 3.21×10^6 | 1.99×10^7 / 5.11×10^6 / 1.48×10^7
F14 | LTR-MHA | 3.13×10^3 / 1.12×10^3 / 3.01×10^3 | 2.19×10^5 / 1.45×10^5 / 7.43×10^4 | 3.16×10^6 / 3.97×10^5 / 3.11×10^6 | 1.13×10^7 / 1.22×10^7 / 1.01×10^7
F15 | WOA | 2.67×10^4 / 1.74×10^4 / 9.28×10^3 | 2.98×10^9 / 1.42×10^8 / 2.84×10^9 | 2.81×10^10 / 1.50×10^10 / 1.30×10^10 | 1.74×10^11 / 3.22×10^10 / 1.41×10^11
F15 | HHO | 1.35×10^4 / 1.56×10^3 / 1.20×10^4 | 1.20×10^9 / 9.84×10^8 / 2.19×10^8 | 3.42×10^10 / 5.26×10^9 / 2.89×10^10 | 2.18×10^11 / 1.56×10^10 / 2.02×10^11
F15 | GA | 1.11×10^5 / 1.65×10^4 / 9.41×10^4 | 2.87×10^7 / 1.62×10^7 / 1.26×10^7 | 4.68×10^7 / 1.99×10^7 / 2.69×10^7 | 1.75×10^8 / 8.96×10^6 / 1.66×10^8
F15 | ISGA | 1.00×10^5 / 1.50×10^4 / 8.50×10^4 | 2.60×10^7 / 1.50×10^7 / 1.10×10^7 | 4.30×10^7 / 1.80×10^7 / 2.50×10^7 | 1.60×10^8 / 8.20×10^6 / 1.50×10^8
F15 | RLWOA | 1.50×10^4 / 5.00×10^3 / 1.30×10^4 | 7.00×10^6 / 5.50×10^6 / 4.20×10^6 | 4.80×10^7 / 1.40×10^7 / 4.50×10^7 | 4.50×10^8 / 2.00×10^8 / 2.50×10^8
F15 | Rand-MHA | 1.16×10^4 / 4.56×10^3 / 1.16×10^4 | 5.82×10^6 / 5.21×10^6 / 3.89×10^6 | 4.70×10^7 / 1.35×10^7 / 4.41×10^7 | 4.38×10^8 / 1.93×10^8 / 2.45×10^8
F15 | LTR-MHA | 2.31×10^4 / 2.88×10^4 / 1.43×10^4 | 1.70×10^7 / 4.21×10^6 / 1.28×10^7 | 4.83×10^8 / 1.81×10^8 / 3.02×10^8 | 9.53×10^9 / 1.26×10^9 / 8.27×10^9
F18 | WOA | 1.71×10^4 / 3.30×10^3 / 1.37×10^4 | 5.25×10^7 / 4.56×10^7 / 6.90×10^6 | 5.61×10^7 / 2.97×10^7 / 2.64×10^7 | 8.41×10^7 / 4.62×10^7 / 3.79×10^7
F18 | HHO | 1.64×10^4 / 1.04×10^4 / 2.01×10^3 | 3.56×10^7 / 3.05×10^7 / 5.14×10^6 | 5.89×10^7 / 2.50×10^7 / 3.39×10^7 | 7.96×10^7 / 3.39×10^6 / 7.62×10^7
F18 | GA | 1.70×10^5 / 1.33×10^5 / 3.70×10^4 | 4.16×10^6 / 3.04×10^6 / 1.13×10^6 | 1.19×10^7 / 2.45×10^6 / 9.43×10^6 | 2.90×10^7 / 1.01×10^7 / 1.90×10^7
F18 | ISGA | 1.60×10^5 / 1.25×10^5 / 3.50×10^4 | 3.90×10^6 / 2.90×10^6 / 1.00×10^6 | 1.10×10^7 / 2.30×10^6 / 9.00×10^6 | 2.70×10^7 / 9.50×10^6 / 1.80×10^7
F18 | RLWOA | 1.50×10^4 / 2.50×10^4 / 1.45×10^4 | 1.20×10^6 / 3.60×10^5 / 8.00×10^5 | 8.60×10^6 / 4.60×10^6 / 8.40×10^6 | 1.50×10^7 / 7.00×10^6 / 1.35×10^7
F18 | Rand-MHA | 1.68×10^4 / 1.35×10^5 / 9.07×10^3 | 9.00×10^6 / 3.22×10^6 / 6.46×10^6 | 8.75×10^6 / 4.35×10^6 / 8.62×10^6 | 1.57×10^7 / 7.42×10^6 / 8.29×10^6
F18 | LTR-MHA | 1.55×10^4 / 2.34×10^4 / 1.42×10^4 | 1.11×10^6 / 3.39×10^5 / 7.69×10^5 | 8.40×10^6 / 4.50×10^6 / 8.35×10^6 | 1.48×10^7 / 6.86×10^6 / 1.30×10^7
F19 | WOA | 3.28×10^4 / 2.07×10^3 / 3.07×10^4 | 2.12×10^9 / 2.04×10^9 / 8.36×10^7 | 1.68×10^10 / 7.12×10^9 / 9.66×10^9 | 1.09×10^11 / 2.22×10^10 / 8.68×10^10
F19 | HHO | 1.54×10^5 / 4.87×10^4 / 1.06×10^5 | 1.09×10^10 / 3.90×10^9 / 7.02×10^9 | 2.09×10^10 / 2.27×10^9 / 1.86×10^10 | 2.21×10^11 / 2.89×10^10 / 1.92×10^11
F19 | GA | 5.54×10^5 / 5.30×10^5 / 2.45×10^4 | 1.08×10^7 / 2.89×10^5 / 1.05×10^7 | 1.14×10^7 / 6.22×10^5 / 1.07×10^7 | 2.65×10^8 / 1.51×10^8 / 1.15×10^8
F19 | ISGA | 5.20×10^5 / 5.00×10^5 / 2.30×10^4 | 1.00×10^7 / 2.70×10^5 / 9.80×10^6 | 1.10×10^7 / 6.00×10^5 / 1.05×10^7 | 2.50×10^8 / 1.45×10^8 / 1.10×10^8
F19 | RLWOA | 3.50×10^4 / 4.30×10^3 / 3.30×10^4 | 5.00×10^7 / 3.50×10^7 / 1.35×10^7 | 1.15×10^7 / 8.80×10^7 / 1.10×10^7 | 2.35×10^8 / 2.25×10^8 / 2.25×10^8
F19 | Rand-MHA | 2.58×10^4 / 4.35×10^4 / 1.90×10^4 | 1.63×10^6 / 3.22×10^5 / 4.14×10^5 | 2.77×10^7 / 6.46×10^4 / 4.90×10^6 | 3.88×10^8 / 1.27×10^8 / 2.61×10^8
F19 | LTR-MHA | 3.76×10^4 / 4.11×10^3 / 3.35×10^4 | 4.56×10^7 / 3.27×10^7 / 1.29×10^7 | 1.12×10^7 / 8.69×10^7 / 1.05×10^7 | 2.31×10^8 / 2.19×10^8 / 2.19×10^8
Table 7. Comparison of composition functions (each cell reports Mean / Std / Best).

Funcs | Algorithm | Dim = 10 | Dim = 30 | Dim = 50 | Dim = 100
F22 | WOA | 2.77×10^3 / 3.56×10^2 / 2.41×10^3 | 9.53×10^3 / 5.17×10^2 / 9.01×10^3 | 1.57×10^4 / 2.77×10^2 / 1.54×10^4 | 3.48×10^4 / 2.12×10^2 / 3.46×10^4
F22 | HHO | 2.83×10^3 / 3.21×10^2 / 2.51×10^3 | 9.41×10^3 / 2.20×10^2 / 9.19×10^3 | 1.62×10^4 / 5.20×10^2 / 1.57×10^4 | 3.44×10^4 / 3.04×10^2 / 3.41×10^4
F22 | GA | 2.31×10^3 / 9.78×10^2 / 2.31×10^3 | 6.33×10^3 / 2.17×10^2 / 6.12×10^3 | 1.16×10^4 / 1.60×10^2 / 1.15×10^4 | 2.80×10^4 / 3.77×10^2 / 2.76×10^4
F22 | ISGA | 2.31×10^3 / 9.50×10^2 / 2.31×10^3 | 6.20×10^3 / 2.10×10^2 / 6.00×10^3 | 1.14×10^4 / 1.55×10^2 / 1.13×10^4 | 2.75×10^4 / 3.70×10^2 / 2.71×10^4
F22 | RLWOA | 2.30×10^3 / 8.00×10^1 / 2.25×10^3 | 5.90×10^3 / 3.00×10^3 / 2.90×10^3 | 1.13×10^4 / 4.80×10^2 / 1.08×10^4 | 2.40×10^4 / 1.55×10^2 / 2.30×10^4
F22 | Rand-MHA | 2.31×10^3 / 8.57×10^1 / 2.31×10^3 | 7.71×10^3 / 3.17×10^2 / 7.53×10^3 | 1.22×10^4 / 1.38×10^3 / 1.16×10^4 | 2.98×10^4 / 4.47×10^2 / 2.94×10^4
F22 | LTR-MHA | 2.30×10^3 / 7.73×10^1 / 2.22×10^3 | 5.84×10^3 / 3.18×10^3 / 2.66×10^3 | 1.12×10^4 / 4.85×10^2 / 1.07×10^4 | 2.38×10^4 / 1.50×10^2 / 2.27×10^4
F23 | WOA | 2.68×10^3 / 2.53×10^1 / 2.65×10^3 | 3.30×10^3 / 3.88×10^1 / 3.26×10^3 | 4.33×10^3 / 7.59×10^1 / 4.26×10^3 | 6.11×10^3 / 1.43×10^2 / 5.97×10^3
F23 | HHO | 2.70×10^3 / 1.71×10^1 / 2.68×10^3 | 3.41×10^3 / 2.82×10^1 / 3.38×10^3 | 4.34×10^3 / 6.72×10^1 / 4.28×10^3 | 6.38×10^3 / 2.29×10^2 / 6.15×10^3
F23 | GA | 2.64×10^3 / 9.32×10^1 / 2.63×10^3 | 2.85×10^3 / 5.40×10^1 / 2.80×10^3 | 3.19×10^3 / 1.02×10^1 / 3.18×10^3 | 3.99×10^3 / 8.92×10^1 / 3.90×10^3
F23 | ISGA | 2.64×10^3 / 9.00×10^1 / 2.63×10^3 | 2.83×10^3 / 5.20×10^1 / 2.78×10^3 | 3.17×10^3 / 9.80×10^0 / 3.16×10^3 | 3.95×10^3 / 8.70×10^1 / 3.86×10^3
F23 | RLWOA | 2.65×10^3 / 8.50×10^1 / 2.63×10^3 | 2.84×10^3 / 1.60×10^1 / 2.82×10^3 | 3.17×10^3 / 3.10×10^0 / 3.13×10^3 | 3.10×10^3 / 8.50×10^1 / 3.02×10^3
F23 | Rand-MHA | 2.64×10^3 / 3.23×10^0 / 2.63×10^3 | 2.99×10^3 / 3.50×10^1 / 2.91×10^3 | 3.78×10^3 / 3.33×10^0 / 3.72×10^3 | 4.74×10^3 / 2.44×10^2 / 4.49×10^3
F23 | LTR-MHA | 2.63×10^3 / 8.16×10^1 / 2.63×10^3 | 2.83×10^3 / 1.57×10^1 / 2.81×10^3 | 3.16×10^3 / 3.03×10^0 / 3.12×10^3 | 3.09×10^3 / 8.38×10^1 / 3.01×10^3
F24 | WOA | 2.78×10^3 / 2.89×10^1 / 2.76×10^3 | 3.86×10^3 / 7.38×10^0 / 3.85×10^3 | 4.52×10^3 / 1.50×10^2 / 4.37×10^3 | 9.50×10^3 / 1.95×10^2 / 9.30×10^3
F24 | HHO | 2.82×10^3 / 6.89×10^0 / 2.82×10^3 | 3.72×10^3 / 3.71×10^0 / 3.72×10^3 | 4.68×10^3 / 8.61×10^1 / 4.60×10^3 | 9.32×10^3 / 4.34×10^1 / 9.28×10^3
F24 | GA | 2.78×10^3 / 9.22×10^0 / 2.76×10^3 | 3.17×10^3 / 4.58×10^1 / 3.12×10^3 | 3.66×10^3 / 2.00×10^1 / 3.64×10^3 | 4.88×10^3 / 1.64×10^2 / 4.72×10^3
F24 | ISGA | 2.78×10^3 / 9.00×10^0 / 2.76×10^3 | 3.15×10^3 / 4.40×10^1 / 3.10×10^3 | 3.64×10^3 / 1.95×10^1 / 3.62×10^3 | 4.85×10^3 / 1.60×10^2 / 4.70×10^3
F24 | RLWOA | 2.78×10^3 / 4.50×10^0 / 2.76×10^3 | 3.06×10^3 / 2.20×10^0 / 3.06×10^3 | 3.42×10^3 / 1.05×10^1 / 3.41×10^3 | 4.82×10^3 / 5.20×10^1 / 4.77×10^3
F24 | Rand-MHA | 2.79×10^3 / 6.72×10^0 / 2.78×10^3 | 3.28×10^3 / 4.78×10^1 / 3.16×10^3 | 3.90×10^3 / 1.02×10^2 / 3.80×10^3 | 5.62×10^3 / 4.84×10^1 / 5.57×10^3
F24 | LTR-MHA | 2.77×10^3 / 4.45×10^0 / 2.76×10^3 | 3.05×10^3 / 2.13×10^0 / 3.05×10^3 | 3.41×10^3 / 1.00×10^1 / 3.40×10^3 | 4.80×10^3 / 5.15×10^1 / 4.75×10^3
F25 | WOA | 3.18×10^3 / 1.17×10^2 / 3.06×10^3 | 4.39×10^3 / 3.85×10^2 / 4.00×10^3 | 1.02×10^4 / 4.63×10^2 / 9.73×10^3 | 2.29×10^4 / 1.96×10^3 / 2.09×10^4
F25 | HHO | 3.34×10^3 / 1.05×10^2 / 3.24×10^3 | 4.37×10^3 / 4.35×10^1 / 4.33×10^3 | 1.50×10^4 / 1.01×10^3 / 1.40×10^4 | 2.83×10^4 / 7.32×10^2 / 2.75×10^4
F25 | GA | 2.95×10^3 / 2.84×10^0 / 2.95×10^3 | 3.05×10^3 / 6.30×10^0 / 3.00×10^3 | 3.31×10^3 / 1.11×10^2 / 3.20×10^3 | 5.63×10^3 / 6.81×10^1 / 5.56×10^3
F25 | ISGA | 2.95×10^3 / 2.80×10^0 / 2.95×10^3 | 3.04×10^3 / 6.10×10^0 / 2.99×10^3 | 3.30×10^3 / 1.10×10^2 / 3.19×10^3 | 5.60×10^3 / 6.70×10^1 / 5.53×10^3
F25 | RLWOA | 2.94×10^3 / 2.40×10^1 / 2.92×10^3 | 3.03×10^3 / 1.65×10^1 / 2.30×10^3 | 4.75×10^3 / 2.85×10^2 / 4.46×10^3 | 1.69×10^4 / 4.82×10^2 / 1.65×10^4
F25 | Rand-MHA | 2.99×10^3 / 2.56×10^1 / 2.90×10^3 | 3.04×10^3 / 6.67×10^0 / 2.99×10^3 | 3.59×10^3 / 2.96×10^1 / 3.56×10^3 | 7.29×10^3 / 1.13×10^2 / 7.18×10^3
F25 | LTR-MHA | 2.93×10^3 / 2.36×10^1 / 2.91×10^3 | 3.02×10^3 / 1.61×10^1 / 2.27×10^3 | 4.73×10^3 / 2.82×10^2 / 4.44×10^3 | 1.68×10^4 / 4.78×10^2 / 1.64×10^4
F26 | WOA | 4.31×10^3 / 3.77×10^2 / 3.93×10^3 | 9.64×10^3 / 9.57×10^2 / 8.68×10^3 | 1.79×10^4 / 5.33×10^2 / 1.73×10^4 | 5.09×10^4 / 2.74×10^3 / 4.81×10^4
F26 | HHO | 4.28×10^3 / 2.64×10^2 / 4.01×10^3 | 1.05×10^4 / 4.48×10^2 / 1.00×10^4 | 1.56×10^4 / 6.68×10^2 / 1.49×10^4 | 5.08×10^4 / 9.32×10^2 / 4.98×10^4
F26 | GA | 3.65×10^3 / 7.02×10^2 / 2.94×10^3 | 6.51×10^3 / 1.47×10^2 / 6.36×10^3 | 9.06×10^3 / 5.51×10^2 / 8.51×10^3 | 2.14×10^4 / 1.34×10^3 / 2.00×10^4
F26 | ISGA | 3.60×10^3 / 6.90×10^2 / 2.90×10^3 | 6.40×10^3 / 1.45×10^2 / 6.25×10^3 | 8.95×10^3 / 5.40×10^2 / 8.40×10^3 | 2.17×10^4 / 1.30×10^3 / 1.97×10^4
F26 | RLWOA | 2.97×10^3 / 3.10×10^1 / 2.94×10^3 | 5.02×10^3 / 1.08×10^3 / 4.00×10^3 | 9.05×10^3 / 3.15×10^2 / 8.94×10^3 | 2.13×10^4 / 5.45×10^1 / 2.07×10^4
F26 | Rand-MHA | 2.95×10^3 / 3.23×10^1 / 2.82×10^3 | 5.96×10^3 / 1.46×10^3 / 3.75×10^3 | 1.18×10^4 / 1.46×10^2 / 1.16×10^4 | 3.01×10^4 / 5.49×10^2 / 2.95×10^4
F26 | LTR-MHA | 2.96×10^3 / 3.09×10^1 / 2.93×10^3 | 5.00×10^3 / 1.06×10^3 / 3.94×10^3 | 9.03×10^3 / 3.13×10^2 / 8.92×10^3 | 2.12×10^4 / 5.39×10^1 / 2.06×10^4
F27 | WOA | 3.21×10^3 / 2.73×10^1 / 3.18×10^3 | 3.84×10^3 / 4.25×10^1 / 3.80×10^3 | 5.72×10^3 / 5.77×10^2 / 5.14×10^3 | 1.07×10^4 / 7.04×10^2 / 1.00×10^4
F27 | HHO | 3.21×10^3 / 2.21×10^1 / 3.19×10^3 | 5.08×10^3 / 2.34×10^2 / 4.84×10^3 | 7.23×10^3 / 7.77×10^2 / 6.45×10^3 | 1.26×10^4 / 5.55×10^2 / 1.21×10^4
F27 | GA | 3.40×10^3 / 2.40×10^0 / 3.10×10^3 | 3.26×10^3 / 1.85×10^1 / 3.24×10^3 | 3.52×10^3 / 3.64×10^0 / 3.52×10^3 | 4.12×10^3 / 1.37×10^1 / 4.11×10^3
F27 | ISGA | 3.38×10^3 / 2.35×10^0 / 3.09×10^3 | 3.35×10^3 / 1.80×10^1 / 3.23×10^3 | 3.51×10^3 / 3.60×10^0 / 3.51×10^3 | 4.12×10^3 / 1.35×10^1 / 4.09×10^3
F27 | RLWOA | 3.13×10^3 / 2.60×10^1 / 3.11×10^3 | 3.28×10^3 / 1.35×10^1 / 3.26×10^3 | 3.46×10^3 / 3.80×10^1 / 3.32×10^3 | 4.11×10^3 / 4.55×10^0 / 4.05×10^3
F27 | Rand-MHA | 3.15×10^3 / 3.77×10^1 / 3.10×10^3 | 3.31×10^3 / 1.65×10^1 / 3.29×10^3 | 3.73×10^3 / 1.23×10^2 / 3.61×10^3 | 5.21×10^3 / 7.67×10^1 / 5.14×10^3
F27 | LTR-MHA | 3.12×10^3 / 2.57×10^1 / 3.10×10^3 | 3.27×10^3 / 1.31×10^1 / 3.25×10^3 | 3.45×10^3 / 3.73×10^1 / 3.31×10^3 | 4.10×10^3 / 4.49×10^0 / 4.04×10^3
F28 | WOA | 3.75×10^3 / 1.31×10^2 / 3.62×10^3 | 6.89×10^3 / 1.22×10^3 / 5.67×10^3 | 9.97×10^3 / 1.09×10^3 / 8.88×10^3 | 2.77×10^4 / 1.26×10^3 / 2.64×10^4
F28 | HHO | 3.80×10^3 / 6.62×10^1 / 3.74×10^3 | 7.04×10^3 / 4.09×10^2 / 6.63×10^3 | 1.26×10^4 / 8.81×10^2 / 1.17×10^4 | 2.99×10^4 / 6.29×10^1 / 2.98×10^4
F28 | GA | 3.42×10^3 / 7.10×10^0 / 3.41×10^3 | 3.26×10^3 / 2.10×10^1 / 3.15×10^3 | 4.14×10^3 / 1.86×10^2 / 3.96×10^3 | 7.01×10^3 / 6.04×10^2 / 6.40×10^3
F28 | ISGA | 3.48×10^3 / 7.60×10^1 / 3.38×10^3 | 3.30×10^3 / 1.45×10^1 / 3.28×10^3 | 4.63×10^3 / 7.70×10^1 / 3.55×10^3 | 7.18×10^3 / 6.00×10^2 / 6.78×10^3
F28 | RLWOA | 3.38×10^3 / 7.65×10^1 / 3.38×10^3 | 3.32×10^3 / 1.48×10^1 / 3.30×10^3 | 3.65×10^3 / 7.76×10^1 / 3.57×10^3 | 6.69×10^3 / 6.15×10^2 / 6.13×10^3
F28 | Rand-MHA | 3.57×10^3 / 7.46×10^0 / 3.41×10^3 | 3.39×10^3 / 1.65×10^1 / 3.35×10^3 | 4.19×10^3 / 1.00×10^2 / 4.09×10^3 | 9.41×10^3 / 3.02×10^2 / 9.11×10^3
F28 | LTR-MHA | 3.41×10^3 / 7.00×10^0 / 3.40×10^3 | 3.25×10^3 / 2.08×10^1 / 3.13×10^3 | 4.13×10^3 / 1.84×10^2 / 3.95×10^3 | 6.67×10^3 / 6.10×10^2 / 6.11×10^3
F30 | WOA | 1.79×10^7 / 3.67×10^6 / 1.42×10^7 | 1.38×10^9 / 9.29×10^8 / 4.49×10^8 | 6.04×10^9 / 1.00×10^9 / 5.03×10^9 | 2.07×10^11 / 1.95×10^10 / 1.87×10^11
F30 | HHO | 2.33×10^7 / 7.62×10^6 / 1.57×10^7 | 1.56×10^9 / 1.29×10^9 / 2.69×10^8 | 8.27×10^10 / 8.10×10^9 / 7.46×10^10 | 2.85×10^11 / 2.39×10^10 / 2.61×10^11
F30 | GA | 1.35×10^6 / 9.71×10^5 / 3.81×10^5 | 1.56×10^7 / 5.67×10^6 / 9.94×10^6 | 1.18×10^9 / 1.02×10^7 / 1.17×10^9 | 1.09×10^10 / 7.22×10^8 / 1.02×10^10
F30 | ISGA | 1.30×10^6 / 9.40×10^5 / 3.60×10^5 | 1.87×10^7 / 1.98×10^6 / 4.00×10^6 | 1.48×10^9 / 1.68×10^6 / 2.85×10^8 | 3.68×10^10 / 4.90×10^7 / 2.51×10^8
F30 | RLWOA | 1.00×10^6 / 3.00×10^5 / 7.00×10^5 | 1.58×10^7 / 5.70×10^6 / 1.00×10^7 | 1.19×10^9 / 1.03×10^7 / 1.18×10^9 | 1.10×10^10 / 7.25×10^8 / 1.03×10^10
F30 | Rand-MHA | 1.88×10^6 / 8.98×10^5 / 1.02×10^6 | 8.35×10^6 / 5.77×10^6 / 7.71×10^6 | 1.84×10^8 / 5.15×10^7 / 1.33×10^8 | 1.12×10^9 / 3.94×10^8 / 7.23×10^8
F30 | LTR-MHA | 9.66×10^5 / 2.98×10^5 / 6.68×10^5 | 6.22×10^6 / 2.04×10^6 / 4.18×10^6 | 3.14×10^7 / 1.73×10^6 / 2.97×10^7 | 3.13×10^8 / 5.03×10^7 / 2.62×10^8
Table 8. Average runtime comparison of the LTR-MHA and baseline algorithms (in seconds).

Funcs | WOA | HHO | GA | ISGA | RLWOA | Rand-MHA | LTR-MHA
F1 | 0.4819 | 1.4700 | 0.7766 | 4.8352 | 57.3926 | 3.5126 | 70.4798
F3 | 0.4377 | 1.4866 | 0.7803 | 4.9267 | 58.1473 | 3.5139 | 58.2547
F4 | 0.4392 | 1.4842 | 0.7704 | 4.8159 | 57.9638 | 3.4413 | 57.8522
F5 | 0.4321 | 1.5854 | 0.8290 | 5.2847 | 58.7264 | 3.4510 | 64.2786
F6 | 0.4910 | 1.7595 | 0.7793 | 4.9261 | 56.8355 | 5.3804 | 56.2355
F7 | 0.4663 | 2.0480 | 0.8303 | 5.1738 | 57.4909 | 3.5970 | 55.0909
F8 | 0.4799 | 1.7791 | 0.7745 | 4.8632 | 58.6083 | 4.3683 | 61.1083
F9 | 0.4447 | 1.7802 | 0.7644 | 4.7953 | 57.2079 | 3.4204 | 58.7079
F10 | 0.9688 | 3.0292 | 0.8834 | 5.4267 | 58.4041 | 4.3267 | 63.9041
F11 | 0.4923 | 2.6668 | 0.8092 | 5.1252 | 57.1205 | 3.8252 | 54.6205
F12 | 0.5161 | 2.7848 | 0.8552 | 5.3034 | 57.4699 | 3.9034 | 54.9699
F13 | 0.5058 | 2.9435 | 0.8101 | 5.2558 | 57.4127 | 3.9558 | 54.9127
F14 | 0.5039 | 2.8387 | 0.8249 | 5.2831 | 59.4465 | 4.0831 | 64.9465
F15 | 0.4861 | 2.6731 | 0.7965 | 5.1307 | 57.1656 | 3.8307 | 54.6656
F18 | 0.5051 | 3.0541 | 0.8101 | 5.1766 | 57.9965 | 3.8766 | 56.4965
F19 | 0.5852 | 3.6280 | 0.9318 | 5.5323 | 58.0511 | 4.1323 | 55.5511
F21 | 0.5713 | 3.8656 | 0.8651 | 5.6503 | 56.2671 | 4.2503 | 53.7671
F22 | 0.6556 | 4.6999 | 0.9891 | 5.9955 | 57.9133 | 4.5955 | 54.4133
F23 | 0.6968 | 5.0680 | 1.0333 | 6.2073 | 56.4508 | 4.8073 | 53.9508
F24 | 0.6312 | 4.7852 | 0.9419 | 6.0494 | 56.4387 | 4.5494 | 53.9387
F25 | 0.6443 | 4.8955 | 0.9769 | 6.0807 | 55.7819 | 4.6807 | 53.2819
F26 | 0.7358 | 5.8243 | 1.0733 | 6.5357 | 57.1906 | 5.1357 | 54.6906
F27 | 0.8057 | 6.3997 | 1.1641 | 6.7742 | 58.4299 | 5.3742 | 55.9299
F28 | 0.7657 | 5.9329 | 1.3762 | 6.5956 | 55.0276 | 5.1956 | 53.5276
F30 | 0.8444 | 8.8503 | 1.1513 | 7.0268 | 57.3208 | 6.5268 | 54.8208
Table 9. Results of the Wilcoxon rank-sum test (p-values, with '+', '=', and '−' as defined in the text).

Funcs | WOA | HHO | GA | ISGA | RLWOA | Rand-MHA
F1 | 3.35×10^-11 (+) | 3.34×10^-11 (+) | 3.34×10^-11 (+) | 3.02×10^-11 (+) | 4.12×10^-11 (+) | 4.45×10^-11 (+)
F3 | 2.39×10^-8 (+) | 1.58×10^-8 (+) | 7.15×10^-5 (−) | 2.87×10^-8 (+) | 3.22×10^-4 (−) | 3.71×10^-4 (−)
F4 | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.15×10^-11 (+) | 3.02×10^-11 (+)
F5 | 6.13×10^-10 (+) | 7.56×10^-11 (+) | 3.02×10^-11 (+) | 5.44×10^-10 (+) | 1.63×10^-5 (−) | 1.95×10^-5 (−)
F6 | 4.70×10^-8 (+) | 2.23×10^-6 (+) | 3.02×10^-11 (+) | 3.85×10^-4 (+) | 1.42×10^-5 (+) | 1.64×10^-10 (+)
F7 | 1.17×10^-9 (+) | 6.12×10^-10 (+) | 3.02×10^-5 (−) | 9.28×10^-10 (+) | 2.45×10^-4 (−) | 2.88×10^-3 (−)
F8 | 2.46×10^-10 (+) | 5.15×10^-11 (+) | 3.02×10^-11 (+) | 2.11×10^-10 (+) | 1.87×10^-8 (+) | 2.00×10^-9 (+)
F9 | 1.36×10^-10 (+) | 3.34×10^-11 (+) | 3.02×10^-11 (+) | 1.18×10^-10 (+) | 7.92×10^-7 (=) | 8.65×10^-6 (=)
F10 | 1.96×10^-5 (=) | 4.32×10^-10 (+) | 3.02×10^-11 (+) | 1.05×10^-6 (=) | 1.32×10^-9 (+) | 1.49×10^-9 (+)
F11 | 3.45×10^-10 (+) | 1.91×10^-8 (+) | 3.32×10^-11 (+) | 2.97×10^-10 (+) | 2.31×10^-10 (+) | 2.56×10^-10 (+)
F12 | 3.45×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+)
F13 | 3.02×10^-11 (+) | 3.23×10^-11 (+) | 3.56×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+)
F14 | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+)
F15 | 3.35×10^-11 (+) | 3.02×10^-11 (+) | 3.24×10^-6 (−) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+)
F18 | 3.34×10^-11 (+) | 3.21×10^-5 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 1.52×10^-7 (+) | 1.75×10^-7 (+)
F19 | 3.02×10^-11 (+) | 3.32×10^-8 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 2.28×10^-8 (+) | 2.53×10^-10 (+)
F21 | 3.24×10^-9 (+) | 3.41×10^-8 (+) | 6.74×10^-8 (+) | 2.95×10^-9 (+) | 8.16×10^-6 (−) | 9.47×10^-4 (−)
F22 | 3.02×10^-11 (+) | 3.42×10^-9 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 1.61×10^-9 (+) | 1.73×10^-7 (+)
F23 | 3.02×10^-11 (+) | 3.42×10^-9 (+) | 5.20×10^-1 (+) | 3.02×10^-11 (+) | 1.72×10^-7 (=) | 1.85×10^-6 (=)
F24 | 4.50×10^-11 (+) | 4.50×10^-11 (+) | 3.32×10^-6 (+) | 4.50×10^-11 (+) | 1.61×10^-9 (+) | 1.74×10^-8 (+)
F25 | 3.21×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-5 (−) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+)
F26 | 6.70×10^-11 (+) | 5.67×10^-11 (+) | 3.26×10^-8 (+) | 6.12×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+)
F27 | 2.33×10^-10 (+) | 8.15×10^-11 (+) | 9.06×10^-7 (=) | 2.01×10^-10 (+) | 1.95×10^-11 (+) | 2.07×10^-11 (+)
F28 | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+)
F30 | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-6 (−) | 3.02×10^-11 (+) | 3.02×10^-11 (+) | 3.02×10^-11 (+)
+/=/− | 24/1/0 | 25/0/0 | 19/1/5 | 24/1/0 | 19/2/4 | 19/2/4
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
