Article

A Novel Learning-Based Binarization Scheme Selector for Swarm Algorithms Solving Combinatorial Problems

1 Escuela de Construcción Civil, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Santiago 7820436, Chile
2 Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2241, Valparaíso 2362807, Chile
3 Escuela de Negocios Internacionales, Universidad de Valparaíso, Alcalde Prieto Nieto 452, Viña del Mar, Valparaíso 2572048, Chile
4 Departamento de Informática, Universidad Técnica Federico Santa María, Avenida España 1680, Valparaíso 2390123, Chile
5 Escuela de Ingeniería en Construcción, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2147, Valparaíso 2362804, Chile
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(22), 2887; https://doi.org/10.3390/math9222887
Submission received: 21 October 2021 / Revised: 6 November 2021 / Accepted: 9 November 2021 / Published: 12 November 2021
(This article belongs to the Special Issue Metaheuristic Algorithms)

Abstract

Currently, industry is undergoing an exponential increase in binary-based combinatorial problems. In this regard, metaheuristics have been a common trend in the field when designing approaches to solve them successfully. Thus, a well-known strategy consists in employing continuous swarm-based algorithms transformed to perform in binary environments. In this work, we propose a hybrid approach that contains discrete, smartly adapted population-based strategies to efficiently tackle binary-based problems. The proposed approach employs a reinforcement learning technique, known as SARSA (State–Action–Reward–State–Action), in order to exploit knowledge gathered at run time. In order to test the viability and competitiveness of our proposal, we compare it against discrete state-of-the-art algorithms. Finally, we illustrate interesting results where the proposed hybrid outperforms other approaches, thus providing a novel option to tackle these types of problems in industry.

1. Introduction

High-complexity problems in binary domains are a common sight in industry, driven by increasing digitalization and the incorporation of artificial intelligence. Among well-known problems of great complexity, we can find the Set Covering Problem (SCP) [1], the Knapsack Problem [2], the Set-Union Knapsack Problem [3], and Feature Selection [4]. In order to solve these kinds of problems, the employment of exact methods can be unmanageable under restricted resources, such as computational time. Thus, approximate methods, such as metaheuristics (MH), which do not guarantee optimality but obtain solutions as close as possible to the optimum in a reasonable computational time, have been a recurrent answer from the scientific community.
In the literature, there exist MH designed to address these problems directly, without the need for modifications. However, it has been demonstrated that MH designed to work in continuous domains, assisted by a discretization scheme, outperform the classic binary-based approaches [5]. The classic design transforms domains through two-step techniques. However, novel learning-based hybrids have been reported, which focus on improving the transformation process [2,6].
Hybrid methods have been designed as novel approaches that use multiple optimization tools. They have been a hot topic in the field, and several improvements over MH have been reported. Among the most relevant lines of research in the literature, four can be clearly distinguished, such as MH with “Mathematical Programming” [7]; hybridization between MH [8]; “Simheuristics”, which interrelates MH with the simulation of problems [9]; and MH with Machine Learning (ML) [10,11,12].
In this work, we propose a hybrid approach composed of MH and ML, which includes continuous population-based algorithms supported by a learning-based binarization scheme. The novelty of the proposal is a multiple binarization scheme balanced by a Reinforcement Learning (RL) technique, named SARSA, which operates at run time. The main idea is to provide an adaptive binary-selector mechanism based on the knowledge extracted from the dynamic data generated through the search, such as the diversity of solutions.
In RL approaches, the employment and management of rewards are well known. In this work, five different types are considered in the implemented reward system: global best, with penalty, root adaptation, without penalty, and escalating adaptation. Regarding the population-based algorithms, the Sine–Cosine Algorithm (SCA), Harris Hawk Optimization (HHO), Whale Optimization Algorithm (WOA), and Grey Wolf Optimizer (GWO) are employed. This complete set of components, profiting from the data generated at run time by population-based algorithms, motivated the challenge of proposing a learning-based approach with the capability to self-adapt and improve throughout the search.
In order to prove the competitiveness of the proposed hybrid algorithm, experimentation tests were carried out against multiple state-of-the-art binarization strategies solving the SCP. Lastly, we highlight the good performance of the proposed approach, which proves to be a good alternative for solving binary optimization problems.
The rest of this paper is organized as follows. In Section 2, we present a detailed description of all the implemented population-based MH, the state-of-the-art binarization schemes, how MH have been supported by ML, and the optimization problem tackled. The proposed hybrid is presented in Section 3, where we describe the designed learning model and the details of employing SARSA with the reward system. Section 4 presents the results obtained together with their respective tables and figures. Finally, analysis and discussions are presented in Section 5, followed by our conclusions and future lines of work.

2. Related Work

In this section, we present all the required concepts related to the proposal in order to understand the ideas and objectives behind the design.

2.1. Sine–Cosine Algorithm

This MH was designed by Mirjalili in 2016 [13] and took inspiration from the sine–cosine trigonometric functions. Sine–Cosine can be classified as a population-based algorithm where the population is randomly generated and subsequently perturbed by the following methodology:
$$X_{i,j}^{t+1} = \begin{cases} X_{i,j}^{t} + r_1 \cdot \sin(r_2) \cdot \left| r_3 P_j^{t} - X_{i,j}^{t} \right|, & r_4 < 0.5 \\ X_{i,j}^{t} + r_1 \cdot \cos(r_2) \cdot \left| r_3 P_j^{t} - X_{i,j}^{t} \right|, & r_4 \geq 0.5 \end{cases} \tag{1}$$
where the parameter $r_1$ and the uniform random numbers $r_2$, $r_3$, and $r_4$ are given in Equations (2)–(5), respectively, and $P_j^{t}$ is the $j$-th component of the best known solution. The parameter $r_1$ determines the direction of the motion, that is, towards or away from the best known solution; $r_2$ indicates the magnitude of the motion; $r_3$ controls how random the motion is, so when $r_3 > 1$ it is highly stochastic; and $r_4$ determines which equation is employed, in other words, the phase of the algorithm (exploration or exploitation) [14].
$$r_1 = a - t \cdot \frac{a}{T} \tag{2}$$
$$r_2 = 2 \pi \cdot \mathrm{Rand}[0,1] \tag{3}$$
$$r_3 = 2 \cdot \mathrm{Rand}[0,1] \tag{4}$$
$$r_4 = \mathrm{Rand}[0,1] \tag{5}$$
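For illustration, a minimal NumPy sketch of this update follows; the function name, the array shapes, and the constant $a = 2$ are assumptions made for the example and are not taken from the original implementation.

```python
import numpy as np

# A minimal sketch of the SCA position update (Equation (1)), assuming a
# population matrix X of shape (n, dim), the best solution P, the current
# iteration t, the iteration budget T, and the constant a = 2.
def sca_update(X, P, t, T, a=2.0):
    r1 = a - t * (a / T)                       # Equation (2): step size / direction
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            r2 = 2 * np.pi * np.random.rand()  # Equation (3): motion magnitude
            r3 = 2 * np.random.rand()          # Equation (4): stochastic weight of the destination
            r4 = np.random.rand()              # Equation (5): sine/cosine switch
            if r4 < 0.5:
                X[i, j] = X[i, j] + r1 * np.sin(r2) * abs(r3 * P[j] - X[i, j])
            else:
                X[i, j] = X[i, j] + r1 * np.cos(r2) * abs(r3 * P[j] - X[i, j])
    return X
```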

2.2. Harris Hawk Optimization

This MH was designed by Heidari et al. in 2019 [15] and named Harris Hawk Optimization (HHO). It was inspired by the cooperative hunting behavior of Harris hawks chasing their prey. In this regard, in each iteration, the best solution is designated as the prey $X_{rabbit}$ and becomes the target for the rest of the population (the hawks). At each iteration, the escaping energy $E$ and jump strength $J$ of the prey are computed. $E$ determines whether exploration or exploitation is performed. This energy decreases over time and can be interpreted as the prey getting weaker after eluding the attacks of the hawks. This situation can be mathematically modeled as follows.
$$E = 2 \cdot E_0 \cdot \left( 1 - \frac{t}{T} \right), \quad t \in \{1, 2, \dots, T\}, \quad T = \text{maximum iterations} \tag{6}$$
In each iteration, the initial energy level $E_0$ is drawn randomly from $[-1, 1]$. When $E_0$ decreases from 0 to $-1$, the prey is running out of energy; when $E_0$ increases from 0 to 1, the prey is gaining energy. However, as the iterations progress, the current energy $E$ follows a decreasing trend. While $|E| \geq 1$, HHO performs exploration, and this changes to exploitation when $|E| < 1$. The exploration is mathematically modeled as follows.
$$X_i^{t+1} = \begin{cases} X_{rand}^{t} - r_1 \cdot \left| X_{rand}^{t} - 2 r_2 \cdot X_i^{t} \right|, & q \geq 0.5 \\ \left( X_{rabbit}^{t} - X_m^{t} \right) - r_3 \cdot \left( LB + r_4 (UB - LB) \right), & q < 0.5 \end{cases} \tag{7}$$
Thus, when $q \geq 0.5$, the hawks randomly search the solution space; when $q < 0.5$, the hawks perch close to the rest of the population and the prey.
Additionally, $X_i^{t+1}$ corresponds to the updated position of the current hawk, $X_{rand}^{t}$ is a randomly selected hawk, $X_{rabbit}^{t}$ is the position of the best solution, and $X_i^{t}$ is the current position of the hawk. $r_1$ to $r_4$ and $q$ are uniform random numbers in $[0,1]$, $LB$ and $UB$ are the limits of the search space, and $X_m^{t}$ is the mean position of the population.
The exploitation strategies are carried out according to Equations (8), (9), (11) and (12). The type of exploitation behavior is decided based on the current energy $|E|$ and $r$, where $r$ is a random number in $[0,1]$. When $|E| \geq 0.5$ and $r \geq 0.5$, Equation (8) is employed.
$$X_i^{t+1} = \Delta X_i^{t} - E \cdot \left| J \cdot X_{rabbit}^{t} - X_i^{t} \right|, \quad \Delta X_i^{t} = X_{rabbit}^{t} - X_i^{t}, \quad J = 2 \cdot (1 - r_5) \tag{8}$$
where $\Delta X_i^{t}$ is the distance between the best position discovered thus far and the $i$-th hawk's present position, and $r_5$ is a random number in $[0,1]$ representing the rabbit's erratic hop in its attempt to escape the predator. When $|E| \geq 0.5$ and $r < 0.5$, Equation (9) is applied.
$$X_i^{t+1} = \begin{cases} Y, & \text{if } f(Y) < f(X_i^{t}) \\ Z, & \text{if } f(Z) < f(X_i^{t}) \end{cases}, \quad Y = X_{rabbit}^{t} - E \cdot \left| J \cdot X_{rabbit}^{t} - X_i^{t} \right|, \quad Z = Y + S \cdot LF(D) \tag{9}$$
where D and S are, respectively, the dimensions of the problem and a D-size vector containing random numbers, while f ( Y ) and f ( Z ) are values of the objective functions for the given vectors. L F represents the Lévy flight, which can be represented with the Equation (10).
$$LF(D) = 0.01 \cdot \frac{\mu \cdot \sigma}{|v|^{1/\beta}}, \quad \sigma = \left( \frac{\Gamma(1+\beta) \cdot \sin\left(\frac{\pi \beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right) \cdot \beta \cdot 2^{\frac{\beta-1}{2}}} \right)^{1/\beta} \tag{10}$$
where μ and v are random numbers between [0,1], β is a constant with a value of 1.5. The value of 0.01 is used to control the step length, which can be changed to fit the problem landscape. When | E | < 0.5 and r 0.5 , we apply Equation (11).
$$X_i^{t+1} = X_{rabbit}^{t} - E \cdot \left| \Delta X_i^{t} \right| \tag{11}$$
Finally, if  | E | < 0.5 and r < 0.5 , we apply Equation (12).
$$X_i^{t+1} = \begin{cases} Y, & \text{if } f(Y) < f(X_i^{t}) \\ Z, & \text{if } f(Z) < f(X_i^{t}) \end{cases}, \quad Y = X_{rabbit}^{t} - E \cdot \left| J \cdot X_{rabbit}^{t} - X_m^{t} \right|, \quad Z = Y + S \cdot LF(D) \tag{12}$$
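To summarize the phase selection described above, the following hedged sketch shows how the escaping energy and the random number $r$ route the search to Equation (7), (8), (9), (11), or (12); the function name and the returned labels are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# A minimal sketch of how HHO chooses between exploration and the four
# exploitation strategies (Equations (6)-(12)).
def hho_phase(t, T):
    E0 = np.random.uniform(-1, 1)     # initial energy level, drawn each iteration
    E = 2 * E0 * (1 - t / T)          # current escaping energy, Equation (6)
    if abs(E) >= 1:
        return "exploration, Equation (7)"
    r = np.random.rand()              # decides the exploitation variant
    if abs(E) >= 0.5:
        return "Equation (8)" if r >= 0.5 else "Equation (9)"
    return "Equation (11)" if r >= 0.5 else "Equation (12)"
```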

2.3. Whale Optimization Algorithm

The Whale Optimization Algorithm (WOA) was designed by Mirjalili and Lewis in 2016 [16] and is inspired by the hunting behavior of humpback whales, in particular the technique known as "bubble netting". WOA begins with a set of randomly generated solutions. At each iteration, the whales change their locations with respect to either a randomly selected whale or the best solution found thus far. In this context, when $|A| \geq 1$ in Equation (13), a random whale is picked as the reference; when $|A| < 1$, the best solution is chosen. In addition, through the parameter $p$, WOA decides between a spiral and a circular motion. In this regard, three motions are critical:
  • Searching for prey ($p < 0.5$ and $|A| \geq 1$): the whales search randomly for prey based on the position of each agent. When $|A| \geq 1$, the algorithm is exploring, allowing WOA to carry out a global search. This initial move is mathematically represented as follows:
    $$X_i^{t+1} = X_{rand}^{t} - A \cdot D, \quad D = \left| C \cdot X_{rand}^{t} - X_i^{t} \right| \tag{13}$$
    where t indicates the current iteration, A and C are coefficient vectors, and  X r a n d is a randomly selected position vector (i.e., a random whale) from the current population. The vectors A and C may be calculated using the following Equation (14):
    $$A = 2a \cdot r - a, \quad C = 2 \cdot r \tag{14}$$
    where $a$ decreases linearly from 2 to 0 over the iterations (in both the exploration and exploitation phases) and $r$ is a uniform random vector with values in $[0,1]$.
  • Encircling the prey ( p < 0.5 and | A | < 1 ): When the whales find their target, they proceed to surround them. At the beginning, an optimal location is unknown; thus, each agent focuses on the nearest prey. After the best search agent has been identified, the other agents attempt to update their locations towards that agent. This movement is mathematically represented by Equation (15):
    $$X_i^{t+1} = X^{*t} - A \cdot D, \quad D = \left| C \cdot X^{*t} - X_i^{t} \right| \tag{15}$$
    where $X^{*}$ is the position vector of the best solution found thus far, and $X$ is the current position vector. Equation (14) is used to compute the vectors $A$ and $C$. It is worth noting that $X^{*}$ must be updated at each iteration whenever a better solution is found.
  • Bubble net attack ($p \geq 0.5$): this movement corresponds to the "shrinking net method". The behavior is accomplished by reducing the value of $a$ in Equation (14). As the whale spins, the bubble net shrinks until the prey is captured. Equation (16) is used to represent this motion:
    $$X_i^{t+1} = D \cdot e^{bl} \cdot \cos(2 \pi l) + X^{*t}, \quad D = \left| X^{*t} - X_i^{t} \right| \tag{16}$$
    where $D$ is the distance between the $i$-th whale and the prey (the best solution obtained thus far), $b$ is a constant employed to define the form of the logarithmic spiral, and $l$ is a random number in $[-1,1]$.
Moreover, humpback whales swim around their prey in a decreasing circle while also following a spiral trajectory. To simulate this behavior, there is a 50% chance of selecting either the encircling prey mechanism (2) or the spiral model (3) to update the location of the whales during optimization. Here is the mathematical model:
$$X_i^{t+1} = \begin{cases} X^{*t} - A \cdot D, & \text{if } p < 0.5 \\ D \cdot e^{bl} \cdot \cos(2 \pi l) + X^{*t}, & \text{if } p \geq 0.5 \end{cases} \tag{17}$$
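As an illustration of how Equations (13)–(17) interact, a minimal sketch follows; the function name, the use of scalar coefficients, and the array shapes are assumptions made for the example, not the authors' implementation.

```python
import numpy as np

# A minimal sketch of the WOA position update (Equations (13)-(17)), assuming a
# population X of shape (n, dim) and the best solution x_best as a 1-D array.
def woa_update(X, x_best, t, T, b=1.0):
    a = 2 - 2 * t / T                              # a decreases linearly from 2 to 0
    for i in range(X.shape[0]):
        A = 2 * a * np.random.rand() - a           # Equation (14)
        C = 2 * np.random.rand()
        p = np.random.rand()
        if p < 0.5:
            if abs(A) >= 1:                        # search for prey (exploration)
                ref = X[np.random.randint(X.shape[0])]
            else:                                  # encircle the prey (exploitation)
                ref = x_best
            D = np.abs(C * ref - X[i])
            X[i] = ref - A * D                     # Equations (13) and (15)
        else:                                      # bubble-net (spiral) attack
            l = np.random.uniform(-1, 1)
            D = np.abs(x_best - X[i])
            X[i] = D * np.exp(b * l) * np.cos(2 * np.pi * l) + x_best  # Equation (16)
    return X
```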

2.4. Grey Wolf Optimizer

The Grey Wolf Optimizer (GWO) is inspired by the social behavior of grey wolves. The hierarchy is led by the alpha wolf ($\alpha$), followed by the beta ($\beta$) and delta ($\delta$) wolves; the remaining members of the pack are referred to as omegas [17]. The global optimum represents the location of the prey, and the alpha, beta, and delta wolves are the closest to it. The rest of the pack, the omegas, update their positions in the search space based on these leaders. In GWO, hunting the prey comprises the following steps: encircling, stalking, attacking, and searching for prey.
  • Encircling the prey: the objective is for the pack to surround the prey; to carry out this movement, each wolf moves toward the target:
    $$X(t+1) = X_p(t) - A \cdot D \tag{18}$$
    where t denotes the current iteration, X p ( t ) denotes the prey’s location in the t-th iteration, X ( t ) denotes the wolf’s position, and  D may be described as follows:
    $$D = \left| C \cdot X_p(t) - X(t) \right| \tag{19}$$
    Additionally, the coefficient vectors A and C of Equations (18) and (19) are computed as follows:
    $$A = 2a \cdot r_1 - a, \quad C = 2 \cdot r_2 \tag{20}$$
    where $a$ is a parameter, and $r_1$ and $r_2$ are uniform random vectors with values in $[0,1]$.
  • Stalking the prey until it stops: This action is carried out by the whole pack based on information provided by the α , β , and  δ wolves, who are supposed to be aware of the position of the prey. This action may be mathematically represented as follows:
    $$X(t+1) = \frac{X_1(t) + X_2(t) + X_3(t)}{3} \tag{21}$$
    where $X_1(t)$, $X_2(t)$, and $X_3(t)$ are defined as in Equation (18): $X_1(t)$ replaces $X_p(t)$, $A$, and $D$ by $X_\alpha(t)$, $A_1$, and $D_\alpha$, respectively; $X_2(t)$ replaces them by $X_\beta(t)$, $A_2$, and $D_\beta$; and $X_3(t)$ replaces them by $X_\delta(t)$, $A_3$, and $D_\delta$. (A code sketch of this averaging step follows this list.)
    Here, $X_\alpha$, $X_\beta$, and $X_\delta$ are the three best solutions of the iteration. $A_1$, $A_2$, and $A_3$ are computed as in Equation (20), and $D_\alpha$, $D_\beta$, and $D_\delta$ follow Equation (19), replacing $C$ and $X_p(t)$ by $C_1$ and $X_\alpha(t)$, $C_2$ and $X_\beta(t)$, and $C_3$ and $X_\delta(t)$, respectively, with $C_1$, $C_2$, and $C_3$ given by the equation for vector $C$ (Equation (20)).
  • Attack the prey: the main parameter in this movement is $a$, which manages the exploration or exploitation performed by GWO, i.e., moving closer to or further away from the prey. In this regard, $a$ takes values in $[0,2]$ and is defined as follows:
    $$a = 2 - t \cdot \frac{2}{T} \tag{22}$$
    where $t$ is the current iteration and $T$ is the total number of iterations. According to the original authors, the range of possible values for $a$ enables a seamless transition between exploration and exploitation. Thus, when $a$ is close to 0, the wolves are attacking the prey, in other words, the MH is exploiting the search space.
  • Search for prey: In order to hunt down their prey, wolves disperse. This behavior is mimicked by setting the parameter a to a value closer to 2. It is worth noting that every wolf can discover a more suitable (ideal) prey. If a wolf approaches the prey, it becomes the new alpha, and the remaining wolves are classified as beta, delta, or omegas according to their distance from the prey.
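As referenced above, a minimal sketch of the GWO update follows; the function name and array shapes are assumptions made for the example, not the authors' implementation.

```python
import numpy as np

# A minimal sketch of the GWO position update (Equations (18)-(22)), assuming a
# population X of shape (n, dim) and the three best wolves x_alpha, x_beta and
# x_delta as 1-D arrays.
def gwo_update(X, x_alpha, x_beta, x_delta, t, T):
    a = 2 - t * (2 / T)                            # Equation (22): a goes from 2 to 0
    n, dim = X.shape
    for i in range(n):
        new_pos = np.zeros(dim)
        for leader in (x_alpha, x_beta, x_delta):
            r1, r2 = np.random.rand(dim), np.random.rand(dim)
            A = 2 * a * r1 - a                     # Equation (20)
            C = 2 * r2
            D = np.abs(C * leader - X[i])          # Equation (19)
            new_pos += leader - A * D              # Equation (18), one leader at a time
        X[i] = new_pos / 3                         # Equation (21): average of X1, X2, X3
    return X
```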
The four metaheuristics described above were created to solve continuous optimization problems. In order for them to solve discrete optimization problems, the solutions must be transferred from the continuous domain to the binary domain.

2.5. Two-Step Binarization Scheme

The methodology behind binarization techniques for continuous MH consists in transferring the values produced by the MH in the continuous domain to a binary domain; this is done to preserve the quality of the movements of the continuous MH while generating quality binary solutions. Although there are MH that work in binary domains without the need to incorporate a binary scheme, continuous MH assisted by a binary scheme have proven to achieve great performance on multiple combinatorial NP-hard problems, for instance, the Binary Bat Algorithm [18], Particle Swarm Optimization [19], the Binary Salp Swarm Algorithm [20], Binary Dragonfly [21], and the Binary Magnetic Optimization Algorithm [22].
In the literature, two large groups of binary schemes can be defined. The first group contains operators that do not alter the internal operations of the MH; here, the two-step techniques, which have been the most used during the last decade [5], and the Angle Modulation technique [23] stand out. The second group includes methods that alter the normal functioning of the MH, for instance, Quantum Binary [24], Set-Based approaches, and clustering-based techniques [2,6].
In the scientific community, the two-step binary schemes are of great relevance and have been employed to tackle multiple types of problems [25]. As the name implies, this binarization scheme is composed of two steps. The first step is the transfer function [19], which maps the values generated by the continuous MH to the continuous interval [0,1]. The second step is binarization, which maps that value to a binary one (Figure 1).

2.5.1. First Step: Transfer Functions

Kennedy and Eberhart introduced transfer functions to the optimization field in 1997 [26]. Their main advantage is the delivery of a probability between 0 and 1 at a low computational cost. There are two types of functions, the S-Shaped [19,27] and the V-Shaped [28], which are illustrated in Figure 2. For each type of function, four variations have been proposed (Table 1).

2.5.2. Second Step: Binarization

The binarization functions discretize the probability obtained from the transfer function and deliver a binary value. For this step, there are different techniques in the literature [29], such as those exemplified in Table 2.
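As a concrete illustration of the two steps, the following sketch combines one transfer function with one binarization rule; the S1 formula and the Standard rule are taken as common examples from the literature cited above, and the function names are assumptions made for the example.

```python
import numpy as np

# A minimal sketch of the two-step binarization scheme: an S-shaped transfer
# function (step 1) followed by the Standard binarization rule (step 2).
def transfer_s1(x):
    """Step 1: map a continuous value to a probability in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-2.0 * x))

def binarize_standard(prob):
    """Step 2: discretize the probability into a binary value."""
    return 1 if np.random.rand() <= prob else 0

# Usage on one continuous solution vector produced by the MH.
continuous_solution = np.random.uniform(-4, 4, size=10)
binary_solution = [binarize_standard(transfer_s1(x)) for x in continuous_solution]
```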

2.6. Set Covering Problem

The SCP is defined by a binary matrix $A$ of $m$ rows and $n$ columns, where $a_{i,j} \in \{0,1\}$ is the value of each cell of the matrix $A$, with $i \in \{1,\dots,m\}$ and $j \in \{1,\dots,n\}$ indexing the rows and columns, respectively:
$$A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{pmatrix} \tag{23}$$
A column $j$ covers a row $i$ if $a_{i,j}$ is equal to 1, and does not cover it otherwise. In addition, each column has an associated cost $c_j \in C$, where $C = \{c_1, c_2, \dots, c_n\}$, and $I = \{1, 2, \dots, m\}$ and $J = \{1, 2, \dots, n\}$ are the sets of rows and columns, respectively. The objective is to select a minimum-cost subset of columns such that every row is covered.
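For reference, the standard mathematical formulation of the SCP can be stated as follows (a textbook-style statement added for completeness, not quoted from the original text):
$$\min \sum_{j \in J} c_j x_j \quad \text{subject to} \quad \sum_{j \in J} a_{i,j} x_j \geq 1 \;\; \forall i \in I, \qquad x_j \in \{0,1\} \;\; \forall j \in J,$$
where $x_j = 1$ if column $j$ is selected and 0 otherwise.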

3. The Proposal: Binarization Scheme Selector

This paper proposes a binarization scheme selector that incorporates multiple transfer functions and discretization methods. The main objective is the smart selection, employment, and proper balancing of these schemes, led by SARSA.
This novel binarization approach is based on the behavior of hyperheuristics, which have proven to be effective for several problems [30]. The proposed design determines which types of binarization are more appropriate to apply in each iteration. The decision is based on the knowledge extracted from the dynamic data generated through the search at run time. In this context, the more adequate binarization methods can be applied with a higher probability of achieving good results. Figure 3 and Algorithm 1 illustrate the proposed design for the binary scheme selector, where a key element is $\Delta$, which represents the dimensional perturbation applied to each position of the solution vector; in other words, it represents the perturbations performed by the MHs. (A Python sketch of this loop is given after Algorithm 1.)
Algorithm 1 Data-driven dynamic discretization framework
1: Initialize a random swarm
2: Initialize Q-Table
3: for iteration $t$ do
4:   Select action $a_t$ for $s_t$ from the Q-Table
5:   for solution $i$ do
6:     for dimension $d$ do
7:       $X_{i,d}^{t+1} = X_{i,d}^{t} + \Delta(a_t)$
8:     end for
9:   end for
10:   Get immediate reward $r_t$
11:   Get the maximum Q value for the next state $s_{t+1}$
12:   Update Q-Table
13:   Update the current state $s_t = s_{t+1}$
14:  end for
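The following self-contained Python sketch shows how the pieces of Algorithm 1 could fit together when driven by SARSA; the epsilon-greedy policy, the +1/-1 reward, and the toy helpers (a random perturbation with an S-shaped transfer, a dummy fitness, and a diversity-threshold state) are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

# A compact, self-contained sketch of the selector loop of Algorithm 1 driven by SARSA.
N_STATES, N_ACTIONS = 2, 40          # states: exploration/exploitation; actions: scheme pairs
ALPHA, GAMMA, EPSILON = 0.1, 0.4, 0.1
Q = np.zeros((N_STATES, N_ACTIONS))

def fitness(x):                       # toy objective (e.g., number of selected columns)
    return x.sum()

def perturb_and_binarize(x, action):  # stand-in for the MH movement plus the chosen scheme
    # (the toy version applies the same S-shaped transfer regardless of the chosen action)
    prob = 1 / (1 + np.exp(-2 * (x + np.random.normal(0, 1, x.shape))))
    return (np.random.rand(*x.shape) <= prob).astype(float)

def get_state(swarm):                 # stand-in for the diversity-based state (Section 3.4)
    return 0 if swarm.std() > 0.25 else 1

def select_action(state):             # epsilon-greedy selection from the Q-Table
    if np.random.rand() < EPSILON:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(Q[state]))

swarm = np.random.randint(0, 2, size=(40, 100)).astype(float)
state = get_state(swarm)
action = select_action(state)
best = min(fitness(x) for x in swarm)
for t in range(1000):
    swarm = np.array([perturb_and_binarize(x, action) for x in swarm])
    new_best = min(fitness(x) for x in swarm)
    reward = 1 if new_best < best else -1          # fitness-improvement metric (Section 3.3)
    best = min(best, new_best)
    next_state = get_state(swarm)
    next_action = select_action(next_state)
    # SARSA update, Equation (24)
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state, next_action] - Q[state, action])
    state, action = next_state, next_action
```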

3.1. SARSA

Temporal Difference (TD) algorithms are well-known RL approaches that study the environment, generate knowledge, and update the current state [31]. TD algorithms update the current estimate of a state's value using the difference between that estimate and the sum of the immediate reward and the discounted value of the successor state. These algorithms concentrate on state-to-state transitions and the values learned for each state.
Among the TD algorithms there is SARSA, an online control algorithm and on-policy method [31]. In other words, SARSA algorithms are online because they update the estimate of the action-value function at the end of each step, without waiting for a terminal condition. Due to this, the Q-value is available to be used in the next state. They are control algorithms since they perform actions to achieve their purpose, which is the estimation of the optimal state–action value function.
On the other hand, they are on-policy, which means that the agent learns the value of the state–action pair based on the action actually performed, thus evaluating the current policy, unlike other techniques, such as Q-learning, which follow one policy while evaluating another.
These kinds of policies allow agents to learn to act optimally by experiencing the consequences of their actions without having to develop domain maps. The “environment” includes the current “states” in which the agent interacts and makes decisions, and there are several recognized states. In this context, each agent has a set of actions that cause a modification in the “reward” as well as in the subsequent state.
Thus, each time an action is executed, the state may change. In addition, when the agent selects an action to perform, it receives a reward for its decision. Rewards can be delayed, and the agent must learn from the system in order to receive them. The value of the state–action pair is learned by the agent as a function of the action performed. Thus, when the value of the current state–action pair is updated, the next action $a_{t+1}$ is performed.
In Figure 4, the state-to-state transitions are considered, and the values of each state have been learned. To understand the algorithm, consider the transitions as pairs of values, state–action to state–action, where the values of the state–action pairs are learned. Formally, these cases are identical: both are Markov chains with a reward process. The theorems ensuring the convergence of state values under TD also apply to the corresponding algorithm for action values. The update performed on the state–action value is defined in Equation (24):
$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \cdot \left[ r_{t+1} + \gamma \cdot Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t) \right] \tag{24}$$
After each transition, the value is updated until a terminal state is reached. When a state $s_{t+1}$ is terminal, $Q(s_{t+1}, a_{t+1})$ is defined as zero. Each transition is composed of five events: $s_t$, $a_t$, $r_{t+1}$, $s_{t+1}$, and $a_{t+1}$ (State–Action–Reward–State–Action), which gives the SARSA algorithm its name. The pseudocode is presented in Algorithm 2:
Algorithm 2 SARSA: on-policy TD control
1: Algorithm parameters: step size $\alpha \in (0, 1]$, discount factor $\gamma$, exploration rate $\epsilon$
2: Initialize $Q(s, a)$ for all $s$, $a$
3: while $t \leq$ Maximum number of iterations do
4:   Initialize $s$
5:   Choose $a$ from $s$ using the policy obtained from $Q$
6:   while $s \neq s_{terminal}$ do
7:     Take action $a$, observe $r$ and $s'$
8:     Choose $a'$ from $s'$ using the policy obtained from $Q$
9:     $Q(s, a) \leftarrow Q(s, a) + \alpha \cdot [\, r + \gamma \cdot Q(s', a') - Q(s, a) \,]$
10:     $s \leftarrow s'$; $a \leftarrow a'$
11:   end while
12:  end while

3.2. Rewards

The rewards in RL algorithms are a critical component of their performance. It is such an important issue that several methods have been proposed in the literature [32,33,34,35]. The value assigned to $r$ in the generic SARSA is determined by the type of reward computed from the chosen metric, as illustrated in Figure 5.
In Table 3, we present detailed information about the rewards employed by SARSA. First, we use the penalty-based reward employed by Xu and Pi [32] and Choong et al. [33], which applies a fixed increment or reduction for actions that result in an improvement, or the absence of one, in the overall fitness. The reward without penalty, employed by Abed-alguni [34], attaches no penalty to the action taken. In addition, three further types of incentives, reviewed and presented by Nareyek [35], are considered.
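As a hedged illustration of the difference between penalizing and penalty-free rewards, consider the following sketch; the concrete values in Table 3 are not reproduced here, so the +1/-1 and +1/0 schemes are assumptions used only for the example.

```python
# Illustrative reward functions for the two simplest reward types discussed above.
def reward_with_penalty(improved: bool) -> int:
    return 1 if improved else -1     # fixed increment or reduction [32,33]

def reward_without_penalty(improved: bool) -> int:
    return 1 if improved else 0      # no penalty attached to the action [34]
```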

3.3. Metric (getMetric)

Several metrics have been proposed in the literature. In this work, the fitness-improvement metric is employed. It operates as follows: if the fitness function improves, SARSA assigns a reward; if the fitness function remains unchanged, SARSA applies a penalty (or no reward), as indicated by the type of reward.

3.4. State Determination (getState)

As is well known, metaheuristics have two phases that allow them to perform the optimization process. The phases are the exploration of the search space to find tentative regions with good solutions and the exploitation phase where the search for the best regions to find better solutions is intensified. Our proposal will have, as states for both Q-Learning and SARSA, the exploration and exploitation phases. To use these phases, we need to measure the exploration and exploitation of our algorithms. One of the techniques that stands out is the use of diversity metrics.
There are numerous methods for determining diversity [36]. In this work, diversity is computed by using the method proposed by Hussain Kashif et al. [37], which is expressed mathematically as:
$$Div = \frac{1}{l \cdot n} \sum_{d=1}^{l} \sum_{i=1}^{n} \left| \bar{x}_d - x_i^d \right|, \tag{25}$$
where D i v represents the diversity status determination, x ¯ d denotes the mean values of the individuals in dimension d, x i d denotes the value of the i-th individual in dimension d, n denotes the population size, and l denotes the size of the individuals’ dimension.
We consider the exploration and exploitation percentages to be X P L % and X P T % , respectively. The percentages X P L % and X P T % are computed from the study of Morales-Castañeda et al. [38] as follows:
$$XPL\% = \frac{Div}{Div_{max}} \cdot 100, \tag{26}$$
$$XPT\% = \frac{\left| Div - Div_{max} \right|}{Div_{max}} \cdot 100. \tag{27}$$
where D i v represents the diversity state determined by Equation (25), and D i v m a x denotes the maximum value of the diversity state discovered throughout the optimization process.
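A minimal sketch of Equations (25)–(27) follows; the population layout (n individuals by l dimensions) and the function names are assumptions made for illustration.

```python
import numpy as np

# Dimensional-Hussain diversity (Equation (25)) and the derived exploration and
# exploitation percentages (Equations (26) and (27)).
def hussain_diversity(population):
    n, l = population.shape
    mean_per_dim = population.mean(axis=0)                  # mean value per dimension
    return np.abs(mean_per_dim - population).sum() / (l * n)

def xpl_xpt(div, div_max):
    xpl = 100 * div / div_max                               # Equation (26): exploration %
    xpt = 100 * abs(div - div_max) / div_max                # Equation (27): exploitation %
    return xpl, xpt
```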

4. Experimental

In order to determine whether the integration of SARSA as a binary scheme selector improves the performance of a MH, five versions of SARSA corresponding to different types of rewards were implemented and compared against Q-Learning [39]. Table 4 details the name assigned to each approach and the reward type employed.
In this work, the performance comparison is carried out by analyzing five versions of SARSA, five versions of Q-Learning, and two well-known and recommended binarization strategies (Table 5). The 12 approaches described were applied to HHO, GWO, WOA, and SCA solving the SCP, as shown in Table 6, in order to demonstrate the robustness of our hybridization proposal.
The parameters of the four metaheuristics were configured according to the recommendations of their original authors.

4.1. Experimental Results

In Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13 and Table 14, the achieved results are illustrated. The detailed information presented in each table can be described as follows: the first column corresponds to the name of the Beasley’s instances (45 in total) [40], the second column is the best value known to date, the next three columns (Best, Avg, and RPD) present the best value reached, the averages, and the RPD obtained from the independent runs. The RPD corresponds to the Relative Percentage Deviation as defined in Equation (28). These three columns mentioned above are repeated for all versions (BCL, MIR, QL1, SA1, QL2, SA2, QL3, SA3, QL4, SA4, QL5, and SA5).
Finally, the last row corresponds to the mean of each column, and we highlight in bold the best values reached. For each MH implemented, the population size employed was 40, and 1000 iterations were performed per run. With this, the stopping condition was at 40,000 evaluations of the objective function as employed in [29]. The implementation was developed in Python 3.8.5 and processed using the free Google Colaboratory services [41]. The parameter settings for SARSA and QL algorithms were as follows: γ = 0.4 and α = 0.1 .
In Table 7 and Table 8, the approach SA1 leads with mean values for the columns Best, Avg, and RPD with 259.56, 263.21, and 1.78, respectively. Nevertheless, in terms of the computed performance, other SA approaches follow right behind SA1. In Table 9 and Table 10, the lead in performance is shared by SA1 with the best mean value for the column Best with 259.91 and SA5 with the best computed mean values for the columns Avg and RPD with 265.11 and 2.02, respectively. Here, we can observe more robustness in the performance by SA5 and some inconsistency by SA1.
In Table 11 and Table 12, the approach BCL leads the overall performance with the minimum mean values for the columns Best, Avg, and RPD, followed by SA5 and SA1. In Table 13 and Table 14, the approach SA1 leads the best values reached (Best) with the smallest RPD values with 259.44 and 1.84, respectively. The approach with the best mean for the column Avg is BCL with 264.47. Nevertheless, approaches employing SARSA follow close to the leaders in the performance, which proves the effectiveness of the proposal.
$$RPD = 100 \cdot \frac{Best - Opt}{Opt}. \tag{28}$$
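For reference, a small helper implementing Equation (28); the variable names and the numeric example are illustrative.

```python
# Relative Percentage Deviation, Equation (28).
def rpd(best: float, opt: float) -> float:
    return 100.0 * (best - opt) / opt

print(rpd(265.0, 260.0))   # a best value of 265 against a known optimum of 260 gives ~1.92
```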

4.2. Statistical Results

A p-value less than 0.05 indicates that the difference between two techniques is statistically significant, and thus the comparison of their averages is meaningful.
The results obtained are grouped in Table 15, Table 16, Table 17 and Table 18. Each table presents a matrix of the averages obtained over the 45 instances. The description of each table is as follows: the first row and first column list the 11 versions of the MH to be compared. The information can be read by row: a p-value less than 0.05 means that the version in the row achieved a better performance than the version in the column for the SCP and that the difference between the results is statistically significant.
p-values greater than 0.05 have been replaced by “>0.05” to facilitate the reading of the comparison matrix. In this context, significant differences in the performance can be observed in Table 15 and Table 16. First, all the hybrid approaches employing learning-based components outperformed the classic approach employing two-step transformation (BCL). The performances between approaches employing the same RL techniques, such as Q-learning (QL1-QL5 vs. QL1-QL5) and SARSA (SA1-SA5 vs. SA1-SA5), performed equally.
Lastly, approaches employing the RL technique SARSA in almost all instances significantly outperformed the ones employing Q-learning. On the other hand, an interesting phenomenon can be observed in Table 17 and Table 18. The only major differences in performance were observed between BCL against the approaches employing Q-learning (QL1–QL5) and SA4–SA5. The performances between approaches employing the same and different RL techniques were not significantly different.

4.3. Action Charts

The charts of average actions performed are illustrated in Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13. They depict the average choice made by Q-Learning or SARSA during the iterative process. These selectors generate a weight system in order to properly select the binarization schemes, and the objective is to identify preferences according to the exploration or exploitation state of the environment at run time. In each chart, the x-axis shows the average number of times an action was selected for the respective state, while the y-axis lists the 40 possible actions available to the binarization scheme selector.

4.4. Exploration and Exploitation Charts

The visualization of MH metrics is fundamental to understanding their behavior throughout the search; thus, the exploration and exploitation graphs presented in Morales-Castañeda et al. [38] are of great help for analyzing exploration and exploitation in terms of the diversity among solutions. In this work, the state was determined by means of the Dimensional-Hussain diversity (Section 3.4). In the exploration and exploitation plots illustrated in Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20 and Figure 21, the x-axis is the number of iterations, while the y-axis is the percentage of exploration and exploitation, measured by Equations (26) and (27), for representative instances of each MH.

5. Discussions and Analysis

In this section, a discussion and detailed analysis of the results achieved in Section 4 are presented.

5.1. Best, Average, and RPD

First, by observing the results obtained and summarized in Figure 22, Figure 23, Figure 24 and Figure 25, the following ideas are highlighted:
  • In SCA, the approach SA1 obtained the best performance regarding the best values reached (Best). However, if we observe the values computed for Average and RPD, the best performance was achieved by the approach SA5. In WOA, the approach SA1 obtained the best all-around performance in terms of Best, Average, and RPD.
  • In GWO, the approach BCL obtained the best performance when comparing the computed values for Best, average, and RPD.
  • In HHO, the approach SA1 obtained the best performance when comparing Best and RPD. However, if we observe the values computed for average, the best performance was achieved by the BCL version.
  • In SCA and WOA, all versions with a binarization scheme selector obtained better performance against versions with static binarization schemes when comparing the Best, average, and RPD.
  • In SCA, WOA, GWO, and HHO, the best overall performance in terms of Best, Average, and RPD was achieved by the proposed SARSA versions compared to the ones employing Q-Learning. Nevertheless, SA5 was the only version employing SARSA that was outperformed by its Q-Learning counterpart, QL5.
  • In SCA, WOA, GWO, and HHO, observing the overall performance achieved by the two well-known static binarization methods, BCL led the results and greatly outperformed the MIR approaches.
When comparing the behaviors of the MH, we can observe that SCA and WOA achieved improvements in their performance when using Q-Learning and SARSA. On the other hand, for HHO and GWO, statistically significant improvements were not obtained. One of the reasons for this behavior lies in the movement operators of each MH. In HHO and GWO, we observe operators of higher complexity, which follow different logic according to the behavior of their internal parameters.
For instance, in the case of HHO, the energy $E$ decreases during the iterations, thereby influencing which motion operator is employed; this follows the logic that the first iterations should explore and the last ones should exploit. In the case of SCA and WOA, we observe simpler movement operators, where the choice between exploration and exploitation operators depends on random decisions, for instance, the parameter $r_4$ in SCA and $p$ in WOA.

5.2. Average Wilcoxon Test

The Wilcoxon–Mann–Whitney test is the non-parametric test we used to compare two independent samples. In Table 15, Table 16, Table 17 and Table 18, the average p-values obtained are presented in order to simplify their visualization. From these tables, the following is observed:
  • All the MIR versions obtained a worse performance compared to the rest of the versions, with a statistically significant difference; thus, they were removed from Table 15, Table 16, Table 17 and Table 18 in order to facilitate the comparison.
  • For WOA and SCA, there was no statistically significant difference between BCL versus Q-Learning versions.
  • For GWO and HHO, there was a statistically significant difference between the versions of BCL and Q-Learning.
  • For WOA, SCA, GWO, and HHO, there was no statistically significant difference between the versions of BCL and SARSA, except for GWO with the versions SA4 and SA5.
  • For WOA, there was a statistically significant difference between the versions of SARSA versus Q-Learning.
  • For SCA, there was a statistically significant difference between the SARSA versions versus the Q-Learning versions, except for the SA1 version versus the QL1, QL2, and QL3 versions.
  • For GWO and HHO, there was no statistically significant difference between the SARSA versions versus the Q-Learning versions.

5.3. Choice of Binarization Schemes

It is known from the literature that binarization schemes have a strong impact on the performance of the MH [5], and thus Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 give us detailed information related to the employment of these during the exploration and exploitation processes, which can be described as follows:
  • The action corresponding to MIR (V4 and Complement) had the lowest selection rate in both the Q-Learning and SARSA versions.
  • For both the Q-Learning and SARSA binarization scheme selectors, when in an exploration state, the observed preference was for S-type transfer functions.
  • For both the Q-Learning and SARSA binarization scheme selectors, when in an exploration state, there was a preference for Standard and Static binarization, followed by Complement.
  • In the selectors of binarization schemes with Q-Learning, when in an exploitation state, there was a preference for V-type transfer functions.
  • In the binarization scheme selectors with SARSA, when in an exploitation state, there was a preference for the transfer function types S1, S2, S3, and V1.
  • For both the Q-Learning and SARSA binarization scheme selectors, the Elitist and Elitist Roulette binarizations were mainly preferred when in an exploitation state.

5.4. Exploration and Exploitation

The exploration and exploitation measurements proposed in [38] provide detailed information about the behaviors observed through the exploration and exploitation metrics during the iterative process. This allows a proper observation of the influence of the binarization schemes employed by the different versions. Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20 and Figure 21 illustrate the following:
  • In SCA, WOA, GWO, and HHO, the BCL version presented a sharp increase in the exploitation percentage in the initial iterations and remained mostly constant.
  • In SCA and WOA, the approaches based on MIR presented high exploitation values during the whole search process. According to Morales-Castañeda et al., this behavior can be attributed to random search, which is consistent with these versions achieving the worst performance, in terms of RPD, among all the compared versions.
  • In GWO, the approach based on MIR presented a slight increase in the exploitation percentage; however, the values reached were of low quality. The same observation holds for the approaches employing BCL, although MIR performed worse in terms of RPD.
  • In HHO, the approach based on MIR obtained higher exploration values in the early iterations compared to the approaches employing static schemes. This can be explained by its similarity to the Q-Learning and SARSA approaches, whose movements are focused on exploration, a well-known trait of HHO.
  • In WOA and GWO, the approach SA1 presented a behavior similar to the one obtained by MIR in GWO. However, variations of greater amplitude were observed, a recurrent result for Q-Learning and SARSA in the experimentation phase. These were the third- and fourth-best performing approaches among the versions with a binarization scheme selector in terms of RPD.
  • In SCA and GWO, the Q-Learning and SARSA versions presented exploration and exploitation graphs with constant changes in each iteration.
  • In HHO, the Q-Learning version presented a similar amount of variation to that observed in SCA and GWO. However, a change during the second half of the iterative process was observed, consistent with the change of movements typical of HHO.
Along with this, we can observe that the binarization schemes influence each MH differently. In the literature, the recommendations in the case of BCL were V4, which is associated with exploration, and Elitist, which is associated with exploitation. On the other hand, for MIR, both V4 and Complement are associated with exploration. The different behaviors observed in the performance and in the balance of exploration and exploitation raise the following question: are some MH more susceptible to binarization schemes than others? In this regard, future works will focus on exposing this relationship and building scientific evidence on this issue.

6. Conclusions

In this work, a novel learning-based binarization scheme selector was proposed. In this context, such novel approaches have proven to be highly efficient in tackling hard optimization problems. The designed learning-based method employs a Reinforcement Learning technique, named SARSA, which utilizes the dynamic data generated through the search by continuous population-based algorithms. The main objective behind the proposed approach was to design a balanced binarization scheme selector.
Regarding the results achieved, the five different versions of SARSA demonstrated competitive performances. The experimentation solving the SCP illustrated that WOA and SCA assisted by Q-Learning and SARSA obtained statistically significantly better results. However, regarding HHO and GWO, the opposite phenomenon was observed for the versions applying Q-Learning. In this regard, the implementations employing static binarization schemes (V4 and Elitist) presented better performance in most of the 45 instances. Nevertheless, the implementations applying SARSA maintained good overall performance.
On the other hand, observing the actual benefit provided by the employed rewards with Q-Learning, we could not demonstrate a significant difference. The results achieved were similar, and thus it cannot be concluded that, for the solved problems, the type of reward used directly impacts the quality of the solutions. In the case of SARSA, the same phenomenon was observed: no statistically significant differences were determined. However, comparing the versions of Q-Learning and SARSA, the latter achieved significantly better results for SCA and WOA.
Within future works, along with answering the question raised in Section 5.4, the option of evaluating other MH with different exploration and exploitation behaviors should be pursued in order to further exploit the benefits of learning-based models and continue building solid evidence of their improvements. Likewise, different diversity measures can be evaluated to determine whether they produce significantly different results and whether they can be grouped under another classification according to the exploration and exploitation percentages they generate.
This is an area of great interest due to the large number of methods for calculating diversity. Other future works can contemplate increasing the number of actions for the proposed selection scheme, i.e., adding more transfer and binarization functions, such as O-Shaped [42], Z-Shaped [43], Q-Shaped [44], and U-Shaped [45]. In addition to evaluating other Temporal Difference techniques, it is possible to explore options designed for large, multi-dimensional state and action spaces, such as the "Deep Q-Network" and other deep learning approaches.

Author Contributions

J.L.-R.: Conceptualization, Investigation, Methodology, Writing—Review and editing, Project administration, Resources, and Formal analysis. M.B.-R., F.C.-C., E.V., M.C., D.T., G.A., C.C. and J.G.: Conceptualization, Investigation, Validation. B.C., R.S. and W.P.: Validation, Funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by Grant ANID/FONDECYT/REGULAR/1210810.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

You can find the code used and replicate the results in: https://github.com/joselemusr/BSS-QL.

Acknowledgments

Broderick Crawford, Wenceslao Palma and Gino Astorga are supported by Grant ANID/FONDECYT/REGULAR/1210810. Ricardo Soto is supported by grant CONICYT/FONDECYT/ REGULAR/1190129. José Lemus-Romani is supported by National Agency for Research and Development (ANID)/ Scholarship Program/DOCTORADO NACIONAL/2019-21191692. Marcelo Becerra-Rozas is supported by National Agency for Research and Development (ANID)/ Scholarship Program/DOCTORADO NACIONAL/2021-21210740. Emanuel Vega is supported by National Agency for Research and Development ANID/Scholarship Program/DOCTORADO NACIONAL/2020-21202527. José García was supported by the Grant CONICYT/FONDECYT/INICIACION/11180056. José García acknowledge funding provided by DI Interdisciplinaria Pontificia Universidad Católica de Valparaíso (PUCV). Valparaíso (PUCV), 039.414/2021. Broderick Crawford, Ricardo Soto and Marcelo Becerra-Rozas are supported by Grant Nucleo de Investigacion en Data Analytics/VRIEA/PUCV/ 039.432/2020. Marcelo Becerra-Rozas are supported by Grant DI Investigación Interdisciplinaria del Pregrado/VRIEA/PUCV/039.421/2021.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Villavicencio, G.; Valenzuela, M.; Causa, L.; Moraga, P.; Pinto, H. A Machine Learning Firefly Algorithm Applied to the Matrix Covering Problem. In Computer Science Online Conference; Springer: Berlin/Heidelberg, Germany, 2021; pp. 316–325. [Google Scholar]
  2. García, J.; Crawford, B.; Soto, R.; Castro, C.; Paredes, F. A k-means binarization framework applied to multidimensional knapsack problem. Appl. Intell. 2018, 48, 357–380. [Google Scholar] [CrossRef]
  3. García, J.; Lemus-Romani, J.; Altimiras, F.; Crawford, B.; Soto, R.; Becerra-Rozas, M.; Moraga, P.; Becerra, A.P.; Fritz, A.P.; Rubio, J.M.; et al. A binary machine learning cuckoo search algorithm improved by a local search operator for the set-union knapsack problem. Mathematics 2021, 9, 2611. [Google Scholar] [CrossRef]
  4. Mafarja, M.M.; Mirjalili, S. Hybrid whale optimization algorithm with simulated annealing for feature selection. Neurocomputing 2017, 260, 302–312. [Google Scholar] [CrossRef]
  5. Crawford, B.; Soto, R.; Astorga, G.; García, J.; Castro, C.; Paredes, F. Putting continuous metaheuristics to work in binary search spaces. Complexity 2017, 2017, 8404231. [Google Scholar] [CrossRef] [Green Version]
  6. García, J.; Moraga, P.; Valenzuela, M.; Crawford, B.; Soto, R.; Pinto, H.; Peña, A.; Altimiras, F.; Astorga, G. A Db-Scan Binarization Algorithm Applied to Matrix Covering Problems. Comput. Intell. Neurosci. 2019, 2019, 3238574. [Google Scholar] [CrossRef] [Green Version]
  7. Maniezzo, V.; Stützle, T.; Voß, S. (Eds.) Matheuristics; Vol. 10 of Annals of Information Systems; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  8. Talbi, E.G. Combining metaheuristics with mathematical programming, constraint programming and machine learning. Ann. Oper. Res. 2016, 240, 171–215. [Google Scholar] [CrossRef]
  9. Juan, A.A.; Faulin, J.; Grasman, S.E.; Rabe, M.; Figueira, G. A review of simheuristics: Extending metaheuristics to deal with stochastic combinatorial optimization problems. Oper. Res. Perspect. 2015, 2, 62–72. [Google Scholar] [CrossRef] [Green Version]
  10. Talbi, E. Machine Learning into Metaheuristics: A Survey and Taxonomy of Data-Driven Metaheuristics. ACM Comput. Surv. 2022, 54, 129. [Google Scholar] [CrossRef]
  11. Song, H.; Triguero, I.; Özcan, E. A review on the self and dual interactions between machine learning and optimisation. Prog. Artif. Intell. 2019, 8, 143–165. [Google Scholar] [CrossRef] [Green Version]
  12. Calvet, L.; de Armas, J.; Masip, D.; Juan, A.A. Learnheuristics: Hybridizing metaheuristics with machine learning for optimization with dynamic inputs. Open Math. 2017, 15, 261–280. [Google Scholar] [CrossRef]
  13. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  14. Cisternas-Caneo, F.; Crawford, B.; Soto, R.; de la Fuente-Mella, H.; Tapia, D.; Lemus-Romani, J.; Castillo, M.; Becerra-Rozas, M.; Paredes, F.; Misra, S. A Data-Driven Dynamic Discretization Framework to Solve Combinatorial Problems Using Continuous Metaheuristics. In Innovations in Bio-Inspired Computing and Applications; Abraham, A., Sasaki, H., Rios, R., Gandhi, N., Singh, U., Ma, K., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 76–85. [Google Scholar] [CrossRef]
  15. Heidari, A.A. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  16. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  17. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  18. Mirjalili, S.; Mirjalili, S.M.; Yang, X.S. Binary bat algorithm. Neural Comput. Appl. 2014, 25, 663–681. [Google Scholar] [CrossRef]
  19. Mirjalili, S.; Lewis, A. S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm Evol. Comput. 2013, 9, 1–14. [Google Scholar] [CrossRef]
  20. Faris, H.; Mafarja, M.M.; Heidari, A.A.; Aljarah, I.; Ala’M, A.Z.; Mirjalili, S.; Fujita, H. An efficient binary salp swarm algorithm with crossover scheme for feature selection problems. Knowl.-Based Syst. 2018, 154, 43–67. [Google Scholar] [CrossRef]
  21. Mafarja, M.; Aljarah, I.; Heidari, A.A.; Faris, H.; Fournier-Viger, P.; Li, X.; Mirjalili, S. Binary dragonfly optimization for feature selection using time-varying transfer functions. Knowl.-Based Syst. 2018, 161, 185–204. [Google Scholar] [CrossRef]
  22. Mirjalili, S.; Hashim, S.Z.M. BMOA: Binary magnetic optimization algorithm. Int. J. Mach. Learn. Comput. 2012, 2, 204.
  23. Leonard, B.J.; Engelbrecht, A.P.; Cleghorn, C.W. Critical considerations on angle modulated particle swarm optimisers. Swarm Intell. 2015, 9, 291–314.
  24. Zhang, G. Quantum-inspired evolutionary algorithms: A survey and empirical study. J. Heuristics 2011, 17, 303–351.
  25. Saremi, S.; Mirjalili, S.; Lewis, A. How important is a transfer function in discrete heuristic algorithms. Neural Comput. Appl. 2015, 26, 625–640.
  26. Kennedy, J.; Eberhart, R.C. A discrete binary version of the particle swarm algorithm. In Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, Orlando, FL, USA, 12–15 October 1997; Volume 5, pp. 4104–4108.
  27. Olivares-Suarez, M.; Palma, W.; Paredes, F.; Olguín, E.; Norero, E. A binary coded firefly algorithm that solves the set covering problem. Sci. Technol. 2014, 17, 252–264.
  28. Rajalakshmi, N.; Subramanian, D.P.; Thamizhavel, K. Performance enhancement of radial distributed system with distributed generators by reconfiguration using binary firefly algorithm. J. Inst. Eng. (India) Ser. B 2015, 96, 91–99.
  29. Lanza-Gutierrez, J.M.; Crawford, B.; Soto, R.; Berrios, N.; Gomez-Pulido, J.A.; Paredes, F. Analyzing the effects of binarization techniques when solving the set covering problem through swarm optimization. Expert Syst. Appl. 2017, 70, 67–82.
  30. Asta, S.; Özcan, E.; Curtois, T. A tensor based hyper-heuristic for nurse rostering. Knowl.-Based Syst. 2016, 98, 185–199.
  31. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2018.
  32. Xu, Y.; Pi, D. A reinforcement learning-based communication topology in particle swarm optimization. Neural Comput. Appl. 2019, 32, 10007–10032.
  33. Choong, S.S.; Wong, L.P.; Lim, C.P. Automatic design of hyper-heuristic based on reinforcement learning. Inf. Sci. 2018, 436, 89–107.
  34. Abed-alguni, B.H. Bat Q-learning algorithm. Jordanian J. Comput. Inf. Technol. (JJCIT) 2017, 3, 56–77.
  35. Nareyek, A. Choosing search heuristics by non-stationary reinforcement learning. In Metaheuristics: Computer Decision-Making; Springer: Berlin/Heidelberg, Germany, 2003; pp. 523–544.
  36. Choi, S.S.; Cha, S.H.; Tappert, C.C. A survey of binary similarity and distance measures. J. Syst. Cybern. Inform. 2010, 8, 43–48.
  37. Hussain, K.; Zhu, W.; Salleh, M.N.M. Long-term memory Harris' hawk optimization for high dimensional and optimal power flow problems. IEEE Access 2019, 7, 147596–147616.
  38. Morales-Castañeda, B.; Zaldivar, D.; Cuevas, E.; Fausto, F.; Rodríguez, A. A better balance in metaheuristic algorithms: Does it exist? Swarm Evol. Comput. 2020, 54, 100671.
  39. Crawford, B.; Soto, R.; Lemus-Romani, J.; Becerra-Rozas, M.; Lanza-Gutiérrez, J.M.; Caballé, N.; Castillo, M.; Tapia, D.; Cisternas-Caneo, F.; García, J.; et al. Q-Learnheuristics: Towards Data-Driven Balanced Metaheuristics. Mathematics 2021, 9, 1839.
  40. Beasley, J.; Jörnsten, K. Enhancing an algorithm for set covering problems. Eur. J. Oper. Res. 1992, 58, 293–300.
  41. Bisong, E. Google Colaboratory. In Building Machine Learning and Deep Learning Models on Google Cloud Platform; Springer: Berlin/Heidelberg, Germany, 2019; pp. 59–64.
  42. Feng, Y.; An, H.; Gao, X. The importance of transfer function in solving set-union knapsack problem based on discrete moth search algorithm. Mathematics 2019, 7, 17.
  43. Guo, S.S.; Wang, J.S.; Guo, M.W. Z-shaped transfer functions for binary particle swarm optimization algorithm. Comput. Intell. Neurosci. 2020, 2020, 6502807.
  44. Too, J.; Abdullah, A.R.; Mohd Saad, N. A new quadratic binary Harris hawk optimization for feature selection. Electronics 2019, 8, 1130.
  45. Ahmed, S.; Ghosh, K.K.; Mirjalili, S.; Sarkar, R. AIEOU: Automata-based improved equilibrium optimizer with U-shaped transfer function for feature selection. Knowl.-Based Syst. 2021, 228, 107283.
Figure 1. Example of classic Binarization Scheme.
Figure 2. (a) S and (b) V transfer functions.
Figure 3. Binary scheme selector with SARSA as a smart operator.
Figure 4. A SARSA algorithm sequence.
Figure 5. A SARSA scheme for different rewards.
Figure 6. WOA-SA1—The average number of actions in the exploitation state.
Figure 7. WOA-SA1—The average number of actions in the exploration state.
Figure 8. SCA-QL5—The average number of actions in the exploitation state.
Figure 9. SCA-QL5—The average number of actions in the exploration state.
Figure 10. GWO-QL5—The average number of actions in the exploitation state.
Figure 11. GWO-QL5—The average number of actions in the exploration state.
Figure 12. HHO-SA1—The average number of actions in the exploitation state.
Figure 13. HHO-SA1—The average number of actions in the exploration state.
Figure 14. WOA-MIR-510 Dimensional Hussain.
Figure 15. WOA-SA1-510 Dimensional Hussain.
Figure 16. SCA-BCL-65 Dimensional Hussain.
Figure 17. SCA-QL5-65 Dimensional Hussain.
Figure 18. GWO-BCL-b5 Dimensional Hussain.
Figure 19. GWO-QL5-b5 Dimensional Hussain.
Figure 20. HHO-MIR-d5 Dimensional Hussain.
Figure 21. HHO-SA1-d5 Dimensional Hussain.
Figure 22. The average Best fitness.
Figure 23. The average Avg fitness.
Figure 24. The average RPD.
Figure 25. The average RPD Ranking.
Table 1. Transfer functions [27].
Type    Transfer Function
S1      $T(d_w^j) = \frac{1}{1 + e^{-2 d_w^j}}$
S2      $T(d_w^j) = \frac{1}{1 + e^{-d_w^j}}$
S3      $T(d_w^j) = \frac{1}{1 + e^{-d_w^j / 2}}$
S4      $T(d_w^j) = \frac{1}{1 + e^{-d_w^j / 3}}$
V1      $T(d_w^j) = \left| \operatorname{erf}\!\left( \frac{\sqrt{\pi}}{2}\, d_w^j \right) \right|$
V2      $T(d_w^j) = \left| \tanh(d_w^j) \right|$
V3      $T(d_w^j) = \left| \frac{d_w^j}{\sqrt{1 + (d_w^j)^2}} \right|$
V4      $T(d_w^j) = \left| \frac{2}{\pi} \arctan\!\left( \frac{\pi}{2}\, d_w^j \right) \right|$
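For readers who want to reproduce the first step of the two-step scheme, the following is a minimal Python sketch of the eight S-shaped and V-shaped transfer functions listed in Table 1. The variable and dictionary names are our own illustrative choices, not identifiers from the paper.

```python
import math

# Sketch of the S-shaped and V-shaped transfer functions of Table 1.
# `d` stands for the continuous component d_w^j produced by the metaheuristic.
TRANSFER_FUNCTIONS = {
    "S1": lambda d: 1.0 / (1.0 + math.exp(-2.0 * d)),
    "S2": lambda d: 1.0 / (1.0 + math.exp(-d)),
    "S3": lambda d: 1.0 / (1.0 + math.exp(-d / 2.0)),
    "S4": lambda d: 1.0 / (1.0 + math.exp(-d / 3.0)),
    "V1": lambda d: abs(math.erf(math.sqrt(math.pi) / 2.0 * d)),
    "V2": lambda d: abs(math.tanh(d)),
    "V3": lambda d: abs(d / math.sqrt(1.0 + d * d)),
    "V4": lambda d: abs(2.0 / math.pi * math.atan(math.pi / 2.0 * d)),
}
```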
Table 2. Techniques of binarization [5].
Type                 Binarization
Standard             $X_{new}^j = \begin{cases} 1 & \text{if } rand \le T(d_w^j) \\ 0 & \text{otherwise} \end{cases}$
Complement           $X_{new}^j = \begin{cases} \bar{X}_w^j & \text{if } rand \le T(d_w^j) \\ 0 & \text{otherwise} \end{cases}$
Static Probability   $X_{new}^j = \begin{cases} 0 & \text{if } T(d_w^j) \le \alpha \\ X_w^j & \text{if } \alpha < T(d_w^j) \le \frac{1}{2}(1+\alpha) \\ 1 & \text{if } T(d_w^j) \ge \frac{1}{2}(1+\alpha) \end{cases}$
Elitist              $X_{new}^j = \begin{cases} X_{Best}^j & \text{if } rand < T(d_w^j) \\ 0 & \text{otherwise} \end{cases}$
Elitist Roulette     $\begin{cases} P[X_{new}^j = \zeta^j] = \frac{f(\zeta)}{\sum_{\delta \in Q_g} f(\delta)} & \text{if } rand \le T(d_w^j) \\ P[X_{new}^j = 0] = 1 & \text{otherwise} \end{cases}$
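The rules of Table 2 are applied to the probability returned by a transfer function of Table 1. The sketch below is a simplified illustration covering only the Standard, Complement, and Elitist rules; the population-dependent Static Probability and Elitist Roulette rules are omitted, and the argument names (`x_current`, `x_best`) are assumptions of ours rather than the paper's notation.

```python
import random

def binarize(d, x_current, x_best, transfer, rule="Standard"):
    """Two-step binarization of one dimension: transfer function, then a rule of Table 2.

    d          continuous value d_w^j for this dimension
    x_current  current binary value X_w^j of the individual
    x_best     binary value X_Best^j of the best individual found so far
    transfer   a callable mapping d to a probability in [0, 1] (e.g., one of Table 1)
    """
    p = transfer(d)
    r = random.random()
    if rule == "Standard":      # set the bit to 1 with probability p
        return 1 if r <= p else 0
    if rule == "Complement":    # flip the current bit with probability p
        return 1 - x_current if r <= p else 0
    if rule == "Elitist":       # copy the bit of the best individual with probability p
        return x_best if r < p else 0
    raise ValueError(f"unknown rule: {rule}")
```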
Table 3. Reward types.
Reward Type             Mathematical Formula
With Penalty            $r_n = \begin{cases} +1 & \text{if fitness improves} \\ -1 & \text{otherwise} \end{cases}$
Without Penalty         $r_n = \begin{cases} +1 & \text{if fitness improves} \\ 0 & \text{otherwise} \end{cases}$
Global Best             $r_n = \begin{cases} \frac{W}{BestFitness} & \text{if fitness improves} \\ 0 & \text{otherwise} \end{cases}$
Root Adaptation         $r_n = \begin{cases} \sqrt{BestFitness} & \text{if fitness improves} \\ 0 & \text{otherwise} \end{cases}$
Escalating Adaptation   $r_n = \begin{cases} W \cdot BestFitness & \text{if fitness improves} \\ 0 & \text{otherwise} \end{cases}$
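The five reward variants can be encoded compactly. The sketch below is a minimal reconstruction assuming that the scaling constant W and the use of the incumbent BestFitness follow the conventions of [39]; the exact value of W is a tunable parameter and is assumed here, not taken from the paper.

```python
import math

def reward(reward_type, improved, best_fitness, w=10):
    """Reward r_n granted to the action (binarization scheme) applied this iteration.

    improved      True when the incumbent fitness improved in this iteration
    best_fitness  current best (incumbent) fitness value
    w             scaling constant (assumed value, tunable)
    """
    if not improved:
        return -1 if reward_type == "With Penalty" else 0
    return {
        "With Penalty": 1,
        "Without Penalty": 1,
        "Global Best": w / best_fitness,
        "Root Adaptation": math.sqrt(best_fitness),
        "Escalating Adaptation": w * best_fitness,
    }[reward_type]
```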
Table 4. SARSA and Q-Learning implementations.
Name    Reward Type
SA1     With Penalty
SA2     Without Penalty
SA3     Global Best
SA4     Root Adaptation
SA5     Escalating Adaptation
QL1     With Penalty
QL2     Without Penalty
QL3     Global Best
QL4     Root Adaptation
QL5     Escalating Adaptation
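Table 4 simply pairs each reward variant of Table 3 with either SARSA or Q-Learning as the binarization-scheme selector. A minimal, self-contained SARSA selector over a set of scheme actions could look as follows; the ε-greedy policy and the values of the learning rate α, discount γ, and ε are generic illustrative choices, not the paper's exact parameter settings.

```python
import random
from collections import defaultdict

class SarsaSelector:
    """Minimal SARSA agent that picks a binarization scheme (the action)
    for a given search state (e.g., 'exploration' or 'exploitation')."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions                      # e.g., transfer-function/rule combinations
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)                 # Q[(state, action)], initialized to 0

    def choose(self, state):
        # epsilon-greedy policy over the available binarization schemes
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, r, next_state, next_action):
        # SARSA update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * Q(s',a') - Q(s,a))
        td_target = r + self.gamma * self.q[(next_state, next_action)]
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```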
Table 5. Recommended binarization schemes in the literature.
Name    Binarization    Transfer Function    Ref.
BCL     Elitist         V4                   [29]
MIR     Complement      V4                   [19]
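As a usage note, the two reference schemes of Table 5 can be composed from the sketches given after Tables 1 and 2 (illustrative values only, and only if those sketches are in scope):

```python
# Composing Table 5's recommended schemes from the earlier sketches (illustrative values):
d, x_current, x_best = 1.3, 0, 1
bcl_bit = binarize(d, x_current, x_best, TRANSFER_FUNCTIONS["V4"], rule="Elitist")     # BCL
mir_bit = binarize(d, x_current, x_best, TRANSFER_FUNCTIONS["V4"], rule="Complement")  # MIR
```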
Table 6. Configuration details of the SCP instances employed in this work.
Instance Set    m      n      Cost Range    Density (%)
4               200    1000   [1, 100]      2
5               200    2000   [1, 100]      2
6               200    1000   [1, 100]      5
A               300    3000   [1, 100]      2
B               300    3000   [1, 100]      5
C               400    4000   [1, 100]      2
D               400    4000   [1, 100]      5
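Tables 7-14 below report, for each instance and approach, the best fitness (Best), the average fitness over the independent runs (Avg), and the relative percentage deviation (RPD) of the best value from the known optimum. The values reported are consistent with the usual definition of RPD, shown in the small sketch below; for example, instance 42 under QL1 in Table 7 gives 100 x (543 - 512) / 512 = 6.05.

```python
def rpd(best, optimum):
    """Relative percentage deviation as used in Tables 7-14: 100 * (Best - Opt) / Opt."""
    return 100.0 * (best - optimum) / optimum

# e.g., rpd(543, 512) returns 6.05..., matching the QL1 column for instance 42 in Table 7.
```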
Table 7. Result comparison with WOA employing the approaches BCL, MIR, QL1, SA1, QL2, and SA2.
Inst.  Opt.  BCL (Best/Avg/RPD)  MIR (Best/Avg/RPD)  QL1 (Best/Avg/RPD)  SA1 (Best/Avg/RPD)  QL2 (Best/Avg/RPD)  SA2 (Best/Avg/RPD)
41429543582.8226.57664751.7454.78521529.1721.45519521.020.98530532.423.54521524.121.45
42512554581.728.2699762.2936.52543548.676.05521533.41.76538546.445.08523535.62.15
43516565597.229.5717798.6838.95539546.894.46522527.61.16537543.784.07525532.71.74
44494541559.899.51635694.4228.54513522.893.85502510.21.62519526.335.06497508.00.61
45512565591.010.35700773.8736.72535541.434.49522525.81.95537541.894.88523528.42.15
46560593626.225.89745874.6833.04579584.443.39565568.30.89573580.332.32567570.31.25
47430455482.175.81540613.3225.58444446.293.26435436.01.16440445.292.33435439.51.16
48492536566.678.94732779.148.78505509.52.64498501.71.22505507.832.64496499.60.81
49641717751.4211.869461013.3547.58680689.06.08658672.82.65686690.87.02671677.64.68
410514543582.825.64664751.7429.18521529.171.36519521.00.97530532.43.11521524.11.36
51253288298.3313.83369416.7745.85276277.09.09266271.05.14277278.339.49267273.15.53
52302346368.3314.57456521.0350.99329332.838.94316325.74.64326332.177.95319325.35.63
53226240251.426.19323351.8142.92232233.672.65229229.91.33232233.52.65230230.81.77
54242267275.6710.33330362.4536.36251252.673.72245248.01.24250252.53.31247249.42.07
55211223236.925.69274294.929.86217218.332.84213214.40.95216218.832.37212214.50.47
56213237255.0811.27311343.9746.01224228.335.16214220.40.47227229.06.57218221.32.35
57293330337.7512.63403450.9437.54306311.834.44296300.41.02311313.26.14302304.63.07
58288306328.436.25408445.0341.67298298.53.47288291.60.0298299.333.47291293.91.04
59279307322.8210.04403443.0644.44287289.82.87281283.30.72284287.41.79282284.31.08
510265288298.338.68369416.7739.25276277.04.15266271.00.38277278.334.53267273.10.75
61138161170.416.67336368.0143.48143147.233.62141142.62.17144146.684.35141144.12.17
62146164193.5512.33415506.68184.25155156.176.16148150.81.37154155.835.48147152.30.68
63145172194.518.62390474.71168.97149150.332.76145147.80.0149150.42.76147148.41.38
64131136151.03.82262318.9100.0134134.832.29131132.30.0132134.170.76131133.10.0
65161188209.1716.77379514.0135.4178181.8310.56161167.90.0180181.511.8163172.21.24
a1253284300.812.25583626.6130.43261268.383.16260262.32.77263266.843.95260263.22.77
a2252284306.1212.7553615.9119.44271271.677.54255262.11.19266269.835.56261264.03.57
a3232276284.7518.97505568.9117.67242246.54.31239242.83.02244245.65.17240243.43.45
a4234282308.6720.51518568.48121.37245249.04.7238242.01.71251251.87.26238242.51.71
a5236262283.8811.02531570.32125.0246247.54.24241243.22.12242247.332.54241244.22.12
b16990104.230.43549592.4695.657171.552.96969.90.07071.681.456970.50.0
b27694118.2523.68487587.03540.797980.03.957677.00.07879.52.637677.20.0
b380110134.1737.5662766.94727.58282.672.58181.31.258282.172.58181.71.25
b479101123.9227.85617683.74681.018383.835.067981.20.08383.835.067981.30.0
b57282116.4213.89521603.65623.617373.831.397272.50.07374.331.397272.90.0
c1227266280.417.18707732.6211.45243248.277.05235238.23.52243247.817.05234238.43.08
c2219264280.520.55703799.94221.0236239.837.76225230.12.74234238.836.85229232.34.57
c3243287322.218.11798930.16228.4255259.674.94246249.41.23258260.836.17249254.22.47
c4219261283.5819.18721788.58229.22232233.835.94221226.00.91232233.835.94227229.03.65
c5215262288.8321.86692765.71221.86227231.05.58218222.01.4229231.336.51221225.42.79
d16099135.465.0781869.41201.676264.613.336061.70.06364.975.06162.31.67
d26684119.5827.27902988.871266.676969.04.556767.61.526869.03.036767.41.52
d37293139.5829.179071082.391159.727778.336.947475.22.787677.335.567374.81.39
d46278128.525.81760880.651125.816363.671.616262.50.06263.40.06262.20.0
d56187115.442.62777877.11173.776465.174.926162.40.06364.333.286162.60.0
286.91310.8617.01572.09643.15276.64267.02270.364.94259.56263.211.78267.38270.294.9260.98264.662.28
Table 8. Result comparison with WOA employing the approaches QL3, SA3, QL4, SA4, QL5, and SA5.
Inst.  Opt.  QL3 (Best/Avg/RPD)  SA3 (Best/Avg/RPD)  QL4 (Best/Avg/RPD)  SA4 (Best/Avg/RPD)  QL5 (Best/Avg/RPD)  SA5 (Best/Avg/RPD)
41429524530.022.14518522.420.75530532.523.54518522.820.75526531.6422.61518523.620.75
42512543548.06.05528534.93.12534544.444.3525534.12.54524547.02.34525536.32.54
43516533540.333.29519531.80.58535540.783.68522529.81.16536544.113.88524533.51.55
44494516524.114.45502508.31.62513524.223.85504511.32.02517525.554.66508514.42.83
45512540545.785.47521525.81.76537544.784.88521531.01.76531545.03.71524532.62.34
46560577583.113.04568570.61.43577584.783.04563569.90.54573582.352.32566571.11.07
47430444447.143.26434436.80.93438444.671.86435438.71.16438445.01.86434439.00.93
48492506510.172.85497500.11.02505509.02.64495500.30.61504508.912.44495501.20.61
49641680684.256.08662674.63.28680690.06.08667673.14.06672689.044.84664674.13.59
410514524530.01.95518522.40.78530532.53.11518522.80.78526531.642.33518523.60.78
51253276279.339.09269272.86.32274278.08.3268273.15.93273277.487.91270272.36.72
52302330332.839.27319324.25.63327331.338.28317324.04.97325331.967.62319324.75.63
53226233234.53.1230231.81.77231233.672.21229230.51.33231233.962.21229230.41.33
54242246250.01.65246249.21.65250251.53.31247249.82.07249252.352.89247250.02.07
55211218219.333.32213215.30.95217218.332.84212215.70.47215218.221.9212215.40.47
56213225227.05.63215222.20.94228229.57.04218220.82.35223228.044.69218221.02.35
57293307310.24.78301304.02.73303311.03.41298303.31.71307311.814.78302306.43.07
58288297298.03.12291292.71.04295297.832.43289293.50.35294297.872.08290292.80.69
59279284289.171.79281284.80.72287290.52.87282283.61.08284289.571.79281284.80.72
510265276279.334.15269272.81.51274278.03.4268273.11.13273277.483.02270272.31.89
61138144146.744.35141143.52.17142146.392.9142143.82.9143146.613.62142143.92.9
62146152156.04.11148153.31.37156157.336.85149152.32.05154156.655.48150152.92.74
63145148149.172.07148149.02.07149150.332.76146147.90.69147149.961.38147148.41.38
64131134134.672.29131132.90.0131134.50.0131133.10.0133135.041.53132133.90.76
65161176179.59.32168174.54.35177179.179.94164170.71.86175180.08.7165172.92.48
a1253264266.974.35261263.73.16264266.874.35261263.43.16264267.224.35260263.22.77
a2252265270.45.16260265.13.17269271.06.75261264.63.57266270.835.56259264.12.78
a3232242246.04.31240242.13.45243245.54.74241242.73.88240246.173.45240243.43.45
a4234246246.65.13240243.12.56249250.06.41238242.81.71244249.044.27240244.12.56
a5236241248.172.12241244.42.12246248.174.24240242.51.69243248.742.97241244.42.12
b1697071.871.456970.30.06971.680.06970.60.07171.652.96969.90.0
b2767879.172.637677.00.07879.52.637677.30.07879.872.637677.00.0
b3808282.672.58181.41.258282.02.58181.61.258182.261.258181.51.25
b4798383.55.068081.91.278384.05.068081.61.278383.875.068081.41.27
b5727474.52.787273.00.07374.331.397273.10.07374.181.397273.00.0
c1227241247.486.17233239.62.64241247.296.17236241.03.96243247.757.05237240.64.41
c2219238240.178.68230233.35.02238239.68.68229232.54.57232239.815.94229232.44.57
c3243261261.87.41248253.42.06258261.336.17248253.92.06256260.615.35247251.81.65
c4219228234.174.11226229.23.2230233.55.02223227.41.83229233.094.57227229.43.65
c5215229231.06.51221224.02.79223228.673.72222224.83.26226231.05.12222224.73.26
d1606465.066.676162.71.676464.796.676162.21.676465.136.676162.61.67
d2666768.751.526667.40.06868.833.036767.81.526869.043.036767.81.52
d3727677.335.567475.62.787677.335.567375.01.397777.76.947475.82.78
d4626363.81.616262.60.06363.671.616262.10.06263.430.06262.40.0
d5616365.03.286262.51.646565.676.566162.90.06365.33.286162.20.0
266.842774.75260.89264.512.38266.71270.24.77260.64264.422.25265.24270.314.27261.22264.962.49
Table 9. Result comparison with SCA employing the approaches BCL, MIR, QL1, SA1, QL2, and SA2.
Inst.  Opt.  BCL (Best/Avg/RPD)  MIR (Best/Avg/RPD)  QL1 (Best/Avg/RPD)  SA1 (Best/Avg/RPD)  QL2 (Best/Avg/RPD)  SA2 (Best/Avg/RPD)
41429557580.029.84545734.4827.04533538.024.24515520.9320.05530537.8323.54516519.220.28
42512573605.7811.91550725.17.42548552.897.03524531.382.34537551.114.88527535.92.93
43516557598.837.95559766.848.33548552.676.2524529.01.55543554.445.23524529.01.55
44494533557.067.89547688.4810.73519531.225.06500510.381.21530533.787.29502514.21.62
45512563591.59.96565751.3510.35540549.225.47518527.691.17537551.674.88524530.32.34
46560594635.226.07591840.425.54578587.333.21564568.10.71577589.893.04564571.450.71
47430449483.444.42456586.976.05440448.112.33434439.50.93442448.252.79435440.11.16
48492515565.674.67518727.235.28507514.63.05494498.250.41512516.04.07497501.91.02
49641713759.7511.23698964.688.89689695.677.49657674.02.5696700.838.58665676.93.74
410514557580.08.37545734.486.03533538.03.7515520.930.19530537.833.11516519.20.39
51253289303.3314.23282396.5511.46276281.09.09270272.836.72279282.1710.28270272.86.72
52302346366.9214.57335486.8710.93333334.510.26316322.834.64334336.010.6318326.275.3
53226246258.178.85238331.745.31233235.53.1230230.911.77231235.672.21230231.11.77
54242257276.56.2253338.844.55255256.05.37247249.332.07253255.674.55247250.52.07
55211227237.927.58226289.037.11216221.02.37213215.180.95218221.03.32213216.360.95
56213244258.5814.55234324.779.86223230.674.69214220.150.47221230.173.76218221.82.35
57293323342.7510.24313427.16.83317319.68.19296303.831.02310314.45.8297305.271.37
58288320333.311.11302444.354.86298299.333.47291294.181.04300301.84.17290294.550.69
59279312326.9211.83298414.266.81290293.673.94281285.180.72291294.44.3283284.71.43
510265289303.339.06282396.556.42276281.04.15270272.831.89279282.175.28270272.81.89
61138152165.210.14348369.8152.17141145.772.17142145.452.9144148.164.35142145.362.9
62146170196.1716.44161484.9710.27157159.837.53146152.630.0158159.838.22151153.03.42
63145156179.757.59151436.714.14149151.332.76145150.650.0150151.673.45148149.52.07
64131139155.256.11137303.04.58135136.333.05131133.530.0136136.173.82131133.60.0
65161193215.2519.88185450.0614.91177183.179.94161169.720.0178183.6710.56165171.822.48
a1253286302.813.04272596.87.51262267.133.56261264.733.16266269.425.14260263.92.77
a2252289304.214.68281577.5211.51271273.837.54258264.822.38272273.677.94254266.20.79
a3232266283.4414.66250555.527.76245248.65.6242259.54.31246249.06.03238245.52.59
a4234271289.315.81256544.719.4250253.06.84239247.912.14248253.65.98241246.32.99
a5236266286.8612.71253513.97.2249250.675.51243249.642.97248252.835.08242244.62.54
b16981108.617.39527585.0663.777071.741.456999.00.07172.682.96970.90.0
b27693110.3322.3781529.326.587880.332.637679.530.08082.05.267677.80.0
b38090117.0812.584687.065.08283.332.58182.841.258383.833.758084.40.0
b47996116.4221.5284582.876.338384.05.068086.131.278384.835.068283.33.8
b57283104.0915.2875573.14.177475.02.787275.530.07575.334.177274.270.0
c1227269302.818.5254536.611.89240245.555.73235241.13.52241251.326.17237240.54.41
c2219264284.020.55243715.5210.96235242.57.31226236.583.2241244.1710.05230235.75.02
c3243273306.2512.35265745.749.05261263.177.41248258.72.06259263.176.58252254.53.7
c4219251280.6714.61235669.427.31236237.07.76225243.672.74233235.836.39228231.64.11
c5215239271.1711.16232569.457.91228232.336.05218226.331.4233234.338.37222225.93.26
d1608993.248.3367701.411.676264.423.336167.621.676466.06.676264.63.33
d26681105.8322.7369802.454.556970.334.556771.351.526969.54.556773.271.52
d37281109.7512.579845.299.727878.678.337477.172.787879.08.337581.84.17
d4626891.429.6864675.353.236364.01.616274.310.06364.51.616264.60.0
d56177101.026.2367767.19.846566.66.566167.40.06667.08.26269.271.64
284.16307.6813.94290.16581.9726.03269.16273.085.55259.91266.962.04269.67273.926.01261.2265.922.62
Table 10. Result comparison with SCA employing the approaches QL3, SA3, QL4, SA4, QL5, and SA5.
Inst.  Opt.  QL3 (Best/Avg/RPD)  SA3 (Best/Avg/RPD)  QL4 (Best/Avg/RPD)  SA4 (Best/Avg/RPD)  QL5 (Best/Avg/RPD)  SA5 (Best/Avg/RPD)
41429534537.524.48517521.8520.51533536.8324.24518523.7520.75530535.1723.54518522.3220.75
42512547552.676.84524533.642.34552556.897.81526534.32.73537552.374.88521534.521.76
43516540555.04.65522529.421.16535550.223.68524534.171.55536547.053.88523531.21.36
44494518531.334.86500510.571.21512532.563.64501509.081.42511532.843.44499510.431.01
45512544552.566.25520526.21.56541552.675.66526531.62.73542549.795.86522529.451.95
46560577588.223.04567571.791.25584592.564.29566571.01.07568589.31.43565569.580.89
47430447450.53.95434437.620.93447452.883.95434438.580.93439451.952.09434438.130.93
48492509513.03.46494500.420.41508515.833.25496502.30.81507515.043.05493499.670.2
49641692696.837.96664678.693.59688694.837.33666674.313.9684698.486.71658672.582.65
410514534537.53.89517521.850.58533536.833.7518523.750.78530535.173.11518522.320.78
51253277282.59.49268271.925.93278282.339.88267274.735.53274282.398.3270273.526.72
52302332336.59.93322327.626.62328334.58.61320326.45.96329336.288.94313324.53.64
53226235236.173.98230230.541.77236236.834.42230231.181.77230236.131.77229231.11.33
54242253255.174.55248249.912.48252256.174.13246250.01.65251254.963.72243249.140.41
55211218222.333.32212215.920.47221222.04.74212215.690.47217221.092.84212215.430.47
56213231232.88.45213221.270.0228233.27.04218222.452.35224231.25.16217222.221.88
57293313317.336.83298303.751.71317321.178.19297304.081.37314317.847.17296303.681.02
58288298301.833.47291294.911.04298300.03.47290295.550.69297301.393.12289293.00.35
59279289293.53.58281286.910.72292293.674.66280284.00.36286293.172.51281284.820.72
510265277282.54.53268271.921.13278282.334.91267274.730.75274282.393.4270273.521.89
61138144148.424.35141143.882.17146148.295.8141144.422.17146148.655.8141144.722.17
62146157159.177.53148152.01.37155158.06.16150153.832.74154158.265.48149152.252.05
63145151151.834.14148149.382.07150151.673.45148150.142.07149151.652.76145148.520.0
64131134135.672.29131134.140.0135136.23.05131134.380.0134136.522.29131133.860.0
65161182183.6713.04162170.920.62179183.1711.18161171.830.0175182.968.7163170.431.24
a1253263268.813.95260263.52.77263268.93.95262265.183.56265269.684.74262264.443.56
a2252273275.08.33263266.364.37271272.837.54262267.43.97270274.337.14261265.593.57
a3232249252.07.33241243.363.88251251.338.19242247.04.31242248.744.31240243.183.45
a4234249252.26.41240243.12.56250252.26.84237243.641.28247253.55.56240244.822.56
a5236245250.333.81242244.42.54247251.54.66243246.12.97248251.225.08241244.442.12
b1697272.94.356972.070.07272.874.356970.420.07273.034.356972.80.0
b2768081.55.267677.750.07880.672.637680.540.07881.392.637677.680.0
b3808283.672.58182.51.258284.02.58182.071.258183.351.258181.581.25
b4798284.333.88182.932.538384.835.068183.912.538485.266.337982.430.0
b5727474.832.787275.920.07475.02.787274.250.07474.782.787274.380.0
c1227244251.037.49234239.093.08239251.05.29234237.53.08244250.947.49237242.54.41
c2219241243.3310.05229231.934.57236241.87.76225231.752.74236242.177.76226232.413.2
c3243260262.87.0247253.091.65259262.836.58252260.13.7256263.875.35248255.412.06
c4219236237.837.76227233.823.65234236.676.85225235.312.74234237.136.85225231.592.74
c5215232233.837.91222225.03.26227231.835.58219224.711.86226233.625.12221225.472.79
d1606466.06.676163.731.676465.816.676164.081.676366.065.06064.160.0
d2666969.84.556771.271.526970.04.556770.421.526870.133.036769.141.52
d3727879.08.337377.271.397879.338.337380.171.397879.098.337378.01.39
d4626363.831.616265.30.06464.43.236264.730.06364.31.616264.150.0
d5616466.174.926167.140.06565.836.566164.60.06466.354.926164.740.0
277273.866.08260.62265.262.27269.6273.895.94260.82266.02.29267.36273.585.1262265.112.02
Table 11. Result comparison with GWO employing the approaches BCL, MIR, QL1, SA1, QL2, and SA2.
Inst.  Opt.  BCL (Best/Avg/RPD)  MIR (Best/Avg/RPD)  QL1 (Best/Avg/RPD)  SA1 (Best/Avg/RPD)  QL2 (Best/Avg/RPD)  SA2 (Best/Avg/RPD)
41429517519.720.51524529.4222.14528530.623.08516521.4320.28523529.8321.91518522.8720.75
42512515525.880.59533541.234.1537541.434.88519527.671.37536543.884.69522531.271.95
43516517523.440.19536541.583.88537542.04.07518526.290.39531541.112.91522532.061.16
44494496503.060.4514519.844.05516521.334.45500513.061.21512520.03.64500511.941.21
45512514523.440.39536543.354.69537543.224.88521529.831.76537542.254.88520531.151.56
46560562567.280.36572579.872.14567574.891.25563568.930.54571576.781.96565572.080.89
47430433434.670.7437442.651.63438442.831.86434439.240.93438441.861.86435438.381.16
48492494498.920.41498505.161.22505507.832.64494499.290.41506507.672.85494500.850.41
49641651662.251.56674685.395.15682686.836.4665673.883.74683686.56.55666678.443.9
410514517519.70.58524529.421.95528530.62.72516521.430.39523529.831.75518522.870.78
51253265268.734.74269277.066.32271276.57.11266271.445.14272276.337.51269273.06.32
52302318322.365.3324330.877.28322325.66.62314324.243.97328330.58.61316325.924.64
53226229229.81.33230233.01.77231232.672.21228231.060.88230233.171.77229230.931.33
54242242246.830.0248251.262.48250251.673.31245249.891.24248251.52.48247250.252.07
55211212213.670.47214217.421.42215217.671.9212214.720.47216217.172.37212216.450.47
56213213216.570.0221225.653.76222225.84.23215221.940.94228228.57.04216220.831.41
57293302302.03.07307311.194.78310312.55.8296302.221.02308309.675.12298303.361.71
58288291291.01.04292296.611.39297297.03.12288292.070.0292296.251.39291294.071.04
59279280281.570.36284287.651.79284285.251.79281285.070.72287288.832.87280283.430.36
510265265268.730.0269277.061.51271276.52.26266271.440.38272276.332.64269273.01.51
61138140143.01.45143145.83.62140144.91.45141144.062.17143145.613.62141143.942.17
62146146149.830.0152155.554.11155156.06.16148151.81.37150156.332.74146152.150.0
63145145147.830.0147149.611.38147148.831.38146149.940.69147150.01.38146148.840.69
64131131132.00.0132134.260.76131133.50.0131133.550.0134135.172.29131133.810.0
65161161167.330.0172178.96.83166176.173.11161173.290.0168175.834.35161169.860.0
a1253256260.21.19266266.755.14260264.422.77259264.252.37262265.973.56262264.083.56
a2252257261.671.98267271.065.95270272.87.14260264.583.17264269.24.76260267.913.17
a3232238240.572.59241247.033.88247247.256.47242244.54.31244246.05.17241248.453.88
a4234237239.831.28246250.295.13246249.25.13237244.311.28245249.334.7240245.432.56
a5236240242.331.69245248.653.81241246.672.12239246.21.27247248.54.66242244.252.54
b1696970.00.07172.02.96974.10.06971.410.06971.650.06972.610.0
b2767676.450.07981.133.957779.171.327679.380.07879.52.637677.620.0
b3808081.090.08284.292.58182.171.258084.950.08283.52.58081.740.0
b4797980.780.08285.063.88183.672.538082.391.278183.172.537982.810.0
b5727272.50.07475.12.787374.01.397273.690.07374.41.397274.040.0
c1227234238.43.08249251.29.69231242.291.76237241.14.41239246.615.29235240.773.52
c2219227227.53.65238242.818.68234238.256.85225239.02.74236238.337.76226234.53.2
c3243245249.50.82258264.236.17259262.26.58245257.330.82255258.334.94249253.642.47
c4219223226.921.83229235.394.57232233.85.94224234.832.28231233.675.48223229.861.83
c5215220221.52.33230234.586.98221228.672.79220225.072.33225227.754.65223226.093.72
d1606161.61.676667.210.06170.871.676067.610.06164.161.676164.321.67
d2666667.330.07072.846.066869.03.036772.331.526768.251.526668.750.0
d3727273.780.07780.356.947778.06.947480.292.787576.674.177476.832.78
d4626262.830.06466.423.236363.331.616267.00.06263.330.06264.290.0
d5616162.10.06467.324.926264.171.646164.710.06264.171.646162.680.0
258.47261.71.46265.562784.61265.33269.033.9259.4265.391.79265.36268.963.96260.29265.392.05
Table 12. Result comparison with GWO employing the approaches QL3, SA3, QL4, SA4, QL5, and SA5.
Inst.  Opt.  QL3 (Best/Avg/RPD)  SA3 (Best/Avg/RPD)  QL4 (Best/Avg/RPD)  SA4 (Best/Avg/RPD)  QL5 (Best/Avg/RPD)  SA5 (Best/Avg/RPD)
41429526529.622.61517523.4320.51522528.3321.68516523.6520.28522528.7921.68516523.5920.28
42512538543.675.08524533.742.34535542.224.49526538.162.73534541.324.3516531.350.78
43516535541.673.68524533.841.55524538.671.55518529.80.39537541.054.07520531.040.78
44494516519.54.45498508.50.81508518.892.83502511.061.62510520.953.24500510.111.21
45512532542.223.91521529.191.76535542.784.49519527.291.37529540.473.32522528.691.95
46560571575.781.96566571.01.07571576.141.96564569.380.71568576.81.43565571.190.89
47430438443.251.86434438.550.93438441.331.86436438.791.4434442.230.93433437.520.7
48492503505.52.24493500.940.2498504.01.22494499.820.41496505.650.81493500.290.2
49641677683.05.62657677.452.5675684.55.3656675.02.34666682.433.9654676.382.03
410514526529.62.33517523.430.58522528.331.56516523.650.39522528.791.56516523.590.39
51253271275.177.11266272.575.14274278.08.3269273.656.32273277.487.91267273.25.53
52302329331.178.94316325.374.64327332.08.28318326.155.3325330.217.62318326.125.3
53226230232.81.77229231.061.33231233.02.21229231.711.33230232.961.77229231.041.33
54242251252.53.72247248.92.07247252.02.07246250.121.65247251.612.07244249.730.83
55211214216.671.42212215.120.47214216.21.42212215.520.47212217.170.47212215.20.47
56213226227.56.1215220.360.94220225.53.29213220.580.0219226.462.82215221.80.94
57293309311.05.46298303.941.71307308.04.78299304.02.05303310.33.41297303.01.37
58288294296.332.08289293.890.35295297.172.43289293.470.35289296.570.35288293.480.0
59279283285.831.43281283.790.72286287.332.51281285.060.72283287.381.43280284.110.36
510265271275.172.26266272.570.38274278.03.4269273.651.51273277.483.02267273.20.75
61138144145.874.35141143.812.17142145.742.9141144.862.17142145.522.9140143.731.45
62146153156.04.79146152.00.0151154.173.42149153.132.05147154.420.68146153.170.0
63145149150.332.76145149.00.0149150.332.76147150.941.38148149.432.07148149.082.07
64131133134.171.53131133.090.0133134.331.53132134.00.76131134.220.0131133.170.0
65161175179.338.7161171.40.0177179.09.94164171.191.86168177.874.35161171.770.0
a1253263266.263.95261267.173.16262266.653.56259264.42.37261266.353.16258263.151.98
a2252265267.755.16262265.363.97262270.23.97261266.073.57264270.124.76259264.752.78
a3232243244.834.74240244.363.45241245.23.88239243.333.02243246.384.74238243.052.59
a4234247248.675.56240245.922.56249251.06.41242244.923.42244248.744.27238243.361.71
a5236241248.02.12242245.092.54240244.331.69242245.072.54240247.171.69241244.052.12
b1696971.550.06970.880.06971.840.06971.170.06971.680.06970.870.0
b2767880.02.637678.040.07879.52.637678.00.07779.041.327678.120.0
b3808181.831.258182.01.258182.01.258184.061.258182.611.258081.420.0
b4798284.03.88082.451.278182.832.538083.721.278283.833.88081.731.27
b5727474.82.787273.530.07474.42.787274.120.07274.610.07275.480.0
c1227232248.132.2235242.823.52239247.455.29236239.573.96237246.94.41236241.383.96
c2219236239.07.76225233.292.74238239.258.68227231.883.65227237.723.65224233.142.28
c3243259260.676.58250257.912.88259261.176.58249255.122.47248259.722.06248254.732.06
c4219230233.835.02224230.872.28228233.674.11225232.062.74228233.384.11224228.542.28
c5215226227.675.12219224.751.86224227.04.19220224.822.33226230.335.12219226.231.86
d1606264.483.336063.190.06264.263.336164.331.676163.91.676163.141.67
d2666869.03.036769.211.526869.03.036768.121.526769.481.526768.681.52
d3727677.55.567376.291.397677.05.567377.711.397377.391.397375.571.39
d4626464.03.236263.060.06263.670.06265.290.06264.150.06263.230.0
d5616264.61.646164.920.06465.174.926267.071.646365.33.286162.880.0
265.6268.894.26259.84265.291.92264.71268.74.01260.18265.452.19262.96268.813.07259.2264.871.76
Table 13. Result comparison with HHO employing the approaches BCL, MIR, QL1, SA1, QL2, and SA2.
Inst.  Opt.  BCL (Best/Avg/RPD)  MIR (Best/Avg/RPD)  QL1 (Best/Avg/RPD)  SA1 (Best/Avg/RPD)  QL2 (Best/Avg/RPD)  SA2 (Best/Avg/RPD)
41429520522.6421.21523530.3521.91527529.022.84514523.1719.81524528.3322.14516522.520.28
42512528535.713.12537544.14.88538541.785.08517530.940.98536541.784.69519535.01.37
43516526530.671.94538544.14.26531540.02.91523530.361.36537540.384.07522531.911.16
44494503511.561.82510521.353.24513519.383.85500508.651.21514521.04.05500508.51.21
45512520530.731.56539546.455.27536541.224.69518530.361.17534541.564.3523529.542.15
46560565570.710.89573581.712.32569576.01.61567572.141.25574577.712.5566573.331.07
47430436437.81.4438444.031.86438442.571.86435437.191.16440443.52.33436438.081.4
48492497501.831.02504508.582.44498504.01.22496501.470.81498503.831.22494499.330.41
49641664672.03.59677685.455.62677679.675.62657677.352.5682684.336.4663678.423.43
410514520522.641.17523530.351.75527529.02.53514523.170.0524528.331.95516522.50.39
51253269272.826.32272278.037.51273276.177.91267272.05.53279279.010.28270272.086.72
52302318323.735.3327331.948.28326328.67.95314324.943.97330330.69.27316325.084.64
53226230231.01.77232233.612.65232233.02.65230231.211.77230232.01.77228231.00.88
54242246248.921.65249252.032.89249251.332.89246249.381.65250251.53.31247249.822.07
55211214215.71.42214217.551.42215217.01.9212215.750.47217218.02.84214216.311.42
56213220221.253.29222227.294.23227227.336.57213219.50.0218221.672.35216221.581.41
57293303304.333.41308312.325.12309310.675.46298303.291.71310310.675.8297304.251.37
58288292293.81.39296297.742.78296297.02.78291294.51.04292296.01.39292296.421.39
59279284285.221.79283288.351.43285286.672.15280283.250.36286287.672.51282284.171.08
510265269272.821.51272278.032.64273276.173.02267272.00.75279279.05.28270272.081.89
61138141142.62.17146146.45.8143144.843.62141144.962.17143145.323.62141145.582.17
62146149151.582.05155156.846.16151154.673.42146152.730.0152155.174.11146150.670.0
63145147147.911.38147149.421.38147149.51.38145148.080.0149149.672.76148149.712.07
64131131132.580.0133134.291.53133134.01.53131133.620.0132133.40.76131133.670.0
65161168171.714.35173178.587.45176178.09.32162169.090.62171176.676.21163171.461.24
a1253262263.43.56263266.53.95261265.193.16262265.913.56261265.13.16260264.912.77
a2252259263.832.78265270.745.16267268.85.95260264.923.17266269.05.56258264.52.38
a3232241243.03.88243245.944.74244245.175.17239243.673.02243244.834.74241244.823.88
a4234244244.334.27244248.654.27244245.334.27239242.92.14244247.04.27239243.62.14
a5236241242.332.12245248.583.81247248.04.66243247.22.97244245.63.39242245.552.54
b1696970.00.07172.02.97070.811.456970.710.06970.90.06970.60.0
b2767677.090.07880.162.637878.832.637679.480.07778.331.327679.00.0
b3808181.271.258282.522.58181.831.258182.551.258282.172.58082.430.0
b4798182.02.538283.943.88082.51.277983.50.08383.335.068083.541.27
b5727272.20.07474.522.787373.671.397275.320.07373.831.397273.920.0
c1227237239.254.41246249.68.37237244.654.41233240.422.64241245.946.17235242.13.52
c2219234234.06.85237240.748.22237238.338.22222236.421.37238238.08.68229234.094.57
c3243252253.333.7258261.716.17256258.675.35245257.770.82258258.676.17250256.092.88
c4219227228.863.65231234.685.48230231.755.02225234.852.74231232.175.48225232.272.74
c5215223223.03.72228232.296.05225228.04.65221230.772.79227227.335.58221227.182.79
d1606262.63.336565.88.336264.063.336164.051.676364.195.06162.171.67
d2666767.831.526969.614.556868.673.036775.561.526868.673.036768.271.52
d3727475.172.787778.296.947676.675.567477.792.787576.334.177476.422.78
d4626262.670.06363.771.616263.00.06265.630.06262.670.06266.920.0
d5616162.570.06566.06.566364.63.286163.80.06464.64.926263.771.64
261.89264.472.8266.16270.114.75265.56268.144.2259.44265.611.84266.0268.354.37260.42265.452.23
Table 14. Result comparison with HHO employing the approaches QL3, SA3, QL4, SA4, QL5, and SA5.
Inst.  Opt.  QL3 (Best/Avg/RPD)  SA3 (Best/Avg/RPD)  QL4 (Best/Avg/RPD)  SA4 (Best/Avg/RPD)  QL5 (Best/Avg/RPD)  SA5 (Best/Avg/RPD)
41429528528.523.08519524.5420.98525528.622.38519524.420.98522527.2621.68514522.2919.81
42512537543.144.88517534.270.98537540.894.88524537.362.34535542.324.49520532.791.56
43516536539.673.88523531.621.36530538.332.71523531.311.36534539.683.49523532.391.36
44494519521.755.06501510.441.42512516.893.64504514.082.02512518.373.64503509.51.82
45512533539.784.1518529.431.17526536.332.73519529.51.37529539.053.32516529.810.78
46560575577.572.68566570.931.07574578.882.5563569.640.54573578.912.32565571.750.89
47430443443.03.02434440.380.93440441.672.33435439.331.16438442.391.86435439.01.16
48492501505.51.83496502.750.81502505.172.03498501.811.22501505.041.83498502.621.22
49641684687.336.71659678.872.81685686.06.86664677.223.59669683.964.37659675.172.81
410514528528.52.72519524.540.97525528.62.14519524.40.97522527.261.56514522.290.0
51253275277.338.7267273.05.53276278.09.09269273.126.32272276.657.51267272.375.53
52302327329.678.28319323.935.63323327.06.95317325.04.97320329.755.96318326.115.3
53226232232.52.65228230.750.88232232.672.65230231.211.77231232.872.21229231.311.33
54242250251.53.31247249.672.07249251.172.89245248.711.24247250.782.07245249.721.24
55211218218.03.32212215.60.47216217.332.37212215.640.47216217.282.37213216.180.95
56213221225.553.76215220.530.94227227.56.57213220.790.0223226.34.69215221.00.94
57293309311.05.46299304.432.05309310.05.46297305.071.37304310.383.75297303.251.37
58288295296.52.43289293.560.35292296.01.39288292.850.0292296.411.39290292.710.69
59279287288.02.87280284.640.36283285.671.43281286.310.72283287.091.43281284.470.72
510265275277.333.77267273.00.75276278.04.15269273.121.51272276.652.64267272.370.75
61138143144.973.62141144.562.17142144.842.9141144.712.17141144.522.17142146.02.9
62146153155.04.79147150.740.68152154.674.11146151.940.0151154.743.42149152.672.05
63145149149.672.76147149.221.38147149.331.38148150.02.07147149.391.38147149.01.38
64131133134.01.53131133.870.0134134.172.29131133.50.0131133.830.0131133.640.0
65161170175.675.59164172.531.86175178.58.7165172.792.48171177.226.21167172.673.73
a1253261265.13.16260264.582.77262265.423.56259263.872.37261264.943.16260265.382.77
a2252267269.25.95261267.833.57267270.25.95260266.543.17263268.524.37258266.532.38
a3232244245.335.17240243.753.45245245.65.6239243.923.02240244.963.45240243.53.45
a4234246247.05.13241245.642.99246247.335.13240244.672.56240246.682.56238242.861.71
a5236246247.04.24240244.851.69241244.332.12241244.792.12243246.392.97240243.351.69
b1696970.840.06970.320.07071.161.456973.440.07071.231.457071.581.45
b2767878.52.637678.60.07678.170.07679.790.07778.781.327778.541.32
b3808282.02.58081.80.08282.02.58181.711.258182.01.258182.271.25
b4798282.833.88083.081.278383.25.068183.472.538083.121.277982.750.0
b5727474.02.787273.650.07373.671.397374.691.397273.390.07376.251.39
c1227241246.066.17238240.184.85239245.165.29232239.772.2238245.424.85235239.123.52
c2219233236.336.39225234.172.74233236.676.39227234.863.65234237.046.85229233.694.57
c3243253256.674.12252256.273.7256258.835.35250258.172.88254258.094.53246254.01.23
c4219230231.55.02229235.44.57228230.84.11226229.933.2229232.154.57225231.242.74
c5215228228.676.05222226.553.26226228.05.12221223.862.79225228.344.65221224.672.79
d1606364.195.06163.231.676364.325.06063.50.06264.093.336164.641.67
d2666868.673.036769.71.526868.753.036768.361.526868.653.036768.181.52
d3727676.675.567276.740.07777.336.947477.452.787576.854.177478.362.78
d4626262.330.06263.760.06363.01.616265.450.06263.040.06264.00.0
d5616464.754.926263.281.646364.83.286267.361.646364.563.286265.081.64
266.4268.424.5260.31265.582.16265.56268.114.3260.44265.762.22263.84268.143.48267265.272.23
Table 15. The average Wilcoxon–Mann–Whitney test of WOA.
Ver.   BCL     QL1     QL2     QL3     QL4     QL5     SA1     SA2     SA3     SA4     SA5
BCL    -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL1    0.00    -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL2    0.00    ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL3    0.00    ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL4    0.00    ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL5    0.00    ≥0.05   ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
SA1    0.00    0.00    0.00    0.00    0.00    0.00    -       ≥0.05   ≥0.05   ≥0.05   ≥0.05
SA2    0.00    0.00    0.00    0.00    0.00    0.00    ≥0.05   -       ≥0.05   ≥0.05   ≥0.05
SA3    0.00    0.00    0.00    0.00    0.00    0.00    ≥0.05   ≥0.05   -       ≥0.05   ≥0.05
SA4    0.00    0.00    0.00    0.00    0.00    0.00    ≥0.05   ≥0.05   ≥0.05   -       ≥0.05
SA5    0.00    0.00    0.00    0.01    0.00    0.01    ≥0.05   ≥0.05   ≥0.05   ≥0.05   -
Table 16. The average Wilcoxon–Mann–Whitney test of SCA.
Ver.   BCL     QL1     QL2     QL3     QL4     QL5     SA1     SA2     SA3     SA4     SA5
BCL    -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL1    0.00    -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL2    0.00    ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL3    0.00    ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL4    0.00    ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL5    0.00    ≥0.05   ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
SA1    0.00    ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05
SA2    0.00    0.02    0.02    0.02    0.01    0.02    ≥0.05   -       ≥0.05   ≥0.05   ≥0.05
SA3    0.00    0.02    0.02    0.02    0.02    0.02    ≥0.05   ≥0.05   -       ≥0.05   ≥0.05
SA4    0.00    0.04    0.04    0.03    0.03    0.03    ≥0.05   ≥0.05   ≥0.05   -       ≥0.05
SA5    0.00    ≥0.05   0.04    0.04    0.03    0.04    ≥0.05   ≥0.05   ≥0.05   ≥0.05   -
Table 17. The average Wilcoxon–Mann–Whitney test of GWO.
Ver.   BCL     QL1     QL2     QL3     QL4     QL5     SA1     SA2     SA3     SA4     SA5
BCL    -       0.01    0.00    0.01    0.00    0.00    ≥0.05   ≥0.05   ≥0.05   0.04    0.03
QL1    ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL2    ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL3    ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL4    ≥0.05   ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL5    ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
SA1    ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05
SA2    ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05
SA3    ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05
SA4    ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   -       ≥0.05
SA5    ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   -
Table 18. The average Wilcoxon–Mann–Whitney test of HHO.
Ver.   BCL     QL1     QL2     QL3     QL4     QL5     SA1     SA2     SA3     SA4     SA5
BCL    -       0.02    0.01    0.01    0.01    0.01    ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL1    ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL2    ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL3    ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL4    ≥0.05   ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
QL5    ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05
SA1    ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05   ≥0.05
SA2    ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05   ≥0.05
SA3    ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   -       ≥0.05   ≥0.05
SA4    ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   -       ≥0.05
SA5    ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   ≥0.05   -
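The cells of Tables 15-18 are p-values of pairwise Wilcoxon-Mann-Whitney comparisons, where entries reported as ≥0.05 are read as not significant at the 5% level. A comparison of this kind can be reproduced with SciPy's Mann-Whitney U test; the one-sided alternative and the per-run fitness vectors shown below are illustrative assumptions, not the authors' exact protocol.

```python
from scipy.stats import mannwhitneyu

def compare(fitness_a, fitness_b):
    """Pairwise Wilcoxon-Mann-Whitney comparison in the spirit of Tables 15-18.

    fitness_a, fitness_b: lists of fitness values obtained by two approaches on the
    same instance. Returns the p-value for the one-sided hypothesis that approach A
    tends to produce lower (better, for minimization) fitness than approach B.
    """
    _, p_value = mannwhitneyu(fitness_a, fitness_b, alternative="less")
    return p_value

# Example (illustrative data): a p-value below 0.05 would indicate a significant difference.
print(compare([510, 512, 515, 511], [520, 523, 519, 522]))
```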

Back to TopTop