Article

Swarm-Inspired Computing to Solve Binary Optimization Problems: A Backward Q-Learning Binarization Scheme Selector

by
Marcelo Becerra-Rozas
1,*,
José Lemus-Romani
2,
Felipe Cisternas-Caneo
1,
Broderick Crawford
1,*,
Ricardo Soto
1 and
José García
3
1
Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2241, Valparaíso 2362807, Chile
2
Escuela de Construcción Civil, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Macul, Santiago 7820436, Chile
3
Escuela de Ingeniería de Construcción y Transporte, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2147, Valparaíso 2362807, Chile
*
Authors to whom correspondence should be addressed.
Mathematics 2022, 10(24), 4776; https://doi.org/10.3390/math10244776
Submission received: 15 November 2022 / Revised: 3 December 2022 / Accepted: 7 December 2022 / Published: 15 December 2022
(This article belongs to the Section Mathematics and Computer Science)

Abstract

In recent years, continuous metaheuristics have been a trend for solving binary-based combinatorial problems due to their good results. However, to use this type of metaheuristic, it is necessary to adapt it to work in binary environments, and in general, this adaptation is not trivial. The method proposed in this work evaluates the use of reinforcement learning techniques in the binarization process. Specifically, the backward Q-learning technique is explored to choose binarization schemes intelligently. This allows any continuous metaheuristic to be adapted to binary environments. The results obtained are competitive, thus providing a novel option to address different complex problems in industry.

1. Introduction

The resolution of real problems can be approached through mathematical modeling to find a solution with an optimization algorithm [1]. Under this scheme, it is increasingly common for different industries to solve combinatorial problems as part of their normal operation in order to minimize costs and times and to maximize profits. Such is the case of the forestry industry [2], flight planning for unmanned aircraft [3], or the detection of cracks in pavements [4]. Combinatorial problems are mostly NP-hard, which makes it difficult to find solutions with polynomial-time algorithms [5]. For this reason, intelligent optimization algorithms, mainly metaheuristics (MHs) [6], have considerably supported the growth of combinatorial problem solving; through their search processes, they manage to explore the search space intelligently, finding quasi-optimal solutions in reasonable computational times.
MHs are general-purpose algorithms widely used to solve optimization problems. Talbi [7] indicated that metaheuristics can be classified according to how they perform the search process. Single-solution metaheuristics transform a single solution during the search process; classic examples are simulated annealing [8] and tabu search [9]. Population-based metaheuristics, on the other hand, evolve a set of solutions as the optimization process progresses; classic examples are particle swarm optimization [10], cuckoo search [11], and the genetic algorithm [12]. In the literature, population-based metaheuristics are more widely used than single-solution metaheuristics.
The no-free-lunch (NFL) theorem [13] tells us that no single optimization algorithm performs best on all optimization problems. This theorem motivates researchers to keep developing new, innovative algorithms, and several metaheuristics with very good performance have emerged in this context, among them grey wolf optimization [14], the whale optimization algorithm [15], and the sine–cosine algorithm (SCA) [16].
The grey wolf optimizer has been used, for example, in feature selection [17], training neural networks [18], optimizing support vector machines [19], designing and tuning controllers [20], economic dispatch problems [21], robotics and path planning [22], and scheduling [23].
The whale optimization algorithm has been used, for example, in the optimal power flow problem [24], the economic dispatch problem [25], the electric vehicle charging station locating problem [26], image segmentation [27], feature selection [28], drug toxicity prediction [29], and CO2 emissions prediction and forecasting [30].
The sine–cosine algorithm has been used, for example, in the trajectory controller problem [31], feature selection [32], power management [33], network integration [34], engineering problems [35], and image processing [36].
The grey wolf optimizer, sine–cosine algorithm, and whale optimization algorithm were developed to perform in continuous domains. There is a small number of techniques capable of operating in both binary and continuous domains, as is the case of genetic algorithms [37] and some variations of ant colony optimization (ACO) [38]; however, these have not been able to match the performance obtained by continuous metaheuristics that use an operator capable of transforming their continuous solutions into the binary space. In recent years, there has been an increase in the literature on new and novel binarization operators, such as those based on machine learning, specifically clustering techniques such as K-means [39] and DB-scan [40]; those based on reinforcement learning, such as Q-learning [41] and SARSA [42]; and other inspirations, such as quantum operators [43], logical operators [44], crossovers [45], and percentiles [46]. Among the most common and widely used operators is the two-step operator, which consists of normalizing the continuous values through a transfer function (Step 1) and subsequently binarizing them with a binarization rule, finally obtaining a value of 0 or 1 (Step 2) [47]. In this context, it is necessary to continue the search for new variations of binarization techniques, since it has been proven that they directly influence the performance of the MHs [48,49,50,51,52].
In this work, we propose a new intelligent operator using a binarization scheme selection (BSS) capable of adapting any continuous MH to work in the binary domain. BSS is based on the two-step technique, where an intelligent operator chooses the transfer function and the binarization rule to be used; this scheme was first proposed in [53]. BSS has been previously used with reinforcement learning techniques under the machine learning umbrella, namely Q-learning and SARSA. In this case, a new intelligent operator called backward Q-learning (BQSA) [54] is presented, which combines Q-learning and SARSA, updating the Q-values with SARSA and, in a delayed way, with Q-learning. In addition, the set of schemes in this proposal is more extensive, going from 40 possible combinations to 80. All of the above points to the need to investigate hybrid methods to improve the algorithm's performance. The contributions of this work are presented below:
  • The implementation of a BSS as a binarization operator capable of operating in any continuous MH.
  • The use of the BQSA as a smart operator in BSS.
  • A larger set of transfer functions obtained from the literature, which generates an increase from 40 to 80 possible binarization schemes to be used.
Experimental tests were carried out against multiple state-of-the-art binarization strategies, solving the set covering problem. The results obtained show considerably competitive performance for the proposed approach, although without statistically significant differences, while the difference with respect to the static versions is validated. In these quantitative comparisons, we can observe differences in convergence behavior between the 80- and 40-action versions, as well as in the balance between exploration and exploitation.
The rest of the paper is structured as follows: Section 2 presents the work related to metaheuristic techniques, machine learning, and their hybridization. Section 3 presents the proposal for the incorporation of the BQSA in BSS, while in Section 4, its implementation is validated with the results obtained and the respective statistical tests, ending with the analysis and conclusions in Section 5.

2. Related Work

In the following subsections, we present some of the concepts necessary to understand this work.

2.1. Reinforcement Learning

Machine learning (ML) aims to analyze data in search of patterns and to make predictions about future results [55]. When decomposing ML, we can find four main types (Figure 1): supervised, unsupervised, semi-supervised, and reinforcement learning (RL). In RL, an agent receives a set of instructions, actions, or guidelines, with which it then makes its own decisions based on a process of reward and penalty that guides the agent toward the optimal solution to the problem.
On this basis, an RL agent is constituted by four sub-elements: a policy, a value function, a reward function, and an environment model; the latter is usually optional [56]. These elements are defined as follows:
  • A policy defines the agent’s behavior at each instant of time, i.e., it is a mapping from the set of perceived states to the set of actions to be performed when the agent is in those states.
  • The value function allows the agent to maximize the sum of total rewards in the long run. It calculates the value of a state–action pair as the total amount of rewards that the agent can expect to accumulate in the future, starting from the state it is in. Thus, the agent selects the action based on value judgments. Indeed, while the reward determines the immediate and intrinsic desirability of a state–action pair, the value indicates the long-term desirability of a state–action pair considering likely future state–action pairs and their rewards.
  • A reward function represents the agent’s goal, i.e., it translates each perceived state–action pair into a single number. In other words, a reward indicates the intrinsic desirability of that state–action pair. It is a way of communicating to the agent what the agent wants to obtain, but not how to achieve it.
  • An environment model is intended to reproduce the behavior of the environment, i.e., the model that directs the agent to the next state and the subsequent reward based on the current state–action pair. The environment model is not always available and is therefore an optional element.
Among the existing RL classifications, we find the temporal difference techniques [57], where the Q-learning (QL) algorithm is the most popular for its contributions in several areas [58]. Still, there are other algorithms, such as SARSA [59] and the BQSA [54], which are variations of QL, but have obtained different performances for different problems.

2.2. Q-Learning

Among the algorithms present in RL, we find the QL algorithm, first proposed in [60], which provides agents with the ability to learn to act optimally without the need to build domain maps. There are several possible states s, and from the environment, we obtain the current state s in which the agent interacts and makes decisions. The agent has a set of possible actions, which affect the reward and the next state. Once an action is performed, the state changes, and the agent receives a reward for the decision made; the rewards received consequently generate learning in the agent. To solve the problem, the agent learns the best course of action it can take, i.e., the one with the maximum cumulative reward. The sequence of actions from the first state to the terminal state is called an episode. The state transition is given by Equation (1):
Q_{new}(s_t, a_t) = (1 - \alpha) \cdot Q_{old}(s_t, a_t) + \alpha \cdot [ r_n + \gamma \cdot \max Q(s_{t+1}, a_{t+1}) ]   (1)
where Q_new(s_t, a_t) denotes the updated value of the action taken in state s_t, r_n is the reward received when action a_t is taken, and max Q(s_{t+1}, a_{t+1}) is the maximum action value for the next state. The value of α must satisfy 0 < α ≤ 1 and corresponds to the learning factor. On the other hand, the value of γ must satisfy 0 ≤ γ ≤ 1 and corresponds to the discount factor. If γ reaches the value of 0, only the immediate reward is considered, while as it approaches 1, the future reward receives greater emphasis relative to the immediate reward. QL is presented in Algorithm 1:
Algorithm 1 Q-learning.
 1: Initialize Q(s, a)
 2: for each episode do
 3:     Initialize state s
 4:     while s ≠ s_terminal do
 5:         Choose action a from state s
 6:         Take action a
 7:         Observe reward r
 8:         Observe next state s'
 9:         Update Q(s, a) using Equation (1)
10:         s ← s'
11:     end while
12: end for
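As an illustration, the following Python sketch implements the tabular Q-learning loop of Algorithm 1 and the update of Equation (1). The environment interface (reset, step, actions) and the epsilon-greedy policy are assumptions introduced only for this example and are not part of the original proposal.

import random
from collections import defaultdict

def q_learning(env, episodes, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning following Algorithm 1 and Equation (1).
    `env` is assumed to expose reset() -> state, step(action) -> (next_state,
    reward, done), and a list of discrete actions in env.actions."""
    Q = defaultdict(float)                       # Q(s, a), initialized to 0

    def choose_action(s):
        # epsilon-greedy policy over the current Q-values (illustrative choice)
        if random.random() < epsilon:
            return random.choice(env.actions)
        return max(env.actions, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = env.reset()                          # initialize state s
        done = False
        while not done:                          # until s_terminal is reached
            a = choose_action(s)                 # choose and take action a
            s_next, r, done = env.step(a)        # observe reward r and next state s'
            best_next = max(Q[(s_next, a2)] for a2 in env.actions)
            # Equation (1): off-policy update using the max over next actions
            Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + gamma * best_next)
            s = s_next                           # s <- s'
    return Q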

2.3. State-Action-Reward-State-Action

This is an RL method that uses generalized policy iteration, which consists of two processes performed simultaneously that interact with each other: one evaluates the value function under the current policy while, at the same time, the other improves the current policy. These two processes complement each other in each iteration, but neither needs to be completed before the next one begins.
The learning agent learns the value function derived from the policy currently in use. To understand how it works, the first step is to learn an action-value function instead of a state-value function. In particular, for this on-policy method, we must estimate Q^π(s, a) for the current policy π and all states s and actions a.
To understand the algorithm, let us consider the transitions as pairs of values, from state–action pair to state–action pair, where the values of the state–action pairs are learned. These cases are identical: both are Markov chains with a reward process. The theorems that ensure convergence of state values also apply to the corresponding algorithm for action values, using Equation (2):
Q_{new}(s_t, a_t) = (1 - \alpha) \cdot Q_{old}(s_t, a_t) + \alpha \cdot [ r_n + \gamma \cdot Q(s_{t+1}, a_{t+1}) ]   (2)
After each transition, the state is updated until a terminal state is reached. When a state s_{t+1} is terminal, Q(s_{t+1}, a_{t+1}) is defined as zero. Each transition is composed of five events: s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1} (State–Action–Reward–State–Action), giving the SARSA algorithm its name. The procedure is shown in Algorithm 2:
Algorithm 2 SARSA.
 1: Initialize Q(s, a)
 2: for each episode do
 3:     Initialize state s
 4:     Choose action a from state s
 5:     while s ≠ s_terminal do
 6:         Take action a
 7:         Observe reward r
 8:         Observe next state s'
 9:         Choose next action a' from next state s'
10:         Update Q(s, a) using Equation (2)
11:         s ← s'
12:         a ← a'
13:     end while
14: end for
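The previous sketch can be adapted to SARSA with one substantive change: the update of Equation (2) uses the Q-value of the action actually chosen for the next state rather than the maximum over actions. As before, the environment interface and the epsilon-greedy policy are illustrative assumptions.

import random
from collections import defaultdict

def sarsa(env, episodes, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular SARSA following Algorithm 2 and Equation (2); same assumed
    environment interface as in the Q-learning sketch."""
    Q = defaultdict(float)

    def choose_action(s):
        if random.random() < epsilon:
            return random.choice(env.actions)
        return max(env.actions, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = env.reset()
        a = choose_action(s)                     # action chosen before the loop
        done = False
        while not done:
            s_next, r, done = env.step(a)        # take a, observe r and s'
            a_next = choose_action(s_next)       # choose a' from s' (on-policy)
            # Equation (2): uses Q(s', a') of the action actually chosen,
            # not the max over next actions as in Q-learning
            Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + gamma * Q[(s_next, a_next)])
            s, a = s_next, a_next                # s <- s', a <- a'
    return Q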

2.4. Backward Q-Learning

Backward Q-learning is another RL technique. Although its name is similar to QL, it does not rely only on the ordinary Q-function update (Equation (1)); this time, a backward update is added, hence the name backward Q-learning.
In this structure, the action is directly affected, while the policy is indirectly affected. As the agent increases its interaction with the environment, the agent's precise knowledge also increases. Due to this structure, the agent can improve the learning speed, balance the explore–exploit dilemma, and converge to the global minimum by using the previous states, actions, and information of an episode. The states the agent went through, the actions it chose, and the rewards it acquired in an episode are recorded, and this information is then used to update the Q-function again.
When the agent reaches the target state in the current episode, the produced data are used to update the Q-function backward. For example, state s_0 is defined as the initial state and state s_n as the terminal state. The agent updates the Q-function N times from the initial state s_0 to the terminal state s_n in an episode thanks to the recorded events (s, a, r, s'). Therefore, the one-step update of Equation (1) is redefined as:
Q_{new}(s_t^i, a_t^i) \leftarrow (1 - \alpha) \cdot Q(s_t^i, a_t^i) + \alpha \cdot [ r_{t+1}^i + \gamma \cdot \max_a Q(s_{t+1}^i, a_{t+1}^i) ]   (3)
where i = 1, 2, ..., N indexes the number of times the Q-function is updated in the current episode. In turn, the agent simultaneously records the four events in M_i, represented mathematically by Equation (4):
M_i \leftarrow \{ s_t^i, a_t^i, r_t^i, s_{t+1}^i \}   (4)
Once the agent reaches the terminal state, it performs the backward update of the Q-function based on the information recorded in Equation (4), as shown in Equation (5):
Q_{new}(s_t^j, a_t^j) \leftarrow (1 - \alpha_b) \cdot Q(s_t^j, a_t^j) + \alpha_b \cdot [ r_{t+1}^j + \gamma_b \cdot \max Q(s_{t+1}^j, a_{t+1}^j) ]   (5)
where j = N, N-1, N-2, ..., 1, and α_b and γ_b are the learning and discount factors, respectively, for the backward update of the Q-function. Algorithm 3 is added for a better understanding of the above.
Algorithm 3 Backward Q-learning.
 1: Initialize arbitrarily all Q(s, a) and M, and set α and γ
 2: for each episode do
 3:     Choose initial state s_t
 4:     Choose an action a_t from state s_t
 5:     while N ≥ 1 do
 6:         for each step in the episode do
 7:             Execute the selected action a_t^i in the environment
 8:             Observe reward r_{t+1}
 9:             Observe new state s_{t+1}^i
10:             Choose an action a_{t+1}^i from state s_{t+1}^i
11:             Record the four events in M_i ← {s_t^i, a_t^i, r_t^i, s_{t+1}^i}
12:             Update Q(s_t^i, a_t^i) using Equation (3)
13:             s_t^i ← s_{t+1}^i
14:             a_t^i ← a_{t+1}^i
15:             i ← i + 1
16:         end for
17:     end while
18:     for j = N to 1 do
19:         Backward update Q(s_t^j, a_t^j) using Equation (5)
20:     end for
21:     Initialize all M values
22: end for
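A minimal sketch of backward Q-learning following Algorithm 3 and Equations (3)–(5) is given below: the episode memory M is replayed in reverse once the episode ends. The environment interface and the epsilon-greedy exploration policy remain illustrative assumptions.

import random
from collections import defaultdict

def backward_q_learning(env, episodes, alpha=0.1, gamma=0.9,
                        alpha_b=0.1, gamma_b=0.9, epsilon=0.1):
    """Backward Q-learning sketch: a forward pass records every transition in M,
    then the Q-function is re-updated backward (Equation (5)) at episode end."""
    Q = defaultdict(float)

    def choose_action(s):
        if random.random() < epsilon:
            return random.choice(env.actions)
        return max(env.actions, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        M = []                                    # memory of (s, a, r, s') events
        s = env.reset()
        a = choose_action(s)
        done = False
        while not done:                           # forward pass through the episode
            s_next, r, done = env.step(a)
            a_next = choose_action(s_next)
            M.append((s, a, r, s_next))           # record the four events, Equation (4)
            best_next = max(Q[(s_next, a2)] for a2 in env.actions)
            # forward update, Equation (3)
            Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + gamma * best_next)
            s, a = s_next, a_next
        for s_j, a_j, r_j, s_next_j in reversed(M):   # backward pass, j = N..1
            best_next = max(Q[(s_next_j, a2)] for a2 in env.actions)
            # backward update, Equation (5), with its own factors alpha_b, gamma_b
            Q[(s_j, a_j)] = ((1 - alpha_b) * Q[(s_j, a_j)]
                             + alpha_b * (r_j + gamma_b * best_next))
    return Q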

2.5. Metaheuristics

MHs are used to solve optimization problems and can be considered strategies that guide and modify other heuristics to produce solutions beyond those normally generated in a search for local optimality [61]. Blum and Roli [6] mention two main components of any metaheuristic algorithm: diversification and intensification, also known as exploration and exploitation. Exploration means generating diverse solutions to explore the search space on a global scale, while exploitation focuses the search on a local region, knowing that a good solution can be found in this neighborhood. Global optimization can be achieved with a good combination of these two components, as a good balance while selecting the best solutions improves the convergence rate of the algorithms. Choosing the best solutions helps ensure that the solutions converge to the optimum, while diversification through randomization allows the search to move from local optima toward the whole search space, increasing the diversity of the solutions.
The great advantage of MHs is that they are able to generate near-optimal solutions in reduced computational times as opposed to exact methods and, in addition, they are able to adapt to the problem in contrast to heuristic methods [7].
A high percentage of MHs are nature-inspired, mainly because their development is based on some abstraction of nature. Their taxonomy can be defined in various ways and at various levels of the MH; at a first level, they can be classified as trajectory-based or population-based. A popular classification is the one presented in [62,63], which decomposes the population-based ones into four main categories: physics-based, human-based, evolutionary-based, and swarm-based (Figure 2). Furthermore, the taxonomy presented in [64] goes deeper into the main components that conform these algorithms: solution evaluation, parameters, encoding, initialization of the agents or the population, management of the population, operators, and finally, local search.
In this regard, and going deeper into the metaheuristic components and behavior, a metaheuristic can be represented algorithmically as a triple-nested loop. The first loop corresponds to the iterations performed during the optimization process, the second loop to the solutions held by the agents, and the third loop to the dimensions associated with the problem. It is necessary to mention that within the third loop there is a Δ that is characteristic of each MH. Algorithm 4 presents this general scheme.
Algorithm 4 Discrete general scheme of metaheuristics.
 1: Random initialization
 2: for iteration (t) do
 3:     for solution (i) do
 4:         for dimension (d) do
 5:             X_{i,d}^{t+1} = X_{i,d}^{t} + Δ
 6:         end for
 7:     end for
 8: end for
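For illustration, the triple-nested loop of Algorithm 4 can be sketched in Python as follows; the function delta stands for the MH-specific perturbation Δ, and the initialization range and fitness handling are assumptions made only for this example.

import random

def generic_metaheuristic(fitness, n_solutions, n_dims, iterations, delta):
    """Minimal sketch of Algorithm 4: the triple-nested loop common to
    population-based MHs. `delta(X, i, d, t)` is a placeholder for the
    perturbation that is characteristic of each metaheuristic."""
    # random initialization of the population
    X = [[random.uniform(-1.0, 1.0) for _ in range(n_dims)]
         for _ in range(n_solutions)]
    for t in range(iterations):            # iterations of the optimization process
        for i in range(n_solutions):       # solutions (agents) of the population
            for d in range(n_dims):        # dimensions of the problem
                X[i][d] = X[i][d] + delta(X, i, d, t)
    return min(X, key=fitness)             # best solution found (minimization)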

2.6. Hybridizations

Hybridizations between MHs and other approaches have been thoroughly studied in the literature; among these approaches is ML, including RL. Various authors propose taxonomies and types of interaction, such as [65,66,67,68]. According to Song et al. [65], optimization and ML approaches interact in four ways:
  • Optimization supports ML.
  • ML supports optimization.
  • ML supports ML.
  • Optimization supports optimization.
Interaction number 2 is further structured in the classification presented in [66,67], where ML can support optimization at the problem level, for example by replacing objective functions or constraints that are costly to evaluate; at a low level, i.e., within the components or operators of the MH; and finally at a high level, where ML techniques can be used to choose between different MHs or between components of an MH.

2.7. Binarization

As mentioned above, a high percentage of population-based metaheuristics are continuous in nature; therefore, they are not suitable for solving binary optimization problems directly. For this reason, an adaptation to the 0 and 1 domain is necessary. The two-step sequential mechanism and the BSS used in other works are detailed below [41,42].

Two-Step

Within the literature, the transfer from continuous to binary through the two-step mechanism is one of the most common. Its name is due to its sequential mechanism: the first step consists of mapping the reals to a bounded interval, i.e., [0, 1], through transfer functions; then, in the second step, by means of binarization rules, the bounded value is transformed into a value in {0, 1}. In the last few years, several transfer functions have been presented. Among the most common are the S- and V-type functions [49]; others exist, such as the X [69,70], Z [71], U [72,73], and Q [74] types, and in the literature we can also find versions that change during the iterative process by means of a decreasing parameter [75,76,77,78].
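A minimal sketch of the two-step mechanism is shown below, using a sigmoid as a representative S-shaped transfer function, a commonly used V-shaped function, and the standard (probabilistic) binarization rule; these specific choices are illustrative and do not cover the full set of functions and rules listed in Table 1.

import math
import random

def s_shaped(x):
    """Representative S-shaped transfer function (Step 1): maps a real to [0, 1]."""
    return 1.0 / (1.0 + math.exp(-x))

def v_shaped(x):
    """Representative V-shaped transfer function (Step 1)."""
    return abs(math.erf(math.sqrt(math.pi) / 2.0 * x))

def standard_rule(p):
    """Standard binarization rule (Step 2): 1 with probability p, else 0."""
    return 1 if random.random() < p else 0

def two_step(x_continuous, transfer=s_shaped, rule=standard_rule):
    """Two-step binarization of one continuous solution vector."""
    return [rule(transfer(x)) for x in x_continuous]

# Example: binarize a continuous position produced by a metaheuristic move
print(two_step([0.3, -1.7, 2.4, 0.0]))   # e.g., [1, 0, 1, 0] (stochastic output)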

2.8. Binarization Scheme Selector

As mentioned in Section 1, other works have proposed the idea of combining RL with MHs, following the framework proposed by Talbi in [66,67]. From this combination, an intelligent selector based on the two-step technique mentioned above was born, called BSS. The breakthrough of this smart selector is that it is able to learn autonomously, through trial and error, which transfer function (first step) and binarization rule (second step) best fit the binarization when obtaining the solutions of the optimization process. In the proposals of [42], there are S-type and V-type transfer functions, and the binarization rules are standard, static, complement, elitist, and elitist roulette, resulting in a total of 40 possible combinations; the intelligent selector determines which combination to use in each iteration. Figure 3 shows, in general terms, how the BSS is applied, and Algorithm 5 explains its operation.
Algorithm 5 Binarization scheme selector.
 1: Initialize a random swarm
 2: Initialize Q-table
 3: for iteration (t) do
 4:     Select action a_t for s_t from the Q-table
 5:     for solution (i) do
 6:         for dimension (d) do
 7:             X_{i,d}^{t+1} ← X_{i,d}^{t} + Δ(a_t)
 8:         end for
 9:     end for
10:     Get immediate reward r_t
11:     Get the maximum Q-value for the next state s_{t+1}
12:     Update Q-table using Equation (1) or Equation (2)
13:     Update the current state s_t ← s_{t+1}
14: end for
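The selection and update mechanism of Algorithm 5 can be sketched as follows. The two states ("exploration" and "exploitation"), the epsilon-greedy selection, and the parameter values are assumptions made for illustration only; the reward computation and the metaheuristic movement itself are omitted.

import random

# Assumed, for illustration: the states used by the BSS and an indexed catalog
# of binarization schemes (transfer function, binarization rule) pairs.
STATES = ["exploration", "exploitation"]

def select_action(q_table, state, n_actions, epsilon=0.1):
    """Epsilon-greedy selection of a binarization scheme for the current state."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    row = q_table[state]
    return max(range(n_actions), key=lambda a: row[a])

def bss_update(q_table, state, action, reward, next_state, alpha=0.1, gamma=0.4):
    """One Q-table update of the BSS, in the style of Equation (1) / Algorithm 5."""
    best_next = max(q_table[next_state])
    q_table[state][action] = ((1 - alpha) * q_table[state][action]
                              + alpha * (reward + gamma * best_next))

# 80 actions = 16 transfer functions x 5 binarization rules
n_actions = 80
q_table = {s: [0.0] * n_actions for s in STATES}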

3. The Proposal: Binarization Scheme Selector

As mentioned above, this work proposes to apply and integrate a new intelligent operator, the BQSA [54], into the BSS. This time, it is necessary to modify the original BSS operation to account for the way the BQSA operates, i.e., its memory. We detail the modification in Algorithm 6, which is very similar to Algorithm 5 except for some differences, highlighted in Lines 14–17. First, we initialize the Q-table, M (where the events are recorded), N (the number of times the Q-function will be updated), and the swarm at random. Then, at each iteration, a binarization scheme for the "exploration" or "exploitation" state is selected from the Q-table and applied. Subsequently, we obtain the reward from applying the binarization scheme and then update the Q-values in the Q-table. Finally, as explained in Section 2.4, we record the four events (s_t^i, a_t^i, r_t^i, s_{t+1}^i); once the iteration counter reaches the value of N, we perform the backward update of the Q-table.
Algorithm 6 Binarization scheme selector modified.
 1: Initialize a random swarm
 2: Initialize Q-table, M, and N
 3: for iteration (t) do
 4:     Select action a_t for s_t from the Q-table
 5:     for solution (i) do
 6:         for dimension (d) do
 7:             X_{i,d}^{t+1} ← X_{i,d}^{t} + Δ(a_t)
 8:         end for
 9:     end for
10:     Get immediate reward r_t
11:     Get the maximum Q-value for the next state s_{t+1}
12:     Update Q-table using Equation (3)
13:     Update the current state s_t ← s_{t+1}
14:     Record the four events in M
15:     if t = N then
16:         Backward update Q-table using Equation (5)
17:     end if
18: end for
Another modification made to the BSS is the number of transfer functions with which it operates. In its original version, eight functions are implemented in total: four S-shaped [49] and four V-shaped [79] transfer functions (see Figure 4a,b). The newly added families are the X-shaped [69,70] and Z-shaped [71,80] types. Figure 4c,d and Table 1 show the details of the new transfer functions and the original ones.

4. Experimental Results

To validate the performance of our proposal, a comparison of eight different versions of GWO, SCA, and WOA was carried out. Three of these versions incorporate the BQSA, QL, and SARSA, selecting among 80 binarization schemes (80aBQSA, 80aQL, 80aSA). Another three versions incorporate the BQSA, QL, and SARSA, selecting among 40 binarization schemes (40aBQSA, 40aQL, 40aSA). Finally, the last two use fixed binarization schemes. Regarding the binarization schemes and versions, it is necessary to mention that the schemes, also called actions, are composed of the product {Transfer Functions × Binarization Rules}; in this case, we have four families of transfer functions with 4 functions each, 16 in total, and 5 binarization rules: 16 × 5 = 80. Regarding the versions, 40aQL and 40aSA are the original 40-action versions (families S and V) applied to QL [41] and SARSA [42], respectively; in order to assess the performance of our proposal, we replicated QL and SARSA, but this time with 80 actions, and ran the BQSA with both 40 and 80 actions. Finally, of the two static versions, the first, BCL, uses V4-Elitist [48], and the second, MIR, uses V4-Complement [49].
The benchmark instances of the set covering problem solved are those proposed in Beasley’s OR-Library [81]. In particular, we solved 45 instances delivered in this library.
The programming language used in the construction of the algorithms was Python 3.7, and the experiments were executed with the free services of Google Colaboratory [82]. The results were stored and processed in databases provided by the Google Cloud Platform. The authors of [48] suggest making 40,000 calls to the objective function; to this end, we used a population of 40 individuals and 1000 iterations for all GWO, SCA, and WOA runs. We performed 31 independent executions for each instance. All parameters used for GWO, SCA, WOA, BQSA, QL, and SARSA are detailed in Table 2.
The results obtained from the experimentation process are summarized in Table 3, Table 4 and Table 5, which present the results for each of the eight versions and the 45 benchmark instances. The first row gives the names of the versions. In the second row, the column headers are as follows: the first column names the OR-Library instance used (Inst.), and the second column the optimal value known for each instance (Opt.); for each version, three columns then report the best result obtained in the 31 independent runs (Best), the average of these 31 runs (Avg), and the relative percentage deviation (RPD), which is defined in Equation (6). The last row presents the average of each column to facilitate the comparison between versions.
RPD = 100 \cdot \frac{Best - Opt}{Opt}   (6)
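For clarity, Equation (6) corresponds to the following computation; the numbers used are only a worked example and do not come from the reported tables.

def rpd(best, opt):
    """Relative percentage deviation, Equation (6)."""
    return 100.0 * (best - opt) / opt

# Example: a best fitness of 216 on an instance whose known optimum is 211
print(round(rpd(216, 211), 2))   # 2.37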

4.1. Convergence and Exploration–Exploitation Charts

During the experimentation, several data from the optimization process were recorded, such as the fitness obtained at each iteration and the diversity among individuals, as presented in [41,42], in order to analyze their behavior along the iterations. One graphical representation is the convergence plots shown in Figure 5, Figure 6 and Figure 7, where the X-axis corresponds to the 1000 iterations and the Y-axis presents the best fitness obtained up to that iteration. The plots correspond to the best runs for representative instances; the fitness value found is recorded in the subtitle of each plot, while the known optimum is presented in the title of each set of plots in order to make the comparison simpler. The other representations are the exploration and exploitation plots, presented in Figure 8, Figure 9 and Figure 10, where the X-axis shows the iterations and the Y-axis displays the exploration (XPL) and exploitation (XPLT).

4.2. Statistical Results

To validate the comparison between averages, it is necessary to determine, by means of the corresponding statistical test, whether the difference between the results is significant. For this, we used the Wilcoxon–Mann–Whitney test [83], comparing all versions against each other with a significance level of 0.05 for each of the MHs used; these results are reported in Table 6, Table 7 and Table 8. The tables are structured as follows: the first column lists the techniques used, and the following columns present the average p-values over the 45 instances compared with the version indicated in the column title. If the value of a comparison is greater than or equal to 0.05, it is presented as "≥0.05"; when the comparison is against the same version, the symbol "-" is shown; and the values have been rounded to two decimal places.
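As an illustration, one such pairwise comparison can be computed with SciPy's Mann–Whitney U implementation as sketched below; the fitness samples shown are hypothetical and do not correspond to the values reported in Table 6, Table 7 and Table 8.

from scipy.stats import mannwhitneyu

# Hypothetical fitness samples from independent runs of two versions on one
# instance; the real data come from the experiments summarized in Tables 3-5.
runs_80aBQSA = [211, 212, 212, 213, 212, 211, 214, 212, 213, 212]
runs_MIR     = [216, 217, 215, 218, 216, 217, 216, 219, 217, 216]

stat, p_value = mannwhitneyu(runs_80aBQSA, runs_MIR, alternative="two-sided")
print(f"p-value = {p_value:.4f}")  # difference is significant if p-value < 0.05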

5. Conclusions

The increase in computing capacity at more accessible costs has allowed the democratization of the use of machine learning, which has generated an increase in research in different areas; every day, the use of these techniques is more common in both academia and industry. The use of machine learning to improve the metaheuristic search process is a field in constant development, where several studies seek to validate the use of ML as a process improvement. The literature presents two explicit schemes for these hybridizations: the high-level ones, where we find the hyperheuristics, and the low-level ones, as is the case of this research, where the ML technique is simply another operator of the MH.
In this paper, a new intelligent operator is presented for use in the binarization scheme selection (BSS), able to adapt any continuous MH to work in the binary domain. The main contributions are the implementation of a BSS capable of operating in any MH, the use of a new intelligent operator, the BQSA, and the increase of the possible actions of the intelligent operator from 40 to 80. The increase in actions comes from the incorporation of two novel transfer function families that are rarely used in binarization schemes, namely the Z-shaped and X-shaped families.
The two-step binarization schemes are the most-used binarization methods in the literature [47], both for their versatility in programming and their low computational cost. For this, it is necessary to choose a transfer function (Step 1) and a binarization rule (Step 2), but there are many different ways to binarize, since there is a combinatorial problem between the options of Step 1 and Step 2, reaching the extreme of having infinite alternatives [75,78]. With so many options, it is necessary to choose intelligently among all the possible combinations.
In the literature, to solve the problem of choosing the binarization scheme, in most cases a combination that has presented good performance is chosen, while in some more exhaustive works the combination is validated through an extensive experimental analysis, as is the case in [48,49]. However, these works only confirm a good two-step combination for a given problem and set of instances, which is not necessarily replicable to other problems in the same domain. In this context, an intelligent scheme was proposed in [41] to select among different actions (two-step combinations), where, by means of QL, an action is chosen at each iteration and is rewarded or penalized according to its performance, this being a hybridization where a machine learning technique supports a metaheuristic. There are other works where the BSS is used with different intelligent selectors, such as QL [41] and SARSA [42], but always with the most-used set of 40 actions (8 choices of V-shaped and S-shaped transfer functions and five binarization rules), which have presented diverse favorable performances. In this work, besides replicating these experiments, we analyzed the performance of 80 actions (16 options of V-shaped, S-shaped, X-shaped, and Z-shaped transfer functions, and 5 binarization rules), with the objective of validating that, by having a wider range of actions, the intelligent selector is able to choose in a better way, avoiding the bias of a reduced action set, besides directing the research so that the intelligent selector has more options to choose from.
In response to this proposal, an extensive set of experiments was carried out and detailed in Section 4, where eight different versions were compared across three MHs, solving 45 different instances of the set covering problem, all of them executed in 31 independent runs in order to perform the respective statistical tests. The versions containing 80 actions (80aBQSA, 80aQL, and 80aSA) presented competitive performance, obtaining a similar and, in some cases, better average RPD, but with differences that were not significant under the respective statistical tests; therefore, we cannot conclude that they perform better than the versions with 40 actions (40aBQSA, 40aQL, and 40aSA). However, for the MHs WOA and SCA, there were significant differences when compared to the static version MIR (V4-Complement), which had the worst performance of the eight versions. After analyzing the convergence plots, we can observe that, although the results are diverse, the static schemes tend to present early convergence compared to the dynamic versions (40 and 80 actions), which is related to a good search process that does not get trapped early in local optima. Along with this, the exploration and exploitation plots give us a different perspective on the behavior during the search process and provide information on the diversity between individuals, as defined in [84]; from these plots, we can conclude that the versions with 80 actions tend to have a more predominant tendency to exploit. On the other hand, we confirmed that the recommendations of the literature will not necessarily be applicable to every problem, as is the case of using the MIR combination, which was validated for another problem, confirming what is stated in the no-free-lunch theorem [13].
During the theoretical and experimental development of this work, new research questions have arisen, which remain as possible future work. We highlight the need to address the use of variable transfer functions [75,78], in order to take advantage of the richness of being able to vary the transfer function under a continuous parameter; this, in turn, generates a problem to solve, since the BQSA, SARSA, QL, and other temporal difference techniques are defined to choose among a discrete set of actions and do not directly allow choosing actions in continuous domains. It is also necessary to study the influence of transfer functions versus binarization rules, i.e., to use a variety of actions composed of either individual transfer functions or individual binarization rules. Along with answering the above questions, we contemplate evaluating other MHs from the literature on other binary-domain problems in order to confirm that the incorporation of reinforcement learning techniques generates the same effect on them. Another area to investigate is the behavior of this hybridization under smaller action subsets in order to evaluate the impact of each of the combinations.

Author Contributions

M.B.-R., J.L.-R. and F.C.-C.: conceptualization, investigation, methodology, writing—review and editing, project administration, resources, formal analysis. B.C., R.S. and J.G.: writing—review and editing, investigation, validation, funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

Broderick Crawford and Ricardo Soto are supported by Grant ANID/FONDECYT/REGULAR/1210810. Marcelo Becerra-Rozas is supported by the National Agency for Research and Development (ANID)/Scholarship Program/DOCTORADO NACIONAL/2021-21210740. Felipe Cisternas-Caneo is supported by Beca INF-PUCV.

Data Availability Statement

The code used can be found in: https://github.com/imaberro/BSS-BQSA-80-40-actions, accessed on 12 November 2022.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Acronyms
MH	Metaheuristics
NFL	No-free-lunch theorem
SCA	Sine–cosine algorithm
ACO	Ant colony optimization
BSS	Binarization scheme selection
BQSA	Backward Q-learning
ML	Machine learning
RL	Reinforcement learning
QL	Q-learning
80aBQSA	Version of backward Q-learning with 80 actions
80aQL	Version of Q-learning with 80 actions
80aSA	Version of SARSA with 80 actions
40aBQSA	Version of backward Q-learning with 40 actions
40aQL	Version of Q-learning with 40 actions
40aSA	Version of SARSA with 40 actions
MIR	Static version of two-step, using V4 and complement
BCL	Static version of two-step, using V4 and elitist
S1–S4	S-shaped transfer functions, Types 1–4
V1–V4	V-shaped transfer functions, Types 1–4
X1–X4	X-shaped transfer functions, Types 1–4
Z1–Z4	Z-shaped transfer functions, Types 1–4
XPL	Exploration
XPLT	Exploitation
RPD	Relative percentage deviation
Symbols
Q_new(s_t, a_t)	New Q-value obtained for state s_t and action a_t
Q_old(s_t, a_t)	Old Q-value obtained for state s_t and action a_t
s_t	State at time t
r_t	Reward at time t
a_t	Action at time t
α	Learning factor
max Q(s_{t+1}, a_{t+1})	Maximum Q-value obtained for state s_{t+1} and action a_{t+1}
γ	Discount factor
Q(s, a)	Q-value obtained for state s and action a
s	State
s_terminal	Terminal state
a	Action
r	Reward
s'	Next state
π	Policy
Q_new(s_t^i, a_t^i)	New Q-value obtained for state s_t^i and action a_t^i, at iteration i
r_{t+1}^i	Reward at iteration i and time t + 1
M_i	Memory of backward Q-learning
a_t^i	Action at time t, at iteration i
Q(s_t^j, a_t^j)	Q-value obtained for state s_t^j and action a_t^j, at iteration j
N	Number of times the Q-function will be updated
X_{i,d}^{t+1}	Population X at iteration t + 1, individual i, and dimension d
X_{i,d}^{t}	Population X at iteration t, individual i, and dimension d
Δ	Perturbation or dimensional movement, which depends on each metaheuristic
T(d_w^j)	Transfer function result calculated for dimension d, at iteration j, for individual w

References

  1. Hanaka, T.; Kiyomi, M.; Kobayashi, Y.; Kobayashi, Y.; Kurita, K.; Otachi, Y. A Framework to Design Approximation Algorithms for Finding Diverse Solutions in Combinatorial Problems. arXiv 2022, arXiv:2201.08940. [Google Scholar]
  2. Sun, Y.; Jin, X.; Pukkala, T.; Li, F. Two-level optimization approach to tree-level forest planning. For. Ecosyst. 2022, 9, 100001. [Google Scholar] [CrossRef]
  3. Ait Saadi, A.; Soukane, A.; Meraihi, Y.; Benmessaoud Gabis, A.; Mirjalili, S.; Ramdane-Cherif, A. UAV Path Planning Using Optimization Approaches: A Survey. Arch. Comput. Methods Eng. 2022, 29, 4233–4284. [Google Scholar] [CrossRef]
  4. Hoang, N.D.; Huynh, T.C.; Tran, X.L.; Tran, V.D. A Novel Approach for Detection of Pavement Crack and Sealed Crack Using Image Processing and Salp Swarm Algorithm Optimized Machine Learning. Adv. Civ. Eng. 2022, 2022, 9193511. [Google Scholar] [CrossRef]
  5. Guo, T.; Han, C.; Tang, S.; Ding, M. Solving combinatorial problems with machine learning methods. In Nonlinear Combinatorial Optimization; Springer: Cham, Switzerland, 2019; pp. 207–229. [Google Scholar]
  6. Blum, C.; Roli, A. Metaheuristics in combinatorial optimization: Overview and conceptual comparison. Acm Comput. Surv. 2003, 35, 268–308. [Google Scholar] [CrossRef]
  7. Talbi, E.G. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009; Volume 74. [Google Scholar]
  8. Van Laarhoven, P.J.; Aarts, E.H. Simulated annealing. In Simulated Annealing: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 1987; pp. 7–15. [Google Scholar]
  9. Glover, F.; Laguna, M. Tabu search. In Handbook of Combinatorial Optimization; Springer: Berlin/Heidelberg, Germany, 1998; pp. 2093–2229. [Google Scholar]
  10. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  11. Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
  12. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  13. Ho, Y.C.; Pepyne, D.L. Simple explanation of the no-free-lunch theorem and its implications. J. Optim. Theory Appl. 2002, 115, 549–570. [Google Scholar] [CrossRef]
  14. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  15. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  16. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  17. Emary, E.; Zawbaa, H.M.; Grosan, C.; Hassenian, A.E. Feature subset selection approach by gray-wolf optimization. In Proceedings of the Afro-European Conference for Industrial Advancement; Springer: Cham, Switzerland, 2015; pp. 1–13. [Google Scholar]
  18. Mosavi, M.R.; Khishe, M.; Ghamgosar, A. Classification of sonar data set using neural network trained by gray wolf optimization. Neural Netw. World 2016, 26, 393. [Google Scholar] [CrossRef]
  19. Eswaramoorthy, S.; Sivakumaran, N.; Sekaran, S. Grey wolf optimization based parameter selection for support vector machines. COMPEL—Int. J. Comput. Math. Electr. Electron. Eng. 2016, 35, 1513–1523. [Google Scholar] [CrossRef]
  20. Li, S.X.; Wang, J.S. Dynamic modeling of steam condenser and design of PI controller based on grey wolf optimizer. Math. Probl. Eng. 2015, 2015, 120975. [Google Scholar] [CrossRef] [Green Version]
  21. Wong, L.I.; Sulaiman, M.; Mohamed, M.; Hong, M.S. Grey Wolf Optimizer for solving economic dispatch problems. In Proceedings of the 2014 IEEE International Conference on Power and Energy (PECon), Kuching, Malaysia, 1–3 December 2014; pp. 150–154. [Google Scholar]
  22. Tsai, P.W.; Nguyen, T.T.; Dao, T.K. Robot path planning optimization based on multiobjective grey wolf optimizer. In Proceedings of the International Conference on Genetic and Evolutionary Computing; Springer: Cham, Switzerland, 2016; pp. 166–173. [Google Scholar]
  23. Lu, C.; Gao, L.; Li, X.; Xiao, S. A hybrid multi-objective grey wolf optimizer for dynamic scheduling in a real-world welding industry. Eng. Appl. Artif. Intell. 2017, 57, 61–79. [Google Scholar] [CrossRef]
  24. Bentouati, B.; Chaib, L.; Chettih, S. A hybrid whale algorithm and pattern search technique for optimal power flow problem. In Proceedings of the 2016 8th International Conference on Modelling, Identification and Control (ICMIC), Algiers, Algeria, 15–17 November 2016; pp. 1048–1053. [Google Scholar]
  25. Touma, H.J. Study of the economic dispatch problem on IEEE 30-bus system using whale optimization algorithm. Int. J. Eng. Technol. Sci. 2016, 3, 11–18. [Google Scholar] [CrossRef]
  26. Yin, X.; Cheng, L.; Wang, X.; Lu, J.; Qin, H. Optimization for hydro-photovoltaic-wind power generation system based on modified version of multi-objective whale optimization algorithm. Energy Procedia 2019, 158, 6208–6216. [Google Scholar] [CrossRef]
  27. Abd El Aziz, M.; Ewees, A.A.; Hassanien, A.E. Whale optimization algorithm and moth-flame optimization for multilevel thresholding image segmentation. Expert Syst. Appl. 2017, 83, 242–256. [Google Scholar] [CrossRef]
  28. Mafarja, M.M.; Mirjalili, S. Hybrid whale optimization algorithm with simulated annealing for feature selection. Neurocomputing 2017, 260, 302–312. [Google Scholar] [CrossRef]
  29. Tharwat, A.; Moemen, Y.S.; Hassanien, A.E. Classification of toxicity effects of biotransformed hepatic drugs using whale optimized support vector machines. J. Biomed. Inform. 2017, 68, 132–149. [Google Scholar] [CrossRef]
  30. Zhao, H.; Guo, S.; Zhao, H. Energy-related CO2 emissions forecasting using an improved LSSVM model optimized by whale optimization algorithm. Energies 2017, 10, 874. [Google Scholar] [CrossRef] [Green Version]
  31. Banerjee, A.; Nabi, M. Re-entry trajectory optimization for space shuttle using sine-cosine algorithm. In Proceedings of the 2017 8th International Conference on Recent Advances in Space Technologies (RAST), Istanbul, Turkey, 19–22 June 2017; pp. 73–77. [Google Scholar]
  32. Sindhu, R.; Ngadiran, R.; Yacob, Y.M.; Zahri, N.A.H.; Hariharan, M. Sine–cosine algorithm for feature selection with elitism strategy and new updating mechanism. Neural Comput. Appl. 2017, 28, 2947–2958. [Google Scholar] [CrossRef]
  33. Mahdad, B.; Srairi, K. A new interactive sine cosine algorithm for loading margin stability improvement under contingency. Electr. Eng. 2018, 100, 913–933. [Google Scholar] [CrossRef]
  34. Padmanaban, S.; Priyadarshi, N.; Holm-Nielsen, J.B.; Bhaskar, M.S.; Azam, F.; Sharma, A.K.; Hossain, E. A novel modified sine-cosine optimized MPPT algorithm for grid integrated PV system under real operating conditions. IEEE Access 2019, 7, 10467–10477. [Google Scholar] [CrossRef]
  35. Gonidakis, D.; Vlachos, A. A new sine cosine algorithm for economic and emission dispatch problems with price penalty factors. J. Inf. Optim. Sci. 2019, 40, 679–697. [Google Scholar] [CrossRef]
  36. Abd Elfattah, M.; Abuelenin, S.; Hassanien, A.E.; Pan, J.S. Handwritten arabic manuscript image binarization using sine cosine optimization algorithm. In Proceedings of the International Conference on Genetic and Evolutionary Computing; Springer: Cham, Switzerland, 2016; pp. 273–280. [Google Scholar]
  37. Shreem, S.S.; Turabieh, H.; Al Azwari, S.; Baothman, F. Enhanced binary genetic algorithm as a feature selection to predict student performance. Soft Comput. 2022, 26, 1811–1823. [Google Scholar] [CrossRef]
  38. Ma, W.; Zhou, X.; Zhu, H.; Li, L.; Jiao, L. A two-stage hybrid ant colony optimization for high-dimensional feature selection. Pattern Recognit. 2021, 116, 107933. [Google Scholar] [CrossRef]
  39. García, J.; Crawford, B.; Soto, R.; Castro, C.; Paredes, F. A k-means binarization framework applied to multidimensional knapsack problem. Appl. Intell. 2018, 48, 357–380. [Google Scholar] [CrossRef]
  40. García, J.; Moraga, P.; Valenzuela, M.; Crawford, B.; Soto, R.; Pinto, H.; Peña, A.; Altimiras, F.; Astorga, G. A Db-Scan binarization algorithm applied to matrix covering problems. Comput. Intell. Neurosci. 2019, 2019, 3238574. [Google Scholar] [CrossRef] [Green Version]
  41. Crawford, B.; Soto, R.; Lemus-Romani, J.; Becerra-Rozas, M.; Lanza-Gutiérrez, J.M.; Caballé, N.; Castillo, M.; Tapia, D.; Cisternas-Caneo, F.; García, J.; et al. Q-learnheuristics: Towards data-driven balanced metaheuristics. Mathematics 2021, 9, 1839. [Google Scholar] [CrossRef]
  42. Lemus-Romani, J.; Becerra-Rozas, M.; Crawford, B.; Soto, R.; Cisternas-Caneo, F.; Vega, E.; Castillo, M.; Tapia, D.; Astorga, G.; Palma, W.; et al. A novel learning-based binarization scheme selector for swarm algorithms solving combinatorial problems. Mathematics 2021, 9, 2887. [Google Scholar] [CrossRef]
  43. Lai, X.; Hao, J.K.; Fu, Z.H.; Yue, D. Diversity-preserving quantum particle swarm optimization for the multidimensional knapsack problem. Expert Syst. Appl. 2020, 149, 113310. [Google Scholar] [CrossRef]
  44. Aytimur, A.; Babayigit, B. Binary Artificial Bee Colony Algorithms for {0–1} Advertisement Problem. In Proceedings of the 2019 6th International Conference on Electrical and Electronics Engineering (ICEEE), Istanbul, Turkey, 16–17 April 2019; pp. 91–95. [Google Scholar]
  45. Abdel-Basset, M.; Mohamed, R.; Elkomy, O.M.; Abouhawwash, M. Recent metaheuristic algorithms with genetic operators for high-dimensional knapsack instances: A comparative study. Comput. Ind. Eng. 2022, 166, 107974. [Google Scholar] [CrossRef]
  46. Jorquera, L.; Valenzuela, P.; Causa, L.; Moraga, P.; Villavicencio, G. A Percentile Firefly Algorithm an Application to the Set Covering Problem. In Proceedings of the Computer Science On-Line Conference; Springer: Cham, Switzerland, 2021; pp. 750–759. [Google Scholar]
  47. Crawford, B.; Soto, R.; Astorga, G.; García, J.; Castro, C.; Paredes, F. Putting continuous metaheuristics to work in binary search spaces. Complexity 2017, 2017, 8404231. [Google Scholar] [CrossRef] [Green Version]
  48. Lanza-Gutierrez, J.M.; Crawford, B.; Soto, R.; Berrios, N.; Gomez-Pulido, J.A.; Paredes, F. Analyzing the effects of binarization techniques when solving the set covering problem through swarm optimization. Expert Syst. Appl. 2017, 70, 67–82. [Google Scholar] [CrossRef]
  49. Mirjalili, S.; Lewis, A. S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm Evol. Comput. 2013, 9, 1–14. [Google Scholar] [CrossRef]
  50. Mafarja, M.; Eleyan, D.; Abdullah, S.; Mirjalili, S. S-shaped vs. V-shaped transfer functions for ant lion optimization algorithm in feature selection problem. In Proceedings of the International Conference on Future Networks and Distributed Systems, Cambridge, UK, 19–20 July 2017; pp. 1–7. [Google Scholar]
  51. Ghosh, K.K.; Guha, R.; Bera, S.K.; Kumar, N.; Sarkar, R. S-shaped versus V-shaped transfer functions for binary Manta ray foraging optimization in feature selection problem. Neural Comput. Appl. 2021, 33, 11027–11041. [Google Scholar] [CrossRef]
  52. Agrawal, P.; Ganesh, T.; Oliva, D.; Mohamed, A.W. S-shaped and v-shaped gaining-sharing knowledge-based algorithm for feature selection. Appl. Intell. 2022, 52, 81–112. [Google Scholar] [CrossRef]
  53. Cisternas-Caneo, F.; Crawford, B.; Soto, R.; Tapia, D.; Lemus-Romani, J.; Castillo, M.; Becerra-Rozas, M.; Paredes, F.; Misra, S. A data-driven dynamic discretization framework to solve combinatorial problems using continuous metaheuristics. In Innovations in Bio-Inspired Computing and Applications; Springer: Cham, Switzerland, 2020; pp. 76–85. [Google Scholar]
  54. Wang, Y.H.; Li, T.H.S.; Lin, C.J. Backward Q-learning: The combination of Sarsa algorithm and Q-learning. Eng. Appl. Artif. Intell. 2013, 26, 2184–2193. [Google Scholar] [CrossRef]
  55. Burns, E. In-Depth Guide to Machine Learning in the Enterprise. Techtarget 2021, 17. Available online: https://www.techtarget.com/searchenterpriseai/In-depth-guide-to-machine-learning-in-the-enterprise (accessed on 12 November 2022).
  56. Lo, A.W. Reconciling efficient markets with behavioral finance: The adaptive markets hypothesis. J. Investig. Consult. 2005, 7, 21–44. [Google Scholar]
  57. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  58. Clifton, J.; Laber, E. Q-learning: Theory and applications. Annu. Rev. Stat. Appl. 2020, 7, 279–301. [Google Scholar] [CrossRef] [Green Version]
  59. Rummery, G.A.; Niranjan, M. On-Line Q-Learning Using Connectionist Systems; Citeseer: Cambridge, UK, 1994; Volume 37. [Google Scholar]
  60. Watkins, C.J.; Dayan, P. Q-learning. Mach. Learn. 1992, 8, 279–292. [Google Scholar] [CrossRef]
  61. Glover, F.W.; Kochenberger, G.A. Handbook of Metaheuristics; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006; Volume 57. [Google Scholar]
  62. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  63. Cuevas, E.; Fausto, F.; González, A. New Advancements in Swarm Algorithms: Operators and Applications; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  64. Jourdan, L.; Dhaenens, C.; Talbi, E.G. Using datamining techniques to help metaheuristics: A short survey. In Proceedings of the International Workshop on Hybrid Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2006; pp. 57–69. [Google Scholar]
  65. Song, H.; Triguero, I.; Özcan, E. A review on the self and dual interactions between machine learning and optimisation. Prog. Artif. Intell. 2019, 8, 143–165. [Google Scholar] [CrossRef] [Green Version]
  66. Talbi, E.G. Machine Learning into Metaheuristics: A Survey and Taxonomy of Data-Driven Metaheuristics; ffhal-02745295f; HAL: Lyon, France, 2020. [Google Scholar]
  67. Talbi, E.G. Machine learning into metaheuristics: A survey and taxonomy. ACM Comput. Surv. 2021, 54, 1–32. [Google Scholar] [CrossRef]
  68. Karimi-Mamaghan, M.; Mohammadi, M.; Meyer, P.; Karimi-Mamaghan, A.M.; Talbi, E.G. Machine Learning at the service of Meta-heuristics for solving Combinatorial Optimization Problems: A state-of-the-art. Eur. J. Oper. Res. 2022, 296, 393–422. [Google Scholar] [CrossRef]
  69. Ghosh, K.K.; Singh, P.K.; Hong, J.; Geem, Z.W.; Sarkar, R. Binary social mimic optimization algorithm with x-shaped transfer function for feature selection. IEEE Access 2020, 8, 97890–97906. [Google Scholar] [CrossRef]
  70. Beheshti, Z. A novel x-shaped binary particle swarm optimization. Soft Comput. 2021, 25, 3013–3042. [Google Scholar] [CrossRef]
  71. Guo, S.-S.; Wang, J.-S.; Guo, M.-W. Z-shaped transfer functions for binary particle swarm optimization algorithm. Comput. Intell. Neurosci. 2020, 2020, 6502807. [Google Scholar] [CrossRef]
  72. Awadallah, M.A.; Hammouri, A.I.; Al-Betar, M.A.; Braik, M.S.; Abd Elaziz, M. Binary Horse herd optimization algorithm with crossover operators for feature selection. Comput. Biol. Med. 2022, 141, 105152. [Google Scholar] [CrossRef] [PubMed]
  73. Mirjalili, S.; Zhang, H.; Mirjalili, S.; Chalup, S.; Noman, N. A novel U-shaped transfer function for binary particle swarm optimisation. In Soft Computing for Problem Solving 2019; Springer: Singapore, 2020; pp. 241–259. [Google Scholar]
  74. Jain, S.; Dharavath, R. Memetic salp swarm optimization algorithm based feature selection approach for crop disease detection system. J. Ambient. Intell. Humaniz. Comput. 2021, 1–19. [Google Scholar] [CrossRef]
  75. Kahya, M.A.; Altamir, S.A.; Algamal, Z.Y. Improving whale optimization algorithm for feature selection with a time-varying transfer function. Numer. Algebra Control Optim. 2021, 11, 87. [Google Scholar] [CrossRef] [Green Version]
  76. Islam, M.J.; Li, X.; Mei, Y. A time-varying transfer function for balancing the exploration and exploitation ability of a binary PSO. Appl. Soft Comput. 2017, 59, 182–196. [Google Scholar] [CrossRef]
  77. Mafarja, M.; Aljarah, I.; Heidari, A.A.; Faris, H.; Fournier-Viger, P.; Li, X.; Mirjalili, S. Binary dragonfly optimization for feature selection using time-varying transfer functions. Knowl.-Based Syst. 2018, 161, 185–204. [Google Scholar] [CrossRef]
  78. Chantar, H.; Thaher, T.; Turabieh, H.; Mafarja, M.; Sheta, A. BHHO-TVS: A binary harris hawks optimizer with time-varying scheme for solving data classification problems. Appl. Sci. 2021, 11, 6516. [Google Scholar] [CrossRef]
  79. Rajalakshmi, N.; Padma Subramanian, D.; Thamizhavel, K. Performance enhancement of radial distributed system with distributed generators by reconfiguration using binary firefly algorithm. J. Inst. Eng. India Ser. B 2015, 96, 91–99. [Google Scholar] [CrossRef]
  80. Sun, W.Z.; Zhang, M.; Wang, J.S.; Guo, S.S.; Wang, M.; Hao, W.K. Binary Particle Swarm Optimization Algorithm Based on Z-shaped Probability Transfer Function to Solve 0–1 Knapsack Problem. IAENG Int. J. Comput. Sci. 2021, 48, 294–303. [Google Scholar]
  81. Beasley, J.; Jörnsten, K. Enhancing an algorithm for set covering problems. Eur. J. Oper. Res. 1992, 58, 293–300. [Google Scholar] [CrossRef]
  82. Bisong, E. Google Colaboratory. In Building Machine Learning and Deep Learning Models on Google Cloud Platform: A Comprehensive Guide for Beginners; Apress: Berkeley, CA, USA, 2019; pp. 59–64. [Google Scholar] [CrossRef]
  83. Mann, H.B.; Whitney, D.R. On a test of whether one of two random variables is stochastically larger than the other. Ann. Math. Stat. 1947, 18, 50–60. [Google Scholar] [CrossRef]
  84. Morales-Castañeda, B.; Zaldivar, D.; Cuevas, E.; Fausto, F.; Rodríguez, A. A better balance in metaheuristic algorithms: Does it exist? Swarm Evol. Comput. 2020, 54, 100671. [Google Scholar] [CrossRef]
Figure 1. Machine learning classification.
Figure 2. Nature-inspired metaheuristics.
Figure 3. Binarization scheme selector.
Figure 4. Transfer functions. (a) S-shaped; (b) V-shaped; (c) X-shaped; (d) Z-shaped.
Figure 5. GWO instances coverage when resolving the scp55 instance of the SCP. Instance Optimum: 211. (a) 80aBQSA—fitness obtained 211—Instance 55; (b) 80aQL—fitness obtained 212—Instance 55; (c) 80aSA—fitness obtained 212—Instance 55; (d) 40aBQSA—fitness obtained 212—Instance 55; (e) 40aQL—fitness obtained 213—Instance 55; (f) 40aSA—fitness obtained 212—Instance 55; (g) BCL—fitness obtained 212—Instance 55; (h) MIR—fitness obtained 216—Instance 55.
Figure 6. SCA instances coverage when resolving the scp49 instance of the SCP. Instance Optimum: 641. (a) 80aBQSA—fitness obtained 659—Instance 49; (b) 80aQL—fitness obtained 665—Instance 49; (c) 80aSA—fitness obtained 665—Instance 49; (d) 40aBQSA—fitness obtained 664—Instance 49; (e) 40aQL—fitness obtained 667—Instance 49; (f) 40aSA—fitness obtained 663—Instance 49; (g) BCL—fitness obtained 766—Instance 49; (h) MIR—fitness obtained 1535—Instance 49.
Figure 7. WOA instances coverage when resolving the scp55 instance of the SCP. Instance Optimum: 211. (a) 80aBQSA—fitness obtained 212—Instance 55; (b) 80aQL—fitness obtained 212—Instance 55; (c) 80aSA—fitness obtained 213—Instance 55; (d) 40aBQSA—fitness obtained 212—Instance 55; (e) 40aQL—fitness obtained 212—Instance 55; (f) 40aSA—fitness obtained 212—Instance 55; (g) BCL—fitness obtained 294—Instance 55; (h) MIR—fitness obtained 397—Instance 55.
Figure 8. GWO instances exploration–exploitation percentages when resolving the scp51 instance of the SCP. Instance Optimum: 253. (a) 80aBQSA—fitness obtained 255; (b) 80aQL—fitness obtained 256; (c) 80aSA—fitness obtained 256; (d) 40aBQSA—fitness obtained 254; (e) 40aQL—fitness obtained 258; (f) 40aSA—fitness obtained 257; (g) BCL—fitness obtained 259; (h) MIR—fitness obtained 262.
Figure 9. SCA instances exploration–exploitation percentages when resolving the scp51 instance of the SCP. Instance Optimum: 494. (a) 80aBQSA—fitness obtained 496; (b) 80aQL—fitness obtained 503; (c) 80aSA—fitness obtained 502; (d) 40aBQSA—fitness obtained 502; (e) 40aQL—fitness obtained 503; (f) 40aSA—fitness obtained 504; (g) BCL—fitness obtained 564; (h) MIR—fitness obtained 962.
Figure 10. WOA instances exploration–exploitation percentages when resolving the scp41 instance of the SCP. Instance Optimum: 429. (a) 80aBQSA—fitness obtained 430; (b) 80aQL—fitness obtained 431; (c) 80aSA—fitness obtained 430; (d) 40aBQSA—fitness obtained 430; (e) 40aQL—fitness obtained 430; (f) 40aSA—fitness obtained 431; (g) BCL—fitness obtained 489; (h) MIR—fitness obtained 638.
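Figures 8–10 plot exploration and exploitation percentages over the iterations. Assuming the balance measure of Morales-Castañeda et al. [84], these percentages are obtained from the population diversity Div(t) at iteration t as

XPL% = (Div(t) / Div_max) × 100,    XPT% = (|Div(t) - Div_max| / Div_max) × 100,

where Div_max is the maximum diversity observed during the run.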
Table 1. Transfer functions.
S-shaped [49]:
S1: T(d_w^j) = 1 / (1 + e^{-2 d_w^j})
S2: T(d_w^j) = 1 / (1 + e^{-d_w^j})
S3: T(d_w^j) = 1 / (1 + e^{-d_w^j / 2})
S4: T(d_w^j) = 1 / (1 + e^{-d_w^j / 3})
V-shaped [49,79]:
V1: T(d_w^j) = |erf((√π / 2) d_w^j)|
V2: T(d_w^j) = |tanh(d_w^j)|
V3: T(d_w^j) = |d_w^j / √(1 + (d_w^j)^2)|
V4: T(d_w^j) = |(2 / π) arctan((π / 2) d_w^j)|
X-shaped [69,70]:
X1: T(d_w^j) = 1 / (1 + e^{2 d_w^j})
X2: T(d_w^j) = 1 / (1 + e^{d_w^j})
X3: T(d_w^j) = 1 / (1 + e^{d_w^j / 2})
X4: T(d_w^j) = 1 / (1 + e^{d_w^j / 3})
Z-shaped [71,80]:
Z1: T(d_w^j) = √(1 - 2^{d_w^j})
Z2: T(d_w^j) = √(1 - 5^{d_w^j})
Z3: T(d_w^j) = √(1 - 8^{d_w^j})
Z4: T(d_w^j) = √(1 - 20^{d_w^j})
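To make the role of Table 1 concrete, the following Python sketch shows how a transfer function maps a continuous component to a probability and how a simple rule turns that probability into a bit. It is an illustrative sketch only: the "standard" rule form and the helper names are assumptions for this example, not the exact implementation used in the paper.

```python
import numpy as np

# Illustrative sketch only (not the authors' implementation): a transfer function
# maps a continuous component d to a probability in [0, 1], and a binarization
# rule turns that probability into a 0/1 decision variable.

def s2(d):
    """S-shaped transfer function S2 from Table 1: T(d) = 1 / (1 + e^{-d})."""
    return 1.0 / (1.0 + np.exp(-d))

def v4(d):
    """V-shaped transfer function V4 from Table 1: T(d) = |(2/pi) arctan((pi/2) d)|."""
    return np.abs((2.0 / np.pi) * np.arctan((np.pi / 2.0) * d))

def standard_rule(prob, rng):
    """Assumed 'standard' rule: set bit j to 1 when a uniform draw falls below T(d_j)."""
    return (rng.random(prob.shape) < prob).astype(int)

rng = np.random.default_rng(42)
d = rng.normal(size=10)              # continuous components produced by the metaheuristic
print(standard_rule(s2(d), rng))     # binary solution using S2 + standard rule
print(standard_rule(v4(d), rng))     # binary solution using V4 + standard rule
```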
Table 2. Parameters' setting.
Independent runs: 31
Number of populations: 40
Number of iterations: 1000
Parameter a of SCA: 2
Parameter a of GWO: decreases linearly from 2 to 0
Parameter a of WOA: decreases linearly from 2 to 0
Parameter b of WOA: 1
Parameter α of backward Q-learning, Q-learning, and SARSA: 0.1
Parameter γ of backward Q-learning, Q-learning, and SARSA: 0.4
Parameter N of backward Q-learning: 10
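The reinforcement-learning parameters in Table 2 (α = 0.1, γ = 0.4, N = 10) drive the scheme selector. The sketch below outlines, under stated assumptions, how a backward Q-learning selector in the spirit of Wang et al. [54] can pick a binarization scheme at each step: SARSA-style updates are applied online, transitions are stored, and every N steps the stored transitions are replayed in reverse with the Q-learning (max) update.

```python
import random
from collections import defaultdict

# Hedged sketch of a backward Q-learning scheme selector in the spirit of Wang et al. [54].
# The state space, the reward (1 when the fitness improves), epsilon, and the action list are
# illustrative assumptions, not the authors' exact design; alpha, gamma, and N follow Table 2.

ACTIONS = ["S1-standard", "S2-complement", "V4-elitist", "Z1-static"]  # example binarization schemes
ALPHA, GAMMA, N = 0.1, 0.4, 10
EPSILON = 0.1                                  # assumed exploration rate

Q = defaultdict(float)                         # Q[(state, action)]
memory = []                                    # transitions stored for the backward sweep

def select_action(state):
    """Epsilon-greedy choice among the binarization schemes."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def forward_update(s, a, r, s_next, a_next):
    """Online SARSA-style update applied as the search progresses."""
    Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])
    memory.append((s, a, r, s_next))

def backward_sweep():
    """Replay the stored transitions in reverse with the Q-learning (max) update."""
    for s, a, r, s_next in reversed(memory):
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
    memory.clear()

# Toy usage: two coarse states and a stand-in fitness check.
state, action = "exploration", ACTIONS[0]
for t in range(1, 101):
    improved = random.random() < 0.3                       # placeholder for "new best fitness found"
    next_state = "exploitation" if t > 50 else "exploration"
    next_action = select_action(next_state)
    forward_update(state, action, 1.0 if improved else 0.0, next_state, next_action)
    if t % N == 0:                                         # every N steps, sweep backward
        backward_sweep()
    state, action = next_state, next_action
```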
Table 3. Comparison of the approaches using the GWO metaheuristic.
Inst. | Opt. | 80aBQSA: Best, Avg, RPD | 80aQL: Best, Avg, RPD | 80aSA: Best, Avg, RPD | BCL: Best, Avg, RPD | MIR: Best, Avg, RPD | 40aBQSA: Best, Avg, RPD | 40aQL: Best, Avg, RPD | 40aSA: Best, Avg, RPD
41429430.0459.260.23429.0454.230.0430.0451.680.23431.0437.320.47431.00436.810.47431.0453.320.47430.0460.710.23430.0453.650.23
42512515.0587.260.59515.0596.870.59515.0595.580.59526.0540.582.73540.00551.265.47524.0605.482.34516.0588.230.78521.0593.451.76
43516516.0610.680.0518.0594.00.39519.0604.060.58519.0535.840.58535.00550.263.68517.0577.970.19518.0593.350.39518.0587.420.39
44494499.0548.481.01497.0567.190.61502.0544.611.62503.0520.351.82513.00524.453.85505.0604.942.23498.0553.190.81505.0549.12.23
45512516.0601.060.78517.0563.390.98517.0626.940.98523.0533.842.15532.00551.393.91517.0581.420.98519.0616.811.37521.0627.711.76
46560565.0727.420.89565.0659.610.89563.0654.680.54570.0583.581.79576.00586.032.86563.0668.710.54564.0639.610.71565.0649.320.89
47430431.0477.870.23433.0477.230.7433.0475.030.7434.0441.770.93438.00447.581.86434.0487.450.93434.0504.160.93435.0469.261.16
48492497.0613.811.02494.0586.580.41495.0578.060.61499.0509.841.42506.00515.132.85495.0649.650.61494.0580.810.41495.0563.970.61
49641655.0715.02.18652.0821.191.72655.0765.552.18656.0680.612.34679.00693.065.93658.0803.842.65650.0740.91.4653.0789.321.87
410514518.0594.160.78515.0570.160.19514.0565.610.0520.0532.681.17526.00532.742.33517.0617.740.58519.0575.840.97514.0623.520.0
51253255.0304.840.79256.0293.351.19256.0278.871.19259.0268.232.37262.00268.263.56254.0297.840.4258.0284.871.98257.0305.551.58
52302318.0385.425.3311.0402.192.98315.0371.94.3320.0330.195.96326.00335.397.95316.0407.974.64319.0410.355.63320.0387.715.96
53226229.0258.451.33228.0258.870.88229.0276.291.33229.0235.551.33232.00236.322.65228.0254.390.88229.0273.291.33228.0263.350.88
54242245.0275.451.24244.0295.450.83244.0287.320.83244.0251.550.83250.00254.683.31244.0281.840.83245.0291.581.24244.0283.230.83
55211211.0231.260.0212.0243.00.47212.0226.940.47212.0216.740.47216.00219.322.37212.0236.390.47213.0243.320.95212.0250.060.47
56213215.0257.810.94213.0268.260.0215.0260.030.94216.0226.771.41222.00230.034.23215.0235.320.94213.0255.810.0213.0252.390.0
57293299.0345.392.05296.0366.711.02296.0323.681.02299.0310.842.05306.00314.294.44295.0348.230.68297.0352.481.37298.0353.811.71
58288290.0363.420.69290.0344.420.69289.0349.580.35290.0298.770.69297.00302.973.12290.0329.710.69291.0363.01.04288.0354.030.0
59279279.0328.190.0280.0328.710.36281.0332.350.72281.0288.390.72285.00293.712.15281.0326.580.72282.0334.581.08281.0344.230.72
510265267.0314.290.75266.0306.610.38266.0292.030.38269.0276.811.51277.00282.164.53267.0329.90.75269.0314.421.51267.0294.060.75
61138140.0221.131.45141.0214.232.17141.0200.842.17140.0146.941.45144.00149.104.35141.0206.322.17140.0251.581.45141.0227.682.17
62146146.0250.480.0147.0259.450.68146.0309.680.0147.0154.320.68155.00161.236.16148.0284.771.37148.0263.231.37146.0333.580.0
63145146.0227.710.69145.0250.770.0147.0236.351.38145.0152.030.0148.00154.322.07148.0317.842.07148.0303.652.07146.0207.940.69
64131131.0170.290.0131.0175.420.0131.0169.00.0131.0136.230.0134.00136.262.29131.0194.450.0131.0191.420.0131.0150.770.0
65161162.0243.740.62162.0256.420.62161.0293.060.0167.0175.133.73175.00184.618.70162.0232.420.62165.0375.192.48162.0300.710.62
a1253259.0512.842.37260.0362.12.77259.0516.12.37262.0267.03.56265.00279.844.74260.0393.712.77261.0531.973.16257.0429.01.58
a2252257.0447.11.98254.0375.10.79255.0400.321.19262.0271.523.97275.00283.139.13258.0392.292.38259.0431.262.78258.0377.132.38
a3232238.0382.262.59238.0336.842.59239.0435.613.02238.0248.162.59246.00257.846.03238.0446.062.59241.0384.03.88240.0378.713.45
a4234237.0382.421.28236.0347.770.85238.0361.521.71238.0250.941.71250.00263.196.84238.0337.451.71239.0400.582.14236.0376.970.85
a5236239.0411.91.27241.0418.522.12239.0408.261.27244.0250.03.39253.00261.847.20240.0320.771.69239.0388.421.27240.0409.521.69
b16969.0274.940.069.0365.390.069.0229.230.069.072.260.075.0089.948.7069.0298.90.069.0394.290.069.0259.00.0
b27676.0335.130.076.0317.610.076.0354.130.076.080.710.086.0097.8713.1676.0372.130.076.0359.190.076.0351.350.0
b38080.0308.520.080.0427.650.081.0540.231.2581.083.971.2589.00107.2911.2581.0474.841.2581.0430.971.2580.0327.550.0
b47979.0421.420.079.0274.130.079.0241.160.081.084.162.5393.00105.9017.7279.0372.740.079.0391.840.079.0330.480.0
b57272.0278.870.072.0287.810.072.0226.160.073.074.771.3981.0093.3212.5072.0351.390.072.0233.650.072.0345.940.0
c1227233.0590.162.64232.0662.942.2233.0422.92.64239.0249.265.29258.00270.3513.66236.0523.03.96232.0369.842.2234.0524.233.08
c2219226.0548.973.2227.0500.683.65225.0597.062.74228.0239.524.11252.00267.1915.07226.0472.773.2224.0577.12.28227.0395.523.65
c3243248.0615.352.06248.0563.92.06246.0649.551.23252.0260.943.7279.00294.8114.81247.0642.421.65247.0477.351.65252.0701.873.7
c4219223.0409.971.83222.0628.971.37222.0510.971.37228.0237.324.11250.00263.6114.16224.0435.612.28224.0655.02.28225.0683.842.74
c5215221.0413.292.79219.0406.391.86222.0524.713.26222.0230.683.26243.00259.0613.02220.0555.682.33220.0484.02.33219.0376.161.86
d16060.0476.160.060.0588.710.060.0427.190.061.064.651.6783.00104.3238.3361.0586.231.6760.0451.450.060.0644.550.0
d26666.0618.740.067.0782.611.5266.0492.770.066.068.90.087.00124.6831.8266.0760.160.066.0457.260.066.0403.00.0
d37272.0513.060.073.0435.581.3973.0526.161.3975.077.814.17106.00130.8447.2274.0509.842.7874.0702.232.7873.0492.771.39
d46262.0580.350.062.0468.810.062.0349.450.062.065.350.090.00111.8745.1662.0392.160.062.0429.320.062.0400.030.0
d56161.0470.550.061.0530.770.061.0302.350.061.064.680.085.00109.4239.3461.0492.520.061.0475.190.061.0481.680.0
256.73424.551.01256.29427.480.93256.64413.011.03258.84267.281.9270.02281.9510.33257.36432.561.31257.24430.831.32257.27420.111.19
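In Tables 3–5, Best and Avg report the best and average fitness obtained over the 31 independent runs, and RPD is the relative percentage deviation of Best from the known optimum Opt, which is consistent with the reported values (for instance 41 under 80aBQSA: 100 × (430 - 429) / 429 ≈ 0.23):

RPD = 100 × (Best - Opt) / Opt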
Table 4. Comparison of the approaches using the SCA metaheuristic.
Inst. | Opt. | 80aBQSA: Best, Avg, RPD | 80aQL: Best, Avg, RPD | 80aSA: Best, Avg, RPD | BCL: Best, Avg, RPD | MIR: Best, Avg, RPD | 40aBQSA: Best, Avg, RPD | 40aQL: Best, Avg, RPD | 40aSA: Best, Avg, RPD
41429432.0519.10.7432.0509.480.7432.0516.060.7505.0606.1617.72679.00716.9058.28432.0556.190.7430.0556.970.23432.0547.90.7
42512524.0724.422.34525.0698.12.54527.0687.682.93600.0785.1317.191060.001211.42107.03522.0745.971.95522.0700.521.95522.0690.521.95
43516522.0714.231.16522.0652.611.16521.0688.390.97589.0794.9414.151185.001295.61129.65521.0747.320.97522.0758.261.16520.0797.320.78
44494496.0631.870.4503.0647.031.82502.0688.191.62564.0728.8714.17962.001051.6194.74502.0651.481.62503.0630.291.82504.0624.02.02
45512521.0674.451.76524.0713.452.34523.0649.322.15615.0815.7420.121126.001222.71119.92521.0816.11.76522.0718.191.95519.0703.841.37
46560565.0799.650.89565.0823.350.89566.0902.161.07638.0867.5213.931331.001485.23137.68566.0807.351.07567.0941.91.25563.0840.390.54
47430433.0600.580.7435.0564.521.16433.0585.740.7491.0645.1914.19877.00946.10103.95435.0589.131.16435.0615.231.16434.0591.520.93
48492495.0744.420.61496.0730.840.81493.0659.770.2577.0798.8417.281119.001257.00127.44501.0731.841.83496.0721.190.81496.0721.390.81
49641659.01063.132.81665.0975.233.74665.0870.683.74766.0995.919.51535.001695.74139.47664.01005.973.59667.0937.524.06663.01013.553.43
410514518.0695.00.78519.0633.260.97517.0640.970.58576.0778.912.061104.001210.84114.79520.0758.11.17517.0747.870.58517.0716.740.58
51253258.0362.941.98258.0344.711.98258.0380.451.98302.0383.019.37568.00625.13124.51254.0395.580.4258.0368.231.98258.0344.231.98
52302317.0453.034.97320.0522.615.96319.0503.295.63356.0475.7717.88748.00919.45147.68315.0473.484.3319.0556.195.63317.0514.454.97
53226229.0296.421.33229.0335.871.33230.0288.321.77265.0346.8717.26498.00561.35120.35229.0345.811.33229.0368.161.33229.0324.351.33
54242247.0306.682.07246.0367.581.65246.0345.771.65281.0356.916.12534.00590.90120.66247.0355.742.07247.0383.582.07246.0368.971.65
55211213.0255.840.95213.0274.160.95212.0275.870.47224.0318.196.16411.00438.8794.79212.0295.130.47213.0276.550.95213.0261.320.95
56213216.0334.321.41217.0315.351.88218.0291.232.35242.0347.1313.62472.00539.23121.60217.0321.521.88218.0320.742.35217.0315.161.88
57293299.0412.842.05300.0437.232.39297.0400.481.37345.0464.5817.75643.00717.23119.45298.0424.811.71299.0393.032.05298.0442.91.71
58288289.0389.810.35291.0460.681.04292.0444.231.39350.0447.4521.53650.00739.77125.69290.0397.450.69291.0407.811.04291.0413.261.04
59279281.0381.130.72282.0369.01.08281.0371.770.72313.0424.7712.19682.00751.87144.44282.0423.161.08280.0412.030.36282.0391.871.08
510265267.0382.030.75270.0366.391.89267.0339.320.75305.0419.7715.09621.00673.71134.34267.0396.480.75270.0358.771.89268.0380.771.13
61138141.0315.92.17141.0322.232.17140.0273.391.45171.0267.2923.91669.00834.23384.78141.0305.872.17142.0382.582.9140.0303.291.45
62146149.0427.742.05149.0502.292.05149.0423.162.05188.0327.4228.771086.001199.00643.84147.0535.260.68148.0596.231.37147.0399.260.68
63145147.0414.481.38147.0404.711.38147.0418.261.38158.0308.238.971057.001147.90628.97145.0360.00.0147.0463.261.38146.0432.770.69
64131131.0266.190.0131.0264.550.0132.0301.320.76159.0299.7121.37623.00729.03375.57131.0333.580.0131.0297.810.0131.0234.870.0
65161162.0352.060.62166.0432.03.11165.0491.652.48190.0307.3918.011000.001177.03521.12164.0502.01.86165.0455.392.48165.0542.162.48
a1253260.0529.812.77261.0475.483.16259.0471.352.37272.0368.977.511127.001343.65345.45261.0647.523.16260.0728.742.77260.0604.062.77
a2252259.0495.612.78259.0475.522.78259.0513.742.78290.0415.9415.081129.001238.03348.02260.0589.713.17261.0587.323.57260.0611.323.17
a3232240.0511.13.45239.0510.93.02238.0534.12.59261.0341.6112.51067.001189.39359.91240.0576.033.45239.0502.93.02240.0539.523.45
a4234237.0467.191.28239.0484.322.14237.0541.971.28274.0391.5517.091095.001158.55367.95238.0527.521.71238.0587.131.71240.0592.392.56
a5236240.0478.871.69241.0540.392.12239.0416.741.27276.0368.916.951082.001178.23358.47241.0545.292.12240.0544.391.69242.0556.392.54
b16969.0391.00.069.0482.00.069.0521.740.076.0132.9410.141323.001436.321817.3969.0657.610.069.0480.260.069.0403.770.0
b27676.0451.00.076.0511.10.076.0419.610.088.0143.5815.791364.001445.391694.7476.0464.740.076.0365.420.076.0611.190.0
b38081.0525.611.2580.0741.580.081.0836.711.2587.0136.98.751670.001843.231987.5081.0551.481.2581.0772.941.2581.0612.321.25
b47980.0680.131.2780.0651.841.2780.0743.681.2788.0147.3511.3988.001604.9411.3980.0813.191.2780.0685.351.2779.0716.520.0
b57272.0458.940.072.0359.420.072.0444.740.072.0126.970.01401.001480.291845.8372.0505.10.072.0626.610.072.0408.320.0
c1227235.0756.943.52235.0626.353.52236.0678.973.96285.0346.4225.551508.001608.61564.32235.0785.613.52235.0628.843.52235.0698.163.52
c2219227.0700.843.65229.0778.684.57227.0792.393.65259.0318.0618.261637.001783.68647.49228.0628.454.11227.0745.613.65227.0859.773.65
c3243251.0956.193.29249.0777.872.47249.0871.12.47279.0354.4214.81268.002063.4510.29250.0815.422.88249.0675.772.47247.0954.261.65
c4219226.0574.943.2226.0751.423.2225.0824.292.74243.0314.110.961657.001773.26656.62226.0751.973.2223.0947.581.83226.0871.523.2
c5215221.0564.942.79219.0783.711.86220.0717.292.33242.0306.4812.561537.001684.84614.88221.0586.482.79221.0755.452.79221.0629.352.79
d16060.0538.90.061.0603.551.6761.0752.421.6769.0122.1915.01979.002093.843198.3361.0661.581.6761.0717.11.6761.0932.451.67
d26667.0493.481.5267.0746.971.5267.0700.971.5272.0125.039.0972.002284.979.0967.0737.391.5267.0966.061.5267.0943.581.52
d37274.0908.772.7873.0803.971.3974.0686.322.7881.0145.7412.52369.002583.813190.2874.01094.162.7874.0755.162.7874.0759.262.78
d46262.0789.420.062.0913.580.062.01025.770.068.0114.529.681929.002126.943011.2962.0779.320.062.0753.480.062.0727.160.0
d56161.0467.710.061.0723.520.061.0600.030.077.0122.5526.231945.002114.743088.5261.0755.940.061.0745.940.061.01048.450.0
257.98539.11.56258.76564.651.77258.31567.361.66293.98403.4615.291055.271283.87645.97258.36594.441.64258.53598.671.74258.18599.921.61
Table 5. Comparison of the approaches using the WOA metaheuristic.
Inst. | Opt. | 80aBQSA: Best, Avg, RPD | 80aQL: Best, Avg, RPD | 80aSA: Best, Avg, RPD | BCL: Best, Avg, RPD | MIR: Best, Avg, RPD | 40aBQSA: Best, Avg, RPD | 40aQL: Best, Avg, RPD | 40aSA: Best, Avg, RPD
41429430.0619.90.23431.0781.870.47430.0610.00.23489.00607.5513.99638.00715.8748.72430.0771.060.23430.0799.290.23431.0655.350.47
42512523.0930.132.15522.0839.191.95519.0872.871.37679.00852.9432.621079.001181.94110.74521.01198.971.76521.01113.261.76520.01338.451.56
43516520.01031.160.78519.0912.030.58520.0979.130.78739.00884.4843.221222.001293.61136.82520.0922.00.78520.01140.230.78519.01233.650.58
44494502.0927.191.62500.0789.131.21503.0776.261.82596.00779.0620.65954.001052.0393.12501.0860.711.42503.01197.191.82501.01142.291.42
45512519.0902.651.37517.0913.680.98520.01032.161.56755.00874.6847.461067.001204.13108.40521.01306.611.76520.01150.581.56522.0933.421.95
46560561.01011.320.18565.01007.320.89564.01028.680.71815.00981.2645.541389.001483.13148.04563.01526.550.54568.01551.681.43564.01511.350.71
47430431.0608.390.23433.0710.550.7434.0645.00.93570.00682.4232.56854.00931.8498.60430.0845.580.0434.0846.320.93434.0959.90.93
48492493.0775.160.2496.0950.550.81497.0836.391.02713.00875.4244.921148.001236.16133.33494.01315.00.41494.01046.00.41496.01348.870.81
49641667.01494.294.06659.01297.02.81664.01422.03.59922.001141.5543.841532.001706.10139.00660.01487.422.96662.01388.13.28655.01731.652.18
410514516.0736.00.39515.0825.610.19517.0812.450.58696.00847.0035.411046.001180.97103.50515.01028.450.19516.01308.870.39518.01029.840.78
51253257.0411.061.58259.0418.92.37259.0473.12.37347.00411.1037.15543.00614.74114.62259.0628.612.37259.0794.482.37257.0524.941.58
52302314.0865.263.97314.0617.13.97316.0628.744.64468.00577.3554.97762.00913.61152.32316.0799.584.64315.0890.714.3319.0576.485.63
53226228.0354.00.88228.0410.390.88229.0325.481.33313.00377.8438.50506.00558.06123.89229.0584.161.33230.0485.391.77229.0585.131.33
54242246.0563.681.65246.0503.581.65247.0405.262.07318.00394.9731.40540.00580.19123.14246.0521.841.65245.0488.681.24245.0486.01.24
55211212.0276.710.47212.0356.870.47213.0355.420.95294.00339.6539.34397.00427.8488.15212.0353.90.47212.0371.770.47212.0377.130.47
56213214.0409.740.47213.0369.680.0214.0335.810.47334.00389.7156.81507.00544.58138.03215.0537.840.94214.0447.680.47215.0497.680.94
57293299.0598.742.05297.0538.741.37298.0548.811.71387.00504.0632.08642.00710.52119.11296.0630.321.02298.0716.651.71296.0711.651.02
58288290.0586.060.69290.0534.90.69290.0395.810.69399.00497.3538.54663.00745.32130.21292.0730.551.39289.0664.450.35289.0747.260.35
59279281.0547.350.72282.0602.741.08281.0562.420.72404.00497.0344.80673.00740.35141.22281.0732.610.72282.0611.521.08282.0808.231.08
510265268.0378.841.13267.0461.350.75266.0402.580.38390.00470.1347.17594.00669.58124.15268.0575.191.13265.0626.940.0268.0488.771.13
61138140.0370.351.45140.0480.941.45140.0654.551.45283.00403.26105.07736.00836.52433.33140.0563.291.45141.0426.162.17140.0643.421.45
62146148.0647.061.37146.0603.320.0146.0619.520.0320.00561.77119.181100.001211.81653.42146.01416.060.0147.01215.680.68149.0779.552.05
63145148.0677.552.07145.0479.030.0147.0696.01.38337.00537.19132.41912.001125.97528.97147.0613.131.38146.0635.190.69148.0864.12.07
64131131.0275.230.0132.0308.00.76131.0376.810.0246.00366.1987.79652.00722.42397.71131.0558.030.0131.0618.520.0131.0648.030.0
65161165.0580.742.48162.0842.870.62162.0652.770.62357.00531.74121.741020.001155.81533.54164.0867.581.86162.0722.350.62161.01129.450.0
a1253257.0823.841.58260.0785.842.77260.0800.942.77455.00662.7179.841243.001352.03391.30259.01343.552.37258.0815.191.98260.01253.452.77
a2252259.0629.032.78259.0886.872.78263.0910.584.37452.00651.1679.371150.001241.00356.35260.01299.873.17260.0915.653.17260.0961.583.17
a3232240.0975.583.45239.0608.713.02239.0582.293.02436.00601.5887.931066.001185.84359.48240.0944.133.45239.01108.713.02242.0738.164.31
a4234238.0933.681.71237.0794.771.28237.0672.421.28467.00595.9799.571080.001161.45361.54238.0873.481.71239.0942.062.14238.0888.841.71
a5236240.0874.261.69241.0800.02.12240.0702.771.69447.00618.6589.411139.001191.23382.63240.0766.811.69240.0730.261.69241.01070.12.12
b16969.0555.740.069.01085.10.069.0890.420.0309.00561.94347.831344.001441.681847.8370.0888.681.4569.01107.260.069.01284.520.0
b27676.0822.320.076.0630.230.076.0548.480.0337.00547.39343.421265.001426.321564.4776.0941.770.076.01094.90.076.0614.10.0
b38081.01164.971.2580.0912.770.080.0648.650.0378.00689.84372.501737.001848.742071.2581.01419.91.2581.01199.711.2581.01361.231.25
b47980.0411.941.2779.0978.260.079.01268.350.0334.00631.39322.781514.001644.031816.4679.01225.810.079.01315.840.080.01663.421.27
b57272.0834.580.072.0785.740.072.0959.450.0299.00541.81315.281372.001467.521805.5672.01201.840.072.01254.290.072.01196.680.0
c1227236.01010.423.96234.0812.353.08233.0794.552.64523.00717.65130.401488.001610.13555.51234.01689.233.08234.01309.523.08231.01117.841.76
c2219227.0585.323.65224.01061.392.28226.01099.063.2474.00737.55116.441654.001779.23655.25224.01564.392.28225.01233.942.74226.01772.813.2
c3243249.01307.132.47252.01560.453.7248.01298.352.06629.00919.10158.851970.002126.94710.70248.01523.482.06245.01516.650.82248.01504.322.06
c4219227.0816.483.65226.01105.353.2225.01148.522.74570.00769.13160.271582.001734.13622.37226.01546.393.2226.01067.323.2226.01092.323.2
c5215221.01011.972.79220.0894.192.33221.0918.712.79496.00754.65130.701541.001686.84616.74221.01284.482.79217.01012.680.93221.01099.652.79
d16060.0850.030.060.0918.320.061.0951.231.67419.00801.39598.331950.002080.393150.0060.01278.580.060.01213.190.060.01157.810.0
d26667.01092.651.5266.01271.840.067.0799.771.52480.00806.10627.272261.002333.683325.7666.01320.650.067.02011.741.5267.01672.321.52
d37273.0889.161.3973.0647.811.3972.01528.390.0506.00880.23602.782445.002581.483295.8373.01705.741.3973.01883.681.3973.02095.321.39
d46262.01370.870.062.01464.520.062.01201.810.0312.00755.48403.232025.002107.973166.1362.02044.810.062.01884.160.062.01449.160.0
d56161.0895.290.061.0750.810.061.01044.030.0344.00792.87463.931930.002093.293063.9361.01708.770.061.01649.650.061.01081.10.0
257.73765.21.45257.33784.681.21257.73782.61.36463.07653.83152.831176.271280.82778.69257.491065.51.34257.491033.871.28257.671040.611.43
Table 6. Average p-value of GWO compared with the other algorithms.
| 80aBQSA | 80aQL | 80aSA | BCL | MIR | 40aBQSA | 40aQL | 40aSA
80aBQSA | - | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05
80aQL | ≥0.05 | - | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05
80aSA | ≥0.05 | ≥0.05 | - | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05
BCL | ≥0.05 | ≥0.05 | ≥0.05 | - | 0.02 | ≥0.05 | ≥0.05 | ≥0.05
MIR | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | - | ≥0.05 | ≥0.05 | ≥0.05
40aBQSA | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | - | ≥0.05 | ≥0.05
40aQL | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | - | ≥0.05
40aSA | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | -
Table 7. Average p-value of SCA compared with the other algorithms.
| 80aBQSA | 80aQL | 80aSA | BCL | MIR | 40aBQSA | 40aQL | 40aSA
80aBQSA | - | ≥0.05 | ≥0.05 | ≥0.05 | 0.00 | ≥0.05 | ≥0.05 | ≥0.05
80aQL | ≥0.05 | - | ≥0.05 | ≥0.05 | 0.00 | ≥0.05 | ≥0.05 | ≥0.05
80aSA | ≥0.05 | ≥0.05 | - | ≥0.05 | 0.00 | ≥0.05 | ≥0.05 | ≥0.05
BCL | ≥0.05 | ≥0.05 | ≥0.05 | - | 0.00 | ≥0.05 | ≥0.05 | ≥0.05
MIR | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | - | ≥0.05 | ≥0.05 | ≥0.05
40aBQSA | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | 0.00 | - | ≥0.05 | ≥0.05
40aQL | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | 0.00 | ≥0.05 | - | ≥0.05
40aSA | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | 0.00 | ≥0.05 | ≥0.05 | -
Table 8. Average p-value of WOA compared with the other algorithms.
| 80aBQSA | 80aQL | 80aSA | BCL | MIR | 40aBQSA | 40aQL | 40aSA
80aBQSA | - | ≥0.05 | ≥0.05 | ≥0.05 | 0.00 | ≥0.05 | ≥0.05 | ≥0.05
80aQL | ≥0.05 | - | ≥0.05 | ≥0.05 | 0.00 | ≥0.05 | ≥0.05 | ≥0.05
80aSA | ≥0.05 | ≥0.05 | - | ≥0.05 | 0.00 | ≥0.05 | ≥0.05 | ≥0.05
BCL | ≥0.05 | ≥0.05 | ≥0.05 | - | 0.00 | ≥0.05 | ≥0.05 | ≥0.05
MIR | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | - | ≥0.05 | ≥0.05 | ≥0.05
40aBQSA | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | 0.01 | - | ≥0.05 | ≥0.05
40aQL | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | 0.01 | ≥0.05 | - | ≥0.05
40aSA | ≥0.05 | ≥0.05 | ≥0.05 | ≥0.05 | 0.00 | ≥0.05 | ≥0.05 | -
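Tables 6–8 report average p-values for pairwise comparisons between the approaches, with 0.05 as the significance threshold. Assuming these come from the Mann–Whitney U test [83] applied to the fitness values of the 31 independent runs, a single pairwise comparison could be computed as in the sketch below (toy data; the one-sided alternative is an assumption, not taken from the paper):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Toy stand-ins for the 31 independent-run fitness values of two approaches on one instance.
rng = np.random.default_rng(0)
fitness_a = 430 + rng.integers(0, 5, size=31)   # e.g. a Q-learning-based selector
fitness_b = 438 + rng.integers(0, 5, size=31)   # e.g. a weaker baseline

# One-sided test: is approach A stochastically smaller (better, for minimization) than B?
stat, p_value = mannwhitneyu(fitness_a, fitness_b, alternative="less")
print(f"p-value = {p_value:.4f}")               # p < 0.05 would mark A as significantly better
```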
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
