Article

Chaotic Binarization Schemes for Solving Combinatorial Optimization Problems Using Continuous Metaheuristics

1 Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2241, Valparaíso 2362807, Chile
2 Facultad de Ingeniería, Universidad Andres Bello, Antonio Varas 880, Providencia, Santiago 7591538, Chile
3 Escuela de Ingeniería de Construcción y Transporte, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2147, Valparaíso 2362804, Chile
* Authors to whom correspondence should be addressed.
Mathematics 2024, 12(2), 262; https://doi.org/10.3390/math12020262
Submission received: 30 November 2023 / Revised: 22 December 2023 / Accepted: 8 January 2024 / Published: 12 January 2024
(This article belongs to the Special Issue Mathematical Optimization and Decision Making Analysis)

Abstract: Chaotic maps are sources of randomness formed by a set of rules and chaotic variables. They have been incorporated into metaheuristics because they improve the balance between exploration and exploitation and, with this, allow better results to be obtained. In the present work, chaotic maps are used to modify the behavior of the binarization rules that allow continuous metaheuristics to solve binary combinatorial optimization problems. In particular, seven different chaotic maps, three different binarization rules, and three continuous metaheuristics are used: the Sine Cosine Algorithm, the Grey Wolf Optimizer, and the Whale Optimization Algorithm. These are applied to a classic combinatorial optimization problem: the 0-1 Knapsack Problem. Experimental results indicate that chaotic maps have an impact on the binarization rule, leading to better results. Specifically, experiments incorporating the standard binarization rule and the complement binarization rule performed better than those incorporating the elitist binarization rule. The experiment with the best results was STD_TENT, which uses the standard binarization rule and the tent chaotic map.

1. Introduction

Optimization problems are increasingly relevant across a wide range of sectors, including mining, energy, telecommunications, and health. A prominent type of these problems is combinatorial optimization problems, where the decision variables are of a categorical nature, such as binary. In these cases, the challenge is to identify the best possible combination of these variables.
The complexity of solving these problems increases exponentially with the number of decision variables: in a binary combinatorial problem, the search space is of size 2^n, where n represents the total number of decision variables. This exponential growth of the search space poses significant computational and analytical challenges.
According to the literature [1], methods for addressing complex optimization problems are classified into two main categories: exact methods and approximate methods.
  • Exact Methods: These methods focus on ensuring an optimal solution by exhaustively exploring the entire search space. However, their applicability is limited due to scalability issues. As the complexity of the problem increases, the time required to find an optimal solution increases significantly, which can make them impractical for large-scale problems or those with an excessively large search space.
  • Approximate Methods: Unlike exact methods, approximate methods do not guarantee the attainment of an optimal solution. However, they are capable of providing high-quality solutions within reasonable computational times, making them very valuable in practice, especially for complex and large-scale problems. Within this category, metaheuristics are particularly notable. These techniques, which include Genetic Algorithms (GA), Particle Swarm Optimization (PSO), and Ant Colony Optimization (ACO), are known for their ability to find efficient solutions to complex problems through intelligent exploration of the search space, avoiding getting trapped in sub-optimal local solutions.
Thus, exact methods are ideal for smaller, manageable problems where precision is required, whereas approximate methods, especially metaheuristics, are the preferred option for larger-scale problems or those with time constraints, where a "good enough" solution is acceptable and often necessary.
The study of metaheuristics has grown in recent years, with hybridizations emerging as the current trend. There exist hybridizations between metaheuristics such as those proposed in [2,3,4,5], hyperheuristic approaches where a high-level metaheuristic guides another low-level one [6,7,8], approaches where machine learning techniques enhance metaheuristics [9,10,11], and other approaches in which chaos theory is utilized to modify the stochastic behavior of metaheuristics [12,13,14,15].
The integration of chaotic maps into metaheuristics has caught the attention of the scientific community due to its advantages, such as low computational cost and rapid adaptability [16]. Chaotic maps are used as generators of random sequences, contributing to an improvement in the stochastic behavior of metaheuristics. This approach is utilized in various aspects of metaheuristics such as the initialization of solutions [17,18] or in the solution perturbation operators [19,20]. The hybridization of metaheuristics with chaotic maps is significant because it enhances the ability of metaheuristics to avoid getting trapped in local minima and improves global exploration of the search space.
In reviewing the different metaheuristics existing in the literature [21], we can observe that most of them are designed to solve continuous problems; therefore, to solve binary combinatorial problems, it is necessary to binarize them. According to the literature [22], there are different ways to binarize metaheuristics, among which the two-step technique stands out. This binarization process is carried out in two steps: (1) applying a transfer function and (2) applying a binarization rule. In the present work, chaotic maps were used to change the stochastic behavior of the binarization rule to binarize continuous metaheuristics. Specifically, seven chaotic maps were used, which were compared with the original stochastic behavior for binarizing three metaheuristics widely used in the literature.
Among the great variety of metaheuristics that exist in the literature, in the present work we chose the Sine Cosine Algorithm (SCA) [23], Grey Wolf Optimizer (GWO) [24], and Whale Optimization Algorithm (WOA) [25]. These three metaheuristics are population metaheuristics designed to solve continuous optimization problems of great interest to the scientific community. This interest is reflected in the use of these metaheuristics in different works where, for example, SCA was used in [26,27,28,29,30,31], GWO was used in [32,33,34,35,36,37,38,39], and WOA was used in [40,41,42,43,44,45,46].
Given this great interest, the good results obtained in different optimization problems, and the No Free Lunch Theorem [47,48], we are motivated to investigate the behavior of these three metaheuristics in a combinatorial optimization problem, the Knapsack Problem, with the hybridization of chaotic maps.
The main contributions of this work are the following:
  • Incorporate chaotic maps into binarization schemes to develop chaotic binarization schemes.
  • Use these chaotic binarization schemes in three continuous metaheuristics to solve the 0-1 Knapsack Problem.
  • Analyze the results obtained in terms of descriptive statistics, convergence, and non-parametric statistical test.
The following is a brief summary of the structure of this paper: Section 2 provides a comprehensive review of related works that utilize continuous metaheuristics (Section 2.1), defines chaotic maps (Section 2.2), and assesses their application in metaheuristics (Section 2.3). Section 3 examines how continuous metaheuristics can be leveraged to solve binary combinatorial optimization problems. Section 4 outlines our research proposal, which focuses on the implementation of chaotic binarization schemes. Section 5 of this paper details the 0-1 Knapsack Problem (Section 5.1), the experiment configuration (Section 5.2), the results analysis (Section 5.3), the algorithm convergence analysis (Section 5.4), and the non-parametric statistical test analysis (Section 5.5). Finally, Section 6 presents conclusions and future work.

2. Related Work

2.1. Metaheuristics

Metaheuristics are highly flexible and efficient algorithms, capable of delivering quality solutions in manageable computational times [1]. The efficacy of metaheuristics is largely due to their ability to balance two critical phases in the search process, diversification (or exploration) and intensification (or exploitation), using specific operators that vary according to the algorithm in question.
The development of metaheuristics, stimulated by the No Free Lunch Theorem [47,48,49], is based on a variety of sources of inspiration, including human behavior, genetic evolution, social interactions among animals, and physical phenomena. This theorem, fundamental in the field of optimization, states that there is no universal algorithm that is most efficient for solving all optimization problems. The following section will introduce and define the three metaheuristics employed in this research.

2.1.1. Sine Cosine Algorithm

The Sine Cosine Algorithm (SCA) is a metaheuristic proposed by Mirjalili in 2016 [23]. This metaheuristic was designed to solve continuous optimization problems and is inspired by the dual behavior of the trigonometric functions sine and cosine. Algorithm 1 presents the behavior of SCA.

2.1.2. Grey Wolf Optimizer

The Grey Wolf Optimizer [24], proposed by Mirjalili in 2014, is a metaheuristic inspired by the hunting behavior and hierarchical social structure of the grey wolf. The efficacy of this technique is based on the imitation of the dynamics and social interactions observed in a pack of wolves.
In a wolf pack, there are four types of hierarchical roles that are essential in the structure of the GWO.
  • Alpha (α): These are the wolves that lead the pack. In the context of GWO, they represent the current best solution. The alpha guides the search process and decision making during optimization.
  • Beta (β): These wolves support the alpha and are considered the second-best solution. In the metaheuristic, they assist in directing the search, providing a secondary perspective in the solution space.
  • Delta (δ): Though strong, delta wolves lack leadership skills. They are the third-best solution in the optimization process and contribute to the diversity of the search, bringing variability and preventing the pack (the algorithm) from becoming stagnant.
  • Omega (ω): These wolves are the lowest in the social hierarchy. They have no leadership power and are dedicated to following and protecting the younger members of the pack. In GWO, they represent the remaining candidate solutions, following the lead of the higher-ranking wolves.
Algorithm 1 Sine Cosine Algorithm
  Input: The population X = {X_1, X_2, ..., X_i}
  Output: The updated population X = {X_1, X_2, ..., X_i} and X_best
  1: Initialize random population X
  2: Evaluate the objective function of each individual in the population X
  3: Identify the best individual in the population (X_best)
  4: a = 2
  5: for iteration (t) do
  6:     r1 = a - t · (a / maxIter)
  7:     for solution (i) do
  8:         for dimension (j) do
  9:             r2 = (2 · π) · rand(0, 1)
 10:             r3 = 2 · rand(0, 1)
 11:             r4 = rand(0, 1)
 12:             if r4 < 0.5 then
 13:                 X^t_{i,j} = X^t_{i,j} + (r1 · sin(r2) · |r3 · X_{best,j} - X^t_{i,j}|)
 14:             else
 15:                 X^t_{i,j} = X^t_{i,j} + (r1 · cos(r2) · |r3 · X_{best,j} - X^t_{i,j}|)
 16:             end if
 17:         end for
 18:     end for
 19:     Evaluate the objective function of each individual in the population X
 20:     Update X_best
 21: end for
 22: Return the updated population X where X_best is the best result
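For readers who prefer code, the position update in Algorithm 1 can be sketched in Python as follows (a minimal illustration rather than the authors' implementation; the population is represented as a plain list of lists of floats):

```python
import math
import random

def sca_step(X, X_best, t, max_iter):
    """One SCA iteration (the body of the outer loop in Algorithm 1).

    X is a list of solutions (lists of floats) and X_best is the best
    solution found so far. Every dimension of every solution is moved
    toward (or away from) X_best via the sine or cosine branch."""
    a = 2
    r1 = a - t * (a / max_iter)  # decreases linearly from a to 0
    for i in range(len(X)):
        for j in range(len(X[i])):
            r2 = 2 * math.pi * random.random()
            r3 = 2 * random.random()
            r4 = random.random()
            trig = math.sin(r2) if r4 < 0.5 else math.cos(r2)
            X[i][j] += r1 * trig * abs(r3 * X_best[j] - X[i][j])
    return X
```

Evaluating the objective function and updating X_best after each call would complete the loop of Algorithm 1.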
The implementation of these hierarchies in the GWO allows the algorithm to effectively balance exploration (diversification) and exploitation (intensification) of the solution space. The inspiration from the behavior and social structure of grey wolves brings a unique methodology for solving complex optimization problems. Algorithm 2 presents the behavior of GWO.

2.1.3. Whale Optimization Algorithm

The Whale Optimization Algorithm (WOA) is a metaheuristic developed by Mirjalili and Lewis in 2016 [25], inspired by the hunting behavior and social structure of whales. This algorithm mimics the hunting strategy known as “bubble-net feeding”, a sophisticated and coordinated method used by whales to capture their prey. The WOA is characterized by three main phases in its search and optimization process:
  • Search for the prey: The whales (search agents) explore the solution space to locate the prey (the best solution). Notably in WOA, unlike other metaheuristics, the position update of each search agent is based on a randomly selected agent, not necessarily the best one found so far. This allows for a broader and more diversified exploration of the solution space.
  • Encircling the prey: Once the prey (best solution) is identified, the whales position themselves to encircle it. This stage represents an intensification phase, where the algorithm concentrates on the area around the promising solution identified in the search phase.
  • Bubble-net attacking: In the final phase, the whales attack the prey using the bubble-net technique. This phase represents a coordinated and focused effort to refine the search in the selected region and optimize the solution.
Algorithm 2 Grey Wolf Optimizer
  Input: The population X = {X_1, X_2, ..., X_i}
  Output: The updated population X = {X_1, X_2, ..., X_i} and X_best
  1: Initialize random population X
  2: Evaluate the objective function of each individual in the population X
  3: Identify the best individual in the population (X_best)
  4: for iteration (t) do
  5:     a = 2 - t · (2 / maxIter)
  6:     Determine alpha wolf (X_alpha)          ▹ X_alpha is the best solution
  7:     Determine beta wolf (X_beta)        ▹ X_beta is the second-best solution
  8:     Determine delta wolf (X_delta)        ▹ X_delta is the third-best solution
  9:     for solution (i) do
 10:         for dimension (j) do
 11:             r1 = rand(0, 1); r2 = rand(0, 1)
 12:             A1 = 2 · a · r1 - a
 13:             C1 = 2 · r2
 14:             d_alpha = |(C1 · X^t_{alpha,j}) - X^t_{i,j}|
 15:             X1 = X^t_{alpha,j} - (A1 · d_alpha)
 16:             r1 = rand(0, 1); r2 = rand(0, 1)
 17:             A2 = 2 · a · r1 - a
 18:             C2 = 2 · r2
 19:             d_beta = |(C2 · X^t_{beta,j}) - X^t_{i,j}|
 20:             X2 = X^t_{beta,j} - (A2 · d_beta)
 21:             r1 = rand(0, 1); r2 = rand(0, 1)
 22:             A3 = 2 · a · r1 - a
 23:             C3 = 2 · r2
 24:             d_delta = |(C3 · X^t_{delta,j}) - X^t_{i,j}|
 25:             X3 = X^t_{delta,j} - (A3 · d_delta)
 26:             X^t_{i,j} = (X1 + X2 + X3) / 3
 27:         end for
 28:     end for
 29:     Evaluate the objective function of each individual in the population X
 30:     Update X_best
 31: end for
 32: Return the updated population X where X_best is the best result
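The leader-guided update at the core of Algorithm 2 can be condensed into a short Python sketch for a single dimension (an illustration under the same notation, not the authors' code):

```python
import random

def gwo_update_dim(x_ij, alpha_j, beta_j, delta_j, a):
    """One-dimension GWO update: average the positions suggested by the
    alpha, beta, and delta leaders, as in the inner loop of Algorithm 2."""
    candidates = []
    for leader_j in (alpha_j, beta_j, delta_j):
        A = 2 * a * random.random() - a      # A_k = 2*a*r1 - a
        C = 2 * random.random()              # C_k = 2*r2
        d = abs(C * leader_j - x_ij)         # distance to the leader
        candidates.append(leader_j - A * d)  # position suggested by leader k
    return sum(candidates) / 3
```

Note that when a reaches 0 at the end of the run, A vanishes and the update collapses onto the average of the three leaders, which is the intensification behavior described above.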
The structure of these phases enables the WOA to effectively balance between exploration and exploitation, making it suitable for solving a wide range of complex optimization problems. Algorithm 3 presents the behavior of WOA.

2.2. Chaotic Maps

Dynamic systems, characterized by their lack of linearity and periodicity, exhibit chaos in a way that is both deterministic and seemingly random [50]. Such a characteristic of the dynamic system is recognized as a generator of random behaviors [51]. It is crucial to understand that chaos, although it follows specific patterns and is based on chaotic variables, is not synonymous with absolute randomness [52]. The implementation of chaotic mappings is valued for its ability to minimize computational costs and because it requires only a limited set of initial parameters [16].
Chaotic behavior demonstrates high sensitivity to variations in initial conditions, meaning that any modification of these conditions will influence the resulting sequence [53]. There are numerous chaotic maps referenced in the scientific literature, of which ten are of special relevance [50,52,54]. Equations (1)–(7) show seven of these chaotic maps, and Figure 1 details the behavior of each of them.
Algorithm 3 Whale Optimization Algorithm
  Input: The population X = {X_1, X_2, ..., X_i}
  Output: The updated population X = {X_1, X_2, ..., X_i} and X_best
  1: Initialize random population X
  2: Evaluate the objective function of each individual in the population X
  3: Identify the best individual in the population (X_best)
  4: b = 1
  5: for iteration (t) do
  6:     a = 2 - ((2 · t) / maxIter)
  7:     for solution (i) do
  8:         p = rand(0, 1)
  9:         A = 2 · a · rand(0, 1) - a
 10:         C = 2 · rand(0, 1)
 11:         l = rand(-1, 1)
 12:         if p < 0.5 then
 13:             if |A| < 1 then
 14:                 for dimension (j) do
 15:                     D = |(C · X_{best,j}) - X^t_{i,j}|
 16:                     X^t_{i,j} = X_{best,j} - (A · D)
 17:                 end for
 18:             else
 19:                 X_random = random individual from the population
 20:                 for dimension (j) do
 21:                     D = |(C · X_{random,j}) - X^t_{i,j}|
 22:                     X^t_{i,j} = X_{random,j} - (A · D)
 23:                 end for
 24:             end if
 25:         else
 26:             for dimension (j) do
 27:                 D = |X_{best,j} - X^t_{i,j}|
 28:                 X^t_{i,j} = (D · e^(b·l) · cos(2 · π · l)) + X_{best,j}
 29:             end for
 30:         end if
 31:     end for
 32:     Evaluate the objective function of each individual in the population X
 33:     Update X_best
 34: end for
 35: Return the updated population X where X_best is the best result
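The three-branch update of Algorithm 3 can likewise be sketched in Python for a single dimension (an illustration only; for brevity, the random numbers p, A, C, and l are drawn per call here, whereas Algorithm 3 draws them once per solution):

```python
import math
import random

def woa_update_dim(x_ij, best_j, rand_j, a, b=1.0):
    """One-dimension WOA update: shrinking encircling of the best solution,
    exploration around a random whale, or the bubble-net spiral, selected
    by the random numbers p and A as in Algorithm 3."""
    p = random.random()
    A = 2 * a * random.random() - a
    C = 2 * random.random()
    l = random.uniform(-1, 1)
    if p < 0.5:
        ref_j = best_j if abs(A) < 1 else rand_j  # exploit best or explore
        D = abs(C * ref_j - x_ij)
        return ref_j - A * D
    D = abs(best_j - x_ij)                         # bubble-net spiral branch
    return D * math.exp(b * l) * math.cos(2 * math.pi * l) + best_j
```

The `rand_j` argument stands for the corresponding dimension of a randomly selected individual, which the caller would pick from the population.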
Logistic Map:
x_{k+1} = c · x_k · (1 - x_k), c = 4    (1)
Piecewise Map:
x_{k+1} = x_k / l, for 0 ≤ x_k < l
x_{k+1} = (x_k - l) / (0.5 - l), for l ≤ x_k < 0.5
x_{k+1} = (1 - l - x_k) / (0.5 - l), for 0.5 ≤ x_k < 1 - l
x_{k+1} = (1 - x_k) / l, for 1 - l ≤ x_k < 1, with l = 0.4    (2)
Sine Map:
x_{k+1} = (c / 4) · sin(π · x_k), c = 4    (3)
Singer Map:
x_{k+1} = μ · (7.86 · x_k - 23.31 · x_k^2 + 28.75 · x_k^3 - 13.302875 · x_k^4), μ = 1.07    (4)
Sinusoidal Map:
x_{k+1} = c · x_k^2 · sin(π · x_k), c = 2.3    (5)
Tent Map:
x_{k+1} = x_k / 0.7, for x_k < 0.7
x_{k+1} = (10 / 3) · (1 - x_k), for x_k ≥ 0.7    (6)
Circle Map:
x_{k+1} = mod(x_k + d - (c / (2π)) · sin(2π · x_k), 1), c = 0.5 and d = 0.2    (7)
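As an illustration, two of these maps (the logistic map of Equation (1) and the tent map of Equation (6)) can be implemented and iterated in a few lines of Python (a sketch; the seed x0 and the sequence length are arbitrary choices):

```python
def logistic_map(x, c=4.0):
    """Logistic map, Equation (1): x_{k+1} = c * x_k * (1 - x_k)."""
    return c * x * (1.0 - x)

def tent_map(x):
    """Tent map, Equation (6)."""
    return x / 0.7 if x < 0.7 else (10.0 / 3.0) * (1.0 - x)

def chaotic_sequence(map_fn, x0, n):
    """Iterate a chaotic map n times from seed x0 and collect the values."""
    values, x = [], x0
    for _ in range(n):
        x = map_fn(x)
        values.append(x)
    return values
```

Generating the full chaotic stream for one run of a metaheuristic then amounts to calling `chaotic_sequence(map_fn, x0, Maxiter * pop * dim)`.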

2.3. Chaotic Maps in Metaheuristics

Hybridization between metaheuristics and chaotic maps can be classified into four categories, which are summarized in Figure 2.
  • Initialization: The implementation of chaotic maps can be effective in creating initial solutions or populations in metaheuristic techniques, thereby replacing the random generation of these solutions. The nature of chaotic dynamics facilitates the distribution of initial solutions in different areas of the search space, thereby enhancing the exploration phase [13,17,18,55,56,57,58,59].
  • Mutation: Chaotic maps can be employed to perturb or mutate solutions. By using the chaotic behavior as a source of randomness, the metaheuristic algorithm can introduce diverse and unpredictable variations in the solutions, aiding in exploration [15,60,61].
  • Local Search: Chaotic maps have the potential to effectively steer the local search process within metaheuristic algorithms. By integrating chaotic dynamics into the metaheuristics, the algorithm gains the ability to break free from local optima and delve into various segments of the solution space [14,50,62,63,64,65,66].
  • Parameter Adaptation: Chaos maps can be employed to dynamically adapt the parameters of a metaheuristic. The inherent chaotic behavior aids in the real-time adjustment of metaheuristic-specific parameters such as mutation rates and crossover probabilities in a genetic algorithm, thereby enhancing the algorithm’s adaptability throughout the optimization process [12,19,20,67,68,69,70,71,72,73].

3. Continuous Metaheuristics for Solving Combinatorial Problems

The No Free Lunch (NFL) theorem [47,48,49] indicates that there is no optimization algorithm capable of solving all existing optimization problems effectively. This is the primary motivation behind binarizing continuous metaheuristics, as evident in the literature where authors have presented binary versions for the Bat Algorithm [74,75], Particle Swarm Optimization [76], Sine Cosine Algorithm [10,11,77,78], Salp Swarm Algorithm [79,80], Grey Wolf Optimizer [11,81,82], Dragonfly Algorithm [83,84], Whale Optimization Algorithm [11,77,85], and Magnetic Optimization Algorithm [86].
The binarization process aims to transfer continuous solutions from a metaheuristic to the binary domain. In the literature [22], various binarization methods are found, with the two-step technique being a notable one. Researchers use this technique because of its quick implementation and integration into metaheuristics [87,88].

3.1. Two-Step Technique

The two-step technique, as its name suggests, performs the binarization process in two stages. In the first stage, a transfer function is applied, which maps continuous solutions to the real domain [ 0 , 1 ] . Then, in the second stage, a binarization rule is applied, discretizing the transferred value, thereby completing the binarization process. Figure 3 provides an overview of the two-step technique.

3.1.1. Transfer Function

In 1997, Kennedy et al. [89] introduced transfer functions in the field of optimization. New transfer functions have been introduced over the years [22], and we can observe that there are different types of transfer functions, among which S-Shaped transfer functions [76,90] and V-Shaped transfer functions [91] stand out.
Table 1 and Figure 4 show the S-Shaped transfer functions and V-Shaped transfer functions found in the literature. The notation d_ij observed in Table 1 corresponds to the continuous value of the j-th dimension of the i-th individual resulting after the perturbation performed by the continuous metaheuristic.
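As a concrete illustration, one common S-Shaped and one common V-Shaped transfer function from this literature can be written in Python as follows (a sketch; the exact set of functions listed in Table 1 may differ from these two):

```python
import math

def s_shaped(d):
    """A common S-Shaped transfer function: the sigmoid 1 / (1 + e^(-d)),
    mapping any real d into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-d))

def v_shaped(d):
    """A common V-Shaped transfer function: |erf((sqrt(pi)/2) * d)|,
    which grows toward 1 as |d| grows and is 0 at d = 0."""
    return abs(math.erf((math.sqrt(math.pi) / 2.0) * d))
```

The key difference is visible at d = 0: the S-Shaped function returns 0.5 (maximum uncertainty), while the V-Shaped function returns 0 (no change pressure).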

3.1.2. Binarization Rule

The binarization process converts continuous values into binary values, that is, 0 or 1. In this context, a binarization rule is applied to the probability obtained from the transfer function to produce a binary value. Several rules described in the scientific literature [92] can be used for this purpose. The choice of binarization rule is crucial, since the most appropriate one varies with the context and the specific needs of the problem. Table 2 shows the five binarization rules found in the literature [87].
The notation X_ij observed in Table 2 corresponds to the binary value of the j-th dimension of the i-th current individual, and X_Best_j, also observed in Table 2, corresponds to the binary value of the j-th dimension of the best solution. Algorithm 4 shows the general scheme of a continuous metaheuristic being binarized. The Δ symbol observed there refers to the perturbation of solutions, which each metaheuristic implements in its own way depending on its inspiration.
Algorithm 4 General scheme of continuous MHs for solving combinatorial problems
  Input: The population X = {X_1, X_2, ..., X_pop}
  Output: The updated population X = {X_1, X_2, ..., X_pop} and X_best
  1: Initialize random binary population X
  2: for t = 1 to Maxiter do
  3:     for i = 1 to pop do
  4:         for j = 1 to dim do
  5:             X^{t+1}_{i,j} = X^t_{i,j} + Δ
  6:         end for
  7:     end for
  8:     for i = 1 to pop do               ▹ Binarization Process
  9:         for j = 1 to dim do
 10:             Get T(d_ij) by applying transfer function
 11:             Get X_new_j by applying binarization rule
 12:         end for
 13:     end for
 14:     Evaluate each solution on the objective function
 15:     Update X_best
 16: end for
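The three binarization rules used later in this work (standard, complement, and elitist) can be sketched in Python as follows (an illustration based on their usual formulations in the literature; Table 2 gives the authoritative definitions):

```python
import random

def standard_rule(prob):
    """Standard rule: set the bit to 1 with probability T(d), else 0."""
    return 1 if random.random() < prob else 0

def complement_rule(prob, x_ij):
    """Complement rule: flip the current bit x_ij with probability T(d),
    otherwise keep it."""
    return 1 - x_ij if random.random() < prob else x_ij

def elitist_rule(prob, x_best_j):
    """Elitist rule: copy the best solution's bit with probability T(d),
    else 0."""
    return x_best_j if random.random() < prob else 0
```

Each rule consumes one uniform random number per decision; it is precisely this number that the proposal of Section 4 replaces with a chaotic value.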

4. Proposal: Chaotic Binarization Schemes

Authors who have incorporated chaotic behavior into their metaheuristics report that it improves the balance between exploration and exploitation and, with it, the quality of the results. In addition, Senkerik [93] presents a study on chaos dynamics in metaheuristics and notes that the choice of chaotic map depends closely on the problem to be solved.
As observed in Section 2.3, chaotic maps have been applied to replace the random numbers used in metaheuristics. In this context, we propose using chaotic behavior to carry out the binarization process.
Specifically, we propose replacing the random numbers used in the standard binarization rule, complement binarization rule, and elitist binarization rule with the chaotic numbers generated by the chaotic maps.
As shown in Section 2.2, there are different chaotic maps; some of them can encourage exploration, and others can encourage exploitation. Thus, each original binarization rule will be compared with seven new chaotic variants for each binarization rule; these are detailed in Figure 5.
In other words, our proposal consists of replacing the uniform distribution over [0, 1] of the random number used in the standard, complement, and elitist binarization rules with the chaotic distribution of the seven chaotic maps defined in the present work.
The dimensionality of the chaotic maps is related to the number of iterations (Maxiter), the population size (pop), and the number of decision variables of the optimization problem (dim). Thus, the dimensionality of the chaotic maps in the present proposal is Maxiter · pop · dim. Suppose we have an optimization problem with 100 decision variables and we use a population of 10 individuals and 500 iterations. In this case, the generated chaotic map will contain 100 · 10 · 500 = 500,000 values. Algorithm 5 presents a summary of the proposal.
Algorithm 5 Chaotic binarization schemes
  Input: The population X = {X_1, X_2, ..., X_pop}
  Output: The updated population X = {X_1, X_2, ..., X_pop} and X_best
  1: Initialize random binary population X
  2: Initialize the chaotic maps using Maxiter, pop and dim
  3: for t = 1 to Maxiter do
  4:     for i = 1 to pop do
  5:         for j = 1 to dim do
  6:             X^{t+1}_{i,j} = X^t_{i,j} + Δ
  7:         end for
  8:     end for
  9:     for i = 1 to pop do               ▹ Binarization Process
 10:         for j = 1 to dim do
 11:             Get T(d_ij) by applying transfer function
 12:             Get chaotic map value
 13:             Get X_new_j by applying binarization rule using chaotic number
 14:         end for
 15:     end for
 16:     Evaluate each solution on the objective function
 17:     Update X_best
 18: end for
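The essential change introduced by Algorithm 5, feeding the binarization rule a chaotic value instead of a uniform random number, can be sketched for the STD_TENT combination as follows (an illustrative sketch, not the authors' implementation):

```python
def tent_map(x):
    """Tent map (the chaotic source used in the STD_TENT scheme)."""
    return x / 0.7 if x < 0.7 else (10.0 / 3.0) * (1.0 - x)

def chaotic_standard_rule(prob, state):
    """Standard binarization rule driven by a tent-map value instead of
    rand(0, 1). Returns the bit and the advanced chaotic state, so the
    caller can thread one sequence through all Maxiter*pop*dim decisions."""
    state = tent_map(state)
    bit = 1 if state < prob else 0
    return bit, state
```

Swapping `tent_map` for any of the other six maps of Equations (1)–(7), or the rule body for the complement or elitist variant, yields the remaining 20 chaotic schemes of Figure 5.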

5. Experimental Results

To validate our proposal, we used the Grey Wolf Optimizer, Sine Cosine Algorithm, and Whale Optimization Algorithm. Each of these continuous metaheuristics was used to solve a set of benchmark instances of the 0-1 Knapsack Problem that are widely used in the literature. The binarization process of each of these metaheuristics is shown in Figure 5, yielding 24 different versions of each metaheuristic.

5.1. 0-1 Knapsack Problem

The Knapsack Problem is a classic NP-hard combinatorial optimization problem. Mathematically, it is modeled as follows: given N objects, where the j-th object has weight w_j and profit p_j, and a knapsack that can hold a limited weight capacity C, the problem consists of finding the subset of objects that maximizes the total profit while the sum of their weights does not exceed the capacity of the knapsack [94,95,96]. The objective function is as follows:
max f(x) = Σ_{j=1}^{N} p_j · x_j    (8)
This is subject to the following restrictions:
Σ_{j=1}^{N} w_j · x_j ≤ C,    x_j ∈ {0, 1}, ∀ j ∈ J    (9)
where x_j represents the binary decision variable, indicating whether an element is included in the knapsack (value 1) or not (value 0).
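A feasibility-aware evaluation of a candidate solution follows directly from the objective function and the weight constraint above. A minimal Python sketch (infeasible solutions return None here; the paper does not specify how infeasibility is handled, e.g., by penalty or repair):

```python
def knapsack_value(x, profits, weights, capacity):
    """Evaluate a binary 0-1 Knapsack solution: return the total profit of
    the selected items, or None if their total weight exceeds capacity."""
    total_weight = sum(w for w, bit in zip(weights, x) if bit)
    if total_weight > capacity:
        return None  # infeasible selection
    return sum(p for p, bit in zip(profits, x) if bit)
```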
According to the authors in [97], this problem has different practical applications in the real world, such as capital budgeting allocation problems [98], resource allocation problems [99], stock-cutting problems [100], and investment decision making [101].
We use the instances proposed by Pisinger in [102,103], where he presents three sets of benchmark instances that differ in the degree of correlation between the profits and weights of the items. Table 3 shows the details of the benchmark instances of the Knapsack Problem used in this work, where the first column presents the name of the instance, the second column the number of items to select, and the third column the global optimum of the instance.

5.2. Parameter Setting

Regarding the setup of the experiments, each variation of the metaheuristics was run 31 times independently, with a population size of 20 individuals and 500 iterations. The details of each configuration are given in Table 4.
Thus, 3 × 31 × 15 × 24 = 33,480 experiments were carried out. Regarding the software and hardware used in the experimentation, 64-bit Python 3.10.5 was used as the programming language, and all the experiments were executed on a machine with Windows 10, an Intel Core i9-10900k 3.70 GHz processor, and 64 GB of RAM.

5.3. Summary of Results

Table 5 shows the performance of each experiment with the three metaheuristics on each solved instance. The first column of the table indicates the experiment, while the second and third columns pertain to the solved instance. In the second column, the symbol "✓" indicates that the experiment under analysis reached the known global optimum, while the symbol "×" indicates that it did not. The third column reports, in bold and underlined, the metaheuristic(s) that reached the global optimum; when no metaheuristic reached the optimum, the metaheuristic(s) whose value came closest to it are reported without bold or underlining. This last case applies to instances knapPI_2_100_1000_1, knapPI_2_1000_1000_1, knapPI_1_2000_1000_1, knapPI_2_2000_1000_1, and knapPI_3_2000_1000_1.
By analyzing Table 5 we can observe that the experiments incorporating the standard binarization rule and the complement binarization rule have the best results. In particular, we can highlight the family of experiments incorporating the standard binarization rule since they reach the optimum in instances knapPI_1_500_1000_1, knapPI_2_500_1000_1, knapPI_3_500_1000_1, knapPI_1_1000_1000_1, and knapPI_3_1000_1000_1.
Table 6, Table 7 and Table 8 show the results obtained using GWO, WOA, and SCA, respectively. In these tables, the first column indicates the experiment as defined in Figure 5, and the second, third, and fourth columns are repeated for each solved instance: they indicate, respectively, the best result obtained, the average over the 31 runs performed, and the Relative Percentage Distance (RPD), calculated with Equation (10). The experiments are grouped by base binarization rule, and the best result obtained per family is highlighted in bold and underlined.
RPD = 100 · (Opt − Best) / Opt,
where Opt corresponds to the known optimum of the instance and Best corresponds to the best value obtained by the experiment.
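As a concrete illustration, Equation (10) can be computed as follows (a minimal sketch; the function name and the example values are ours):

```python
def rpd(opt: float, best: float) -> float:
    """Relative Percentage Distance (Equation (10)): 0 when the optimum is reached."""
    return 100 * (opt - best) / opt

print(rpd(1000, 1000))  # 0.0: the experiment reached the optimum
print(rpd(1000, 950))   # 5.0: the best value lies 5% below the optimum
```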
When analyzing the results in Table 6, Table 7 and Table 8, we can observe that the best experiment is STD_TENT, which reached the optimum with all three metaheuristics in 8 of the 15 instances solved; with WOA, it reached the optimum in 2 further instances and obtained the best result in 1 more.
This confirms what other authors have previously pointed out: incorporating chaotic maps improves the performance of metaheuristics.
On the other hand, when we look at the largest instances (i.e., knapPI_1_1000_1000_1, knapPI_2_1000_1000_1, knapPI_3_1000_1000_1, knapPI_1_2000_1000_1, knapPI_2_2000_1000_1, and knapPI_3_2000_1000_1), we can observe that WOA with the standard binarization rule achieves the best results, reaching the optimum in two of them. This indicates that the perturbation operators WOA uses to move solutions through the search space are more efficient than those of SCA and GWO.

5.4. Convergence Analysis

In this section, the convergence speed of the 24 experiments associated with the three metaheuristics is analyzed on the knapPI_1_1000_1000_1 instance. Since all the algorithms behave similarly across instances, this instance was chosen at random; the behavior on the other instances can be consulted in the GitHub repository associated with this paper.
Figure 6a, Figure 7a, and Figure 8a show the behavior of the experiments that include the standard binarization rule in GWO, WOA, and SCA, respectively. In all three metaheuristics, we can observe that the STD_SINGER and STD_SINU experiments exhibit premature stagnation, unlike the others, which show decent convergence.
On the other hand, Figure 6b, Figure 7b, and Figure 8b show the behavior of the experiments that include the complement binarization rule in GWO, WOA, and SCA, respectively. An unusual behavior is observed: several experiments fail to improve on the best value found among the initial solutions. Furthermore, this behavior is not uniform across the three metaheuristics: for GWO, the experiments COM_PIECE and COM_CIRCLE converge; for WOA, COM, COM_CIRCLE, and COM_TENT; and for SCA, COM_TENT and COM_CIRCLE.
Finally, Figure 6c, Figure 7c, and Figure 8c show the behavior of the experiments that include the elitist binarization rule in GWO, WOA, and SCA, respectively. Here the effect is even more pronounced: for all three metaheuristics, no experiment improves upon the initial solutions. This is striking and suggests that, with the elitist binarization rule, the solutions become lost in the search space.
Here again, two important observations emerge: first, the binarization rule plays a significant role, and second, chaotic maps do have an impact on convergence.
On the other hand, observing the experiments based on the standard and complement binarization rules within WOA, we can see fast convergence together with good results, unlike SCA and GWO, which converge more slowly. This confirms what we noted in Section 5.3: WOA's perturbation operators are more efficient at exploring and exploiting the search space.

5.5. Statistical Test

In the literature [9,104,105,106], authors apply a statistical test to compare experimental results and determine whether there is any significant difference between experiments. For this type of experimentation, a non-parametric statistical test must be applied; accordingly, we applied the Wilcoxon–Mann–Whitney test [107,108].
We apply this statistical test with the SciPy Python library, using the function scipy.stats.mannwhitneyu with its "alternative" parameter set to "greater". We evaluate and contrast two distinct experiments, as defined in Figure 5. Thus, we can state the following hypotheses:
H0: Experiment A ≤ Experiment B
H1: Experiment A > Experiment B
If the statistical test yields a p-value < 0.05, we reject H0 and conclude that Experiment A performs better than Experiment B. This direction of comparison is used because ours is a maximization problem.
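For reference, the one-sided test can be reproduced without SciPy. The sketch below computes the U statistic with average ranks for ties and a normal approximation of the p-value (no tie or continuity correction), which is what scipy.stats.mannwhitneyu(a, b, alternative="greater") computes up to those corrections; in the paper's setup, a and b would hold the 31 best values of Experiment A and Experiment B on one instance:

```python
import math

def mann_whitney_u_greater(a, b):
    """One-sided Mann-Whitney U test, H1: values in a tend to be greater than in b.
    Normal approximation of the p-value; no tie/continuity correction."""
    pooled = sorted((v, idx) for idx, v in enumerate(list(a) + list(b)))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1  # extend the block of tied values
        avg_rank = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(a), len(b)
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2  # U statistic of sample a
    mean = n1 * n2 / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mean) / sd
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z >= z)
    return u1, p_value

# Experiment A clearly dominates Experiment B: H0 is rejected (p < 0.05)
u, p = mann_whitney_u_greater([10, 11, 12, 13, 14], [1, 2, 3, 4, 5])
print(u, p < 0.05)  # 25.0 True
```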
Table 9 shows a summary of the statistical comparisons made. The first column indicates the 24 experiments; the second, third, and fourth columns indicate how many times each experiment was better than another when using GWO, WOA, and SCA, respectively; and the fifth column counts how many times one experiment was better than another across the three metaheuristics. In each metaheuristic, every experiment is compared against the other 23.
By analyzing Table 9, we can see that the experiments that include the elitist binarization rule are the best in the three metaheuristics. Looking more carefully, ELIT_CIRCLE is statistically worse than the rest of the experiments that include the elitist binarization rule, except under SCA, where only ELIT and ELIT_LOG are statistically better than ELIT_CIRCLE.
After the experiments of the elitist binarization rule family come COM_SINU, composed of the complement binarization rule and the chaotic sinusoidal map, and STD_SINU, composed of the standard binarization rule and the chaotic sinusoidal map. This is interesting, since it shows that incorporating the chaotic sinusoidal map contributed to better results: although these experiments do not reach the optimum in every instance, they are statistically better than the other experiments that include the complement and standard binarization rules.
Another interesting point is that the family of experiments based on the complement binarization rule obtains statistically better results than the family based on the standard binarization rule with all three metaheuristics. Finally, the worst experiments are those that include the standard binarization rule, except STD_SINU, since they fail to statistically beat any other experiment.
Given the experimental results and the statistical tests applied, we can state that the binarization rule has a high impact on the binarization process of continuous metaheuristics, as indicated by the authors in [104]. In addition, chaotic maps also affect the behavior of the metaheuristics, as observed in the experimental results, convergence plots, and statistical tests.
Table 10 shows the results of comparing the 24 experiments applied in GWO, Table 11 the results for WOA, and Table 12 the results for SCA. These tables are structured as follows: the first column presents the technique used (Experiment A), and the following columns present the average p-values over the seven instances compared against the version indicated in the column title (Experiment B). Values highlighted in bold and underlined mark cases where the statistical test gives a value below 0.05, i.e., where the null hypothesis (H0) is rejected. Additionally, when an experiment is compared with itself, the cell is marked with an "X" symbol.

6. Conclusions

Binary combinatorial problems, such as the Set Covering Problem [9,11,77,104,105], the Knapsack Problem [109,110], or the Cell Formation Problem [106], are increasingly common in industry. Given the demand for good results in reasonable times, metaheuristics have gained ground as resolution techniques.
In the literature [21], we can find many metaheuristics, most of which are designed to solve continuous optimization problems. In view of this, a binarization process must be applied before they can solve binary combinatorial problems.
Among the best-known binarization processes [22] is the two-step technique, which uses a transfer function and a binarization rule [87]. The binarization rules include the standard, complement, and elitist rules. These three share one factor: they all draw a random number within the rule.
Our proposal changes the behavior of the random number in the three binarization rules mentioned above by replacing it with chaotic maps. In particular, we use seven different chaotic maps within each of the three binarization rules, creating eight experiments per rule: seven using the chaotic maps and one being the original version that uses a random number with a uniform distribution. The experiments were conducted on benchmark instances and three metaheuristics widely studied in the literature.
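To make the proposal concrete, the sketch below shows a standard binarization rule in which the uniform draw is replaced by a tent-map iterate. The function names, the sigmoid transfer function, the tent parameter μ = 0.7, and all numeric values are our illustrative choices, not necessarily those used in the paper:

```python
import math

def tent_map(x, mu=0.7):
    # one iteration of the tent chaotic map on (0, 1)
    return x / mu if x < mu else (1 - x) / (1 - mu)

def transfer(d):
    # S-shaped transfer function: continuous dimension value -> probability in (0, 1)
    return 1 / (1 + math.exp(-d))

def standard_rule(prob, draw):
    # standard binarization rule: set the bit when the draw falls below the probability
    return 1 if draw < prob else 0

# original rule: draw = random.random(); chaotic variant: thread a tent-map state instead
chaotic_state = 0.42      # chaotic variable, seeded once per run
continuous_value = 1.3    # one dimension of a continuous solution
chaotic_state = tent_map(chaotic_state)  # 0.42 / 0.7 = 0.6
bit = standard_rule(transfer(continuous_value), chaotic_state)
print(bit)  # 1, since 0.6 < sigmoid(1.3) ≈ 0.786
```

The chaotic state is carried across iterations, so the sequence of "draws" is deterministic but chaotic rather than uniformly random.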
The present work showed that incorporating chaotic maps has a great impact on the behavior of the three metaheuristics considered. This is interesting since it confirms what has been reported in the literature: chaotic maps affect exploration and exploitation and, consequently, lead to better results.
Of all the experiments carried out, we can highlight those based on the standard and complement binarization rules, since across the three metaheuristics they are statistically better than the experiments based on the elitist binarization rule.
Given this, we propose a strategy to select the best binarization rule and chaotic map. First, experiment on some instances of the problem with the three unmodified binarization rules (i.e., the standard, complement, and elitist rules) to determine which rule is most suitable. Once the rule is chosen, experiment with the incorporation of chaotic maps to determine which one has the greatest impact during the optimization process. This strategy can be applied independently of the binary combinatorial optimization problem being solved.
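The two-stage strategy can be sketched as follows. Here evaluate is a hypothetical user-supplied callback returning, e.g., the average best fitness of a scheme over a few training instances (higher is better, matching a maximization problem); all names and toy scores below are ours:

```python
def select_scheme(instances, rules, chaotic_maps, evaluate):
    """Two-stage selection: pick the base binarization rule first, then the chaotic map."""
    # stage 1: compare the unmodified rules (uniform draw, i.e., no chaotic map)
    best_rule = max(rules, key=lambda rule: evaluate(rule, None, instances))
    # stage 2: with the winning rule fixed, compare the chaotic maps
    best_map = max(chaotic_maps, key=lambda cmap: evaluate(best_rule, cmap, instances))
    return best_rule, best_map

# toy scores standing in for "average best fitness over the training instances"
scores = {("STD", None): 3, ("COM", None): 2, ("ELIT", None): 1,
          ("STD", "TENT"): 5, ("STD", "LOG"): 4}
rule, cmap = select_scheme(["knapPI_1_100_1000_1"],
                           ["STD", "COM", "ELIT"], ["TENT", "LOG"],
                           lambda r, m, _: scores.get((r, m), 0))
print(rule, cmap)  # STD TENT
```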
As future work, we propose to use this approach in other binary combinatorial optimization problems, such as the Feature Selection Problem or Set Covering Problem, as well as to incorporate these chaotic maps in other binarization rules, such as elitist roulette.
Furthermore, in the literature, there are proposals that use machine learning techniques from the reinforcement learning family to dynamically select binarization schemes during the optimization process [9,10,11,81,85,111]. This work can be extended to use these new binarization schemes as actions to be decided by machine learning techniques to dynamically balance exploration and exploitation.

Author Contributions

Conceptualization, F.C.-C. and B.C.; methodology, F.C.-C. and B.C.; software, F.C.-C.; validation, B.C., R.S., G.G., Á.P. and A.P.F.; formal analysis, F.C.-C.; investigation, F.C.-C., B.C., R.S., G.G., Á.P. and A.P.F.; resources, F.C.-C.; writing—original draft F.C.-C. and B.C.; writing—review and editing, R.S., G.G., Á.P. and A.P.F.; supervision, B.C. and R.S.; funding acquisition, B.C. and R.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All the results of this work are available at the GitHub repository (https://github.com/FelipeCisternasCaneo/Chaotic-Binarization-Schemes-for-Solving-Combinatorial-Optimization-Problems.git (accessed on 1 January 2024)) and the database with results (https://drive.google.com/drive/folders/1MpSG6qlQ8d8k-qpalebzJFzbJ-DhFObb?usp=sharing (accessed on 1 January 2024)). GitHub has a limit of 100 MB for a file uploaded to the repository. This is why we have left the database shared on Google Drive. To perform validations, you only need to download the file in the shared Google Drive folder, clone the repository, and incorporate the downloaded file in the “BD” folder of the cloned repository.

Acknowledgments

Broderick Crawford and Ricardo Soto are supported by the grant ANID/FONDECYT/REGULAR/1210810. Felipe Cisternas-Caneo is supported by the National Agency for Research and Development (ANID)/Scholarship Program/DOCTORADO NACIONAL/2023-21230203. Felipe Cisternas-Caneo, Broderick Crawford, Ricardo Soto, Álex Paz, and Alvaro Peña Fritz are supported by grant DI Centenario/VINCI/PUCV/039.368/2023.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Talbi, E.G. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  2. Abdel-Basset, M.; Sallam, K.M.; Mohamed, R.; Elgendi, I.; Munasinghe, K.; Elkomy, O.M. An Improved Binary Grey-Wolf Optimizer With Simulated Annealing for Feature Selection. IEEE Access 2021, 9, 139792–139822. [Google Scholar] [CrossRef]
  3. Zhao, M.; Hou, R.; Li, H.; Ren, M. A hybrid grey wolf optimizer using opposition-based learning, sine cosine algorithm and reinforcement learning for reliable scheduling and resource allocation. J. Syst. Softw. 2023, 205, 111801. [Google Scholar] [CrossRef]
  4. Ahmed, K.; Salah Kamel, F.J.; Youssef, A.R. Hybrid Whale Optimization Algorithm and Grey Wolf Optimizer Algorithm for Optimal Coordination of Direction Overcurrent Relays. Electr. Power Components Syst. 2019, 47, 644–658. [Google Scholar] [CrossRef]
  5. Seyyedabbasi, A. WOASCALF: A new hybrid whale optimization algorithm based on sine cosine algorithm and levy flight to solve global optimization problems. Adv. Eng. Softw. 2022, 173, 103272. [Google Scholar] [CrossRef]
  6. Tapia, D.; Crawford, B.; Soto, R.; Cisternas-Caneo, F.; Lemus-Romani, J.; Castillo, M.; García, J.; Palma, W.; Paredes, F.; Misra, S. A Q-Learning Hyperheuristic Binarization Framework to Balance Exploration and Exploitation. In Proceedings of the International Conference on Applied Informatics, Ota, Nigeria, 29–31 October 2020; Florez, H., Misra, S., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 14–28. [Google Scholar] [CrossRef]
  7. De Oliveira, S.G.; Silva, L.M. Evolving reordering algorithms using an ant colony hyperheuristic approach for accelerating the convergence of the ICCG method. Eng. Comput. 2020, 36, 1857–1873. [Google Scholar] [CrossRef]
  8. Gonzaga de Oliveira, S.; Silva, L. An ant colony hyperheuristic approach for matrix bandwidth reduction. Appl. Soft Comput. 2020, 94, 106434. [Google Scholar] [CrossRef]
  9. Becerra-Rozas, M.; Lemus-Romani, J.; Cisternas-Caneo, F.; Crawford, B.; Soto, R.; García, J. Swarm-Inspired Computing to Solve Binary Optimization Problems: A Backward Q-Learning Binarization Scheme Selector. Mathematics 2022, 10, 4776. [Google Scholar] [CrossRef]
  10. Cisternas-Caneo, F.; Crawford, B.; Soto, R.; de la Fuente-Mella, H.; Tapia, D.; Lemus-Romani, J.; Castillo, M.; Becerra-Rozas, M.; Paredes, F.; Misra, S. A Data-Driven Dynamic Discretization Framework to Solve Combinatorial Problems Using Continuous Metaheuristics. In Proceedings of the International Conference on Innovations in Bio-Inspired Computing and Applications, Ibica, Spain, 16–18 December 2021; Abraham, A., Sasaki, H., Rios, R., Gandhi, N., Singh, U., Ma, K., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 76–85. [Google Scholar] [CrossRef]
  11. Lemus-Romani, J.; Becerra-Rozas, M.; Crawford, B.; Soto, R.; Cisternas-Caneo, F.; Vega, E.; Castillo, M.; Tapia, D.; Astorga, G.; Palma, W.; et al. A Novel Learning-Based Binarization Scheme Selector for Swarm Algorithms Solving Combinatorial Problems. Mathematics 2021, 9, 2887. [Google Scholar] [CrossRef]
  12. Ibrahim, A.M.; Tawhid, M.A. Chaotic electromagnetic field optimization. Artif. Intell. Rev. 2022, 56, 9989–10030. [Google Scholar] [CrossRef]
  13. Chou, J.S.; Truong, D.N. Multiobjective forensic-based investigation algorithm for solving structural design problems. Autom. Constr. 2022, 134, 104084. [Google Scholar] [CrossRef]
  14. Gao, S.; Yu, Y.; Wang, Y.; Wang, J.; Cheng, J.; Zhou, M. Chaotic Local Search-Based Differential Evolution Algorithms for Optimization. IEEE Trans. Syst. Man, Cybern. Syst. 2021, 51, 3954–3967. [Google Scholar] [CrossRef]
  15. Agrawal, P.; Ganesh, T.; Mohamed, A.W. Chaotic gaining sharing knowledge-based optimization algorithm: An improved metaheuristic algorithm for feature selection. Soft Comput. 2021, 25, 9505–9528. [Google Scholar] [CrossRef]
  16. Naanaa, A. Fast chaotic optimization algorithm based on spatiotemporal maps for global optimization. Appl. Math. Comput. 2015, 269, 402–411. [Google Scholar] [CrossRef]
  17. Yang, H.; Yu, Y.; Cheng, J.; Lei, Z.; Cai, Z.; Zhang, Z.; Gao, S. An intelligent metaphor-free spatial information sampling algorithm for balancing exploitation and exploration. Knowl.-Based Syst. 2022, 250, 109081. [Google Scholar] [CrossRef]
  18. Khosravi, H.; Amiri, B.; Yazdanjue, N.; Babaiyan, V. An improved group teaching optimization algorithm based on local search and chaotic map for feature selection in high-dimensional data. Expert Syst. Appl. 2022, 204, 117493. [Google Scholar] [CrossRef]
  19. Mohmmadzadeh, H.; Gharehchopogh, F.S. An efficient binary chaotic symbiotic organisms search algorithm approaches for feature selection problems. J. Supercomput. 2021, 77, 9102–9144. [Google Scholar] [CrossRef]
  20. Pichai, S.; Sunat, K.; Chiewchanwattana, S. An asymmetric chaotic competitive swarm optimization algorithm for feature selection in high-dimensional data. Symmetry 2020, 12, 1782. [Google Scholar] [CrossRef]
  21. Rajwar, K.; Deep, K.; Das, S. An exhaustive review of the metaheuristic algorithms for search and optimization: Taxonomy, applications, and open challenges. Artif. Intell. Rev. 2023, 56, 13187–13257. [Google Scholar] [CrossRef]
  22. Becerra-Rozas, M.; Lemus-Romani, J.; Cisternas-Caneo, F.; Crawford, B.; Soto, R.; Astorga, G.; Castro, C.; García, J. Continuous Metaheuristics for Binary Optimization Problems: An Updated Systematic Literature Review. Mathematics 2022, 11, 129. [Google Scholar] [CrossRef]
  23. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  24. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  25. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  26. Banerjee, A.; Nabi, M. Re-entry trajectory optimization for space shuttle using sine-cosine algorithm. In Proceedings of the 2017 8th International Conference on Recent Advances in Space Technologies (RAST), Istanbul, Turkey, 19–22 June 2017; pp. 73–77. [Google Scholar] [CrossRef]
  27. Sindhu, R.; Ngadiran, R.; Yacob, Y.M.; Zahri, N.A.H.; Hariharan, M. Sine–cosine algorithm for feature selection with elitism strategy and new updating mechanism. Neural Comput. Appl. 2017, 28, 2947–2958. [Google Scholar] [CrossRef]
  28. Mahdad, B.; Srairi, K. A new interactive sine cosine algorithm for loading margin stability improvement under contingency. Electr. Eng. 2018, 100, 913–933. [Google Scholar] [CrossRef]
  29. Padmanaban, S.; Priyadarshi, N.; Holm-Nielsen, J.B.; Bhaskar, M.S.; Azam, F.; Sharma, A.K.; Hossain, E. A novel modified sine-cosine optimized MPPT algorithm for grid integrated PV system under real operating conditions. IEEE Access 2019, 7, 10467–10477. [Google Scholar] [CrossRef]
  30. Gonidakis, D.; Vlachos, A. A new sine cosine algorithm for economic and emission dispatch problems with price penalty factors. J. Inf. Optim. Sci. 2019, 40, 679–697. [Google Scholar] [CrossRef]
  31. Abd Elfattah, M.; Abuelenin, S.; Hassanien, A.E.; Pan, J.S. Handwritten arabic manuscript image binarization using sine cosine optimization algorithm. In Proceedings of the International Conference on Genetic and Evolutionary Computing, Fuzhou, China, 7–9 November 2016; pp. 273–280. [Google Scholar] [CrossRef]
  32. Emary, E.; Zawbaa, H.M.; Grosan, C.; Hassenian, A.E. Feature subset selection approach by gray-wolf optimization. In Proceedings of the Afro-European Conference for Industrial Advancement, Villejuif, France, 9–11 September 2015; pp. 1–13. [Google Scholar] [CrossRef]
  33. Kumar, V.; Chhabra, J.K.; Kumar, D. Grey wolf algorithm-based clustering technique. J. Intell. Syst. 2017, 26, 153–168. [Google Scholar] [CrossRef]
  34. Eswaramoorthy, S.; Sivakumaran, N.; Sekaran, S. Grey wolf optimization based parameter selection for support vector machines. COMPEL Int. J. Comput. Math. Electr. Electron. Eng. 2016, 35, 1513–1523. [Google Scholar] [CrossRef]
  35. Li, S.X.; Wang, J.S. Dynamic modeling of steam condenser and design of PI controller based on grey wolf optimizer. Math. Probl. Eng. 2015, 2015, 120975. [Google Scholar] [CrossRef]
  36. Wong, L.I.; Sulaiman, M.; Mohamed, M.; Hong, M.S. Grey Wolf Optimizer for solving economic dispatch problems. In Proceedings of the 2014 IEEE International Conference on Power and Energy (PECon), Kuching Sarawak, Malaysia, 1–3 December 2014; pp. 150–154. [Google Scholar] [CrossRef]
  37. Tsai, P.W.; Nguyen, T.T.; Dao, T.K. Robot path planning optimization based on multiobjective grey wolf optimizer. In Proceedings of the International Conference on Genetic and Evolutionary Computing, Fuzhou, China, 7–9 November 2016; pp. 166–173. [Google Scholar] [CrossRef]
  38. Lu, C.; Gao, L.; Li, X.; Xiao, S. A hybrid multi-objective grey wolf optimizer for dynamic scheduling in a real-world welding industry. Eng. Appl. Artif. Intell. 2017, 57, 61–79. [Google Scholar] [CrossRef]
  39. Mosavi, M.R.; Khishe, M.; Ghamgosar, A. Classification of sonar data set using neural network trained by gray wolf optimization. Neural Netw. World 2016, 26, 393. [Google Scholar] [CrossRef]
  40. Bentouati, B.; Chaib, L.; Chettih, S. A hybrid whale algorithm and pattern search technique for optimal power flow problem. In Proceedings of the 2016 8th International Conference on Modelling, Identification and Control (ICMIC), Algiers, Algeria, 15–17 November 2016; pp. 1048–1053. [Google Scholar] [CrossRef]
  41. Touma, H.J. Study of the economic dispatch problem on IEEE 30-bus system using whale optimization algorithm. Int. J. Eng. Technol. Sci. 2016, 3, 11–18. [Google Scholar] [CrossRef]
  42. Yin, X.; Cheng, L.; Wang, X.; Lu, J.; Qin, H. Optimization for hydro-photovoltaic-wind power generation system based on modified version of multi-objective whale optimization algorithm. Energy Procedia 2019, 158, 6208–6216. [Google Scholar] [CrossRef]
  43. Abd El Aziz, M.; Ewees, A.A.; Hassanien, A.E. Whale optimization algorithm and moth-flame optimization for multilevel thresholding image segmentation. Expert Syst. Appl. 2017, 83, 242–256. [Google Scholar] [CrossRef]
  44. Mafarja, M.M.; Mirjalili, S. Hybrid whale optimization algorithm with simulated annealing for feature selection. Neurocomputing 2017, 260, 302–312. [Google Scholar] [CrossRef]
  45. Tharwat, A.; Moemen, Y.S.; Hassanien, A.E. Classification of toxicity effects of biotransformed hepatic drugs using whale optimized support vector machines. J. Biomed. Inform. 2017, 68, 132–149. [Google Scholar] [CrossRef]
  46. Zhao, H.; Guo, S.; Zhao, H. Energy-related CO2 emissions forecasting using an improved LSSVM model optimized by whale optimization algorithm. Energies 2017, 10, 874. [Google Scholar] [CrossRef]
  47. Igel, C. No Free Lunch Theorems: Limitations and Perspectives of Metaheuristics. In Theory and Principled Methods for the Design of Metaheuristics; Borenstein, Y., Moraglio, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 1–23. [Google Scholar] [CrossRef]
  48. Wolpert, D.; Macready, W. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  49. Ho, Y.C.; Pepyne, D.L. Simple explanation of the no-free-lunch theorem and its implications. J. Optim. Theory Appl. 2002, 115, 549–570. [Google Scholar] [CrossRef]
  50. Li, X.D.; Wang, J.S.; Hao, W.K.; Zhang, M.; Wang, M. Chaotic arithmetic optimization algorithm. Appl. Intell. 2022, 52, 16718–16757. [Google Scholar] [CrossRef]
  51. Gandomi, A.; Yang, X.S.; Talatahari, S.; Alavi, A. Firefly algorithm with chaos. Commun. Nonlinear Sci. Numer. Simul. 2013, 18, 89–98. [Google Scholar] [CrossRef]
  52. Arora, S.; Singh, S. An improved butterfly optimization algorithm with chaos. J. Intell. Fuzzy Syst. 2017, 32, 1079–1088. [Google Scholar] [CrossRef]
  53. Lu, H.; Wang, X.; Fei, Z.; Qiu, M. The effects of using chaotic map on improving the performance of multiobjective evolutionary algorithms. Math. Probl. Eng. 2014, 2014, 924652. [Google Scholar] [CrossRef]
  54. Khennaoui, A.A.; Ouannas, A.; Boulaaras, S.; Pham, V.T.; Taher Azar, A. A fractional map with hidden attractors: Chaos and control. Eur. Phys. J. Spec. Top. 2020, 229, 1083–1093. [Google Scholar] [CrossRef]
  55. Verma, M.; Sreejeth, M.; Singh, M.; Babu, T.S.; Alhelou, H.H. Chaotic Mapping Based Advanced Aquila Optimizer With Single Stage Evolutionary Algorithm. IEEE Access 2022, 10, 89153–89169. [Google Scholar] [CrossRef]
  56. Wang, Y.; Liu, H.; Ding, G.; Tu, L. Adaptive chimp optimization algorithm with chaotic map for global numerical optimization problems. J. Supercomput. 2023, 79, 6507–6537. [Google Scholar] [CrossRef]
  57. Elgamal, Z.; Sabri, A.Q.M.; Tubishat, M.; Tbaishat, D.; Makhadmeh, S.N.; Alomari, O.A. Improved Reptile Search Optimization Algorithm Using Chaotic Map and Simulated Annealing for Feature Selection in Medical Field. IEEE Access 2022, 10, 51428–51446. [Google Scholar] [CrossRef]
  58. Agrawal, U.; Rohatgi, V.; Katarya, R. Normalized Mutual Information-based equilibrium optimizer with chaotic maps for wrapper-filter feature selection. Expert Syst. Appl. 2022, 207, 118107. [Google Scholar] [CrossRef]
  59. Wang, L.; Gao, Y.; Li, J.; Wang, X. A feature selection method by using chaotic cuckoo search optimization algorithm with elitist preservation and uniform mutation for data classification. Discret. Dyn. Nat. Soc. 2021, 2021, 7796696. [Google Scholar] [CrossRef]
  60. Mohd Yusof, N.; Muda, A.K.; Pratama, S.F.; Carbo-Dorca, R.; Abraham, A. Improving Amphetamine-type Stimulants drug classification using chaotic-based time-varying binary whale optimization algorithm. Chemom. Intell. Lab. Syst. 2022, 229, 104635. [Google Scholar] [CrossRef]
  61. Wang, R.; Hao, K.; Chen, L.; Wang, T.; Jiang, C. A novel hybrid particle swarm optimization using adaptive strategy. Inf. Sci. 2021, 579, 231–250. [Google Scholar] [CrossRef]
  62. Feizi-Derakhsh, M.R.; Kadhim, E.A. An Improved Binary Cuckoo Search Algorithm For Feature Selection Using Filter Method And Chaotic Map. J. Appl. Sci. Eng. 2022, 26, 897–903. [Google Scholar] [CrossRef]
  63. Hussien, A.G.; Amin, M. A self-adaptive Harris Hawks optimization algorithm with opposition-based learning and chaotic local search strategy for global optimization and feature selection. Int. J. Mach. Learn. Cybern. 2022, 13, 309–336. [Google Scholar] [CrossRef]
  64. Hu, J.; Heidari, A.A.; Zhang, L.; Xue, X.; Gui, W.; Chen, H.; Pan, Z. Chaotic diffusion-limited aggregation enhanced grey wolf optimizer: Insights, analysis, binarization, and feature selection. Int. J. Intell. Syst. 2022, 37, 4864–4927. [Google Scholar] [CrossRef]
  65. Zhang, Y.; Zhang, Y.; Zhang, C.; Zhou, C. Multiobjective Harris Hawks Optimization With Associative Learning and Chaotic Local Search for Feature Selection. IEEE Access 2022, 10, 72973–72987. [Google Scholar] [CrossRef]
  66. Zhang, X.; Xu, Y.; Yu, C.; Heidari, A.A.; Li, S.; Chen, H.; Li, C. Gaussian mutational chaotic fruit fly-built optimization and feature selection. Expert Syst. Appl. 2020, 141, 112976. [Google Scholar] [CrossRef]
  67. Jalali, S.M.J.; Ahmadian, M.; Ahmadian, S.; Hedjam, R.; Khosravi, A.; Nahavandi, S. X-ray image based COVID-19 detection using evolutionary deep learning approach. Expert Syst. Appl. 2022, 201, 116942. [Google Scholar] [CrossRef] [PubMed]
  68. Joshi, S.K. Chaos embedded opposition based learning for gravitational search algorithm. Appl. Intell. 2023, 53, 5567–5586. [Google Scholar] [CrossRef]
  69. Too, J.; Abdullah, A.R. Chaotic atom search optimization for feature selection. Arab. J. Sci. Eng. 2020, 45, 6063–6079. [Google Scholar] [CrossRef]
  70. Sayed, G.I.; Tharwat, A.; Hassanien, A.E. Chaotic dragonfly algorithm: An improved metaheuristic algorithm for feature selection. Appl. Intell. 2019, 49, 188–205. [Google Scholar] [CrossRef]
Figure 1. Chaotic maps.
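As an illustration of the dynamics plotted in Figure 1, the following is a minimal sketch of three of the seven chaotic maps used in this work. Python, and the specific parameterizations (logistic control parameter 4, tent-map breakpoint 0.7, circle-map parameters a = 0.5 and b = 0.2), are common choices in the chaotic-metaheuristics literature and are assumptions of this sketch, not details taken from the paper.

```python
import math

def logistic_map(x):
    """Logistic map x' = 4x(1 - x); fully chaotic for control parameter 4."""
    return 4.0 * x * (1.0 - x)

def tent_map(x):
    """Tent map with breakpoint 0.7: x/0.7 below it, (10/3)(1 - x) above it."""
    return x / 0.7 if x < 0.7 else (10.0 / 3.0) * (1.0 - x)

def circle_map(x, a=0.5, b=0.2):
    """Circle map folded back into [0, 1) via mod 1."""
    return (x + b - (a / (2.0 * math.pi)) * math.sin(2.0 * math.pi * x)) % 1.0

def chaotic_sequence(map_fn, x0=0.31, n=500):
    """Iterate a map from seed x0 (chosen to avoid fixed points such as 0)."""
    values, x = [], x0
    for _ in range(n):
        x = map_fn(x)
        values.append(x)
    return values
```

In the chaotic binarization schemes studied here, such a sequence supplies the random-like numbers consumed by the binarization rules in place of a uniform generator.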
Figure 2. Chaotic maps in metaheuristics.
Figure 3. Two-step technique.
Figure 4. S-Shaped and V-Shaped transfer functions.
Figure 5. Chaotic binarization rules.
Figure 6. Convergence graphs of the best execution obtained for the knapPI_1_1000_1000_1 instance using GWO.
Figure 7. Convergence graphs of the best execution obtained for the knapPI_1_1000_1000_1 instance using WOA.
Figure 8. Convergence graphs of the best execution obtained for the knapPI_1_1000_1000_1 instance using SCA.
Table 1. S-Shaped and V-Shaped transfer functions.

Name | S-Shaped Equation | Name | V-Shaped Equation
S1 | $T(d_i^j) = \frac{1}{1 + e^{-2d_i^j}}$ | V1 | $T(d_i^j) = \left|\operatorname{erf}\left(\frac{\sqrt{\pi}}{2} d_i^j\right)\right|$
S2 | $T(d_i^j) = \frac{1}{1 + e^{-d_i^j}}$ | V2 | $T(d_i^j) = \left|\tanh(d_i^j)\right|$
S3 | $T(d_i^j) = \frac{1}{1 + e^{-d_i^j/2}}$ | V3 | $T(d_i^j) = \left|\frac{d_i^j}{\sqrt{1 + (d_i^j)^2}}\right|$
S4 | $T(d_i^j) = \frac{1}{1 + e^{-d_i^j/3}}$ | V4 | $T(d_i^j) = \left|\frac{2}{\pi}\arctan\left(\frac{\pi}{2} d_i^j\right)\right|$
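The eight transfer functions map a continuous component $d_w^j$ produced by the metaheuristic into a probability in [0, 1]. A direct sketch of their definitions (Python is an assumption; the paper prescribes no implementation language):

```python
import math

# S-Shaped transfer functions
def s1(d): return 1.0 / (1.0 + math.exp(-2.0 * d))
def s2(d): return 1.0 / (1.0 + math.exp(-d))        # S2 is the one fixed in the experiments (Table 4)
def s3(d): return 1.0 / (1.0 + math.exp(-d / 2.0))
def s4(d): return 1.0 / (1.0 + math.exp(-d / 3.0))

# V-Shaped transfer functions
def v1(d): return abs(math.erf(math.sqrt(math.pi) / 2.0 * d))
def v2(d): return abs(math.tanh(d))
def v3(d): return abs(d / math.sqrt(1.0 + d * d))
def v4(d): return abs(2.0 / math.pi * math.atan(math.pi / 2.0 * d))
```

S-shaped functions grow monotonically with $d$, while V-shaped functions grow with the magnitude $|d|$, which changes how a binarization rule interprets movement in either direction.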
Table 2. Binarization rules.

Type | Binarization Rule
Standard (STD) | $X_{new}^j = \begin{cases} 1 & \text{if } rand \leq T(d_w^j) \\ 0 & \text{otherwise} \end{cases}$
Complement (COM) | $X_{new}^j = \begin{cases} \operatorname{Complement}(X_w^j) & \text{if } rand \leq T(d_w^j) \\ 0 & \text{otherwise} \end{cases}$
Static Probability (SP) | $X_{new}^j = \begin{cases} 0 & \text{if } T(d_w^j) \leq \alpha \\ X_w^j & \text{if } \alpha < T(d_w^j) \leq \frac{1}{2}(1+\alpha) \\ 1 & \text{if } T(d_w^j) > \frac{1}{2}(1+\alpha) \end{cases}$
Elitist (ELIT) | $X_{new}^j = \begin{cases} X_{Best}^j & \text{if } rand < T(d_w^j) \\ 0 & \text{otherwise} \end{cases}$
Roulette Elitist (ROU_ELIT) | $X_{new}^j = \begin{cases} P\left[X_{new}^j = \zeta^j\right] = \frac{f(\zeta)}{\sum_{\delta \in Q^g} f(\delta)} & \text{if } rand \leq T(d_w^j) \\ P\left[X_{new}^j = 0\right] = 1 & \text{otherwise} \end{cases}$
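The three rules evaluated in the experiments (Standard, Complement, and Elitist) can be sketched as follows. Since the chaotic schemes modify the behavior of the binarization rules, this sketch assumes the natural reading that the chaotic value takes the role of the uniform number $rand$, so the random source is left as an explicit argument; that design choice is an assumption of the sketch.

```python
def standard(t_value, rand):
    """STD: set the bit when the (uniform or chaotic) number falls below T(d)."""
    return 1 if rand <= t_value else 0

def complement_rule(t_value, rand, current_bit):
    """COM: flip the current bit when the number falls below T(d), else 0."""
    return 1 - current_bit if rand <= t_value else 0

def elitist(t_value, rand, best_bit):
    """ELIT: copy the best individual's bit when the number falls below T(d), else 0."""
    return best_bit if rand < t_value else 0
```

For example, `standard(s2(d), next_chaotic_value)` would binarize one dimension of a continuous solution, where `s2` is the transfer function of Table 1 and `next_chaotic_value` is drawn from one of the chaotic maps.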
Table 3. Instances of the Knapsack Problem.

Instance | Number of Items | Optimum
knapPI_1_100_1000_1 | 100 | 9147
knapPI_1_200_1000_1 | 200 | 11,238
knapPI_1_500_1000_1 | 500 | 28,857
knapPI_1_1000_1000_1 | 1000 | 54,503
knapPI_1_2000_1000_1 | 2000 | 110,625
knapPI_2_100_1000_1 | 100 | 1514
knapPI_2_200_1000_1 | 200 | 1634
knapPI_2_500_1000_1 | 500 | 4566
knapPI_2_1000_1000_1 | 1000 | 9052
knapPI_2_2000_1000_1 | 2000 | 18,051
knapPI_3_100_1000_1 | 100 | 2397
knapPI_3_200_1000_1 | 200 | 2697
knapPI_3_500_1000_1 | 500 | 7117
knapPI_3_1000_1000_1 | 1000 | 14,390
knapPI_3_2000_1000_1 | 2000 | 28,919
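The optima in Table 3 are the reference values for the relative percentage deviation (RPD) reported in Tables 6–8. The standard definition, $RPD = 100 \cdot (Opt - Best)/Opt$, is consistent with the values in those tables and can be sketched as:

```python
def rpd(optimum, best):
    """Relative percentage deviation; 0.0 means the known optimum was reached."""
    return 100.0 * (optimum - best) / optimum
```

For instance, knapPI_1_1000_1000_1 has optimum 54,503, so a best objective value of 53,838 corresponds to an RPD of about 1.22, matching the GWO results in Table 6.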
Table 4. Parameter settings.

Parameter | Value
Number of metaheuristics | 3
Independent runs | 31
Transfer function | S2 (see Table 1)
Number of binarization schemes | 24 (see Figure 5)
Number of KP instances | 15 (see Table 3)
Population size | 20
Number of iterations | 500
Parameter a of SCA | 2
Parameter a of GWO | decreases linearly from 2 to 0
Parameter a of WOA | decreases linearly from 2 to 0
Parameter b of WOA | 1
Table 5. Summary of the performance of each experiment in each instance with the three metaheuristics (Op? = whether the known optimum was reached; MH = metaheuristics that obtained the best value found).

Instances: knapPI_1_100_1000_1, knapPI_2_100_1000_1, knapPI_3_100_1000_1, knapPI_1_200_1000_1, knapPI_2_200_1000_1, knapPI_3_200_1000_1
Experiment | Op? MH | Op? MH | Op? MH | Op? MH | Op? MH | Op? MH
STD_TENT | ✓ GWO-WOA-SCA | × | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
STD_CIRCLE | ✓ GWO-WOA-SCA | × | ✓ GWO-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
STD | ✓ GWO-WOA-SCA | × | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
STD_SINE | ✓ GWO-WOA-SCA | × | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
STD_PIECE | ✓ GWO-WOA-SCA | × | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
STD_LOG | ✓ GWO-WOA-SCA | × | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
STD_SINU | ✓ GWO-WOA-SCA | × WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
STD_SINGER | ✓ GWO-WOA-SCA | × | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
COM | ✓ GWO-WOA-SCA | × | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
COM_LOG | ✓ GWO-WOA-SCA | × | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
COM_PIECE | ✓ GWO-WOA-SCA | × | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
COM_SINE | ✓ GWO-WOA-SCA | × | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
COM_SINGER | ✓ GWO-WOA-SCA | × | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
COM_SINU | ✓ GWO-WOA-SCA | × | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
COM_TENT | ✓ GWO-WOA-SCA | × | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
COM_CIRCLE | ✓ GWO-WOA-SCA | × | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
ELIT_SINE | ✓ GWO-WOA-SCA | × | ✓ SCA | ✓ GWO-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA
ELIT_SINGER | ✓ GWO-WOA-SCA | × | ✓ WOA-SCA | ✓ WOA-SCA | ✓ GWO-SCA | ✓ GWO-WOA-SCA
ELIT_PIECE | ✓ GWO-WOA-SCA | × | ✓ GWO | ✓ GWO-WOA-SCA | ✓ WOA-SCA | ✓ GWO-WOA
ELIT_CIRCLE | ✓ GWO-WOA-SCA | × | ✓ GWO-WOA | ✓ GWO-SCA | ✓ GWO-WOA | ✓ GWO-SCA
ELIT_TENT | ✓ GWO-WOA-SCA | × | ✓ SCA | ✓ WOA | ✓ GWO-WOA | ✓ GWO-WOA-SCA
ELIT | ✓ GWO-WOA-SCA | × | × | ✓ WOA-SCA | ✓ GWO-WOA | ✓ WOA-SCA
ELIT_LOG | ✓ GWO-WOA-SCA | × | ✓ GWO | ✓ WOA | ✓ GWO-SCA | ✓ GWO-SCA
ELIT_SINU | ✓ GWO-WOA-SCA | × | ✓ WOA | ✓ GWO | ✓ GWO-WOA | ✓ WOA

Instances: knapPI_1_500_1000_1, knapPI_2_500_1000_1, knapPI_3_500_1000_1, knapPI_1_1000_1000_1, knapPI_2_1000_1000_1, knapPI_3_1000_1000_1
Experiment | Op? MH | Op? MH | Op? MH | Op? MH | Op? MH | Op? MH
STD_TENT | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ WOA | × WOA | ✓ WOA
STD_CIRCLE | ✓ GWO-WOA-SCA | ✓ SCA | ✓ GWO-WOA-SCA | × | × WOA | ✓ WOA
STD | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA | × | × WOA | ✓ WOA
STD_SINE | ✓ GWO-WOA-SCA | ✓ GWO-WOA-SCA | ✓ GWO-WOA | ✓ WOA | × WOA | ×
STD_PIECE | ✓ GWO-WOA | ✓ GWO-WOA-SCA | ✓ GWO-WOA | ✓ WOA | × WOA | ×
STD_LOG | ✓ WOA | ✓ GWO-WOA-SCA | ✓ WOA | × | × | ×
STD_SINU | ✓ WOA | ✓ WOA | ✓ WOA | × | × | ×
STD_SINGER | ✓ WOA | ✓ WOA | ✓ WOA | × | × | ×
COM | × | × | × | × | × | ×
COM_LOG | × | × | × | × | × | ×
COM_PIECE | × | × | × | × | × | ×
COM_SINE | × | × | × | × | × | ×
COM_SINGER | × | × | × | × | × | ×
COM_SINU | × | × | × | × | × | ×
COM_TENT | × | × | × | × | × | ×
COM_CIRCLE | × | × | × | × | × | ×
ELIT_SINE | × | × | × | × | × | ×
ELIT_SINGER | × | × | × | × | × | ×
ELIT_PIECE | × | × | × | × | × | ×
ELIT_CIRCLE | × | × | × | × | × | ×
ELIT_TENT | × | × | × | × | × | ×
ELIT | × | × | × | × | × | ×
ELIT_LOG | × | × | × | × | × | ×
ELIT_SINU | × | × | × | × | × | ×

Instances: knapPI_1_2000_1000_1, knapPI_2_2000_1000_1, knapPI_3_2000_1000_1
Experiment | Op? MH | Op? MH | Op? MH
STD_TENT | × | × | ×
STD_CIRCLE | × WOA | × WOA | × WOA
STD | × | × | ×
STD_SINE | × | × | ×
STD_PIECE | × | × | ×
STD_LOG | × | × | ×
STD_SINU | × | × | ×
STD_SINGER | × | × | ×
COM | × | × | ×
COM_LOG | × | × | ×
COM_PIECE | × | × | ×
COM_SINE | × | × | ×
COM_SINGER | × | × | ×
COM_SINU | × | × | ×
COM_TENT | × | × | ×
COM_CIRCLE | × | × | ×
ELIT_SINE | × | × | ×
ELIT_SINGER | × | × | ×
ELIT_PIECE | × | × | ×
ELIT_CIRCLE | × | × | ×
ELIT_TENT | × | × | ×
ELIT | × | × | ×
ELIT_LOG | × | × | ×
ELIT_SINU | × | × | ×
Table 6. Results obtained with GWO for instances (a) knapPI_1_100_1000_1, knapPI_2_100_1000_1, and knapPI_3_100_1000_1; (b) knapPI_1_200_1000_1, knapPI_2_200_1000_1, and knapPI_3_200_1000_1; (c) knapPI_1_500_1000_1, knapPI_2_500_1000_1, and knapPI_3_500_1000_1; (d) knapPI_1_1000_1000_1, knapPI_2_1000_1000_1, and knapPI_3_1000_1000_1; (e) knapPI_1_2000_1000_1, knapPI_2_2000_1000_1, and knapPI_3_2000_1000_1.

(a) knapPI_1_100_1000_1, knapPI_2_100_1000_1, knapPI_3_100_1000_1
Experiment | Best | Avg. | RPD | Best | Avg. | RPD | Best | Avg. | RPD
STD | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2397.0 | 0.0
STD_LOG | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2397.0 | 0.0
STD_PIECE | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2397.0 | 0.0
STD_SINE | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2397.0 | 0.0
STD_SINGER | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2397.0 | 0.0
STD_SINU | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2397.0 | 0.0
STD_TENT | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2397.0 | 0.0
STD_CIRCLE | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2396.097 | 0.0
COM | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2396.968 | 0.0
COM_LOG | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2396.516 | 0.0
COM_PIECE | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2396.935 | 0.0
COM_SINE | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2396.71 | 0.0
COM_SINGER | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2395.29 | 0.0
COM_SINU | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2397.0 | 0.0
COM_TENT | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2396.903 | 0.0
COM_CIRCLE | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2396.29 | 0.0
ELIT | 9147.0 | 8903.355 | 0.0 | 1512.0 | 1500.742 | 0.132 | 2396.0 | 2314.71 | 0.042
ELIT_LOG | 9147.0 | 8907.677 | 0.0 | 1512.0 | 1502.032 | 0.132 | 2397.0 | 2306.968 | 0.0
ELIT_PIECE | 9147.0 | 8910.161 | 0.0 | 1512.0 | 1499.581 | 0.132 | 2397.0 | 2330.419 | 0.0
ELIT_SINE | 9147.0 | 8913.871 | 0.0 | 1512.0 | 1499.71 | 0.132 | 2390.0 | 2317.355 | 0.292
ELIT_SINGER | 9147.0 | 8868.968 | 0.0 | 1512.0 | 1501.065 | 0.132 | 2396.0 | 2312.677 | 0.042
ELIT_SINU | 9147.0 | 8933.355 | 0.0 | 1512.0 | 1501.0 | 0.132 | 2396.0 | 2305.839 | 0.042
ELIT_TENT | 9147.0 | 8922.194 | 0.0 | 1512.0 | 1496.29 | 0.132 | 2390.0 | 2308.968 | 0.292
ELIT_CIRCLE | 9147.0 | 8914.226 | 0.0 | 1512.0 | 1501.032 | 0.132 | 2397.0 | 2300.452 | 0.0

(b) knapPI_1_200_1000_1, knapPI_2_200_1000_1, knapPI_3_200_1000_1
Experiment | Best | Avg. | RPD | Best | Avg. | RPD | Best | Avg. | RPD
STD | 11,238.0 | 11,238.0 | 0.0 | 1634.0 | 1634.0 | 0.0 | 2697.0 | 2697.0 | 0.0
STD_LOG | 11,238.0 | 11,238.0 | 0.0 | 1634.0 | 1634.0 | 0.0 | 2697.0 | 2697.0 | 0.0
STD_PIECE | 11,238.0 | 11,238.0 | 0.0 | 1634.0 | 1634.0 | 0.0 | 2697.0 | 2697.0 | 0.0
STD_SINE | 11,238.0 | 11,238.0 | 0.0 | 1634.0 | 1634.0 | 0.0 | 2697.0 | 2697.0 | 0.0
STD_SINGER | 11,238.0 | 11,238.0 | 0.0 | 1634.0 | 1633.935 | 0.0 | 2697.0 | 2697.0 | 0.0
STD_SINU | 11,238.0 | 11,238.0 | 0.0 | 1634.0 | 1633.355 | 0.0 | 2697.0 | 2697.0 | 0.0
STD_TENT | 11,238.0 | 11,238.0 | 0.0 | 1634.0 | 1634.0 | 0.0 | 2697.0 | 2697.0 | 0.0
STD_CIRCLE | 11,238.0 | 11,238.0 | 0.0 | 1634.0 | 1632.968 | 0.0 | 2697.0 | 2697.0 | 0.0
COM | 11,238.0 | 11,237.645 | 0.0 | 1634.0 | 1634.0 | 0.0 | 2697.0 | 2696.774 | 0.0
COM_LOG | 11,238.0 | 11,233.129 | 0.0 | 1634.0 | 1634.0 | 0.0 | 2697.0 | 2696.645 | 0.0
COM_PIECE | 11,238.0 | 11,232.548 | 0.0 | 1634.0 | 1634.0 | 0.0 | 2697.0 | 2696.871 | 0.0
COM_SINE | 11,238.0 | 11,236.581 | 0.0 | 1634.0 | 1633.935 | 0.0 | 2697.0 | 2696.871 | 0.0
COM_SINGER | 11,238.0 | 11,201.903 | 0.0 | 1634.0 | 1633.097 | 0.0 | 2697.0 | 2692.323 | 0.0
COM_SINU | 11,238.0 | 11,232.677 | 0.0 | 1634.0 | 1630.323 | 0.0 | 2697.0 | 2694.323 | 0.0
COM_TENT | 11,238.0 | 11,236.935 | 0.0 | 1634.0 | 1633.935 | 0.0 | 2697.0 | 2696.935 | 0.0
COM_CIRCLE | 11,238.0 | 11,228.774 | 0.0 | 1634.0 | 1633.194 | 0.0 | 2697.0 | 2697.0 | 0.0
ELIT | 11,227.0 | 10,938.839 | 0.098 | 1634.0 | 1618.452 | 0.0 | 2695.0 | 2637.355 | 0.074
ELIT_LOG | 11,227.0 | 10,836.935 | 0.098 | 1634.0 | 1620.484 | 0.0 | 2697.0 | 2640.065 | 0.0
ELIT_PIECE | 11,238.0 | 10,882.0 | 0.0 | 1633.0 | 1617.871 | 0.061 | 2697.0 | 2650.581 | 0.0
ELIT_SINE | 11,238.0 | 10,855.935 | 0.0 | 1634.0 | 1623.387 | 0.0 | 2697.0 | 2637.516 | 0.0
ELIT_SINGER | 11,227.0 | 10,878.613 | 0.098 | 1634.0 | 1614.032 | 0.0 | 2697.0 | 2637.806 | 0.0
ELIT_SINU | 11,238.0 | 10,889.871 | 0.0 | 1634.0 | 1619.323 | 0.0 | 2696.0 | 2639.581 | 0.037
ELIT_TENT | 11,227.0 | 10,854.161 | 0.098 | 1634.0 | 1615.645 | 0.0 | 2697.0 | 2637.935 | 0.0
ELIT_CIRCLE | 11,238.0 | 10,870.516 | 0.0 | 1634.0 | 1618.871 | 0.0 | 2697.0 | 2650.129 | 0.0

(c) knapPI_1_500_1000_1, knapPI_2_500_1000_1, knapPI_3_500_1000_1
Experiment | Best | Avg. | RPD | Best | Avg. | RPD | Best | Avg. | RPD
STD | 28,857.0 | 28,834.774 | 0.0 | 4566.0 | 4561.29 | 0.0 | 7117.0 | 7112.129 | 0.0
STD_LOG | 28,834.0 | 28,689.065 | 0.08 | 4566.0 | 4557.355 | 0.0 | 7017.0 | 7016.935 | 1.405
STD_PIECE | 28,857.0 | 28,831.258 | 0.0 | 4566.0 | 4566.0 | 0.0 | 7117.0 | 7104.71 | 0.0
STD_SINE | 28,857.0 | 28,758.323 | 0.0 | 4566.0 | 4566.0 | 0.0 | 7117.0 | 7062.871 | 0.0
STD_SINGER | 28,328.0 | 27,587.419 | 1.833 | 4544.0 | 4516.774 | 0.482 | 6914.0 | 6824.548 | 2.852
STD_SINU | 27,513.0 | 26,267.645 | 4.657 | 4534.0 | 4460.935 | 0.701 | 6816.0 | 6664.065 | 4.229
STD_TENT | 28,857.0 | 28,829.548 | 0.0 | 4566.0 | 4561.968 | 0.0 | 7117.0 | 7108.29 | 0.0
STD_CIRCLE | 28,857.0 | 28,850.323 | 0.0 | 4557.0 | 4551.258 | 0.197 | 7117.0 | 7117.0 | 0.0
COM | 28,132.0 | 27,365.097 | 2.512 | 4554.0 | 4503.226 | 0.263 | 6915.0 | 6767.806 | 2.838
COM_LOG | 27,389.0 | 26,985.839 | 5.087 | 4514.0 | 4478.548 | 1.139 | 6909.0 | 6734.871 | 2.923
COM_PIECE | 28,272.0 | 27,334.839 | 2.027 | 4520.0 | 4497.387 | 1.007 | 6909.0 | 6787.419 | 2.923
COM_SINE | 28,164.0 | 27,289.774 | 2.401 | 4541.0 | 4491.871 | 0.548 | 6915.0 | 6774.0 | 2.838
COM_SINGER | 27,045.0 | 26,142.387 | 6.279 | 4503.0 | 4439.516 | 1.38 | 6817.0 | 6462.927 | 4.215
COM_SINU | 27,483.0 | 26,101.355 | 4.761 | 4492.0 | 4410.032 | 1.621 | 6815.0 | 6659.613 | 4.243
COM_TENT | 28,320.0 | 27,390.968 | 1.861 | 4528.0 | 4496.516 | 0.832 | 6915.0 | 6787.0 | 2.838
COM_CIRCLE | 27,999.0 | 27,297.935 | 2.973 | 4537.0 | 4506.871 | 0.635 | 7013.0 | 6937.355 | 1.461
ELIT | 27,516.0 | 26,179.677 | 4.647 | 4492.0 | 4406.71 | 1.621 | 6916.0 | 6707.613 | 2.824
ELIT_LOG | 27,952.0 | 26,336.645 | 3.136 | 4495.0 | 4415.484 | 1.555 | 6812.0 | 6687.968 | 4.286
ELIT_PIECE | 27,624.0 | 26,220.355 | 4.273 | 4503.0 | 4409.129 | 1.38 | 6805.0 | 6648.258 | 4.384
ELIT_SINE | 27,473.0 | 26,107.548 | 4.796 | 4530.0 | 4416.065 | 0.788 | 6811.0 | 6658.258 | 4.3
ELIT_SINGER | 27,238.0 | 25,954.613 | 5.61 | 4532.0 | 4402.0 | 0.745 | 6815.0 | 6678.548 | 4.243
ELIT_SINU | 26,995.0 | 26,075.935 | 6.453 | 4486.0 | 4388.645 | 1.752 | 6889.0 | 6661.161 | 3.204
ELIT_TENT | 27,007.0 | 25,860.194 | 6.411 | 4500.0 | 4397.452 | 1.445 | 6812.0 | 6653.29 | 4.286
ELIT_CIRCLE | 26,947.0 | 25,897.935 | 6.619 | 4482.0 | 4409.484 | 1.84 | 6814.0 | 6676.903 | 4.257

(d) knapPI_1_1000_1000_1, knapPI_2_1000_1000_1, knapPI_3_1000_1000_1
Experiment | Best | Avg. | RPD | Best | Avg. | RPD | Best | Avg. | RPD
STD | 53,838.0 | 53,373.613 | 1.22 | 9030.0 | 9010.29 | 0.243 | 14,189.0 | 14,101.645 | 1.397
STD_LOG | 52,617.0 | 52,172.355 | 3.46 | 8997.0 | 8965.71 | 0.608 | 13,988.0 | 13,904.226 | 2.794
STD_PIECE | 53,702.0 | 53,227.484 | 1.47 | 9033.0 | 9011.419 | 0.21 | 14,189.0 | 14,101.613 | 1.397
STD_SINE | 53,673.0 | 52,915.032 | 1.523 | 9029.0 | 8991.516 | 0.254 | 14,187.0 | 14,067.839 | 1.411
STD_SINGER | 49,162.0 | 48,070.29 | 9.799 | 8888.0 | 8778.581 | 1.812 | 13,387.0 | 13,219.484 | 6.97
STD_SINU | 48,934.0 | 46,859.129 | 10.218 | 8738.0 | 8594.29 | 3.469 | 13,381.0 | 13,052.452 | 7.012
STD_TENT | 53,760.0 | 53,386.548 | 1.363 | 9032.0 | 9006.065 | 0.221 | 14,188.0 | 14,118.032 | 1.404
STD_CIRCLE | 54,264.0 | 54,064.484 | 0.439 | 9028.0 | 9011.258 | 0.265 | 14,290.0 | 14,288.0 | 0.695
COM | 49,922.0 | 47,741.871 | 8.405 | 8764.0 | 8702.226 | 3.182 | 13,383.0 | 13,128.71 | 6.998
COM_LOG | 50,637.0 | 47,340.677 | 7.093 | 8826.0 | 8672.258 | 2.497 | 13,485.0 | 13,076.613 | 6.289
COM_PIECE | 49,765.0 | 47,890.452 | 8.693 | 8822.0 | 8697.29 | 2.541 | 13,386.0 | 13,154.032 | 6.977
COM_SINE | 49,002.0 | 47,232.645 | 10.093 | 8797.0 | 8681.774 | 2.817 | 13,481.0 | 13,126.613 | 6.317
COM_SINGER | 49,632.0 | 46,531.935 | 8.937 | 8765.0 | 8619.419 | 3.171 | 13,383.0 | 13,052.968 | 6.998
COM_SINU | 48,900.0 | 46,937.065 | 10.28 | 8699.0 | 8586.387 | 3.9 | 13,467.0 | 13,030.161 | 6.414
COM_TENT | 49,271.0 | 47,830.839 | 9.599 | 8806.0 | 8716.194 | 2.718 | 13,283.0 | 13,102.323 | 7.693
COM_CIRCLE | 50,789.0 | 50,053.581 | 6.814 | 8869.0 | 8802.742 | 2.022 | 13,789.0 | 13,630.0 | 4.177
ELIT | 49,583.0 | 46,430.839 | 9.027 | 8731.0 | 8572.323 | 3.546 | 13,284.0 | 13,036.613 | 7.686
ELIT_LOG | 48,212.0 | 46,662.355 | 11.542 | 8767.0 | 8590.935 | 3.148 | 13,386.0 | 13,057.871 | 6.977
ELIT_PIECE | 49,049.0 | 46,628.677 | 10.007 | 8789.0 | 8587.355 | 2.905 | 13,178.0 | 13,008.161 | 8.423
ELIT_SINE | 48,975.0 | 46,480.968 | 10.143 | 8727.0 | 8574.194 | 3.59 | 13,290.0 | 13,011.452 | 7.644
ELIT_SINGER | 49,107.0 | 46,625.387 | 9.9 | 8752.0 | 8591.839 | 3.314 | 13,284.0 | 13,027.484 | 7.686
ELIT_SINU | 48,424.0 | 46,412.774 | 11.154 | 8795.0 | 8581.29 | 2.839 | 13,482.0 | 13,062.613 | 6.31
ELIT_TENT | 49,590.0 | 46,663.806 | 9.014 | 8753.0 | 8597.032 | 3.303 | 13,285.0 | 12,979.323 | 7.679
ELIT_CIRCLE | 48,220.0 | 46,387.839 | 11.528 | 8721.0 | 8580.774 | 3.657 | 13,384.0 | 12,997.484 | 6.991

(e) knapPI_1_2000_1000_1, knapPI_2_2000_1000_1, knapPI_3_2000_1000_1
Experiment | Best | Avg. | RPD | Best | Avg. | RPD | Best | Avg. | RPD
STD | 106,338.0 | 104,652.935 | 3.875 | 17,818.0 | 17,725.677 | 1.291 | 27,910.0 | 27,679.806 | 3.489
STD_LOG | 103,254.0 | 101,973.0 | 6.663 | 17,640.0 | 17,548.258 | 2.277 | 27,315.0 | 27,116.548 | 5.547
STD_PIECE | 105,308.0 | 104,361.613 | 4.806 | 17,778.0 | 17,704.645 | 1.512 | 27,811.0 | 27,652.323 | 3.831
STD_SINE | 104,502.0 | 103,313.387 | 5.535 | 17,706.0 | 17,648.097 | 1.911 | 27,616.0 | 27,408.871 | 4.506
STD_SINGER | 94,623.0 | 91,299.226 | 14.465 | 17,081.0 | 16,891.935 | 5.374 | 25,815.0 | 25,341.323 | 10.733
STD_SINU | 95,130.0 | 90,711.0 | 14.007 | 16,932.0 | 16,712.71 | 6.199 | 25,818.0 | 25,276.29 | 10.723
STD_TENT | 106,008.0 | 105,053.065 | 4.174 | 17,787.0 | 17,711.032 | 1.463 | 28,112.0 | 27,790.548 | 2.791
STD_CIRCLE | 108,462.0 | 108,065.452 | 1.955 | 17,970.0 | 17,923.355 | 0.449 | 28,419.0 | 28,334.065 | 1.729
COM | 94,380.0 | 90,898.935 | 14.685 | 16,945.0 | 16,804.742 | 6.127 | 25,818.0 | 25,310.355 | 10.723
COM_LOG | 95,370.0 | 91,262.032 | 13.79 | 17,215.0 | 16,740.581 | 4.631 | 25,618.0 | 25,276.065 | 11.415
COM_PIECE | 95,187.0 | 91,238.548 | 16.219 | 17,048.0 | 16,787.903 | 5.556 | 26,004.0 | 25,338.065 | 10.08
COM_SINE | 93,710.0 | 90,704.839 | 15.29 | 17,033.0 | 16,798.935 | 5.64 | 25,909.0 | 25,381.742 | 10.408
COM_SINGER | 93,587.0 | 90,654.323 | 15.402 | 17,051.0 | 16,712.903 | 5.54 | 25,616.0 | 25,264.645 | 11.422
COM_SINU | 93,509.0 | 90,323.065 | 15.472 | 17,041.0 | 16,731.677 | 5.595 | 26,014.0 | 25,318.71 | 10.045
COM_TENT | 95,343.0 | 90,914.774 | 13.814 | 17,029.0 | 16,802.548 | 5.662 | 26,113.0 | 25,271.71 | 9.703
COM_CIRCLE | 93,994.0 | 91,883.742 | 15.034 | 17,005.0 | 16,776.258 | 5.795 | 25,818.0 | 25,363.129 | 10.723
ELIT | 94,037.0 | 90,814.613 | 14.995 | 16,933.0 | 16,668.71 | 6.194 | 25,619.0 | 25,241.484 | 11.411
ELIT_LOG | 93,222.0 | 90,393.161 | 15.732 | 16,984.0 | 16,719.742 | 5.911 | 26,111.0 | 25,367.677 | 9.71
ELIT_PIECE | 95,236.0 | 90,678.806 | 13.911 | 17,164.0 | 16,792.129 | 4.914 | 25,806.0 | 25,369.774 | 10.765
ELIT_SINE | 94,328.0 | 91,036.806 | 14.732 | 17,000.0 | 16,718.613 | 5.822 | 26,216.0 | 25,342.806 | 9.347
ELIT_SINGER | 92,560.0 | 90,297.355 | 16.33 | 16,954.0 | 16,709.871 | 6.077 | 25,619.0 | 25,271.71 | 11.411
ELIT_SINU | 93,540.0 | 90,357.613 | 15.444 | 17,129.0 | 16,735.097 | 5.108 | 25,817.0 | 25,296.935 | 10.727
ELIT_TENT | 93,337.0 | 90,308.484 | 15.628 | 17,059.0 | 16,700.226 | 5.496 | 25,714.0 | 25,245.452 | 11.083
ELIT_CIRCLE | 93,257.0 | 90,411.548 | 15.7 | 16,992.0 | 16,732.355 | 5.867 | 25,916.0 | 25,247.839 | 10.384
Table 7. Results obtained with WOA for instances (a) knapPI_1_100_1000_1, knapPI_2_100_1000_1, and knapPI_3_100_1000_1; (b) knapPI_1_200_1000_1, knapPI_2_200_1000_1, and knapPI_3_200_1000_1; (c) knapPI_1_500_1000_1, knapPI_2_500_1000_1, and knapPI_3_500_1000_1; (d) knapPI_1_1000_1000_1, knapPI_2_1000_1000_1, and knapPI_3_1000_1000_1; (e) knapPI_1_2000_1000_1, knapPI_2_2000_1000_1, and knapPI_3_2000_1000_1.
Table 7. Results obtained with WOA for instances (a) knapPI_1_100_1000_1, knapPI_2_100_1000_1, and knapPI_3_100_1000_1; (b) knapPI_1_200_1000_1, knapPI_2_200_1000_1, and knapPI_3_200_1000_1; (c) knapPI_1_500_1000_1, knapPI_2_500_1000_1, and knapPI_3_500_1000_1; (d) knapPI_1_1000_1000_1, knapPI_2_1000_1000_1, and knapPI_3_1000_1000_1; (e) knapPI_1_2000_1000_1, knapPI_2_2000_1000_1, and knapPI_3_2000_1000_1.
ExperimentknapPI_1_100_1000_1knapPI_2_100_1000_1knapPI_3_100_1000_1
BestAvg.RPDBestAvg.RPDBestAvg.RPD
STD9147.09147.00.01512.01512.00.1322397.02397.00.0
STD_LOG9147.09147.00.01512.01512.00.1322397.02397.00.0
STD_PIECE9147.09147.00.01512.01512.00.1322397.02397.00.0
STD_SINE9147.09147.00.01512.01512.00.1322397.02397.00.0
STD_SINGER9147.09147.00.01512.01512.00.1322397.02397.00.0
STD_SINU9147.09147.00.01513.01512.0970.0662397.02397.00.0
STD_TENT9147.09147.00.01512.01512.00.1322397.02397.00.0
STD_CIRCLE9147.09147.00.01512.01512.00.1322396.02396.00.042
COM9147.09147.00.01512.01512.00.1322397.02396.9680.0
COM_LOG9147.09147.00.01512.01512.00.1322397.02396.9030.0
COM_PIECE9147.09147.00.01512.01512.00.1322397.02396.9680.0
COM_SINE9147.09147.00.01512.01512.00.1322397.02396.9350.0
COM_SINGER9147.09147.00.01512.01512.00.1322397.02396.9350.0
COM_SINU9147.09147.00.01512.01509.8710.1322397.02397.00.0
COM_TENT9147.09147.00.01512.01512.00.1322397.02396.9680.0
COM_CIRCLE9147.09147.00.01512.01512.00.1322397.02396.9680.0
ELIT9147.08855.3550.01512.01499.5480.1322396.02313.2260.042
ELIT_LOG9147.08930.3550.01512.01498.4840.1322390.02322.2260.292
ELIT_PIECE9147.08971.7740.01512.01495.1610.1322396.02312.3870.042
ELIT_SINE9147.08925.7420.01512.01499.1610.1322396.02309.2260.042
ELIT_SINGER9147.08886.4190.01512.01498.5160.1322397.02313.1940.0
ELIT_SINU9147.08912.4520.01512.01498.1610.1322397.02306.8390.0
ExperimentknapPI_1_200_1000_1knapPI_2_200_1000_1knapPI_3_200_1000_1
BestAvg.RPDBestAvg.RPDBestAvg.RPD
ELIT_TENT9147.08876.0650.01512.01495.9030.1322390.02326.9030.292
ELIT_CIRCLE9147.08863.6130.01512.01497.6450.1322397.02305.6770.0
STD11,238.011,238.00.01634.01634.00.02697.02697.00.0
STD_LOG11,238.011,238.00.01634.01634.00.02697.02697.00.0
STD_PIECE11,238.011,238.00.01634.01634.00.02697.02697.00.0
STD_SINE11,238.011,238.00.01634.01634.00.02697.02697.00.0
STD_SINGER11,238.011,238.00.01634.01634.00.02697.02697.00.0
STD_SINU11,238.011,238.00.01634.01634.00.02697.02697.00.0
STD_TENT11,238.011,238.00.01634.01634.00.02697.02697.00.0
STD_CIRCLE11,238.011,238.00.01634.01634.00.02697.02697.00.0
COM11,238.011,236.9350.01634.01634.00.02697.02697.00.0
COM_LOG11,238.011,235.8710.01634.01634.00.02697.02696.8710.0
COM_PIECE11,238.011,236.2260.01634.01634.00.02697.02696.9680.0
COM_SINE11,238.011,236.0970.01634.01634.00.02697.02696.8710.0
COM_SINGER11,238.011,233.4840.01634.01633.5160.02697.02696.0320.0
COM_SINU11,238.011,236.5810.01634.01629.710.02697.02696.6450.0
COM_TENT11,238.011,237.6450.01634.01634.00.02697.02697.00.0
COM_CIRCLE11,238.011,233.7420.01634.01633.5480.02697.02697.00.0
ELIT11,238.010,900.3870.01634.01617.0650.02697.02657.0320.0
ELIT_LOG11,238.010,874.1610.01633.01615.1290.0612694.02626.7740.111
ELIT_PIECE11,238.010,862.2260.01634.01617.1610.02697.02639.00.0
ELIT_SINE11,227.010,839.290.0981634.01617.3230.02697.02642.0650.0
ELIT_SINGER11,238.010,919.5480.01633.01616.9350.0612697.02659.3550.0
ELIT_SINU11,223.010,896.5160.1331634.01613.6130.02697.02648.3870.0
ELIT_TENT11,238.010,782.8060.01634.01617.00.02697.02658.3550.0
ELIT_CIRCLE11,227.010,790.1290.0981634.01621.0320.02695.02628.2580.074
ExperimentknapPI_1_500_1000_1knapPI_2_500_1000_1knapPI_3_500_1000_1
BestAvg.RPDBestAvg.RPDBestAvg.RPD
STD28,857.028,856.2580.04566.04565.4840.07117.07117.00.0
STD_LOG28,857.028,845.8710.04566.04565.6130.07117.07116.9030.0
STD_PIECE28,857.028,856.2580.04566.04566.00.07117.07117.00.0
STD_SINE28,857.028,849.5810.04566.04565.5480.07117.07117.00.0
STD_SINGER28,857.028,759.290.04566.04557.7740.07117.07051.2260.0
STD_SINU28,857.028,674.710.04566.04560.8390.07117.07023.0320.0
STD_TENT28,857.028,853.290.04566.04566.00.07117.07117.00.0
STD_CIRCLE28,857.028,834.7420.04552.04551.290.3077117.07117.00.0
COM28,076.027,386.6772.7064555.04507.9680.2416913.06790.3552.866
COM_LOG27,788.027,144.0653.7044549.04491.5810.3726912.06755.1292.88
COM_PIECE28,108.027,405.7742.5964529.04500.3870.816917.06804.8392.81
COM_SINE27,953.027,397.4193.1334533.04501.4520.7236900.06788.3553.049
COM_SINGER27,972.026,589.0973.0674513.04458.4521.1616908.06699.2582.937
COM_SINU27,347.026,271.1615.2334525.04452.1610.8986817.06677.4844.215
COM_TENT28,173.027,497.7742.374555.04507.5810.2416901.06788.1943.035
COM_CIRCLE28,247.027,596.9352.1144551.04507.5810.3296998.06816.01.672
ELIT27,670.026,169.1294.1134509.04402.3551.2486904.06681.9352.993
ELIT_LOG28,187.026,043.7742.3224526.04418.6130.8766908.06671.0972.937
ELIT_PIECE27,241.025,960.3555.64472.04400.3232.0596816.06660.0974.229
ELIT_SINE27,318.025,972.295.3334507.04398.5481.2927016.06664.8061.419
ELIT_SINGER27,655.025,919.4524.1654535.04414.3550.6796815.06666.5814.243
ELIT_SINU27,717.026,085.6133.9514486.04399.1611.7526808.06666.9684.342
ELIT_TENT27,442.026,103.8394.9034508.04410.711.276908.06653.3552.937
ELIT_CIRCLE27,296.026,013.1945.4094473.04402.7742.0376914.06663.9352.852
ExperimentknapPI_1_200_1000_1knapPI_2_200_1000_1knapPI_3_200_1000_1
BestAvg.RPDBestAvg.RPDBestAvg.RPD
STD54,485.054,352.7740.0339051.09050.00.01114,390.014,329.8710.0
STD_LOG54,205.053,928.0970.5479048.09031.4840.04414,290.014,235.9030.695
STD_PIECE54,503.054,370.1940.09051.09049.3230.01114,389.014,326.2580.007
STD_SINE54,503.054,118.8710.09051.09043.3870.01114,290.014,280.290.695
STD_SINGER53,458.052,809.0321.9179027.08989.6450.27614,186.014,053.2261.418
STD_SINU52,687.052,146.3233.3329006.08968.8060.50813,989.013,873.6452.787
STD_TENT54,503.054,371.00.09051.09049.7740.01114,390.014,311.8060.0
STD_CIRCLE54,481.054,475.3550.049051.09049.8390.01114,390.014,389.6130.0
COM49,135.048,006.09.8498893.08746.4841.75713,469.013,161.6456.4
COM_LOG48,767.047,592.22610.5248826.08701.5162.49713,290.013,084.0327.644
COM_PIECE49,832.047,974.2268.578810.08745.8712.67313,489.013,176.0326.261
COM_SINE49,312.047,966.1949.5248790.08716.5162.89413,486.013,178.1616.282
COM_SINGER50,170.046,691.1617.958758.08609.713.24813,375.013,009.4197.054
COM_SINU49,464.046,881.0329.2458830.08558.5162.45213,282.013,046.5487.7
COM_TENT49,743.048,183.6458.7338944.08746.2581.19313,366.013,131.1947.116
COM_CIRCLE50,372.049,047.3557.5798879.08778.2261.91113,985.013,426.8062.814
ELIT48,421.046,461.74211.1598720.08592.293.66813,385.012,983.8396.984
ELIT_LOG48,497.046,619.90311.028677.08571.2584.14313,482.013,058.9686.31
ELIT_PIECE49,052.046,461.16110.0018748.08592.9353.35813,283.013,043.8397.693
ELIT_SINE48,976.046,446.51610.1418734.08589.5163.51313,284.013,009.4847.686
ELIT_SINGER49,767.046,828.2588.6898737.08577.9033.4813,289.012,997.2587.651
ELIT_SINU47,838.046,475.67712.2298732.08570.4193.53513,484.013,036.6456.296
ELIT_TENT49,784.047,014.298.6588802.08590.7742.76213,482.013,055.1946.31
ELIT_CIRCLE49,289.046,553.299.5668780.08597.1943.00513,380.013,045.3237.019
Instances: knapPI_1_2000_1000_1 / knapPI_2_2000_1000_1 / knapPI_3_2000_1000_1

| Experiment | Best | Avg. | RPD | Best | Avg. | RPD | Best | Avg. | RPD |
|---|---|---|---|---|---|---|---|---|---|
| STD | 109,623.0 | 108,875.903 | 0.906 | 18,027.0 | 17,969.613 | 0.133 | 28,809.0 | 28,617.29 | 0.38 |
| STD_LOG | 108,467.0 | 106,985.742 | 1.951 | 17,960.0 | 17,877.871 | 0.504 | 28,319.0 | 28,210.032 | 2.075 |
| STD_PIECE | 109,791.0 | 105,507.292 | 0.754 | 18,022.0 | 17,971.806 | 0.161 | 28,713.0 | 28,583.806 | 0.712 |
| STD_SINE | 108,598.0 | 107,846.419 | 1.832 | 17,981.0 | 17,905.161 | 0.388 | 28,519.0 | 28,376.161 | 1.383 |
| STD_SINGER | 104,903.0 | 103,691.387 | 5.172 | 17,774.0 | 17,658.903 | 1.535 | 27,808.0 | 27,553.613 | 3.842 |
| STD_SINU | 103,389.0 | 100,545.613 | 6.541 | 17,626.0 | 17,501.968 | 2.354 | 27,416.0 | 26,964.774 | 5.197 |
| STD_TENT | 109,959.0 | 109,113.226 | 0.602 | 18,009.0 | 17,968.452 | 0.233 | 28,719.0 | 28,603.677 | 0.692 |
| STD_CIRCLE | 110,555.0 | 110,214.419 | 0.063 | 18,040.0 | 18,027.323 | 0.061 | 28,916.0 | 28,830.774 | 0.01 |
| COM | 95,052.0 | 91,784.613 | 14.077 | 16,984.0 | 16,814.29 | 5.911 | 25,916.0 | 25,383.806 | 10.384 |
| COM_LOG | 92,635.0 | 90,981.419 | 16.262 | 17,062.0 | 16,795.645 | 5.479 | 25,811.0 | 25,221.323 | 10.747 |
| COM_PIECE | 94,761.0 | 91,531.484 | 14.34 | 17,279.0 | 16,832.194 | 4.277 | 25,718.0 | 25,318.677 | 11.069 |
| COM_SINE | 94,146.0 | 91,178.806 | 14.896 | 17,068.0 | 16,805.871 | 5.446 | 25,906.0 | 25,271.258 | 10.419 |
| COM_SINGER | 95,646.0 | 90,601.161 | 13.54 | 17,031.0 | 16,735.548 | 5.651 | 25,811.0 | 25,342.323 | 10.747 |
| COM_SINU | 96,128.0 | 90,695.226 | 13.105 | 17,133.0 | 16,728.935 | 5.086 | 26,015.0 | 25,274.387 | 10.042 |
| COM_TENT | 95,027.0 | 91,353.129 | 14.1 | 17,092.0 | 16,850.774 | 5.313 | 25,715.0 | 25,332.645 | 11.079 |
| COM_CIRCLE | 95,741.0 | 92,354.161 | 13.454 | 17,083.0 | 16,874.71 | 5.363 | 25,815.0 | 25,367.129 | 10.733 |
| ELIT | 93,971.0 | 90,895.742 | 15.054 | 16,969.0 | 16,727.032 | 5.994 | 26,418.0 | 25,331.742 | 8.648 |
| ELIT_LOG | 94,071.0 | 90,695.968 | 14.964 | 16,990.0 | 16,759.129 | 5.878 | 25,809.0 | 25,297.194 | 10.754 |
| ELIT_PIECE | 94,861.0 | 90,753.613 | 14.25 | 17,039.0 | 16,728.516 | 5.606 | 25,818.0 | 25,361.548 | 10.723 |
| ELIT_SINE | 93,616.0 | 90,590.871 | 15.375 | 17,009.0 | 16,753.613 | 5.773 | 26,011.0 | 25,300.258 | 10.056 |
| ELIT_SINGER | 95,689.0 | 90,577.935 | 13.501 | 16,957.0 | 16,700.935 | 6.061 | 26,010.0 | 25,305.419 | 10.059 |
| ELIT_SINU | 93,962.0 | 90,689.613 | 15.063 | 17,135.0 | 16,696.968 | 5.075 | 25,611.0 | 25,230.226 | 11.439 |
| ELIT_TENT | 94,309.0 | 90,461.419 | 14.749 | 16,990.0 | 16,707.387 | 5.878 | 25,910.0 | 25,360.452 | 10.405 |
| ELIT_CIRCLE | 93,525.0 | 90,443.548 | 15.458 | 17,124.0 | 16,765.419 | 5.135 | 26,013.0 | 25,284.032 | 10.049 |
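The RPD columns in these tables are consistent with the usual Relative Percentage Deviation, RPD = 100·(opt − best)/opt, where opt is the best known value of the instance. A minimal sketch of the check (the optimum 110,625 for knapPI_1_2000_1000_1 is inferred here from the reported values, not stated in this excerpt):

```python
def rpd(opt: float, best: float) -> float:
    """Relative Percentage Deviation: 0.0 means the optimum was reached."""
    return 100.0 * (opt - best) / opt

# STD_CIRCLE reached 110,555 on knapPI_1_2000_1000_1; with an inferred
# optimum of 110,625 this reproduces the tabulated RPD of 0.063.
value = round(rpd(110_625, 110_555), 3)
```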
Table 8. Results obtained with SCA for instances (a) knapPI_1_100_1000_1, knapPI_2_100_1000_1, and knapPI_3_100_1000_1; (b) knapPI_1_200_1000_1, knapPI_2_200_1000_1, and knapPI_3_200_1000_1; (c) knapPI_1_500_1000_1, knapPI_2_500_1000_1, and knapPI_3_500_1000_1; (d) knapPI_1_1000_1000_1, knapPI_2_1000_1000_1, and knapPI_3_1000_1000_1; (e) knapPI_1_2000_1000_1, knapPI_2_2000_1000_1, and knapPI_3_2000_1000_1.
Instances: knapPI_1_100_1000_1 / knapPI_2_100_1000_1 / knapPI_3_100_1000_1

| Experiment | Best | Avg. | RPD | Best | Avg. | RPD | Best | Avg. | RPD |
|---|---|---|---|---|---|---|---|---|---|
| STD | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2397.0 | 0.0 |
| STD_LOG | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2397.0 | 0.0 |
| STD_PIECE | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2397.0 | 0.0 |
| STD_SINE | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2397.0 | 0.0 |
| STD_SINGER | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2397.0 | 0.0 |
| STD_SINU | 9147.0 | 9147.0 | 0.0 | 1513.0 | 1512.032 | 0.066 | 2397.0 | 2397.0 | 0.0 |
| STD_TENT | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2397.0 | 0.0 |
| STD_CIRCLE | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2396.097 | 0.0 |
| COM | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2395.387 | 0.0 |
| COM_LOG | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2393.968 | 0.0 |
| COM_PIECE | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2392.677 | 0.0 |
| COM_SINE | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2395.226 | 0.0 |
| COM_SINGER | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2381.129 | 0.0 |
| COM_SINU | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1501.935 | 0.132 | 2397.0 | 2397.0 | 0.0 |
| COM_TENT | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2395.548 | 0.0 |
| COM_CIRCLE | 9147.0 | 9147.0 | 0.0 | 1512.0 | 1512.0 | 0.132 | 2397.0 | 2393.677 | 0.0 |
| ELIT | 9147.0 | 8815.935 | 0.0 | 1512.0 | 1499.065 | 0.132 | 2390.0 | 2301.742 | 0.292 |
| ELIT_LOG | 9147.0 | 8898.839 | 0.0 | 1512.0 | 1500.452 | 0.132 | 2396.0 | 2316.452 | 0.042 |
| ELIT_PIECE | 9147.0 | 8938.839 | 0.0 | 1512.0 | 1493.516 | 0.132 | 2396.0 | 2319.29 | 0.042 |
| ELIT_SINE | 9147.0 | 8889.323 | 0.0 | 1512.0 | 1498.258 | 0.132 | 2397.0 | 2311.419 | 0.0 |
| ELIT_SINGER | 9147.0 | 8904.613 | 0.0 | 1512.0 | 1498.387 | 0.132 | 2397.0 | 2324.387 | 0.0 |
| ELIT_SINU | 9147.0 | 8942.323 | 0.0 | 1512.0 | 1501.903 | 0.132 | 2396.0 | 2316.71 | 0.042 |
| ELIT_TENT | 9147.0 | 8872.839 | 0.0 | 1512.0 | 1498.323 | 0.132 | 2397.0 | 2326.645 | 0.0 |
| ELIT_CIRCLE | 9147.0 | 8893.484 | 0.0 | 1512.0 | 1496.0 | 0.132 | 2390.0 | 2310.581 | 0.292 |
Instances: knapPI_1_200_1000_1 / knapPI_2_200_1000_1 / knapPI_3_200_1000_1

| Experiment | Best | Avg. | RPD | Best | Avg. | RPD | Best | Avg. | RPD |
|---|---|---|---|---|---|---|---|---|---|
| STD | 11,238.0 | 11,238.0 | 0.0 | 1634.0 | 1634.0 | 0.0 | 2697.0 | 2697.0 | 0.0 |
| STD_LOG | 11,238.0 | 11,238.0 | 0.0 | 1634.0 | 1634.0 | 0.0 | 2697.0 | 2697.0 | 0.0 |
| STD_PIECE | 11,238.0 | 11,238.0 | 0.0 | 1634.0 | 1634.0 | 0.0 | 2697.0 | 2697.0 | 0.0 |
| STD_SINE | 11,238.0 | 11,238.0 | 0.0 | 1634.0 | 1634.0 | 0.0 | 2697.0 | 2697.0 | 0.0 |
| STD_SINGER | 11,238.0 | 11,237.645 | 0.0 | 1634.0 | 1634.0 | 0.0 | 2697.0 | 2697.0 | 0.0 |
| STD_SINU | 11,238.0 | 11,237.29 | 0.0 | 1634.0 | 1631.419 | 0.0 | 2697.0 | 2696.677 | 0.0 |
| STD_TENT | 11,238.0 | 11,238.0 | 0.0 | 1634.0 | 1634.0 | 0.0 | 2697.0 | 2697.0 | 0.0 |
| STD_CIRCLE | 11,238.0 | 11,238.0 | 0.0 | 1634.0 | 1634.0 | 0.0 | 2697.0 | 2697.0 | 0.0 |
| COM | 11,238.0 | 11,217.71 | 0.0 | 1634.0 | 1633.548 | 0.0 | 2697.0 | 2695.484 | 0.0 |
| COM_LOG | 11,238.0 | 11,212.742 | 0.0 | 1634.0 | 1633.258 | 0.0 | 2697.0 | 2695.871 | 0.0 |
| COM_PIECE | 11,238.0 | 11,217.71 | 0.0 | 1634.0 | 1633.806 | 0.0 | 2697.0 | 2695.226 | 0.0 |
| COM_SINE | 11,238.0 | 11,231.613 | 0.0 | 1634.0 | 1633.742 | 0.0 | 2697.0 | 2695.774 | 0.0 |
| COM_SINGER | 11,238.0 | 11,120.194 | 0.0 | 1634.0 | 1632.484 | 0.0 | 2697.0 | 2656.516 | 0.0 |
| COM_SINU | 11,238.0 | 11,231.613 | 0.0 | 1634.0 | 1628.032 | 0.0 | 2697.0 | 2693.645 | 0.0 |
| COM_TENT | 11,238.0 | 11,226.548 | 0.0 | 1634.0 | 1633.452 | 0.0 | 2697.0 | 2695.387 | 0.0 |
| COM_CIRCLE | 11,238.0 | 11,137.839 | 0.0 | 1634.0 | 1630.613 | 0.0 | 2697.0 | 2693.387 | 0.0 |
| ELIT | 11,238.0 | 10,875.645 | 0.0 | 1627.0 | 1615.903 | 0.428 | 2697.0 | 2660.226 | 0.0 |
| ELIT_LOG | 11,227.0 | 10,865.968 | 0.098 | 1634.0 | 1616.839 | 0.0 | 2697.0 | 2643.903 | 0.0 |
| ELIT_PIECE | 11,238.0 | 10,837.29 | 0.0 | 1634.0 | 1621.516 | 0.0 | 2695.0 | 2659.258 | 0.074 |
| ELIT_SINE | 11,238.0 | 10,874.677 | 0.0 | 1634.0 | 1618.323 | 0.0 | 2697.0 | 2633.581 | 0.0 |
| ELIT_SINGER | 11,238.0 | 10,952.065 | 0.0 | 1634.0 | 1617.742 | 0.0 | 2697.0 | 2655.097 | 0.0 |
| ELIT_SINU | 11,183.0 | 10,892.226 | 0.489 | 1633.0 | 1619.0 | 0.061 | 2695.0 | 2641.065 | 0.074 |
| ELIT_TENT | 11,227.0 | 10,868.484 | 0.098 | 1627.0 | 1613.806 | 0.428 | 2697.0 | 2648.839 | 0.0 |
| ELIT_CIRCLE | 11,238.0 | 10,880.548 | 0.0 | 1627.0 | 1616.226 | 0.428 | 2697.0 | 2645.032 | 0.0 |
Instances: knapPI_1_500_1000_1 / knapPI_2_500_1000_1 / knapPI_3_500_1000_1

| Experiment | Best | Avg. | RPD | Best | Avg. | RPD | Best | Avg. | RPD |
|---|---|---|---|---|---|---|---|---|---|
| STD | 28,857.0 | 28,772.323 | 0.0 | 4566.0 | 4558.548 | 0.0 | 7116.0 | 7069.935 | 0.014 |
| STD_LOG | 28,764.0 | 28,625.194 | 0.322 | 4566.0 | 4554.129 | 0.0 | 7017.0 | 7016.355 | 1.405 |
| STD_PIECE | 28,834.0 | 28,791.613 | 0.08 | 4566.0 | 4560.484 | 0.0 | 7116.0 | 7058.968 | 0.014 |
| STD_SINE | 28,857.0 | 28,706.935 | 0.0 | 4566.0 | 4566.0 | 0.0 | 7116.0 | 7025.645 | 0.014 |
| STD_SINGER | 28,182.0 | 27,467.387 | 2.339 | 4551.0 | 4507.968 | 0.329 | 6915.0 | 6815.968 | 2.838 |
| STD_SINU | 27,261.0 | 26,158.355 | 5.531 | 4501.0 | 4437.484 | 1.424 | 6815.0 | 6658.032 | 4.243 |
| STD_TENT | 28,857.0 | 28,793.935 | 0.0 | 4566.0 | 4561.355 | 0.0 | 7117.0 | 7087.71 | 0.0 |
| STD_CIRCLE | 28,857.0 | 28,855.516 | 0.0 | 4566.0 | 4554.613 | 0.0 | 7117.0 | 7117.0 | 0.0 |
| COM | 27,534.0 | 26,586.032 | 4.585 | 4505.0 | 4454.323 | 1.336 | 6814.0 | 6672.742 | 4.257 |
| COM_LOG | 26,993.0 | 26,440.452 | 6.459 | 4499.0 | 4456.452 | 1.467 | 6815.0 | 6692.516 | 4.243 |
| COM_PIECE | 27,409.0 | 26,451.419 | 5.018 | 4512.0 | 4451.871 | 1.183 | 6817.0 | 6675.806 | 4.215 |
| COM_SINE | 27,353.0 | 26,598.194 | 5.212 | 4517.0 | 4455.387 | 1.073 | 6817.0 | 6708.0 | 4.215 |
| COM_SINGER | 27,610.0 | 26,059.645 | 4.321 | 4497.0 | 4413.29 | 1.511 | 6806.0 | 6657.226 | 4.37 |
| COM_SINU | 27,051.0 | 26,021.871 | 6.258 | 4459.0 | 4394.161 | 2.343 | 6817.0 | 6663.935 | 4.215 |
| COM_TENT | 27,446.0 | 26,551.226 | 4.89 | 4537.0 | 4455.71 | 0.635 | 6816.0 | 6699.194 | 4.229 |
| COM_CIRCLE | 27,201.0 | 26,517.323 | 5.739 | 4507.0 | 4442.065 | 1.292 | 6913.0 | 6699.226 | 2.866 |
| ELIT | 27,088.0 | 26,168.129 | 6.13 | 4509.0 | 4402.129 | 1.248 | 6916.0 | 6664.0 | 2.824 |
| ELIT_LOG | 27,540.0 | 26,029.226 | 4.564 | 4504.0 | 4414.387 | 1.358 | 6816.0 | 6684.484 | 4.229 |
| ELIT_PIECE | 27,207.0 | 26,009.161 | 5.718 | 4515.0 | 4410.29 | 1.117 | 6813.0 | 6661.258 | 4.271 |
| ELIT_SINE | 27,046.0 | 25,994.484 | 6.276 | 4514.0 | 4409.032 | 1.139 | 6815.0 | 6675.387 | 4.243 |
| ELIT_SINGER | 26,614.0 | 25,867.839 | 7.773 | 4479.0 | 4412.935 | 1.905 | 6910.0 | 6680.516 | 2.909 |
| ELIT_SINU | 27,665.0 | 26,207.032 | 4.131 | 4533.0 | 4409.806 | 0.723 | 6796.0 | 6656.484 | 4.51 |
| ELIT_TENT | 27,248.0 | 26,044.806 | 5.576 | 4491.0 | 4409.097 | 1.643 | 7015.0 | 6654.484 | 1.433 |
| ELIT_CIRCLE | 27,270.0 | 26,108.871 | 5.5 | 4483.0 | 4395.677 | 1.818 | 6903.0 | 6669.613 | 3.007 |
Instances: knapPI_1_1000_1000_1 / knapPI_2_1000_1000_1 / knapPI_3_1000_1000_1

| Experiment | Best | Avg. | RPD | Best | Avg. | RPD | Best | Avg. | RPD |
|---|---|---|---|---|---|---|---|---|---|
| STD | 53,681.0 | 52,958.129 | 1.508 | 9045.0 | 8989.871 | 0.077 | 14,090.0 | 14,039.097 | 2.085 |
| STD_LOG | 52,931.0 | 51,803.355 | 2.884 | 9001.0 | 8947.581 | 0.563 | 13,990.0 | 13,867.774 | 2.78 |
| STD_PIECE | 53,662.0 | 52,702.71 | 1.543 | 9030.0 | 8988.258 | 0.243 | 14,090.0 | 14,030.516 | 2.085 |
| STD_SINE | 53,318.0 | 52,691.935 | 2.174 | 9013.0 | 8987.806 | 0.431 | 14,087.0 | 13,987.065 | 2.106 |
| STD_SINGER | 49,265.0 | 47,890.0 | 9.61 | 8827.0 | 8744.71 | 2.486 | 13,467.0 | 13,186.645 | 6.414 |
| STD_SINU | 48,741.0 | 46,592.0 | 10.572 | 8729.0 | 8589.258 | 3.568 | 13,286.0 | 13,046.0 | 7.672 |
| STD_TENT | 53,416.0 | 52,875.29 | 1.994 | 9028.0 | 8988.226 | 0.265 | 14,187.0 | 14,056.677 | 1.411 |
| STD_CIRCLE | 54,234.0 | 54,057.452 | 0.494 | 9030.0 | 9013.968 | 0.243 | 14,290.0 | 14,285.226 | 0.695 |
| COM | 49,110.0 | 46,767.677 | 9.895 | 8718.0 | 8611.258 | 3.69 | 13,388.0 | 13,052.645 | 6.963 |
| COM_LOG | 48,051.0 | 46,850.258 | 11.838 | 8837.0 | 8614.419 | 2.375 | 13,384.0 | 13,069.774 | 6.991 |
| COM_PIECE | 48,216.0 | 46,511.387 | 11.535 | 8712.0 | 8603.774 | 3.756 | 13,276.0 | 13,000.548 | 7.741 |
| COM_SINE | 48,435.0 | 46,890.097 | 11.133 | 8784.0 | 8630.032 | 2.961 | 13,383.0 | 13,054.065 | 6.998 |
| COM_SINGER | 48,477.0 | 46,364.194 | 11.056 | 8759.0 | 8601.065 | 3.237 | 13,490.0 | 13,031.903 | 6.254 |
| COM_SINU | 47,996.0 | 46,647.258 | 11.939 | 8815.0 | 8588.613 | 2.618 | 13,377.0 | 13,040.097 | 7.04 |
| COM_TENT | 48,867.0 | 46,575.839 | 10.341 | 8769.0 | 8616.032 | 3.126 | 13,390.0 | 13,008.29 | 6.949 |
| COM_CIRCLE | 48,510.0 | 47,243.968 | 10.996 | 8753.0 | 8654.065 | 3.303 | 13,287.0 | 13,034.613 | 7.665 |
| ELIT | 48,700.0 | 46,472.032 | 10.647 | 8684.0 | 8579.0 | 4.065 | 13,378.0 | 13,041.419 | 7.033 |
| ELIT_LOG | 49,297.0 | 46,567.968 | 9.552 | 8752.0 | 8601.355 | 3.314 | 13,478.0 | 13,063.097 | 6.338 |
| ELIT_PIECE | 48,445.0 | 46,666.29 | 11.115 | 8756.0 | 8594.484 | 3.27 | 13,287.0 | 13,029.839 | 7.665 |
| ELIT_SINE | 48,313.0 | 46,879.742 | 11.357 | 8746.0 | 8597.0 | 3.38 | 13,286.0 | 13,014.065 | 7.672 |
| ELIT_SINGER | 48,866.0 | 46,692.613 | 10.343 | 8798.0 | 8596.032 | 2.806 | 13,285.0 | 13,063.516 | 7.679 |
| ELIT_SINU | 49,053.0 | 46,605.419 | 9.999 | 8706.0 | 8586.774 | 3.822 | 13,287.0 | 13,051.581 | 7.665 |
| ELIT_TENT | 49,839.0 | 46,487.742 | 8.557 | 8720.0 | 8586.871 | 3.668 | 13,380.0 | 13,053.387 | 7.019 |
| ELIT_CIRCLE | 49,340.0 | 47,225.548 | 9.473 | 8808.0 | 8600.032 | 2.696 | 13,390.0 | 12,989.129 | 6.949 |
Instances: knapPI_1_2000_1000_1 / knapPI_2_2000_1000_1 / knapPI_3_2000_1000_1

| Experiment | Best | Avg. | RPD | Best | Avg. | RPD | Best | Avg. | RPD |
|---|---|---|---|---|---|---|---|---|---|
| STD | 105,363.0 | 103,724.323 | 4.757 | 17,794.0 | 17,663.065 | 1.424 | 27,716.0 | 27,517.065 | 4.16 |
| STD_LOG | 102,554.0 | 101,476.903 | 7.296 | 17,643.0 | 17,506.129 | 2.26 | 27,214.0 | 26,996.968 | 5.896 |
| STD_PIECE | 105,728.0 | 103,444.581 | 4.427 | 17,757.0 | 17,658.323 | 1.629 | 27,619.0 | 27,488.613 | 4.495 |
| STD_SINE | 103,910.0 | 102,593.613 | 6.07 | 17,736.0 | 17,631.355 | 1.745 | 27,418.0 | 27,308.677 | 5.19 |
| STD_SINGER | 95,257.0 | 90,910.032 | 13.892 | 17,005.0 | 16,811.968 | 5.795 | 25,618.0 | 25,265.516 | 11.415 |
| STD_SINU | 95,017.0 | 91,056.871 | 14.109 | 17,111.0 | 16,776.548 | 5.207 | 26,012.0 | 25,404.613 | 10.052 |
| STD_TENT | 106,046.0 | 104,096.097 | 4.139 | 17,773.0 | 17,651.452 | 1.54 | 27,814.0 | 27,548.613 | 3.821 |
| STD_CIRCLE | 108,845.0 | 108,422.226 | 1.609 | 17,974.0 | 17,939.097 | 0.427 | 28,518.0 | 28,371.161 | 1.387 |
| COM | 94,027.0 | 90,608.0 | 15.004 | 17,009.0 | 16,747.903 | 5.773 | 25,908.0 | 25,323.065 | 10.412 |
| COM_LOG | 95,094.0 | 90,690.839 | 14.039 | 16,964.0 | 16,711.581 | 6.022 | 25,917.0 | 25,344.71 | 10.381 |
| COM_PIECE | 94,640.0 | 90,199.355 | 14.45 | 16,946.0 | 16,682.645 | 6.122 | 26,211.0 | 25,345.065 | 9.364 |
| COM_SINE | 95,557.0 | 90,575.968 | 13.621 | 16,982.0 | 16,704.258 | 5.922 | 26,014.0 | 25,301.161 | 10.045 |
| COM_SINGER | 94,604.0 | 90,331.581 | 14.482 | 16,906.0 | 16,694.161 | 6.343 | 26,314.0 | 25,326.645 | 9.008 |
| COM_SINU | 94,238.0 | 90,794.581 | 14.813 | 16,980.0 | 16,732.581 | 5.933 | 26,004.0 | 25,329.581 | 10.08 |
| COM_TENT | 95,329.0 | 90,429.548 | 13.827 | 16,965.0 | 16,707.903 | 6.016 | 25,705.0 | 25,348.677 | 11.114 |
| COM_CIRCLE | 93,304.0 | 90,315.484 | 15.657 | 16,954.0 | 16,701.032 | 6.077 | 25,714.0 | 25,337.935 | 11.083 |
| ELIT | 93,627.0 | 90,490.226 | 15.365 | 17,042.0 | 16,727.0 | 5.59 | 25,819.0 | 25,238.839 | 10.72 |
| ELIT_LOG | 94,817.0 | 90,709.839 | 14.29 | 17,005.0 | 16,725.839 | 5.795 | 25,914.0 | 25,278.226 | 10.391 |
| ELIT_PIECE | 94,780.0 | 91,059.161 | 14.323 | 16,950.0 | 16,734.0 | 6.099 | 25,719.0 | 25,284.065 | 11.065 |
| ELIT_SINE | 95,798.0 | 90,960.581 | 13.403 | 17,063.0 | 16,706.323 | 5.473 | 25,705.0 | 25,235.161 | 11.114 |
| ELIT_SINGER | 95,383.0 | 90,819.161 | 13.778 | 16,993.0 | 16,677.323 | 5.861 | 25,616.0 | 25,309.645 | 11.422 |
| ELIT_SINU | 96,188.0 | 90,429.774 | 13.05 | 16,840.0 | 16,678.774 | 6.709 | 25,915.0 | 25,287.871 | 10.388 |
| ELIT_TENT | 95,551.0 | 91,256.774 | 13.626 | 17,056.0 | 16,724.581 | 5.512 | 26,111.0 | 25,325.871 | 9.71 |
| ELIT_CIRCLE | 94,775.0 | 90,491.323 | 14.328 | 16,972.0 | 16,737.226 | 5.978 | 25,816.0 | 25,341.065 | 10.73 |
Table 9. Ranking of best experiments based on statistical tests.
| Experiment | GWO | WOA | SCA | TOTAL |
|---|---|---|---|---|
| STD | 8/23 | 8/23 | 8/23 | 24/69 |
| STD_LOG | 8/23 | 8/23 | 8/23 | 24/69 |
| STD_PIECE | 8/23 | 8/23 | 8/23 | 24/69 |
| STD_SINE | 8/23 | 8/23 | 8/23 | 24/69 |
| STD_TENT | 8/23 | 8/23 | 8/23 | 24/69 |
| STD_CIRCLE | 8/23 | 8/23 | 8/23 | 24/69 |
| STD_SINGER | 7/23 | 8/23 | 8/23 | 23/69 |
| COM_CIRCLE | 6/23 | 8/23 | 8/23 | 22/69 |
| COM | 5/23 | 8/23 | 8/23 | 21/69 |
| COM_PIECE | 6/23 | 6/23 | 6/23 | 18/69 |
| COM_TENT | 2/23 | 8/23 | 8/23 | 18/69 |
| STD_SINU | 0/23 | 8/23 | 8/23 | 16/69 |
| COM_SINE | 6/23 | 1/23 | 1/23 | 8/69 |
| COM_LOG | 2/23 | 0/23 | 0/23 | 2/69 |
| COM_SINGER | 0/23 | 0/23 | 0/23 | 0/69 |
| COM_SINU | 0/23 | 0/23 | 0/23 | 0/69 |
| ELIT | 0/23 | 0/23 | 0/23 | 0/69 |
| ELIT_LOG | 0/23 | 0/23 | 0/23 | 0/69 |
| ELIT_PIECE | 0/23 | 0/23 | 0/23 | 0/69 |
| ELIT_SINE | 0/23 | 0/23 | 0/23 | 0/69 |
| ELIT_SINGER | 0/23 | 0/23 | 0/23 | 0/69 |
| ELIT_SINU | 0/23 | 0/23 | 0/23 | 0/69 |
| ELIT_TENT | 0/23 | 0/23 | 0/23 | 0/69 |
| ELIT_CIRCLE | 0/23 | 0/23 | 0/23 | 0/69 |
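The ranking shows the standard binarization rule combined with chaotic maps (e.g., STD_TENT) dominating. Their interaction can be sketched as follows; the tent map is written in its classic form and the binarization uses the common S-shaped (sigmoid) transfer function, both of which are assumptions since this excerpt does not reproduce the paper's exact equations, and `std_binarize` is an illustrative name:

```python
import math

def tent(x: float) -> float:
    # Classic tent map on (0, 1); the paper's exact parameterization may differ.
    return 2 * x if x < 0.5 else 2 * (1 - x)

def std_binarize(value: float, chaotic_value: float) -> int:
    # Standard (STD) rule: map the continuous decision variable through an
    # S-shaped transfer function, then threshold. The chaotic variant replaces
    # the usual uniform random draw with the current chaotic-map value.
    t = 1 / (1 + math.exp(-value))
    return 1 if chaotic_value < t else 0

x = 0.7  # chaotic seed; avoid fixed points of the map such as 0.0
bits = []
for v in [-2.0, 0.5, 1.5]:   # continuous values produced by the metaheuristic
    x = tent(x)              # advance the chaotic sequence each dimension
    bits.append(std_binarize(v, x))
```

Because the tent sequence is deterministic for a given seed, the "randomness" driving the binarization comes entirely from the map's sensitivity to initial conditions, which is what the chaotic schemes exploit.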
Table 10. Average p-value of GWO compared to others experiments.
| Experiment | STD | STD_LOG | STD_PIECE | STD_SINE | STD_SINGER | STD_SINU | STD_TENT | STD_CIRCLE | COM | COM_LOG | COM_PIECE | COM_SINE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| STD | X | 0.429 | 0.633 | 0.5 | 0.363 | 0.357 | 0.718 | 0.749 | 0.239 | 0.215 | 0.226 | 0.156 |
| STD_LOG | 1.0 | X | 1.0 | 0.993 | 0.363 | 0.357 | 1.0 | 0.786 | 0.239 | 0.215 | 0.226 | 0.156 |
| STD_PIECE | 0.797 | 0.429 | X | 0.5 | 0.363 | 0.357 | 0.748 | 0.765 | 0.239 | 0.215 | 0.226 | 0.156 |
| STD_SINE | 0.928 | 0.435 | 1.0 | X | 0.363 | 0.357 | 0.929 | 0.786 | 0.239 | 0.215 | 0.226 | 0.156 |
| STD_SINGER | 0.995 | 0.995 | 0.995 | 0.995 | X | 0.372 | 0.995 | 0.857 | 0.271 | 0.254 | 0.273 | 0.248 |
| STD_SINU | 1.0 | 1.0 | 1.0 | 1.0 | 0.985 | X | 1.0 | 0.926 | 0.776 | 0.707 | 0.77 | 0.757 |
| STD_TENT | 0.711 | 0.429 | 0.681 | 0.5 | 0.363 | 0.357 | X | 0.784 | 0.239 | 0.215 | 0.226 | 0.156 |
| STD_CIRCLE | 0.538 | 0.5 | 0.521 | 0.5 | 0.428 | 0.36 | 0.502 | X | 0.298 | 0.286 | 0.291 | 0.29 |
| COM | 0.978 | 0.978 | 0.978 | 0.978 | 0.875 | 0.37 | 0.978 | 0.846 | X | 0.324 | 0.556 | 0.397 |
| COM_LOG | 1.0 | 1.0 | 1.0 | 1.0 | 0.89 | 0.437 | 1.0 | 0.857 | 0.892 | X | 0.876 | 0.847 |
| COM_PIECE | 0.989 | 0.989 | 0.989 | 0.989 | 0.872 | 0.374 | 0.989 | 0.852 | 0.661 | 0.339 | X | 0.427 |
| COM_SINE | 0.988 | 0.988 | 0.988 | 0.988 | 0.897 | 0.387 | 0.988 | 0.853 | 0.75 | 0.297 | 0.72 | X |
| COM_SINGER | 1.0 | 1.0 | 1.0 | 1.0 | 0.987 | 0.74 | 1.0 | 0.961 | 0.962 | 0.909 | 0.973 | 0.962 |
| COM_SINU | 1.0 | 1.0 | 1.0 | 1.0 | 0.975 | 0.816 | 1.0 | 0.929 | 0.905 | 0.814 | 0.864 | 0.916 |
| COM_TENT | 0.984 | 0.984 | 0.984 | 0.984 | 0.948 | 0.424 | 0.984 | 0.849 | 0.667 | 0.335 | 0.608 | 0.476 |
| COM_CIRCLE | 0.999 | 0.999 | 0.999 | 0.999 | 0.67 | 0.433 | 0.999 | 0.928 | 0.509 | 0.308 | 0.465 | 0.46 |
| ELIT | 1.0 | 1.0 | 1.0 | 1.0 | 0.992 | 0.809 | 1.0 | 1.0 | 0.963 | 0.953 | 0.977 | 0.954 |
| ELIT_LOG | 1.0 | 1.0 | 1.0 | 1.0 | 0.949 | 0.719 | 1.0 | 1.0 | 0.94 | 0.878 | 0.937 | 0.968 |
| ELIT_PIECE | 1.0 | 1.0 | 1.0 | 1.0 | 0.952 | 0.764 | 1.0 | 1.0 | 0.935 | 0.862 | 0.906 | 0.942 |
| ELIT_SINE | 1.0 | 1.0 | 1.0 | 1.0 | 0.966 | 0.812 | 1.0 | 1.0 | 0.93 | 0.896 | 0.951 | 0.943 |
| ELIT_SINGER | 1.0 | 1.0 | 1.0 | 1.0 | 0.984 | 0.794 | 1.0 | 1.0 | 0.973 | 0.921 | 0.973 | 0.982 |
| ELIT_SINU | 1.0 | 1.0 | 1.0 | 1.0 | 0.986 | 0.824 | 1.0 | 1.0 | 0.977 | 0.89 | 0.975 | 0.984 |
| ELIT_TENT | 1.0 | 1.0 | 1.0 | 1.0 | 0.995 | 0.887 | 1.0 | 1.0 | 0.986 | 0.965 | 0.989 | 0.988 |
| ELIT_CIRCLE | 1.0 | 1.0 | 1.0 | 1.0 | 0.998 | 0.844 | 1.0 | 1.0 | 0.987 | 0.943 | 0.991 | 0.982 |

| Experiment | COM_SINGER | COM_SINU | COM_TENT | COM_CIRCLE | ELIT | ELIT_LOG | ELIT_PIECE | ELIT_SINE | ELIT_SINGER | ELIT_SINU | ELIT_TENT | ELIT_CIRCLE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| STD | 0.143 | 0.214 | 0.16 | 0.216 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_LOG | 0.143 | 0.214 | 0.16 | 0.216 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_PIECE | 0.143 | 0.214 | 0.16 | 0.216 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_SINE | 0.143 | 0.214 | 0.16 | 0.216 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_SINGER | 0.156 | 0.239 | 0.197 | 0.545 | 0.008 | 0.051 | 0.048 | 0.034 | 0.016 | 0.014 | 0.005 | 0.002 |
| STD_SINU | 0.405 | 0.4 | 0.721 | 0.782 | 0.192 | 0.283 | 0.238 | 0.19 | 0.208 | 0.178 | 0.115 | 0.158 |
| STD_TENT | 0.143 | 0.214 | 0.16 | 0.216 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_CIRCLE | 0.183 | 0.214 | 0.294 | 0.286 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| COM | 0.182 | 0.24 | 0.48 | 0.635 | 0.038 | 0.06 | 0.066 | 0.071 | 0.028 | 0.024 | 0.014 | 0.014 |
| COM_LOG | 0.236 | 0.33 | 0.809 | 0.836 | 0.048 | 0.123 | 0.139 | 0.105 | 0.08 | 0.111 | 0.035 | 0.058 |
| COM_PIECE | 0.17 | 0.281 | 0.539 | 0.68 | 0.024 | 0.064 | 0.094 | 0.05 | 0.027 | 0.026 | 0.011 | 0.009 |
| COM_SINE | 0.181 | 0.227 | 0.672 | 0.684 | 0.047 | 0.032 | 0.059 | 0.058 | 0.018 | 0.016 | 0.013 | 0.018 |
| COM_SINGER | X | 0.539 | 0.914 | 0.982 | 0.199 | 0.282 | 0.236 | 0.208 | 0.202 | 0.198 | 0.095 | 0.152 |
| COM_SINU | 0.606 | X | 0.861 | 0.841 | 0.285 | 0.416 | 0.356 | 0.283 | 0.281 | 0.243 | 0.16 | 0.23 |
| COM_TENT | 0.229 | 0.283 | X | 0.677 | 0.079 | 0.083 | 0.108 | 0.093 | 0.072 | 0.061 | 0.045 | 0.047 |
| COM_CIRCLE | 0.162 | 0.303 | 0.468 | X | 0.001 | 0.051 | 0.083 | 0.023 | 0.01 | 0.012 | 0.004 | 0.004 |
| ELIT | 0.802 | 0.717 | 0.922 | 0.999 | X | 0.594 | 0.519 | 0.543 | 0.447 | 0.482 | 0.343 | 0.434 |
| ELIT_LOG | 0.72 | 0.586 | 0.917 | 0.95 | 0.41 | X | 0.451 | 0.423 | 0.385 | 0.387 | 0.278 | 0.371 |
| ELIT_PIECE | 0.766 | 0.646 | 0.892 | 0.918 | 0.485 | 0.553 | X | 0.441 | 0.368 | 0.425 | 0.284 | 0.407 |
| ELIT_SINE | 0.794 | 0.72 | 0.908 | 0.977 | 0.461 | 0.581 | 0.564 | X | 0.459 | 0.47 | 0.311 | 0.416 |
| ELIT_SINGER | 0.8 | 0.722 | 0.929 | 0.99 | 0.557 | 0.618 | 0.636 | 0.545 | X | 0.53 | 0.378 | 0.499 |
| ELIT_SINU | 0.804 | 0.759 | 0.94 | 0.988 | 0.522 | 0.617 | 0.579 | 0.534 | 0.474 | X | 0.352 | 0.467 |
| ELIT_TENT | 0.907 | 0.842 | 0.955 | 0.996 | 0.66 | 0.725 | 0.72 | 0.693 | 0.626 | 0.652 | X | 0.631 |
| ELIT_CIRCLE | 0.85 | 0.773 | 0.954 | 0.996 | 0.57 | 0.633 | 0.597 | 0.588 | 0.505 | 0.537 | 0.374 | X |
Table 11. Average p-value of WOA compared to others experiments.
| Experiment | STD | STD_LOG | STD_PIECE | STD_SINE | STD_SINGER | STD_SINU | STD_TENT | STD_CIRCLE | COM | COM_LOG | COM_PIECE | COM_SINE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| STD | X | 0.483 | 0.742 | 0.537 | 0.429 | 0.426 | 0.72 | 0.737 | 0.301 | 0.22 | 0.239 | 0.224 |
| STD_LOG | 0.947 | X | 0.986 | 0.94 | 0.429 | 0.426 | 0.986 | 0.783 | 0.301 | 0.22 | 0.239 | 0.224 |
| STD_PIECE | 0.761 | 0.443 | X | 0.506 | 0.429 | 0.426 | 0.796 | 0.777 | 0.301 | 0.22 | 0.239 | 0.224 |
| STD_SINE | 0.964 | 0.49 | 0.995 | X | 0.429 | 0.426 | 0.99 | 0.786 | 0.301 | 0.22 | 0.239 | 0.224 |
| STD_SINGER | 1.0 | 1.0 | 1.0 | 1.0 | X | 0.497 | 1.0 | 0.857 | 0.301 | 0.22 | 0.239 | 0.224 |
| STD_SINU | 0.932 | 0.932 | 0.932 | 0.932 | 0.861 | X | 0.932 | 0.789 | 0.232 | 0.152 | 0.17 | 0.155 |
| STD_TENT | 0.782 | 0.444 | 0.777 | 0.511 | 0.429 | 0.426 | X | 0.754 | 0.301 | 0.22 | 0.239 | 0.224 |
| STD_CIRCLE | 0.692 | 0.574 | 0.652 | 0.643 | 0.5 | 0.497 | 0.675 | X | 0.36 | 0.289 | 0.298 | 0.289 |
| COM | 0.986 | 0.986 | 0.986 | 0.986 | 0.986 | 0.984 | 0.986 | 0.926 | X | 0.258 | 0.545 | 0.411 |
| COM_LOG | 0.994 | 0.994 | 0.994 | 0.994 | 0.994 | 0.992 | 0.994 | 0.926 | 0.958 | X | 0.933 | 0.852 |
| COM_PIECE | 0.978 | 0.978 | 0.978 | 0.978 | 0.978 | 0.975 | 0.978 | 0.917 | 0.675 | 0.284 | X | 0.458 |
| COM_SINE | 0.991 | 0.991 | 0.991 | 0.991 | 0.991 | 0.989 | 0.991 | 0.925 | 0.806 | 0.365 | 0.76 | X |
| COM_SINGER | 0.995 | 0.995 | 0.995 | 0.995 | 0.995 | 0.992 | 0.995 | 0.929 | 0.96 | 0.873 | 0.93 | 0.896 |
| COM_SINU | 0.997 | 0.997 | 0.997 | 0.997 | 0.997 | 0.997 | 0.997 | 0.925 | 0.91 | 0.777 | 0.865 | 0.819 |
| COM_TENT | 0.978 | 0.978 | 0.978 | 0.978 | 0.978 | 0.976 | 0.978 | 0.918 | 0.648 | 0.249 | 0.552 | 0.388 |
| COM_CIRCLE | 0.989 | 0.989 | 0.989 | 0.989 | 0.989 | 0.986 | 0.989 | 0.928 | 0.47 | 0.297 | 0.358 | 0.315 |
| ELIT | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.989 | 0.902 | 0.959 | 0.923 |
| ELIT_LOG | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.986 | 0.902 | 0.963 | 0.933 |
| ELIT_PIECE | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.967 | 0.913 | 0.94 | 0.925 |
| ELIT_SINE | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.993 | 0.92 | 0.969 | 0.948 |
| ELIT_SINGER | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.993 | 0.93 | 0.971 | 0.945 |
| ELIT_SINU | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.999 | 0.921 | 0.994 | 0.959 |
| ELIT_TENT | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.978 | 0.911 | 0.948 | 0.932 |
| ELIT_CIRCLE | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.991 | 0.91 | 0.977 | 0.946 |

| Experiment | COM_SINGER | COM_SINU | COM_TENT | COM_CIRCLE | ELIT | ELIT_LOG | ELIT_PIECE | ELIT_SINE | ELIT_SINGER | ELIT_SINU | ELIT_TENT | ELIT_CIRCLE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| STD | 0.149 | 0.146 | 0.31 | 0.226 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_LOG | 0.149 | 0.146 | 0.31 | 0.226 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_PIECE | 0.149 | 0.146 | 0.31 | 0.226 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_SINE | 0.149 | 0.146 | 0.31 | 0.226 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_SINGER | 0.149 | 0.146 | 0.31 | 0.226 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_SINU | 0.08 | 0.146 | 0.241 | 0.158 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_TENT | 0.149 | 0.146 | 0.31 | 0.226 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_CIRCLE | 0.214 | 0.146 | 0.369 | 0.286 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| COM | 0.184 | 0.163 | 0.642 | 0.746 | 0.011 | 0.014 | 0.033 | 0.008 | 0.007 | 0.001 | 0.023 | 0.009 |
| COM_LOG | 0.271 | 0.297 | 0.966 | 0.847 | 0.099 | 0.099 | 0.087 | 0.081 | 0.071 | 0.08 | 0.089 | 0.091 |
| COM_PIECE | 0.215 | 0.209 | 0.667 | 0.788 | 0.042 | 0.038 | 0.06 | 0.031 | 0.029 | 0.007 | 0.053 | 0.023 |
| COM_SINE | 0.248 | 0.255 | 0.829 | 0.83 | 0.077 | 0.067 | 0.075 | 0.053 | 0.056 | 0.042 | 0.068 | 0.055 |
| COM_SINGER | X | 0.438 | 0.942 | 0.922 | 0.162 | 0.199 | 0.198 | 0.174 | 0.101 | 0.119 | 0.153 | 0.179 |
| COM_SINU | 0.635 | X | 0.922 | 0.863 | 0.281 | 0.274 | 0.263 | 0.232 | 0.188 | 0.206 | 0.245 | 0.249 |
| COM_TENT | 0.202 | 0.152 | X | 0.738 | 0.033 | 0.016 | 0.043 | 0.014 | 0.012 | 0.006 | 0.032 | 0.008 |
| COM_CIRCLE | 0.223 | 0.21 | 0.479 | X | 0.013 | 0.009 | 0.036 | 0.009 | 0.006 | 0.001 | 0.017 | 0.003 |
| ELIT | 0.84 | 0.721 | 0.968 | 0.987 | X | 0.474 | 0.433 | 0.39 | 0.386 | 0.358 | 0.434 | 0.377 |
| ELIT_LOG | 0.802 | 0.728 | 0.984 | 0.991 | 0.53 | X | 0.514 | 0.442 | 0.422 | 0.438 | 0.444 | 0.399 |
| ELIT_PIECE | 0.804 | 0.739 | 0.958 | 0.964 | 0.572 | 0.49 | X | 0.412 | 0.446 | 0.452 | 0.442 | 0.423 |
| ELIT_SINE | 0.828 | 0.769 | 0.986 | 0.992 | 0.614 | 0.563 | 0.593 | X | 0.495 | 0.544 | 0.494 | 0.499 |
| ELIT_SINGER | 0.9 | 0.813 | 0.988 | 0.994 | 0.619 | 0.582 | 0.558 | 0.509 | X | 0.487 | 0.514 | 0.483 |
| ELIT_SINU | 0.882 | 0.796 | 0.994 | 0.999 | 0.646 | 0.567 | 0.552 | 0.46 | 0.518 | X | 0.529 | 0.455 |
| ELIT_TENT | 0.849 | 0.757 | 0.969 | 0.983 | 0.569 | 0.56 | 0.562 | 0.509 | 0.49 | 0.475 | X | 0.488 |
| ELIT_CIRCLE | 0.822 | 0.753 | 0.992 | 0.997 | 0.626 | 0.605 | 0.581 | 0.506 | 0.521 | 0.548 | 0.516 | X |
Table 12. Average p-value of SCA compared to others experiments.
| Experiment | STD | STD_LOG | STD_PIECE | STD_SINE | STD_SINGER | STD_SINU | STD_TENT | STD_CIRCLE | COM | COM_LOG | COM_PIECE | COM_SINE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| STD | X | 0.429 | 0.637 | 0.543 | 0.369 | 0.21 | 0.869 | 0.857 | 0.143 | 0.143 | 0.143 | 0.143 |
| STD_LOG | 1.0 | X | 1.0 | 1.0 | 0.369 | 0.21 | 1.0 | 0.881 | 0.143 | 0.143 | 0.143 | 0.143 |
| STD_PIECE | 0.794 | 0.429 | X | 0.562 | 0.369 | 0.21 | 0.876 | 0.857 | 0.143 | 0.143 | 0.143 | 0.143 |
| STD_SINE | 0.886 | 0.429 | 0.867 | X | 0.369 | 0.21 | 0.86 | 0.857 | 0.143 | 0.143 | 0.143 | 0.143 |
| STD_SINGER | 0.989 | 0.989 | 0.989 | 0.989 | X | 0.36 | 0.989 | 0.918 | 0.223 | 0.24 | 0.2 | 0.222 |
| STD_SINU | 0.935 | 0.935 | 0.935 | 0.935 | 0.785 | X | 0.935 | 0.863 | 0.492 | 0.498 | 0.438 | 0.494 |
| STD_TENT | 0.561 | 0.429 | 0.554 | 0.569 | 0.369 | 0.21 | X | 0.857 | 0.143 | 0.143 | 0.143 | 0.143 |
| STD_CIRCLE | 0.5 | 0.476 | 0.5 | 0.5 | 0.369 | 0.21 | 0.5 | X | 0.212 | 0.172 | 0.214 | 0.202 |
| COM | 1.0 | 1.0 | 1.0 | 1.0 | 0.92 | 0.582 | 1.0 | 0.93 | X | 0.513 | 0.458 | 0.678 |
| COM_LOG | 1.0 | 1.0 | 1.0 | 1.0 | 0.903 | 0.576 | 1.0 | 0.971 | 0.632 | X | 0.464 | 0.673 |
| COM_PIECE | 1.0 | 1.0 | 1.0 | 1.0 | 0.944 | 0.636 | 1.0 | 0.928 | 0.688 | 0.681 | X | 0.771 |
| COM_SINE | 1.0 | 1.0 | 1.0 | 1.0 | 0.921 | 0.579 | 1.0 | 0.941 | 0.468 | 0.473 | 0.374 | X |
| COM_SINGER | 1.0 | 1.0 | 1.0 | 1.0 | 0.947 | 0.847 | 1.0 | 1.0 | 0.914 | 0.907 | 0.796 | 0.906 |
| COM_SINU | 1.0 | 1.0 | 1.0 | 1.0 | 0.895 | 0.839 | 1.0 | 0.929 | 0.692 | 0.677 | 0.595 | 0.704 |
| COM_TENT | 1.0 | 1.0 | 1.0 | 1.0 | 0.926 | 0.605 | 1.0 | 0.93 | 0.609 | 0.52 | 0.454 | 0.699 |
| COM_CIRCLE | 1.0 | 1.0 | 1.0 | 1.0 | 0.924 | 0.691 | 1.0 | 0.987 | 0.734 | 0.704 | 0.596 | 0.754 |
| ELIT | 1.0 | 1.0 | 1.0 | 1.0 | 0.963 | 0.858 | 1.0 | 1.0 | 0.905 | 0.892 | 0.798 | 0.865 |
| ELIT_LOG | 1.0 | 1.0 | 1.0 | 1.0 | 0.942 | 0.808 | 1.0 | 1.0 | 0.83 | 0.831 | 0.722 | 0.837 |
| ELIT_PIECE | 1.0 | 1.0 | 1.0 | 1.0 | 0.899 | 0.844 | 1.0 | 1.0 | 0.84 | 0.83 | 0.764 | 0.831 |
| ELIT_SINE | 1.0 | 1.0 | 1.0 | 1.0 | 0.934 | 0.838 | 1.0 | 1.0 | 0.868 | 0.88 | 0.779 | 0.875 |
| ELIT_SINGER | 1.0 | 1.0 | 1.0 | 1.0 | 0.912 | 0.832 | 1.0 | 1.0 | 0.825 | 0.847 | 0.754 | 0.856 |
| ELIT_SINU | 1.0 | 1.0 | 1.0 | 1.0 | 0.95 | 0.854 | 1.0 | 1.0 | 0.901 | 0.909 | 0.82 | 0.885 |
| ELIT_TENT | 1.0 | 1.0 | 1.0 | 1.0 | 0.878 | 0.844 | 1.0 | 1.0 | 0.819 | 0.816 | 0.75 | 0.795 |
| ELIT_CIRCLE | 1.0 | 1.0 | 1.0 | 1.0 | 0.927 | 0.854 | 1.0 | 1.0 | 0.882 | 0.861 | 0.786 | 0.872 |

| Experiment | COM_SINGER | COM_SINU | COM_TENT | COM_CIRCLE | ELIT | ELIT_LOG | ELIT_PIECE | ELIT_SINE | ELIT_SINGER | ELIT_SINU | ELIT_TENT | ELIT_CIRCLE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| STD | 0.143 | 0.143 | 0.143 | 0.143 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_LOG | 0.143 | 0.143 | 0.143 | 0.143 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_PIECE | 0.143 | 0.143 | 0.143 | 0.143 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_SINE | 0.143 | 0.143 | 0.143 | 0.143 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_SINGER | 0.196 | 0.248 | 0.217 | 0.219 | 0.037 | 0.059 | 0.102 | 0.066 | 0.088 | 0.051 | 0.123 | 0.073 |
| STD_SINU | 0.227 | 0.306 | 0.469 | 0.383 | 0.144 | 0.193 | 0.158 | 0.163 | 0.17 | 0.148 | 0.158 | 0.147 |
| STD_TENT | 0.143 | 0.143 | 0.143 | 0.143 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| STD_CIRCLE | 0.143 | 0.143 | 0.213 | 0.157 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| COM | 0.23 | 0.381 | 0.538 | 0.411 | 0.096 | 0.173 | 0.161 | 0.134 | 0.177 | 0.1 | 0.182 | 0.12 |
| COM_LOG | 0.238 | 0.396 | 0.626 | 0.442 | 0.11 | 0.171 | 0.172 | 0.122 | 0.155 | 0.092 | 0.186 | 0.14 |
| COM_PIECE | 0.349 | 0.478 | 0.693 | 0.549 | 0.203 | 0.279 | 0.237 | 0.223 | 0.247 | 0.181 | 0.251 | 0.216 |
| COM_SINE | 0.239 | 0.37 | 0.448 | 0.391 | 0.136 | 0.165 | 0.17 | 0.126 | 0.146 | 0.116 | 0.206 | 0.129 |
| COM_SINGER | X | 0.591 | 0.868 | 0.798 | 0.283 | 0.382 | 0.329 | 0.29 | 0.354 | 0.281 | 0.336 | 0.283 |
| COM_SINU | 0.482 | X | 0.65 | 0.596 | 0.256 | 0.341 | 0.313 | 0.292 | 0.293 | 0.304 | 0.316 | 0.267 |
| COM_TENT | 0.276 | 0.423 | X | 0.426 | 0.165 | 0.208 | 0.19 | 0.16 | 0.168 | 0.147 | 0.211 | 0.145 |
| COM_CIRCLE | 0.347 | 0.477 | 0.72 | X | 0.147 | 0.184 | 0.165 | 0.129 | 0.163 | 0.128 | 0.206 | 0.132 |
| ELIT | 0.719 | 0.747 | 0.836 | 0.854 | X | 0.633 | 0.608 | 0.527 | 0.671 | 0.616 | 0.564 | 0.526 |
| ELIT_LOG | 0.621 | 0.662 | 0.794 | 0.817 | 0.372 | X | 0.511 | 0.376 | 0.514 | 0.469 | 0.426 | 0.425 |
| ELIT_PIECE | 0.674 | 0.69 | 0.812 | 0.836 | 0.395 | 0.492 | X | 0.409 | 0.531 | 0.437 | 0.48 | 0.396 |
| ELIT_SINE | 0.713 | 0.711 | 0.842 | 0.872 | 0.477 | 0.628 | 0.595 | X | 0.612 | 0.577 | 0.528 | 0.511 |
| ELIT_SINGER | 0.649 | 0.71 | 0.833 | 0.838 | 0.332 | 0.491 | 0.473 | 0.392 | X | 0.442 | 0.445 | 0.379 |
| ELIT_SINU | 0.722 | 0.699 | 0.854 | 0.874 | 0.387 | 0.535 | 0.566 | 0.426 | 0.562 | X | 0.473 | 0.42 |
| ELIT_TENT | 0.667 | 0.686 | 0.79 | 0.795 | 0.44 | 0.579 | 0.524 | 0.476 | 0.559 | 0.531 | X | 0.484 |
| ELIT_CIRCLE | 0.719 | 0.736 | 0.857 | 0.87 | 0.478 | 0.579 | 0.608 | 0.493 | 0.625 | 0.584 | 0.52 | X |
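Tables 10–12 report average p-values from pairwise comparisons of the experiments' run distributions. This excerpt does not restate which statistical test produced them; a common choice for comparing two independent samples of metaheuristic runs is the one-sided Wilcoxon–Mann–Whitney test, sketched below with a normal approximation (function names are illustrative, and tie correction is omitted for brevity):

```python
import math

def midranks(values):
    """Ranks 1..n of a list, averaging ranks over tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_p(a, b):
    """One-sided p-value (normal approximation) for 'a tends to be smaller than b'."""
    n1, n2 = len(a), len(b)
    ranks = midranks(list(a) + list(b))
    u = sum(ranks[:n1]) - n1 * (n1 + 1) / 2     # U statistic for sample a
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal CDF
```

A small p-value for the pair (row, column) would then indicate that the row experiment's fitness values are significantly below the column's, matching the way the near-zero entries separate the STD experiments from the ELIT ones.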
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Cisternas-Caneo, F.; Crawford, B.; Soto, R.; Giachetti, G.; Paz, Á.; Peña Fritz, A. Chaotic Binarization Schemes for Solving Combinatorial Optimization Problems Using Continuous Metaheuristics. Mathematics 2024, 12, 262. https://doi.org/10.3390/math12020262

