Article

Autonomous Parameter Balance in Population-Based Approaches: A Self-Adaptive Learning-Based Strategy

1 Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2241, Valparaíso 2362807, Chile
2 Escuela de Construcción Civil, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Macul, Santiago 7820436, Chile
3 CNRS/CRIStAL, University of Lille, 59655 Villeneuve d’Ascq, France
* Authors to whom correspondence should be addressed.
Biomimetics 2024, 9(2), 82; https://doi.org/10.3390/biomimetics9020082
Submission received: 28 December 2023 / Revised: 16 January 2024 / Accepted: 16 January 2024 / Published: 31 January 2024

Abstract:
Population-based metaheuristics can be seen as a set of agents that smartly explore the space of solutions of a given optimization problem. These agents are commonly governed by movement operators that decide how the exploration is driven. Although metaheuristics have been used successfully for more than 20 years, performing rapid and high-quality parameter control is still a main concern. For instance, deciding on a proper population size that yields a good balance between quality of results and computing time is a constantly hard task, even more so in the presence of an unexplored optimization problem. In this paper, we propose a self-adaptive strategy based on on-line population balance, which aims to improve the performance and search process of population-based algorithms. The design behind the proposed approach relies on three different components. Firstly, an optimization-based component defines all metaheuristic tasks related to carrying out the resolution of the optimization problem. Secondly, a learning-based component focuses on transforming dynamic data into knowledge in order to influence the search in the solution space. Thirdly, a probabilistic selector component dynamically adjusts the population. We illustrate an extensive experimental process on large instance sets from three well-known discrete optimization problems: the Manufacturing Cell Design Problem, the Set Covering Problem, and the Multidimensional Knapsack Problem. The proposed approach is able to compete against classic, autonomous, and IRace-tuned metaheuristics, yielding interesting results and pointing to future work on dynamically adjusting the number of solutions interacting at different times within the search process.

1. Introduction

Metaheuristics (MH) correspond to a heterogeneous family of algorithms, and multiple classifications have been proposed, such as single-solution, population-based, and nature-inspired [1]. In addition, it is well known that the tuning of their key components, such as movement operators, stochastic elements, and parameters, can be the target of multiple improvements in order to achieve better performance. In this regard, dynamically adjusting parameters such as the population size is an important topic for the scientific community, which focuses its research on the employment of population-based approaches to solve hard optimization problems. This parameter can be considered one of the most transversal issues to be defined in population-based algorithms. Nevertheless, it can also be the most difficult parameter to settle in MH [2]. Moreover, the real impact of the number of agents has rarely been addressed and has proved to depend on several scenarios [3]: for instance, variants designed to perform in a particular study case, approaches designed to perform in a specific application, and approaches designed to tackle high-dimensional problems. Thus, in order to ease the arduous task of controlling this parameter, we propose a novel self-adaptive strategy, which aims to dynamically balance the number of agents by analyzing the dynamic data generated at run-time while solving different discrete optimization problems. In the literature, this kind of strategy has gained a solid foothold in the optimization field, and in particular in evolutionary algorithms, where the generalized ideas include convergence optimization, global search improvements, and high affinity with parallelism, among others [4,5,6,7]. For instance, it is well known that the harmony search algorithm has drawbacks such as falling into local optima and premature convergence.
However, these issues have been tackled through improvements to its internal components, data management, tuning parameters, and search process [8,9,10]. A well-known issue that persists to this day concerns the proposition of tailored/fitted solutions designed to perform under certain conditions, constrained to a defined environment, tackling a clear objective, and a specific problem or even specific instances within a problem [3]. Moreover, a wide number of MH have been proposed in the literature. However, a recurrent scenario is that most advances and improvements proposed in the state of the art focus on well-known algorithms, such as the works regarding the population size and other parameters in Particle Swarm Optimization (PSO) [11,12,13]. In this context, the proposed approach, named Learning-based Linear Population (LBLP), aims to improve the performance achieved by a pure population-based algorithm through the incorporation of a learning-based component that balances the agents at run-time. In addition, the interaction between MH and Machine Learning (ML) has attracted massive attention from the scientific community given the great results yielded in their respective fields [14,15,16,17,18].
The design proposed in this work includes the definition of three components, which are mainly based on ideas and techniques from the optimization and ML fields [19]. In this context, the first component focuses on the management of major population-based tasks, such as generation of the initial population, intensification, diversification, and binarization. In this first attempt, we employ the Spotted Hyena Optimizer (SHO) algorithm [20], which has proved to be a good option for solving optimization problems [21,22,23,24,25,26]. Regarding the second component, the main objective is the management of the dynamic data generated. This component includes two major tasks: the management of the data structures behind LBLP and the management of the learning-based method. The learning process is carried out by a statistical modeling method based on multiple linear regression. In this context, the process controlling the population size is influenced by the knowledge generated through this method. The third proposed component concerns the management of parameters and agents used by LBLP while carrying out the solving process. In this regard, three major tasks are performed throughout the search process: the selection mechanism, the control of probabilities, and the increase/decrease of solutions within the population. The objective of the selection mechanism is the proper choice of a population size to perform for a certain number of iterations at run-time; its design follows a Monte Carlo simulation strategy. The second task concerns the control of parameters such as the probabilities employed. Finally, the third task carries out the generation and removal of solutions.
In order to test the performance and prove the viability of our proposed hybrid approach, we solve three well-known optimization problems: the Manufacturing Cell Design Problem (MCDP) [27], the Set Covering Problem (SCP) [28], and the Multidimensional Knapsack Problem (MKP) [29]. The comparison is carried out in a three-step experimentation phase. Firstly, we carry out a performance comparison against reported results yielded by competitive state-of-the-art algorithms. Secondly, we compare against a pure implementation of SHO assisted by IRace, a well-known parameter tuning method [30]. Thirdly, we compare the results obtained by the pure implementation of SHO against our proposed hybrid. Finally, we present the experimental results and discussion, where the proposed LBLP achieves good performance, proving to be a competitive option for tackling hard optimization problems.
The main contributions and strong points in the proposal can be described as follows.
  • Robust self-adaptive hybrid approach capable of tackling hard optimization problems;
  • Online-tuning/Control of a key issue in population-based approaches: Adapting population size on run-time;
  • The hybrid approach successfully solved multiple hard optimization problems: In the experimentation phase, great results were achieved solving the MCDP, SCP, and MKP by employing a unique set of configuration values;
  • Scalability in the first designed component: This work demonstrated great adaptability with respect to the employed population-based algorithm, which allows the incorporation of several movement operators from different population-based algorithms to be instantiated by the approach (parallel approach);
  • Scalability in the third designed component: This work demonstrated significant benefits derived from the dynamic data generated through the search. The proposed design allows for the incorporation of different techniques, such as multiple supervised and deep learning methods.
The rest of this paper is organized as follows. The related work is introduced in Section 2. In Section 3 we illustrate the proper background in order to fully understand the proposed work and optimization problems solved. The proposed hybrid approach is explained in Section 4. Section 5 illustrates the experimental results. Finally, we conclude and suggest some lines of future research in Section 6.

2. Related Work

The proposed self-adaptive strategy has been designed through the interaction of multiple components from the optimization and machine learning fields. In the literature, this kind of proposal is known as a hybrid approach, which aims to incorporate knowledge from data and experience into the search process while solving a given problem. This line of investigation has received noteworthy attention from the scientific community, and multiple taxonomies have been reported [19,31].
Preliminary works placing machine learning at the service of optimization methods have been a trendy approach in recent years. In Ref. [32], a hybrid approach combining Tabu Search (TS) and Support Vector Machines (SVM) was proposed. The objective was to design an approach capable of tackling hard combinatorial optimization problems, such as the Knapsack Problem (KP), the Set Covering Problem (SCP), and the Traveling Salesman Problem (TSP). The proposed hybrid defined decision rules from a corpus of randomly generated solutions, which were used to predict high-quality solutions for a given instance and lead the search. However, the complexity of the designed approach is a key factor, and the authors highlight arduous and time-consuming tasks, such as the knowledge needed to build the corpus and the extraction of the classification rules. In addition, more recent hybrids that integrate self-adaptive strategies in their process have been receiving significant attention given the achieved results. In Ref. [9], an ensemble learning model focused on detecting fake news was proposed. This hybrid includes an off-line and an online process. The authors proposed the incorporation of a self-adaptive harmony search in the off-line process in order to modify the weights of four training models based on different CNN versions. However, issues persist to this end, such as computational complexity, resources, and solutions being tailored for a specific objective.
The objective of the proposed approach concerns improving the performance of a pure population-based algorithm through the proper control of parameters [33]. This work emphasizes the population size, which is well known for being a key parameter defined by all swarm-based approaches. In this regard, similar objectives can be observed in Refs. [22,34], where the authors proposed two hybrids that follow the same objective. The approaches employ Autonomous Search (AS) to assist the MH Human Behavior-Based Optimizer (HBBO) and SHO, respectively. AS is described as a reactive process that lets solvers automatically reconfigure their parameters in order to improve when poor performance is detected. Nevertheless, data-driven hybrids that follow an equal objective are scarce. Currently, the body of research related to this work focuses on classification, clustering, and data mining techniques. In this context, the authors in Ref. [35] proposed a hybrid framework based on the Co-evolutionary Genetic Algorithm (CGA) supported by machine learning. They employed inductive learning methods over a group of elite populations in order to influence the population with lower fitness. Their objective was to achieve a constant evolution of agents with low performance throughout the search. The learning process was carried out by the C4.5 and CN2 algorithms in order to perform the classification. Regarding data mining-based approaches, in Ref. [36], a hybrid version of ant colony optimization that incorporates a data mining module to guide the search process was proposed. Regarding the usage of clustering-based methods, Streichert et al. [37] proposed a clustering-based niching method. The main objective of the proposed approach was to identify multiple global and local optima in a multimodal search space. The model is employed over well-known evolutionary algorithms, and its aim was the preservation of diversity using a multi-population approach.
The proposed hybrid draws inspiration from multiple ideas, described as follows. Firstly, we propose a hybrid approach that is capable of solving different optimization problems. In addition, the main objective concerns the design of a self-adaptive strategy to dynamically adjust and control a key parameter of population-based approaches, namely the population size. In this regard, we detected only a scarce number of works that focus their efforts on this issue. In the literature, most proposed works concern the tuning of parameters: the values are adjusted before the execution of the algorithm, usually through a number of previous runs. In contrast, the proposed work adjusts the parameter values on-the-fly. In this context, a control method chooses a set of values for the optimization algorithm to perform with for a given amount of time. The performance achieved is properly measured, so the control method is able to know how good that choice was. These steps are repeated, and the aim is to maximize the chances of success by making the best decisions in the optimization process. On the other hand, although there is a clear presence of well-known learning-based methods such as clustering and classification, regression analysis is hardly employed, leaving out highly potential models that can tackle the presented issue. Lastly, we highlight the promising results obtained by solving three different hard optimization problems, which are illustrated in Section 5. In this regard, most proposed approaches in the literature are problem-oriented. Nevertheless, one of our objectives is for the proposed approach to remain agnostic to the given problem. Thus, the presented results illustrate a promising contribution to the field.

3. Background

In this section, we review essential topics needed in order to fully understand the proposed hybrid. Firstly, the main features of population-based methods are presented, followed by a description of the employed SHO algorithm. Secondly, the optimization problems solved in this work are described in detail.

3.1. Metaheuristics

MHs can be described as general-purpose methods with great capabilities for tackling optimization problems [38]. This heterogeneous family of algorithms has been the focus of several works as a consequence of attractive features such as the capability to tackle hard optimization problems in a finite computational time, achieving close-to-optimal solutions [39]. In the literature, subgroups within this family have been identified according to different criteria based on the features of the proposed algorithms. Firstly, single-solution algorithms were designed to carry out the transformation of a single solution during the search. Well-known examples are local search [40], simulated annealing [41], etc. On the other hand, population-based algorithms focus on the transformation of multiple solutions during the search. In this context, all the agents/solutions in the population interact with each other and evolve. Well-known algorithms are the shuffled frog leaping algorithm [42], ant colony optimization [43], the grey wolf optimizer [44], and so on. Another large family of proposed algorithms consists of nature-inspired approaches. They are born as metaphors that define their behaviors on the basis of nature, for instance, the genetic algorithm [45], the memetic algorithm [46], and differential evolution [47]. Additionally, an inverse phenomenon can be described for non-natural algorithms, such as the imperialist competitive algorithm [48], and several subgroups of algorithms designed from multiple fields, such as music, physics, and so on. However, all the algorithms in this heterogeneous family share common concepts in their design, such as ideas, components, and parameters.

3.1.1. Spotted Hyena Optimizer

In this work, we employ the SHO algorithm, a population-based MH that follows clustering ideas in its search and has proved to be a good option for solving optimization problems. The main concept behind this algorithm is the social relationship between spotted hyenas and their collaborative behavior; it was originally designed to optimize constrained and unconstrained design problems. Regarding the description and equations of the movement operators, at the beginning, encircling prey is applied. The objective is to update the position of each agent towards the current best candidate solution in the population. In order to carry out the perturbation on each agent, we employ Equations (1) and (2). In (1), D_h is the distance between the current agent (P) and the current best agent in the population (P_p). In Equation (2), we compute the update of the current agent. In both equations, B and E correspond to coefficient vectors; they are computed as illustrated in Equations (3) and (4), where rd_1 and rd_2 are random vectors in [0, 1], and h decreases linearly over the run as defined in Equation (5).
D_h = |B · P_p(x) − P(x)|  (1)
P(x + 1) = P_p(x) − E · D_h  (2)
B = 2 · rd_1  (3)
E = 2h · rd_2 − h  (4)
h = 5 − (Iteration × (5 / Max_iteration))  (5)
The second movement employed is named hunting. The main objective is to influence the decision regarding the next position of each agent, and the main idea is to compose a cluster towards the current best agent. In order to carry out this movement, we employ Equations (6)–(8). In (6) and (7), D_h represents the distance, P_h represents the current best agent in the population, and P_k the current agent being updated. Equation (8) illustrates the data structure that contains the clustered population, where N indicates the number of agents.
D_h = |B · P_h − P_k|  (6)
P_k = P_h − E · D_h  (7)
C_h = P_k + P_(k+1) + … + P_(k+N)  (8)
Attacking the prey is the third movement employed. This operator concerns the exploitation of the search space. In (9), each agent belonging to the cluster C_h, generated in (8), is updated.
P(x + 1) = C_h / N  (9)
The fourth movement concerns the performance of a passive exploration. SHO sets the coefficient vectors B and E to random values that force the agents to move far away from the current best agent in the population. This mechanism improves the global search of the approach. Additionally, SHO was initially designed to work in a continuous space. In order to tackle the MCDP, SCP, and MKP, a transformation of domain is needed; this process is illustrated in the next subsection.
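As an illustration, the encircling and hunting movements described above can be sketched in Python. This is a minimal sketch with our own function signature, a NumPy array representation, and an assumed `cluster_size` parameter; it is not the authors' implementation.

```python
import numpy as np

def sho_step(population, best, iteration, max_iteration, cluster_size=3):
    """One illustrative SHO update: encircling prey, then hunting.

    population: (n_agents, dim) array of continuous positions.
    best: position of the current best agent (the "prey").
    cluster_size is a hypothetical choice for the hunting cluster.
    """
    n, dim = population.shape
    # h decreases linearly from 5 to 0 over the run (Equation (5)).
    h = 5.0 - iteration * (5.0 / max_iteration)
    new_pop = np.empty_like(population)
    for i in range(n):
        # Coefficient vectors B and E (Equations (3) and (4)).
        B = 2.0 * np.random.rand(dim)
        E = 2.0 * h * np.random.rand(dim) - h
        # Encircling: distance to the best agent, then position update
        # (Equations (1) and (2)).
        D_h = np.abs(B * best - population[i])
        new_pop[i] = best - E * D_h
    # Hunting/attacking: average a cluster of updated agents to obtain the
    # next position towards the best agent (Equations (8) and (9)).
    cluster = new_pop[:cluster_size]
    attack = cluster.sum(axis=0) / cluster_size
    return new_pop, attack
```

A full SHO run would iterate this step, replace the best agent whenever it improves, and alternate between exploitation and the passive-exploration mode through the magnitude of E.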

3.1.2. Domain Transfer

In the literature, continuous population-based MH have proved to be very effective in tackling several highly complex optimization problems [49]. Currently, the increasing complexity of binary modern industrial problems has posed new challenges to the scientific community, which has proposed continuous methods as potential options to tackle this domain, for instance, the Binary Bat Algorithm [50], PSO [51], the Binary Salp Swarm Algorithm [52], Binary Dragonfly [53], and the Binary Magnetic Optimization Algorithm [54], among others [55,56,57]. In order to carry out the transformation, binarization strategies have been proposed [51]. In this regard, a well-known strategy is the two-step binarization scheme which, as the name implies, is composed of a two-step process where transformation and then binarization are performed. Firstly, transfer functions were introduced to the field in Ref. [58] with the aim of producing a probability between 0 and 1 while employing low computational resources. Thus, the transfer functions illustrated in Table 1 are applied to the values generated by the movement operator of the continuous MH, mapping these values into the range between 0 and 1. Secondly, binarization is applied, which discretizes the output values from the first step, deciding which binary value (0 or 1) is selected. In this regard, the classic methods are described as follows:
  • Standard: If the condition is satisfied, the standard method returns 1; otherwise it returns 0.
    X_i^d(t + 1) = 1 if rand ≤ T(x_i^d(t + 1)), and 0 otherwise
  • Complement: If the condition is satisfied, the complement method returns the complement value.
    X_i^d(t + 1) = x̄_i^d(t + 1) if rand ≤ T(x_i^d(t + 1)), and 0 otherwise
  • Static probability: A probability is generated and evaluated with a transfer function.
    X_i^d(t + 1) = 0 if T(x_i^d(t + 1)) ≤ α; x_i^d(t + 1) if α < T(x_i^d(t + 1)) ≤ ½(1 + α); 1 if T(x_i^d(t + 1)) > ½(1 + α)
  • Elitist Discretization: The elitist roulette method, also known as Monte Carlo, consists of randomly selecting among the best individuals of the population, with a probability proportional to their fitness.
    X_i^d(t + 1) = x_i^d(t + 1) if rand ≤ T(x_i^d(t + 1)), and 0 otherwise
In this work, the two-step strategy employed consists of the transfer function V 4 and the elitist discretization.
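For illustration, a minimal sketch of this two-step scheme with the V4 transfer function and an elitist-roulette discretization might look as follows. The function names, the elite-set representation, and the minimization-oriented fitness weighting are our own assumptions, not the paper's implementation.

```python
import math
import random

def v4(x):
    """V4 transfer function: maps a real value to a probability in [0, 1]."""
    return abs((2.0 / math.pi) * math.atan((math.pi / 2.0) * x))

def elitist_binarize(value, elites, dim, minimize=True):
    """Elitist-roulette discretization (hypothetical sketch).

    With probability V4(value), inherit bit `dim` from an elite solution
    picked with probability proportional to its fitness; otherwise return 0.
    `elites` is a list of (bits, fitness) pairs.
    """
    if random.random() > v4(value):
        return 0
    # Fitness-proportional roulette over the elite set; for minimization we
    # invert the fitness so that better solutions get larger slices.
    weights = [1.0 / (1.0 + f) if minimize else f for _, f in elites]
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for (bits, _), w in zip(elites, weights):
        acc += w
        if r <= acc:
            return bits[dim]
    return elites[-1][0][dim]  # float-rounding fallback
```

Here `elites` would typically hold the best solutions found so far, so a promising continuous value tends to inherit bits from good binary solutions.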

3.2. Optimization Problems

In this subsection, we present a detailed explanation of the three optimization problems tackled by our proposed LBLP.

3.2.1. Manufacturing Cell Design Problem

The Manufacturing Cell Design Problem (MCDP) [59] is a classical optimization problem that finds application in lines of manufacture. In this regard, the MCDP consists of organizing a manufacturing plant or facility into a set of cells, each of them made up of different machines meant to process different parts of a product that have similar characteristics. The main objective is to minimize the movement and exchange of material between cells in order to reduce the production costs and increase productivity. The optimization model is stated as follows. Let:
  • M—the number of machines;
  • P—the number of parts;
  • C—the number of cells;
  • i—the index of machines (i = 1, …, M);
  • j—the index of parts (j = 1, …, P);
  • k—the index of cells (k = 1, …, C);
  • A = [ a i j ] —the binary machine-part incidence matrix M × P;
  • M_max—the maximum number of machines per cell.
We selected as the objective function to minimize the number of times that a given part must be processed by a machine that does not belong to the cell to which the part has been assigned. Let:
y_ik = 1 if machine i ∈ cell k; 0 otherwise
z_jk = 1 if part j ∈ family k; 0 otherwise
  The problem is represented by the following mathematical model:
Minimize Σ_{k=1..C} Σ_{i=1..M} Σ_{j=1..P} a_ij · z_jk · (1 − y_ik)
Subject to
Σ_{k=1..C} y_ik = 1, ∀ i
Σ_{k=1..C} z_jk = 1, ∀ j
Σ_{i=1..M} y_ik ≤ M_max, ∀ k
In this work, we solved a set of 35 instances from different authors. Each instance has its own configuration: the number of machines ranges from 5 to 40, the number of parts from 7 to 100, and so on. For this experiment, each instance was executed 30 times.
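The model above can be exercised with a small evaluator; this is a sketch with our own function names, using plain nested lists for A, y, and z.

```python
def mcdp_objective(A, y, z):
    """Inter-cell movements for the MCDP: counts how many times a part is
    processed by a machine outside the cell the part is assigned to.

    A[i][j] = 1 if machine i processes part j; y[i][k] = 1 if machine i is
    in cell k; z[j][k] = 1 if part j is assigned to cell k.
    """
    M, P = len(A), len(A[0])
    C = len(y[0])
    return sum(A[i][j] * z[j][k] * (1 - y[i][k])
               for k in range(C) for i in range(M) for j in range(P))

def mcdp_feasible(y, z, m_max):
    """Check the three constraints: each machine in exactly one cell, each
    part in exactly one cell, and at most m_max machines per cell."""
    ok_machines = all(sum(row) == 1 for row in y)
    ok_parts = all(sum(row) == 1 for row in z)
    C = len(y[0])
    ok_cap = all(sum(y[i][k] for i in range(len(y))) <= m_max
                 for k in range(C))
    return ok_machines and ok_parts and ok_cap
```

For a 2 × 2 toy instance where each machine and the part it processes share a cell, the objective is 0; swapping the two part assignments yields 2 inter-cell movements.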

3.2.2. Set Covering Problem

The Set Covering Problem (SCP) is one of Karp’s 21 NP-complete problems, where the goal is to find a subset of columns in a 0–1 matrix that covers all the rows of the matrix at minimum cost. Several applications of the SCP can be seen in the real world, for instance, bus crew scheduling [60], location of emergency facilities [61], and vehicle routing [62]. The formal definition is presented as follows. Let A = (a_ij) be a binary m × n matrix and C = (c_j) a positive n-dimensional vector, where each element c_j of C gives the cost of selecting column j of matrix A. If a_ij is equal to 1, then row i is covered by column j; otherwise it is not. The goal of the SCP is to find a minimum-cost subset of columns of A such that each row of A is covered by at least one column. A mathematical definition of the SCP can be expressed as follows:
Minimize Σ_{j=1..n} c_j x_j
Subject to Σ_{j=1..n} a_ij x_j ≥ 1, i = 1, 2, …, m
x_j ∈ {0, 1}, j = 1, 2, …, n
where x_j is 1 if column j is in the solution, and 0 otherwise. The constraint ensures that each row i is covered by at least one column. In this work, we solved 65 different instances, organized into 11 sets extracted from Beasley’s OR-Library. The employed instances were pre-processed in order to reduce their size and complexity. In this context, multiple pre-processing methods have been proposed in the literature for the SCP [63]. In this work, we used the two that have proved to be the most effective: Column Domination (CD) and Column Inclusion (CI). Firstly, in CD, if the set of rows L_j covered by a column j is also covered by another column j′ with c_j′ < c_j, we say that column j is dominated by column j′, and column j is removed from the solution. Secondly, in CI, when a row is covered by only one column after CD is applied, there is no better column to cover that row, and therefore this column must be included in the optimal solution. For this experiment, the test instances were executed 30 times.
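A compact sketch of the SCP evaluation and the CD pre-processing rule could read as follows; the signatures are our own, and ties in cost are broken by column index so that two identical columns do not eliminate each other.

```python
def scp_cost(costs, solution):
    """Total cost of the selected columns (solution is a 0/1 list)."""
    return sum(c for c, x in zip(costs, solution) if x)

def scp_covered(A, solution):
    """True when every row is covered by at least one selected column."""
    return all(any(A[i][j] and solution[j] for j in range(len(solution)))
               for i in range(len(A)))

def column_domination(A, costs):
    """Column Domination (CD) sketch: drop column j when a cheaper column
    j2 covers every row that j covers; returns the surviving columns."""
    m, n = len(A), len(A[0])
    rows_of = [frozenset(i for i in range(m) if A[i][j]) for j in range(n)]
    keep = []
    for j in range(n):
        dominated = any(j2 != j and rows_of[j] <= rows_of[j2]
                        and (costs[j2], j2) < (costs[j], j)
                        for j2 in range(n))
        if not dominated:
            keep.append(j)
    return keep
```

On a toy 2 × 2 matrix where the second column covers both rows at a lower cost, CD keeps only that column.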

3.2.3. Multidimensional Knapsack Problem

The Multidimensional Knapsack Problem (MKP) is NP-hard and can be considered the generalized form of the classic Knapsack Problem (KP). The goal of the MKP is to search for a subset of given objects that maximizes the total profit while satisfying all constraints on resources. In addition, the KP is a widely used problem with real-world applications in diverse fields including cryptography, allocation problems, scheduling, and production [64,65]. The model can be stated as follows.
Maximize Σ_{j=1..n} c_j x_j
Subject to Σ_{j=1..n} a_ij x_j ≤ b_i, i ∈ M = {1, 2, …, m}
x_j ∈ {0, 1}, j ∈ N = {1, 2, …, n}
where n is the number of items and m is the number of knapsack constraints with capacities b_i. Each item j requires a_ij units of resource consumption in the i-th knapsack and yields c_j units of profit upon inclusion. The goal is to find a subset of items that yields maximum profit without exceeding the resource capacities. In this work, we solved six instance sets from Beasley’s OR-Library. The details concerning the solved benchmark are illustrated in Table 2.
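The MKP model can be exercised with a small evaluator and a naive greedy baseline; this is an illustrative sketch (the profit-per-consumption greedy rule is our own choice, not part of the proposed approach).

```python
def mkp_profit(profits, solution):
    """Total profit of the selected items (solution is a 0/1 list)."""
    return sum(p for p, x in zip(profits, solution) if x)

def mkp_feasible(A, b, solution):
    """True when every knapsack constraint sum_j a[i][j]*x_j <= b_i holds."""
    return all(sum(A[i][j] * solution[j] for j in range(len(solution))) <= b[i]
               for i in range(len(b)))

def mkp_greedy(profits, A, b):
    """Hypothetical greedy baseline: try items in decreasing order of
    profit per unit of total resource consumption, keeping an item only if
    all constraints stay satisfied."""
    n = len(profits)
    order = sorted(range(n),
                   key=lambda j: profits[j] /
                                 (1 + sum(A[i][j] for i in range(len(b)))),
                   reverse=True)
    x = [0] * n
    for j in order:
        x[j] = 1
        if not mkp_feasible(A, b, x):
            x[j] = 0  # undo: the item does not fit
    return x
```

Such a baseline is useful as a sanity check for a metaheuristic: any reported solution should be feasible and at least as profitable as the greedy one.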

4. Proposed Hybrid Approach

In this section, we describe the details of the proposed hybrid: the main ideas, motivations, and design. Firstly, a general description of the process carried out is presented. In Section 4.2 we describe the methodology behind LBLP in more detail. Section 4.3 describes the main ideas, objectives, and techniques employed in the design of the proposed components. Finally, Section 4.4 illustrates the proposed algorithms.

4.1. General Description

The proposed LBLP follows a population-based solving strategy, in which multiple agents evolve in the solution space, intensification and diversification are performed, and the process terminates when a threshold defined as a number of iterations is met. Dynamically adjusting parameters, especially the population size, is an important topic that continues to be of growing interest to the natural computation community. In Ref. [3], the authors carried out a complete analysis of different implementations of PSO in order to define the ideal number of agents. However, they highlighted that the same configuration will not necessarily fit each optimization problem, or even each instance of the same problem. In this proposal, we employ a population-based algorithm and improve its performance by modifying the population size at run-time. This modification is driven by a learning component based on regression, which transforms the results yielded by different population sizes during solving time. Thus, the modifications are managed based on the best possible performance that can be achieved by employing a certain population size. This whole process is governed by two parameters used as thresholds for different LBLP tasks: (1) when a new population size is selected to perform and (2) when knowledge needs to be generated. The first threshold is named α and decides when the selection process, which picks a suitable population size to perform, is carried out. The second parameter is named β and decides when the regression analysis is performed. The steps of the proposed LBLP are described as follows:
Step 1:
Set the initial parameters for the population-based algorithm and the regression analysis.
Step 2:
Select the initial population size to perform.
Step 3:
Generate initial population.
Step 4:
While the termination criterion is not met:
Step 4.1:
Carry out intensification and diversification on the population.
Step 4.2:
Management of dynamic data generated.
Step 4.3:
Check if β amount of iteration has been met.
Step 4.3.1:
Perform regression analysis.
Step 4.3.2:
Manage the knowledge generated.
Step 4.4:
Check if α amount of iteration has been met.
Step 4.4.1:
Perform the selection mechanism.
Step 4.4.2:
Perform the balance of population.
Step 4.5:
Update the population-based algorithm’s parameters.
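The steps above can be condensed into a minimal control-loop skeleton. This is a sketch with our own names: `evaluate` stands in for one SHO iteration with a given population size, and the regression-based learning of Step 4.3 is replaced here by a simple running-mean reward.

```python
import random

def lblp_loop(schemes, evaluate, max_iter, alpha, beta):
    """Skeleton of the LBLP control loop (illustrative, not the paper's code).

    schemes: candidate population sizes.
    evaluate(size): runs one iteration with `size` agents, returns a fitness.
    alpha: iterations between scheme re-selections (Step 4.4).
    beta: iterations between learning phases (Step 4.3).
    """
    probs = {s: 1.0 / len(schemes) for s in schemes}   # Step 1: equal priors
    history = {s: [] for s in schemes}
    current = random.choices(schemes,                  # Step 2: initial size
                             weights=[probs[s] for s in schemes])[0]
    best = float("inf")
    for it in range(1, max_iter + 1):                  # Step 4: main loop
        fitness = evaluate(current)                    # Step 4.1: search
        history[current].append(fitness)               # Step 4.2: store data
        best = min(best, fitness)
        if it % beta == 0:                             # Step 4.3: learn
            # Reward the scheme with the best (lowest) mean fitness so far;
            # a linear-regression forecast would replace this mean here.
            winner = min((s for s in schemes if history[s]),
                         key=lambda s: sum(history[s]) / len(history[s]))
            probs[winner] += 0.1                       # Step 4.3.2: boost
        if it % alpha == 0:                            # Step 4.4: reselect
            current = random.choices(schemes,
                                     weights=[probs[s] for s in schemes])[0]
    return best
```

The roulette selection via `random.choices` mirrors the Monte Carlo strategy of the selector component; Step 4.4.2 (resizing the actual population) is implied by switching `current`.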

4.2. Methodology

The proposed LBLP defines four different population sizes as schemes that can be selected to perform during the search. The initial probability of each scheme being selected is equal: for instance, with four different size values, each initial probability corresponds to 0.25. Thus, at iteration 1, the selection mechanism (given by the Roulette selector component) chooses a scheme, and this selected value is the one performed during the next α iterations. In addition, in each iteration, the component managing the movement operators (the Driver component) sequentially carries out diversification and intensification over the agents in the search space. This process generates dynamic data in each iteration that is sorted and stored, and this collected data is processed when the threshold β is met, at which point regression is applied and knowledge is generated. This learning process concerns the results yielded by the regression and the interpretation of its values: the scheme with the best forecast fitness value is selected and rewarded as the winner, and its selection probability is boosted by the model.

4.3. LBLP Components

In this subsection, we present a detailed explanation and definition of each component proposed in this first version of LBLP.

4.3.1. Component 1: The Driver

The solving strategy employed by the proposed hybrid follows a population-based design. This component draws inspiration from the optimization field in order to search the solution space of a given problem. Its objectives include the generation of the initial/new population (solutions), intensification, diversification, and binarization. In this first version of LBLP, we employ SHO, mostly because it can be identified as a modern MH outside of the well-known PSO, differential evolution, and genetic algorithms; in addition, the selection was based on the expertise of the research team. Nevertheless, future upgrades will consider the incorporation of several algorithms smartly selected at run-time. Regarding the domain transfer process, the driver carries out the two-step strategy over the generated solutions: the transfer function V4 followed by elitist discretization, a combination that has already proved to perform well.

4.3.2. Component 2: Regression Model

This component is the key factor of LBLP. It comprises the analysis, storage, and decision making over the generated dynamic data. While a scheme is performing, the regression model stores and indexes the fitness values it achieves. Concerning the data structures employed, in this work they were designed as vectors, but a more general description is presented as follows.
$DS_{fit_i}^{d} = [value_i^d, value_{i+1}^d, \ldots, value_n^d] \quad \text{with } i = \{1, 2, \ldots, n\}$
$DS_{prob_i}^{d} = [value_i^d, value_{i+1}^d, \ldots, value_n^d] \quad \text{with } i = \{1, 2, \ldots, n\}$
$DS_{sol_i}^{d} = [value_i^d, value_{i+1}^d, \ldots, value_n^d] \quad \text{with } i = \{1, 2, \ldots, n\}$
$DS_{rank_i}^{d} = [value_i^d, value_{i+1}^d, \ldots, value_n^d] \quad \text{with } i = \{1, 2, \ldots, n\}$
where $DS_{fit_i}^{d}$ stores the fitness values reached by the agents of each scheme performed, $DS_{prob_i}^{d}$ is the data structure holding the probability of each scheme being selected, $DS_{sol_i}^{d}$ stores the corresponding solutions of each regression analysis carried out, and $DS_{rank_i}^{d}$ holds the ranking of each scheme with respect to the best values reached. In addition, $d$ represents the number of schemes designed to be employed by LBLP. The regression analysis is carried out by means of linear regression, where the fitted function is of the form:
$y = wx + b$
where $y$ corresponds to the dependent variable, which is the fitness value to predict, and $x$ represents the independent variable, which corresponds to the scheme performed. This simple linear regression model captures the close relationship between performance and the population size employed through the search. Regarding our proposed learning model, we define one fitted function for each of the four schemes defined in this work, represented as follows.
$y_{PIGrade_1} = w_1 x_{PIGrade_1} + b_1$
$y_{PIGrade_2} = w_2 x_{PIGrade_2} + b_2$
$y_{PIGrade_3} = w_3 x_{PIGrade_3} + b_3$
$y_{PIGrade_4} = w_4 x_{PIGrade_4} + b_4$
To solve these functions, we employ the least squares method, a well-known approach in the regression field. The outputs of this analysis go to $DS_{sol_i}^{d}$, and in order to select the winning scheme, the model makes the following decision:
$f(DS_{prob_i}^{d}) = MIN(DS_{sol_i}^{d})$
where the probabilities of each scheme, stored in $DS_{prob_i}^{d}$, are updated taking into consideration Equation (15) and $DS_{rank_i}^{d}$. This process selects the best fitness prognosis among the four linear models, and the best result is given priority. A practical example is as follows: at the beginning, at each iteration, the approach selects a scheme using a probabilistic roulette. For a four-way scheme, the initial probabilities are a 25%–25%–25%–25% ratio. Additionally, the regression model constantly stores and sorts the fitness values and agents at run-time. When the threshold is met, Equation (15) analyses the computed prognoses and gives the winning scheme a higher probability of being chosen; for instance, we designed a 55%–15%–15%–15% ratio.
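The fit-then-reward cycle can be sketched as follows. This is an illustrative reading, not the authors' code: the helper names (`fit_line`, `update_probabilities`), the interpretation of $x$ as the iteration at which the scheme performed, the forecast horizon, and the toy observations are all assumptions; the closed-form least squares fit and the 0.55/0.15 reward ratio follow the text.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = w*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return w, my - w * mx

def update_probabilities(ds_fit, horizon):
    """Forecast each scheme's fitness at `horizon` and boost the winner.
    ds_fit maps scheme size -> list of (iteration, fitness) observations."""
    forecasts = {}
    for scheme, obs in ds_fit.items():
        xs, ys = zip(*obs)
        w, b = fit_line(list(xs), list(ys))
        forecasts[scheme] = w * horizon + b      # evaluate y = w*x + b
    winner = min(forecasts, key=forecasts.get)   # MIN(DS_sol): minimization
    return {s: (0.55 if s == winner else 0.15) for s in ds_fit}

# toy data: scheme 30 is trending toward the lowest (best) fitness
ds_fit = {
    20: [(1, 100), (2, 98), (3, 97)],
    30: [(1, 102), (2, 95), (3, 88)],
    40: [(1, 99), (2, 99), (3, 98)],
    50: [(1, 101), (2, 100), (3, 100)],
}
probs = update_probabilities(ds_fit, horizon=10)
```

With these observations, the steepest downward trend belongs to the 30-agent scheme, so its selection probability is boosted to 0.55 while the others drop to 0.15.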

4.3.3. Component 3: Roulette Selector

The idea behind this component corresponds to a roulette system whose main objective is the probabilistic selection of the scheme performing at run-time. In this work, a 4-way scheme defining 4 different population sizes (20, 30, 40, and 50 agents) is employed. In the literature, the ideal number of agents has been an everlasting discussion within the scientific community. In this context, in Ref. [3], the selection depends on factors such as problem complexity (e.g., high-dimensional or unimodal problems), the topology of the proposed approach, and whether very large tasks are tackled. Thus, defining this parameter value involves an adjusting-testing process. In this work, we follow the first standard recommendation given by the authors, which is between 20 and 50 agents. In future upgrades of LBLP, new configurations will be employed.
Regarding the selection mechanism, the schemes are placed and selected according to their assigned probabilities. The initial probability of each scheme being selected is defined as follows.
$p_i = \frac{1}{N} \quad \text{with } i = \{1, 2, \ldots, N\}$
where N corresponds to the number of schemes designed for the approach. Thus, in a 4-way scheme they are described as follows.
$p_{PIGrade_1} + p_{PIGrade_2} + p_{PIGrade_3} + p_{PIGrade_4} = \frac{1}{N} + \frac{1}{N} + \frac{1}{N} + \frac{1}{N} = 1$
The probabilities for each scheme will be modified by the regression model after the corresponding analysis on the dynamic data generated is carried out on run-time.
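The roulette can be sketched as a cumulative-probability draw. This is a minimal illustration under assumed helper names (`roulette_select`); the four sizes, the equal initial probabilities, and the 55%–15%–15%–15% rewarded ratio come from the text.

```python
import random

def roulette_select(schemes, probs, rng=random):
    """Spin a cumulative-probability roulette and return the chosen scheme."""
    r = rng.random()
    cum = 0.0
    for scheme, p in zip(schemes, probs):
        cum += p
        if r <= cum:
            return scheme
    return schemes[-1]          # guard against floating-point round-off

schemes = [20, 30, 40, 50]
initial = [1 / len(schemes)] * len(schemes)      # p_i = 1/N, i.e., 0.25 each
picked = roulette_select(schemes, initial)

# once the regression model rewards a winner (here, assumed to be the
# 20-agent scheme) with a 55%-15%-15%-15% ratio, it is drawn far more often
rng = random.Random(1)
counts = {s: 0 for s in schemes}
for _ in range(1000):
    counts[roulette_select(schemes, [0.55, 0.15, 0.15, 0.15], rng)] += 1
```

Over many draws, the rewarded scheme dominates the counts while the others are still occasionally explored, which is the balance the selector is designed to provide.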

4.4. Proposed Algorithm

In this section, we illustrate a detailed description of the proposed Algorithm 1.
Algorithm 1 Proposed LBLP
1: The driver: set initial parameters required for the movement operators
2: Regression model: set internal parameters, DSfit, DSprob, DSrank, DSsol
3: The driver: generate initial population
4: Roulette selector: select initial scheme to perform
5: while (i ≤ MaximumIteration) do
6:     The driver: perform the intensification family of operators
7:     The driver: perform the diversification family of operators
8:     Regression model: store and check values for DSfit
9:     if the best value was reached performing the scheme selected by the Roulette selector then
10:        Regression model: check and update values for DSrank
11:    end if
12:    if threshold β is met then
13:        Regression model: perform regression analysis
14:        Regression model: update DSsol
15:        Regression model: check MIN(DSsol)
16:        Regression model: update DSprob
17:    end if
18:    if threshold α is met then
19:        Roulette selector: select scheme to perform
20:        if the current number of agents differs from the scheme selected then
21:            Roulette selector: balance the population
22:        end if
23:    end if
24: end while

5. Experimental Results

In this section, we describe the experimentation process carried out to evaluate the performance of our proposed LBLP. A two-step experimentation phase was designed to test its competitiveness. Firstly, we compare against state-of-the-art algorithms solving the MCDP, SCP, and MKP. In the second step, we compare the results obtained by LBLP against implementations based on SHO + IRace and classic SHO. Additionally, the results are evaluated using the relative percentage deviation (RPD). The RPD quantifies the deviation of the best value $S$ obtained by the approach from $S_{opt}$ for each instance. The configuration employed is illustrated in Table 3, and we highlight the good performance achieved.
$RPD = \frac{(S - S_{opt})}{S_{opt}} \times 100$
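The RPD formula translates directly into a one-line helper; the data point below is an illustrative assumption, not a value taken from the tables.

```python
def rpd(s, s_opt):
    """Relative percentage deviation of a reached value S from S_opt."""
    return (s - s_opt) / s_opt * 100

# e.g., reaching 205 on a hypothetical instance whose best known value is 200
deviation = rpd(205, 200)
```

A deviation of 0 means the best known value was matched; for minimization problems, larger positive values mean the approach fell further from $S_{opt}$.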

5.1. First Experimentation Phase

As mentioned before, this subsection illustrates a detailed comparison and discussion of the performance achieved by LBLP against reported data from state-of-the-art algorithms for each problem.

5.1.1. Manufacturing Cell Design Problem

In this comparison, we employ the reported results of Binary Cat Swarm Optimization (BCSO) [66], the Egyptian Vulture Optimization Algorithm (EVOA), and the Modified Binary Firefly Algorithm (MBFA) [67]. In order to include a broader sample of algorithms related to the proposed work, we also include a human behavior-based algorithm supported by autonomous search (HBBO-AS) [34], which focuses on controlling the population size at run-time. In addition, to compare and understand the results, we highlighted in bold the best result for each instance when the optimum is met.
Table 4 illustrates the comparison of the reported results; the description is as follows. Column ID represents the identifier assigned to each instance. Column $S_{opt}$ depicts the global optimum or best known value for the given instance. Columns Best, Mean, and RPD give, for each approach, the best value reached, the mean value over 30 executions, and the relative percentage deviation, respectively. Regarding the performance comparison, the lead is clearly dominated by BCSO and the proposed LBLP. Analyzing the values reported in column Best, BCSO reaches all 35 best known values, compared to 25 for LBLP. Concerning the median of column Best over all instances, LBLP ranks second with 35.14, while the best-performing algorithm, BCSO, has a median value of 34.51. Far behind follow MBFA, EVOA, and HBBO-AS, which computed 42.54, 50.83, and 55.03, respectively. On the other hand, concerning the median value of column Mean, the proposed LBLP takes first place with 36.00 against 36.61 reached by BCSO, which can be interpreted as LBLP being more robust and consistent. Moreover, we highlight that in several results the proposed LBLP remains close to the best known values, which leaves room for future improvements. The overall observations can be summarized as follows: LBLP does not fall behind state-of-the-art algorithms specially designed to tackle the MCDP, and it achieved better results than HBBO-AS, which illustrates how much a population-based approach profits from the adaptability given by statistical modeling methods.

5.1.2. Set Covering Problem

In this comparison, we made use of the reported results illustrated by binary cat swarm optimization (BCSO) [68], binary firefly optimization (BFO) [69], binary shuffled frog leaping algorithm (BSFLA) [70], binary artificial bee colony (BABC) [71], and binary electromagnetism-like algorithm (BELA) [72]. In addition, we highlighted in bold the best result for each instance when the optimum is met.
Table 5 illustrates the comparison of results achieved by LBLP against the state-of-the-art algorithms specially designed to tackle the SCP; the description is as follows. Column ID represents the identifier assigned to each instance. Column $S_{opt}$ depicts the global optimum or best known value for the given instance. Columns Best, Mean, and RPD give, for each approach, the best value reached, the mean value over 30 executions, and the relative percentage deviation, respectively. Regarding the performance comparison, among the six approaches the lead is taken by LBLP. A closer look at the median values can be interpreted as follows: the proposed approach achieved the smallest values for columns Best and Mean, with 197.31 and 199.75, respectively. Moreover, the overall performance on the hardest sets of instances, such as groups F, G, and H, is quite good, as shown by the RPD values in comparison to BCSO and BELA. We highlight that in several results the proposed LBLP remains close to the best known values, encouraging us to continue working on and further improving our method.

5.1.3. Multidimensional Knapsack Problem

Regarding the MKP, the state-of-the-art algorithms employed include the filter-and-fan heuristic (F&F) [73], a binary version of the PSO algorithm (3R-BPSO) [74], and a hybrid quantum particle swarm optimization (QPSO) algorithm [75]. These approaches were presented in the literature as methods specifically designed to effectively tackle the MKP, with a certain degree of adaptability built into their search process at run-time. For instance, the 3R-BPSO algorithm employs three repair operators to fix infeasible solutions generated at run-time. In addition, if the results of an algorithm for a set of benchmark instances are not available, the algorithm is excluded from the comparative study for that set, as is the case for 3R-BPSO on mknapcb2 and mknapcb5. In order to compare and understand the results, we highlighted in bold the best result for each instance when the optimum is met.
Table 6 illustrates the performance reported by the state-of-the-art approaches vs. LBLP. Column ID represents the identifier assigned to each instance. Column $S_{opt}$ depicts the global optimum or best known value for the given instance. Columns Best, Mean, and RPD give, for each approach, the best value reached, the mean value over 30 executions, and the relative percentage deviation, respectively. Regarding the performance comparison, QPSO and LBLP lead the ranking: QPSO reported a total of 21 best known values, and LBLP reached 20 optimum values out of 30. However, observing the median values for column Best, the proposed LBLP falls behind even F&F, with 67,179.10 vs. 67,438.93. This issue is clearly visible in the RPD values: on sets mknapcb2 and mknapcb5, LBLP computed 1.47% and 3.13% for tests 5.250.04 and 10.250.04, respectively. Thus, there exists a considerable distance in performance on those instances where LBLP does not reach the best known value. Nevertheless, in this first attempt, LBLP proved to be a competitive approach capable of tackling multiple optimization problems, and this issue encourages us to further improve the design by profiting from multiple mechanisms and heuristics.

5.1.4. Overall Performance in This Phase

In this first experimentation phase, LBLP was compared against state-of-the-art approaches specially designed to tackle the MCDP, SCP, and MKP. The proposed hybrid demonstrated competitive performance on the three problems tested. Regarding the MCDP, LBLP achieved 25 best known values out of 35, taking second place overall; see Figure 1. Regarding the SCP, LBLP achieved 39 best known values out of 65, taking first place; see Figure 2. It is well known that optimization methods such as MHs are designed to perform well in certain environments; thus, there exists a certain degree of uncertainty when employing such methods to tackle different types of optimization problems. For instance, we can observe the polarized performance reported by BCSO when solving the MCDP and SCP. This is one of the strong points of our proposition: the optimization problem to be tackled is not an issue, given the adaptability of LBLP. Regarding the MKP, the proposed LBLP reached second place with 20 best known values out of 30; see Figure 3. Overall, LBLP proved to be competitive. However, we observed an inconsistent performance on the MKP when the optimum value was not reached. This issue can be interpreted as LBLP not profiting enough from the relationship between population size and diversification/intensification, and from the frequency at which knowledge is opportunely generated. In this first attempt at designing LBLP, the approach works with static values for α and β. A first improvement would therefore be the incorporation of a new learning-based component managing the values of α and β on demand, with the objective of achieving higher adaptability by auto-assigning the thresholds that trigger the scheme-selection mechanism and the regression analysis.

5.2. Second Experimentation Phase

In this subsection, we take a closer look at the performance achieved by classic and hybrid approaches. We compare and discuss implementations based on classic SHO, classic SHO assisted by IRace, and the proposed LBLP. In addition, in order to further demonstrate the improvement given by hybrid optimization tools, the Wilcoxon–Mann–Whitney rank test (Mann and Whitney, 1947) is carried out. We highlight the improvements, shortcomings, complexity, and robustness observed through the comparison.

5.2.1. Manufacturing Cell Design Problem

In this experimentation, Table 7 illustrates the comparison of results obtained by classic SHO, classic SHO assisted by IRace, and LBLP. In addition, Table 8 presents a comparison against a hybrid approach proposed by Soto et al. in Ref. [34]: a population-based human behavior-based optimization algorithm supported by autonomous search (HBBO-AS), which focuses on modifying the population. The table description is as follows: column ID represents the identifier of each instance; column $S_{opt}$ depicts the global optimum or best known value for the given instance; columns Best, Worst, Mean, and RPD give, for each approach, the best value reached, the worst value reached, the mean value over 30 executions, and the relative percentage deviation, respectively. In order to compare and understand the results, we highlighted in bold the best result for each instance when the optimum is met.
Regarding the overall performance of the SHO-related approaches, LBLP takes the lead and classic SHO comes last. Observing the median values for column Best, LBLP obtained 35.14 compared to 36.09 for classic SHO and 35.43 for classic SHO + IRace. However, classic SHO + IRace seems to be more consistent when we observe columns Worst and Mean, where it achieved median values of 36.37 and 35.90 against 37.54 and 36.00 for LBLP. Nevertheless, the achieved performance shows that the hybrid approaches are more competitive than their respective classic algorithms. LBLP proved to be a good option for tackling the MCDP, and room for improvement was observed. In order to further assess the performance of hybrids solving the MCDP, a statistical analysis was carried out. Table 9 illustrates a matrix of the resulting p-values after applying the well-known Wilcoxon–Mann–Whitney test on the averages for all the MCDP instances. A p-value of less than 0.05 means that the difference is statistically significant, so the comparison of the averages is valid, as in LBLP vs. SHO. Concerning the comparison between LBLP and HBBO-AS, the approach led by autonomous search falls clearly behind in all the columns presented. However, new ideas and future interactions between optimization tools are highlighted: regarding performance metrics, the main job carried out by AS was to detect low performance or repetitive values/patterns in the solutions. In this context, new components based on deep learning could clearly be effective at tackling this task.

5.2.2. Set Covering Problem

In this subsection, the results obtained by the three implementations are illustrated in Table 10. The description of the table is as follows: column ID represents the identifier assigned to each instance; column $S_{opt}$ depicts the global optimum or best known value for the given instance; columns Best, Mean, and RPD give, for each approach, the best value reached, the mean value over 30 executions, and the relative percentage deviation, respectively. In order to compare and understand the results, we highlighted in bold the best result for each instance when the optimum is met.
Regarding the best values achieved, LBLP leads with 39, followed by classic SHO + IRace with 23 and classic SHO with 18. This is confirmed by the median values for columns Best and RPD, where LBLP achieved 197.31 and 1.09, classic SHO computed 199.89 and 2.24, and classic SHO + IRace obtained 197.95 and 1.42. Moreover, we highlight that even on the instances where the best values are not met, LBLP stays close to the reported values, as corroborated by the small RPD values computed. On the other hand, two interesting phenomena can be observed in this test. Firstly, the hybridized implementation outperforms the classic approach. Secondly, the LBLP median values for column Mean can be interpreted as a certain deficit in robustness; to tackle this issue, new improvements will be made to the regression model and the configuration parameters. Nevertheless, LBLP reached most of the best known values and proved to be a competitive option for tackling the SCP, with multiple opportunities to evolve and improve in future work. In order to further assess the performance of hybrids solving the SCP, a statistical analysis was carried out. Table 11 illustrates a matrix of the resulting p-values after applying the well-known Wilcoxon–Mann–Whitney test on the averages for all the SCP instances. A p-value of less than 0.05 means that the difference is statistically significant, so the comparison of the averages is valid, as in LBLP vs. SHO.

5.2.3. Multidimensional Knapsack Problem

In this subsection, the results obtained tackling the MKP are compared and discussed. Table 12 illustrates the comparison of results obtained by the three implementations. In addition, Table 13 illustrates a comparison between the proposed LBLP and LMBP, a hybrid architecture based on a population algorithm assisted by multiple regression models [76]. The table description is as follows: column ID represents the identifier assigned to each instance; column $S_{opt}$ depicts the global optimum or best known value for the given instance; columns Best, Worst, Mean, and RPD give, for each approach, the best value reached, the worst value reached, the mean value over 30 executions, and the relative percentage deviation, respectively. In order to compare and understand the results, we highlighted in bold the best result for each instance when the optimum is met.
Regarding the best values in Table 12, the implementation employing IRace leads the overall performance with a median value for column Best of 67,268, followed by LBLP with 67,179 and classic SHO with 66,730. This is corroborated by the median values computed for column RPD, where IRace achieved 0.27 against 0.35 for the proposed hybrid. However, the phenomenon observed in this test differs completely from the ones reported in the previous subsections: the median values reported for column Mean illustrate good robustness in the overall performance of LBLP. On the other hand, the poor results of classic SHO + IRace can be interpreted as an inconsistency in performance, the approach being trapped in local optima on multiple MKP instances. In order to further assess the performance of hybrids solving the MKP, a statistical analysis was carried out. Table 14 illustrates a matrix of the resulting p-values after applying the well-known Wilcoxon–Mann–Whitney test on the averages for all the MKP instances. A p-value of less than 0.05 means that the difference is statistically significant, so the comparison of the averages is valid, as in LBLP vs. SHO. Regarding the results in Table 13, the competition is even, and LMBP is capable of achieving better values on instances where LBLP falls behind. Nevertheless, it is worth considering the design of a more complete or complex learning-based component: we observed that there is no certainty of achieving good performance with a single technique across all instances. Thus, a proper answer to this issue could be the design of different learning techniques.

5.2.4. Overall Performance in This Phase

In this second experimentation phase, the proposed LBLP was compared against classic SHO and classic SHO + IRace solving the MCDP, SCP, and MKP. The objective was to verify the improvements of blending a learning-based method into the search process of a population-based strategy. In this regard, the proposed LBLP achieved good results solving the optimization problems. Figure 4, Figure 5 and Figure 6 illustrate a performance overview, which corroborates the idea of profiting from the generated dynamic data. On the other hand, the good performance demonstrated by classic SHO + IRace is to be expected, as IRace is an off-line method specializing in parameter tuning. Regarding complexity, IRace users need a certain degree of expertise in R, as the script configuration process can be an arduous task before the implemented optimization tool can be enhanced. The proposed LBLP, in turn, only requires the configuration of a scheme, α, and β, and its implementation comprises a population-based algorithm and well-known statistical modeling methods.
Regarding the observed phenomena, while solving the MCDP and SCP, LBLP achieved a considerable amount of best values for column Best but fell behind for column Mean; this situation completely changed while solving the MKP. The interpretation can be described in two ways: LBLP asking for a faster response in the learning process, and a more detailed configuration of the parameters. Firstly, the parameters proposed in this work are static throughout the search; this issue was already addressed in Section 5.1.4. Multiple and unexpected events may present themselves while the search is being carried out, so for LBLP to answer properly, the first improvement needs to be made to α and β, which control the scheme selection and the generation of knowledge. Secondly, concerning the proposed scheme, 4 different values were employed, which completely differ from the static values employed by classic SHO + IRace, such as 41 and 33. In this regard, multiple scheme options will be added and tested, for instance, 20 different schemes ranging from 20 to 40 agents. Lastly, a more complex scenario can be designed with a further detailed mechanism for the learning model: the objective is for each defined scheme to implement different α and β values in order to increase the adaptiveness of LBLP.

6. Conclusions

In this work, a hybrid self-adaptive approach has been proposed to solve hard discrete optimization problems. The main objective was to improve performance by transforming a general component that exists in all population-based algorithms: the population size. The proposed strategy focuses on the dynamic update of this parameter in order to give high adaptive capacities to the agents, governed by a learning-based component that profits from the dynamic data generated at run-time. Interesting facts concerning the design are described as follows. The complexity of the learning component is not high, as the statistical modeling methods employed are well known; the novelty lies in the designed mechanism that profits from the technique. In this context, the movement operators from SHO and linear regression are classic means in their respective fields to solve multiple problems. On the other hand, general complications and drawbacks can be described as follows: increased computational time, increased complexity due to the scalability of the designed architecture, and increased complexity due to the wide spectrum of optimization problems to be tackled.
Regarding the experimentation carried out, LBLP proved to be a good option in comparison to state-of-the-art methods. We solved three well-known hard optimization problems, the MCDP, SCP, and MKP, employing a unique configuration set of parameter values for LBLP. In this context, the first phase helped us measure to what extent LBLP is a viable optimization tool in comparison to already reported approaches. The second phase was meant to highlight the performance improvements between a pure population-based algorithm and the incorporation of a low-level learning-based component in the design. In addition, the competitiveness against successful reported hybrids and parameter-tuned versions of the employed algorithms was highlighted. This is an interesting observation, mainly because of the limitations of the proposed approach concerning the algorithms selected. For instance, if we observe the linear regression, the main drawback is the inclusion of a single performance metric as the independent variable in the model. Several other metrics exist in the literature and can carry different weights during the search, such as bad solutions, the percentage of infeasible solutions, diversity, and the number of feasible solutions generated, among others. Nevertheless, the overall good performance and the room for improvement motivate us to further exploit this research field. In addition, this work contributed scientific evidence of hybridized strategies outperforming their classic algorithms, proving to be profitable approaches for solving hard optimization problems.
Regarding the phenomena described in the experimentation phases, future considerations and improvements were discussed. Two improvements are under consideration: (1) dynamically adjusting the values of α and β and (2) multiple and larger ranges of population size values. On the other hand, the well-known drawbacks generally associated with on-line data-driven methods are the amount, usefulness, and quality of the data given to the model to learn properly and in a timely manner at run-time. Thus, since there is no guarantee on the performance achieved by different learning techniques, it becomes a major task to carry out an extensive experimental process employing state-of-the-art regression-based methods. However, this could lead to a considerable increase in solving time in comparison to the values reported in this work. Thus, the incorporation of an optimizer for the computational resources employed at run-time will be a key factor in future proposals.

Author Contributions

Formal analysis, E.V., J.P. and R.S.; investigation, E.V., J.P., J.L.-R., R.S. and B.C.; resources, R.S.; software, E.V., C.L. and J.P.; validation, B.C. and E.-G.T.; writing—original draft, E.V., C.L., R.S. and B.C.; writing—review and editing, E.V., J.L.-R., C.L. and R.S. All authors have read and agreed to the published version of the manuscript.

Funding

Ricardo Soto is supported by Grant CONICYT/FONDECYT/REGULAR/1190129. Broderick Crawford is supported by Grant ANID/FONDECYT/REGULAR/1210810, and Emanuel Vega is supported by National Agency for Research and Development (ANID)/Scholarship Program/DOCTORADO NACIONAL/2020-21202527. Javier Peña is supported by National Agency for Research and Development (ANID)/Scholarship Program/DOCTORADO NACIONAL/2022-21220879.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Talbi, E.G. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009.
  2. Del Ser, J.; Osaba, E.; Molina, D.; Yang, X.S.; Salcedo-Sanz, S.; Camacho, D.; Das, S.; Suganthan, P.N.; Coello, C.A.; Herrera, F. Bio-inspired computation: Where we stand and what’s next. Swarm Evol. Comput. 2019, 48, 220–250.
  3. Piotrowski, A.P.; Napiorkowski, J.J.; Piotrowska, A.E. Population size in particle swarm optimization. Swarm Evol. Comput. 2020, 58, 100718.
  4. Hansen, N.; Müller, S.D.; Koumoutsakos, P. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol. Comput. 2003, 11, 1–18.
  5. Hansen, N.; Auger, A. CMA-ES: Evolution strategies and covariance matrix adaptation. In Proceedings of the 13th Annual Conference Companion on Genetic and Evolutionary Computation, Dublin, Ireland, 12–16 July 2011; pp. 991–1010.
  6. Sarker, R.; Kamruzzaman, J.; Newton, C. Evolutionary optimization (EvOpt): A brief review and analysis. Int. J. Comput. Intell. Appl. 2003, 3, 311–330.
  7. Hansen, N.; Ostermeier, A. Completely derandomized self-adaptation in evolution strategies. Evol. Comput. 2001, 9, 159–195.
  8. Gupta, S. Enhanced harmony search algorithm with non-linear control parameters for global optimization and engineering design problems. Eng. Comput. 2022, 38 (Suppl. 4), 3539–3562.
  9. Huang, Y.F.; Chen, P.H. Fake news detection using an ensemble learning model based on self-adaptive harmony search algorithms. Expert Syst. Appl. 2020, 159, 113584.
  10. Kulluk, S.; Ozbakir, L.; Baykasoglu, A. Self-adaptive global best harmony search algorithm for training neural networks. Procedia Comput. Sci. 2011, 3, 282–286.
  11. Banks, A.; Vincent, J.; Anyakoha, C. A review of particle swarm optimization. Part I: Background and development. Nat. Comput. 2007, 6, 467–484.
  12. Cheng, S.; Lu, H.; Lei, X.; Shi, Y. A quarter century of particle swarm optimization. Complex Intell. Syst. 2018, 4, 227–239.
  13. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408.
  14. Song, H.; Triguero, I.; Özcan, E. A review on the self and dual interactions between machine learning and optimisation. Prog. Artif. Intell. 2019, 8, 143–165.
  15. Karimi-Mamaghan, M.; Mohammadi, M.; Meyer, P.; Karimi-Mamaghan, A.M.; Talbi, E.G. Machine learning at the service of meta-heuristics for solving combinatorial optimization problems: A state-of-the-art. Eur. J. Oper. Res. 2022, 296, 393–422.
  16. Talbi, E.G. Machine learning into metaheuristics: A survey and taxonomy. ACM Comput. Surv. 2021, 54, 1–32.
  17. Birattari, M.; Kacprzyk, J. Tuning Metaheuristics: A Machine Learning Perspective; Springer: Berlin, Germany, 2009; Volume 197.
  18. Talbi, E.G. Combining metaheuristics with mathematical programming, constraint programming and machine learning. Ann. Oper. Res. 2016, 240, 171–215.
  19. Calvet, L.; de Armas, J.; Masip, D.; Juan, A.A. Learnheuristics: Hybridizing metaheuristics with machine learning for optimization with dynamic inputs. Open Math. 2017, 15, 261–280.
  20. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70.
  21. Luo, Q.; Li, J.; Zhou, Y.; Liao, L. Using spotted hyena optimizer for training feedforward neural networks. Cogn. Syst. Res. 2021, 65, 1–16.
  22. Soto, R.; Crawford, B.; Vega, E.; Gómez, A.; Gómez-Pulido, J.A. Solving the Set Covering Problem Using Spotted Hyena Optimizer and Autonomous Search. In Advances and Trends in Artificial Intelligence. From Theory to Practice. IEA/AIE 2019; Wotawa, F., Friedrich, G., Pill, I., Koitz-Hristov, R., Ali, M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; Volume 11606.
  23. Ghafori, S.; Gharehchopogh, F.S. Advances in spotted hyena optimizer: A comprehensive survey. Arch. Comput. Methods Eng. 2022, 29, 1569–1590.
  24. Dhiman, G.; Kaur, A. Spotted hyena optimizer for solving engineering design problems. In Proceedings of the 2017 International Conference on Machine Learning and Data Science (MLDS), Noida, India, 14–15 December 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 114–119.
  25. Dhiman, G.; Kumar, V. Spotted hyena optimizer for solving complex and non-linear constrained engineering problems. In Harmony Search and Nature Inspired Optimization Algorithms: Theory and Applications, ICHSA 2018; Springer: Singapore, 2019; pp. 857–867.
  26. Dhiman, G.; Kaur, A. A hybrid algorithm based on particle swarm and spotted hyena optimizer for global optimization. In Soft Computing for Problem Solving: SocProS 2017; Springer: Singapore, 2019; Volume 1, pp. 599–615.
  27. Mahdavi, I.; Paydar, M.M.; Solimanpur, M.; Heidarzade, A. Genetic algorithm approach for solving a cell formation problem in cellular manufacturing. Expert Syst. Appl. 2009, 36, 6598–6604.
  28. Beasley, J.E. An algorithm for set covering problem. Eur. J. Oper. Res. 1987, 31, 85–93.
  29. Fréville, A. The multidimensional 0–1 knapsack problem: An overview. Eur. J. Oper. Res. 2004, 155, 1–21.
  30. López-Ibáñez, M.; Dubois-Lacoste, J.; Cáceres, L.P.; Birattari, M.; Stützle, T. The irace package: Iterated racing for automatic algorithm configuration. Oper. Res. Perspect. 2016, 3, 43–58.
  31. Talbi, E.G. A taxonomy of hybrid metaheuristics. J. Heuristics 2002, 8, 541–564.
  32. Zennaki, M.; Ech-Cherif, A. A new machine learning based approach for tuning metaheuristics for the solution of hard combinatorial optimization problems. J. Appl. Sci. 2010, 10, 1991–2000.
  33. de Lacerda, M.G.P.; de Araujo Pessoa, L.F.; de Lima Neto, F.B.; Ludermir, T.B.; Kuchen, H. A systematic literature review on general parameter control for evolutionary and swarm-based algorithms. Swarm Evol. Comput. 2021, 60, 100777.
  34. Soto, R.; Crawford, B.; González, F.; Vega, E.; Castro, C.; Paredes, F. Solving the Manufacturing Cell Design Problem Using Human Behavior-Based Algorithm Supported by Autonomous Search. IEEE Access 2019, 7, 132228–132239.
  35. Handa, H.; Baba, M.; Horiuchi, T.; Katai, O. A novel hybrid framework of coevolutionary GA and machine learning. Int. J. Comput. Intell. Appl. 2002, 2, 33–52.
  36. Adak, Z.; Demiriz, A. Hybridization of population-based ant colony optimization via data mining. Intell. Data Anal. 2020, 24, 291–307.
  37. Streichert, F.; Stein, G.; Ulmer, H.; Zell, A. A clustering based niching method for evolutionary algorithms. In Genetic and Evolutionary Computation Conference; Springer: Berlin/Heidelberg, Germany, 2003; pp. 644–645.
  38. Sörensen, K.; Glover, F. Metaheuristics. Encycl. Oper. Res. Manag. Sci. 2013, 62, 960–970.
  39. Gogna, A.; Tayal, A. Metaheuristics: Review and application. J. Exp. Theor. Artif. Intell. 2013, 25, 503–526.
  40. Crama, Y.; Kolen, A.W.; Pesch, E.J. Local search in combinatorial optimization. Artif. Neural Netw. 1995, 931, 157–174.
  41. Kirkpatrick, S. Optimization by simulated annealing: Quantitative studies. J. Stat. Phys. 1984, 34, 975–986.
  42. Eusuff, M.; Lansey, K.; Pasha, F. Shuffled frog-leaping algorithm: A memetic meta-heuristic for discrete optimization. Eng. Optim. 2006, 38, 129–154.
  43. Dorigo, M.; Birattari, M.; Stützle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
  44. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  45. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, MA, USA, 1992.
  46. Moscato, P. On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms; C3P Report 826; Caltech Concurrent Computation Program: Pasadena, CA, USA, 1989.
  47. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  48. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 4661–4667.
  49. Cuevas, E.; Fausto, F.; González, A. New Advancements in Swarm Algorithms: Operators and Applications; Springer International Publishing: Berlin/Heidelberg, Germany, 2020.
  50. Mirjalili, S.; Mirjalili, S.M.; Yang, X.S. Binary bat algorithm. Neural Comput. Appl. 2014, 25, 663–681.
  51. Mirjalili, S.; Lewis, A. S-shaped versus v-shaped transfer functions for binary particle swarm optimization. Swarm Evol. Comput. 2013, 9, 1–14.
  52. Faris, H.; Mafarja, M.M.; Heidari, A.A.; Aljarah, I.; Ala’M, A.Z.; Mirjalili, S.; Fujita, H. An efficient binary salp swarm algorithm with crossover scheme for feature selection problems. Knowl.-Based Syst. 2018, 154, 43–67.
  53. Mafarja, M.; Aljarah, I.; Heidari, A.A.; Faris, H.; Fournier-Viger, P.; Li, X.; Mirjalili, S. Binary dragonfly optimization for feature selection using time-varying transfer functions. Knowl.-Based Syst. 2018, 161, 185–204.
  54. Mirjalili, S.; Hashim, S.Z.M. BMOA: Binary magnetic optimization algorithm. Int. J. Mach. Learn. Comput. 2012, 2, 204.
  55. Valenzuela, M.; Pinto, H.; Moraga, P.; Altimiras, F.; Villavicencio, G. A percentile methodology applied to binarization of swarm intelligence metaheuristics. J. Inf. Syst. Eng. Manag. 2019, 4, em0104.
  56. Gölcük, İ.; Ozsoydan, F.B.; Durmaz, E.D. Analysis of different binarization techniques within whale optimization algorithm. In Proceedings of the 2019 Innovations in Intelligent Systems and Applications Conference (ASYU), Izmir, Turkey, 31 October–2 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–5.
  57. Slezkin, A.O.; Hodashinsky, I.A.; Shelupanov, A.A. Binarization of the Swallow swarm optimization for feature selection. Program. Comput. Softw. 2021, 47, 374–388.
  58. Kennedy, J.; Eberhart, R.C. A discrete binary version of the particle swarm algorithm. In Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, Orlando, FL, USA, 12–15 October 1997; IEEE: Piscataway, NJ, USA, 1997; Volume 5, pp. 4104–4108.
  59. Boctor, F.F. A linear formulation of the machine-part cell formation problem. Int. J. Prod. Res. 1991, 29, 343–356.
  60. Smith, B. Impacs—A bus crew scheduling system using integer programming. Math. Program. 1988, 42, 181–187.
  61. Toregas, C.; Swain, R.; Revelle, C.; Bergman, L. The location of emergency service facilities. Oper. Res. 1971, 19, 1363–1373.
  62. Foster, B.; Ryan, D. An integer programming approach to the vehicle scheduling problem. Oper. Res. Q. 1976, 27, 367–384.
  63. Fisher, M.; Kedia, P. Optimal solution of set covering/partitioning problems using dual heuristics. Manag. Sci. 1990, 36, 674–688.
  64. Pisinger, D. The quadratic knapsack problem—A survey. Discret. Appl. Math. 2007, 155, 623–648.
  65. Horowitz, E.; Sahni, S. Computing partitions with applications to the knapsack problem. J. ACM 1974, 21, 277–292.
  66. Soto, R.; Crawford, B.; Toledo, A.; Fuente-Mella, H.; Castro, C.; Paredes, F.; Olivares, R. Solving the Manufacturing Cell Design Problem through Binary Cat Swarm Optimization with Dynamic Mixture Ratios. Comput. Intell. Neurosci. 2019, 2019, 4787856.
  67. Almonacid, B.; Aspee, F.; Soto, R.; Crawford, B.; Lama, J. Solving the Manufacturing Cell Design Problem using the Modified Binary Firefly Algorithm and the Egyptian Vulture Optimization Algorithm. IET Softw. 2016, 11, 105–115.
  68. Crawford, B.; Soto, R.; Berros, N.; Johnson, F.; Paredes, F.; Castro, C.; Norero, E. A binary cat swarm optimization algorithm for the non-unicost set covering problem. Math. Probl. Eng. 2015, 2015, 578541.
  69. Crawford, B.; Soto, R.; Olivares-Suarez, M.; Paredes, F. A binary firefly algorithm for the set covering problem. In 3rd Computer Science On-line Conference 2014 (CSOC 2014); Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2014; Volume 285, pp. 65–73.
  70. Crawford, B.; Soto, R.; Pena, C.; Palma, W.; Johnson, F.; Paredes, F. Solving the set covering problem with a shuffled frog leaping algorithm. In Proceedings of the 7th Asian Conference, ACIIDS 2015, Bali, Indonesia, 23–25 March 2015; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9012, pp. 41–50.
  71. Cuesta, R.; Crawford, B.; Soto, R.; Paredes, F. An artificial bee colony algorithm for the set covering problem. In 3rd Computer Science On-line Conference 2014 (CSOC 2014); Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2014; Volume 285, pp. 53–63.
  72. Soto, R.; Crawford, B.; Munoz, A.; Johnson, F.; Paredes, F. Preprocessing, repairing and transfer functions can help binary electromagnetism-like algorithms. In Artificial Intelligence Perspectives and Applications; Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2015; Volume 347, pp. 89–97.
  73. Khemakhem, M.; Haddar, B.; Chebil, K.; Hanafi, S. A Filter-and-Fan Metaheuristic for the 0–1 Multidimensional Knapsack Problem. Int. J. Appl. Metaheuristic Comput. 2012, 3, 43–63.
  74. Chih, M. Three pseudo-utility ratio-inspired particle swarm optimization with local search for multidimensional knapsack problem. Swarm Evol. Comput. 2018, 39, 279–296.
  75. Haddar, B.; Khemakhem, M.; Hanafi, S.; Wilbaut, C. A hybrid quantum particle swarm optimization for the multidimensional knapsack problem. Eng. Appl. Artif. Intell. 2016, 55, 1–13.
  76. Vega, E.; Soto, R.; Contreras, P.; Crawford, B.; Peña, J.; Castro, C. Combining a Population-Based Approach with Multiple Linear Models for Continuous and Discrete Optimization Problems. Mathematics 2022, 10, 2920.
Figure 1. Performance comparison between state-of-the-art approaches vs. LBLP tackling the MCDP.
Figure 2. Performance comparison between state-of-the-art approaches vs. LBLP tackling the SCP.
Figure 3. Performance comparison between state-of-the-art approaches vs. LBLP tackling the MKP.
Figure 4. Performance comparison between SHO, SHO assisted by IRace, and LBLP tackling the MCDP.
Figure 5. Performance comparison between SHO, SHO assisted by IRace, and LBLP tackling the SCP.
Figure 6. Performance comparison between SHO, SHO assisted by IRace, and LBLP tackling the MKP.
Table 1. Transfer function.
With x = x_i^d(t + 1):

S-shaped                             V-shaped
S1: T(x) = 1 / (1 + e^(-2x))         V1: T(x) = |erf((√π/2) · x)|
S2: T(x) = 1 / (1 + e^(-x))          V2: T(x) = |tanh(x)|
S3: T(x) = 1 / (1 + e^(-x/2))        V3: T(x) = |x / √(1 + x²)|
S4: T(x) = 1 / (1 + e^(-x/3))        V4: T(x) = |(2/π) · arctan((π/2) · x)|
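In practice, the transfer value T(x) is used as a probability for setting the corresponding bit. Below is a minimal sketch of two of the tabulated functions together with the usual stochastic binarization rule; the function and parameter names are ours, not from the paper.

```python
import math
import random

def s1(x):
    """S1 transfer function: squashes a real-valued step into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-2.0 * x))

def v1(x):
    """V1 transfer function: symmetric in x, approaches 1 as |x| grows."""
    return abs(math.erf(math.sqrt(math.pi) / 2.0 * x))

def binarize(x, transfer=s1, rng=random.random):
    """Set the bit to 1 with probability transfer(x), else 0."""
    return 1 if rng() < transfer(x) else 0
```

Note that V-shaped functions are usually paired with a complement rule (flip the current bit with probability T(x)) rather than the direct assignment shown here; we illustrate only the simpler S-shaped usage.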
Table 2. Configuration details from MKP instances employed in this work.
ID         Test Problem   Optimal Solution     n     m
mknapcb1   5.100.00       24,381             100     5
mknapcb1   5.100.01       24,274             100     5
mknapcb1   5.100.02       23,551             100     5
mknapcb1   5.100.03       23,534             100     5
mknapcb1   5.100.04       23,991             100     5
mknapcb2   5.250.00       59,312             250     5
mknapcb2   5.250.01       61,472             250     5
mknapcb2   5.250.02       62,130             250     5
mknapcb2   5.250.03       59,463             250     5
mknapcb2   5.250.04       58,951             250     5
mknapcb3   5.500.00       120,148            500     5
mknapcb3   5.500.01       117,879            500     5
mknapcb3   5.500.02       121,131            500     5
mknapcb3   5.500.03       120,804            500     5
mknapcb3   5.500.04       122,319            500     5
mknapcb4   10.100.00      23,064             100    10
mknapcb4   10.100.01      22,801             100    10
mknapcb4   10.100.02      22,131             100    10
mknapcb4   10.100.03      22,772             100    10
mknapcb4   10.100.04      22,751             100    10
mknapcb5   10.250.00      59,187             250    10
mknapcb5   10.250.01      58,781             250    10
mknapcb5   10.250.02      58,097             250    10
mknapcb5   10.250.03      61,000             250    10
mknapcb5   10.250.04      58,092             250    10
mknapcb6   10.500.00      117,821            500    10
mknapcb6   10.500.01      119,249            500    10
mknapcb6   10.500.02      119,215            500    10
mknapcb6   10.500.03      118,829            500    10
mknapcb6   10.500.04      116,530            500    10
Table 3. The second experimentation phase’s configuration parameters for LBLP, Classic SHO, and Classic SHO + IRace.
Algorithm             Parameter               MCDP                       SCP                        MKP
Classic SHO           Search Agents           30                         30                         30
                      Control Parameter (h)   [5, 0]                     [5, 0]                     [5, 0]
                      M Constant              [0.5, 1]                   [0.5, 1]                   [0.5, 1]
                      Number of Generations   10,000                     10,000                     10,000
Classic SHO + IRace   Search Agents           33                         41                         30
                      Control Parameter (h)   [5, 0]                     [5, 0]                     [5, 0]
                      M Constant              [0.5, 1]                   [0.5, 1]                   [0.5, 1]
                      Number of Generations   10,000                     10,000                     10,000
LBLP                  Search Agents           Schemes (20, 30, 40, 50)   Schemes (20, 30, 40, 50)   Schemes (20, 30, 40, 50)
                      Control Parameter (h)   [5, 0]                     [5, 0]                     [5, 0]
                      M Constant              [0.5, 1]                   [0.5, 1]                   [0.5, 1]
                      Number of Generations   10,000                     10,000                     10,000
                      α                       100                        100                        100
                      β                       1000                       1000                       1000
Table 4. Computational results achieved by LBLP and state-of-the-art approaches solving the MCDP.
ID | Sopt | LBLP (Best, Mean, RPD %) | BCSO (Best, Mean, RPD %) | EVOA (Best, Mean, RPD %) | MBFA (Best, Mean, RPD %) | HBBO-AS (Best, Mean, RPD %)
CFP01000.00.00000.00000.00000.0000.000.00
CFP02333.70.00330.00330.00330.0033.000.00
CFP03555.30.00550,00550.00550.0055.000.00
CFP04222.20.00220.00220.00220.0022.000.00
CFP05888.80.00880.00880.009912.5088.000.00
CFP06444.80.00440.00440.00440.0044.000.00
CFP07777.00.00770.00770.008814.2977.000.00
CFP08777.10.00770.00770.00770.0077.000.00
CFP09252526.60.0025250,0025250.0027278.002525.000.00
CFP10000.00.00000.0001.20.00330.0000.000.00
CFP11000.00.00000.0000.80.00000.0000.000.00
CFP12777.90.00770.001113.357.14910.128.5789.3914.28
CFP13899.911.76880.001214.350.0088.40.0099.6012.50
CFP142425.424243032.93639.62931.20
CFP151717.017173135.71821.11722.79
CFP163030.32929.054244.63943.83638.00
CFP172626.82626.533234.23233.23334.20
CFP184243.44141.184649.95256.24849.00
CFP194040.738385153.44951.65052.20
CFP2022.0222836712.32833.40
CFP213737.835355760.34343.55658.79
CFP2200.004.93037.5015.54244.00
CFP231011.11013.533944.213154448.20
CFP241819.91820.984449.72527.64649.79
CFP254041.74042.66061.64956.16163.20
CFP265961.45962.1568706465.67171.80
CFP276464.66164.056970.66768.87171.40
CFP285455.254548494.17692.199106.19
CFP299193.59196.1102112.8106109.1118122.00
CFP303737.43742.65759.74358.36465.00
CFP315253.45257.97075.35460.47984.19
CFP326868.86672.158687.67677.69093.80
CFP339393.69394.93136144.8116122.6155159.00
CFP34259261.2256256352369.2325329.5386408.20
CFP359091.683110.58181195.6114119.2225231.39
X 35.1436.000.9034.5136.610.0050.8354.588.2442.5445.864.8755.0357.652.06
Table 5. Computational results achieved by LBLP and state-of-the-art approaches solving the SCP.
ID | Sopt | LBLP (Best, Mean, RPD %) | BCSO (Best, Mean, RPD %) | BFO (Best, Mean, RPD %) | BSFLA (Best, Mean, RPD %) | BABC (Best, Mean, RPD %) | BELA (Best, Mean, RPD %)
4.14294294320.004594807.004294300.004304300.234304300.234474484.20
4.25125125170.0057059411.305175170.975165180.785135130.205595599.18
4.35165165210.0059060714.305195220.585205200.785195210.585375394.07
4.44944945030.0054757810.704954970.205015041.424954960.205275306.68
4.55125145170.395455546.405145150.395145140.395145170.395275292.93
4.65605605600.0063765013.805635650.535635630.545615650.186076088.39
4.74304304320.004624677.404304300.004314320.234314340.234484494.19
4.84924924950.0054656711.004974991.014974991.024934940.205095123.46
4.96456456480.0071172510.906556582.186566562.346496510.936826826.40
4.105145175260.585375524.505195230.975185190.785175190.5857157111.09
5.12532532550.0027928710.302572601.582542550.402542550.4028028110.67
5.23023093092.2933934012.303093112.313073071.663093092.323183215.30
5.32262262300.002472519.302292331.322282300.882292331.332422407.08
5.42422422450.002512533.702422420.002422420.002422450.002512523.72
5.52112112130.002302309.002112130.002112130.002112120.002252276.64
5.62132132130.002322438.902132130.002132140.002142140.4724724815.96
5.72932973011.3633233813.302983011.702972991.372983011.713163177.85
5.82882882910.0032033011.102912921.042912931.042892910.353153179.38
5.92792802810.362952975.702842841.792812830.722802810.3631431512.54
5.102652652670.002852877.502682701.132652660.002672700.752802825.66
6.11381421442.861511609.401381400.001401411.451421432.9015215210.14
6.21461461500.001521574.101471490.681471470.681471500.681601619.59
6.31451451480.0016016410.301471501.371471481.381481492.0716016310.34
6.41311311330.001381425.301311310.001311330.001311330.001401426.87
6.51611611610.001691735.001641571.861661693.111651672.4818418714.29
A.12532532560.0028628713.002552560.792552580.792542540.402612643.16
A.22522562571.572742768.702592612.772602603.172572591.9827928110.71
A.32322332350.4325726310.802382402.582372392.162352381.292522538.62
A.42342352390.432482516.002352370.422352380.432362370.852502526.84
A.52362362370.002442443.002362370.002362390.002362380.002412432.12
B.16969710.00797914.5071722.8970701.4570701.45868724.64
B.27676780.00868913.2078782.6376770.0078792.63888815.79
B.38080800.0085856.3080800.0080800.0080800.0085876.25
B.47979800.00898912.7080811.2679800.0080811.2784886.33
B.57272740.0073731.4072730.0072730.0072740.0078818.33
C.12272292320.882422426.602302321.322292310.882312331.762372384.41
C.22192212250.912402419.602232241.822232251.832222231.372372398.22
C.32432432560.0027727814.002532544.112532534.122542554.5327127111.52
C.42192192220.0025025012.302252272.732272283.652312335.4824624812.33
C.52152152190.0024324413.002172190.932172180.932162170.472242254.19
D.16060610.0065668.3060610.0060620.0060610.0062623.33
D.26666660.0070706.1068683.0367681.5268683.03737410.61
D.37273771.3879819.7075774.1675774.1776775.5679819.72
D.46263641.6064673.2062620.0063651.6163651.6167698.06
D.56161620.0065666.6063633.2763663.2863663.2866678.20
E.12929300.0029300.0029310.0029290.0029330.0030313.45
E.23032326.45343413.3032326.6631323.3332326.67353516.67
E.32728293.64313214.8029307.4028283.7029317.41343425.93
E.42829303.51323314.3029313.5729303.5729303.57333417.84
E.52828300.0030307.1029293.5728310.0029323.5730317.14
F.11414150.00171721.4015177.1415157.1414150.00171721.43
F.21515150.00181820.0016166.6615150.0016166.67181820.00
F.314161613.33171721.40161714.28161714.29161714.29171821.49
F.41414160.00171721.4015187.1415167.1415177.14171921.43
F.51314157.41151615.40151915.38151715.38151615.38161723.08
G.11761761780.001901938.001851915.111821833.411831843.9819419610.23
G.21541581632.561651667.101611634.541611614.551621635.1917617614.29
G.31661691701.7918718820.601751775.421731744.221741754.8218418510.84
G.41681701711.181791836.501761764.761731772.981751774.1719619716.67
G.51681681700.001811847.701771815.351741743.571791816.5519819917.86
H.16366674.65707111.1069709.5268697.94707111.11707111.11
H.26365683.1367676.3066664.7666664.7669729.52717112.70
H.35962654.96687015.30656710.1662635.08666711.86687015.25
H.45859601.71666713.8063656.7763648.62646410.34707220.69
H.55556611.80616210.9059607.2759617.2760619.09696925.45
X196.40197.31199.751.09214.98219.4210.12199.51200.922.95199.15200.372.43199.32200.853.04212.42213.6910.82
Table 6. Computational results achieved by LBLP and state-of-the-art approaches solving the MKP.
Instance | Test Problem | Sopt | LBLP (Best, Mean, RPD %) | QPSO (Best, Mean, RPD %) | 3R-PSO (Best, Mean, RPD %) | F&F (Best, Mean, RPD %)
5.100.0024,38124,38124,3600.0024,381243810.0024,38124,3810.0024,3810.00
5.100.0124,27424,27424,2740.0024,27424,2740.002427424,2740.0024,2740.00
5.100.0223,55123,55123,5460.0023,55123,5510.0023,53823,5380.0623,5510.00
5.100.032353423534234730.0023,53423,5340.002353423,5080.0023,5340.00
mknapcb15.100.0423,99123,99123,9800.0023,99123,9910.0023,99123,9610.0023,9910.00
5.250.0059,31259,31258,9340.0059,31259,3120.0059,3120.00
5.250.0161,47261,47261,3240.0061,47261,4700.0061,4680.01
5.250.0262,13062,13061,9970.0062,13062,1300.0062,1300.00
5.250.0359,46359,46356,9010.0059,42759,4270.0659,4360.05
mknapcb25.250.0458,95158,08257,7891.4758,95158,9510.0058,9510.00
5.500.00120,148120,148120,1210.00120,130120,1050.01120,141102,1010.01120,1340.01
5.500.01117,879115,634114,1431.90117,844117,8340.03117,864117,8250.01117,8640.01
5.500.02121,131121,131120,4990.00121,112121,0920.02121,129121,1030.00121,1310.00
5.500.03120,804119,124117,3111.39120,804120,7400.00120,804120,7220.00120,7940.01
mknapcb35.500.04122,319122,319119,1530.00122,319122,3000.00122,319122,3100.00122,3190.00
10.100.0023,06423,06422,9810.0023,06423,0640.0023,06423,0500.0023,0640.00
10.100.0122,80122,80122,7750.0022,80122,8010.0022,80122,7520.0022,8010.00
10.100.0222,13122,13122,1310.0022,13122,1310.0022,13122,1190.0022,1310.00
10.100.0322,77222,77222,2830.0022,77222,7720.0022,77222,7440.0022,7720.00
mknapcb410.100.0422,75122,75122,6470.0022,75122,7510.0022,75122,6510.0022,7510.00
10.250.0059,18758,47658,1641.2059,182591730.0159,1640.04
10.250.0158,78157,93757,2861.4458,78158,7330.0058,6930.15
10.250.0258,09758,09757,9210.0058,09758,0960.0058,0940.01
10.250.036100061,00060,6500.0061,00060,9860.0060,9720.05
mknapcb510.250.0458,09256,27656,2593.1358,09258,0920.0058,0920.00
10.500.00117,821117,779117,7540.04117,744117,7330.07117,790117,6990.03117,7340.07
10.500.01119,249119,206119,1790.04119,177119,1480.06119,155119,1250.08119,1810.06
10.500.02119,215119,215119,1620.00119,215119,1460.00119,211119,0940.00119,1940.02
10.500.03118,829118,813118,7770.01118,775118,7470.05118,813118,7540.01118,7840.04
mknapcb610.500.04116,530116,509116,4700.02116,502116,4490.02116,470116,5090.05116,4710.05
X 67,455.3367,179.1066,741.460.4167,443.8767,430.470.0267,438.930.02
Table 7. Computational results achieved by LBLP, Classic SHO, and Classic SHO + IRace solving the MCDP.
ID | Sopt | LBLP (Best, Worst, Mean, RPD %) | Classic SHO (Best, Worst, Mean, RPD %) | Classic SHO + IRace (Best, Worst, Mean, RPD %)
CFP010000.00.00000.00.00000.00.00
CFP023353.70.00343.50.00343.60.00
CFP035565.30.00587.20.00565.40.00
CFP042232.20.00232.60.00242.90.00
CFP0588108.80.00898.60.00888.00.00
CFP064464.80.00475.70.00464.70.00
CFP077777.00.007108.60.00777.00.00
CFP087787.10.00798.50.00777.00.00
CFP0925252826.60.00252726.00.00252726.20.00
CFP100000.00.00000.00.00000.00.00
CFP110000.00.00000.00.00000.00.00
CFP127797.90.00898.713.33777.00.00
CFP1389119.911.76101312.322.22999.011.76
CFP14242725.4252827.4242424.0
CFP15171717.0192019.7181918.5
CFP16303330.3333534.4313332.2
CFP17262826.8262726.8262827.1
CFP18424543.4434644.8424443.2
CFP19404340.7434544.3414141.0
CFP20222.0243.5242.8
CFP21373937.8404140.8384038.9
CFP22000.0020.4010.7
CFP23101411.1121413.6111312.0
CFP24182219.9212221.8192019.4
CFP25404541.7414341.8404240.8
CFP26596561.4595959.0596160.2
CFP27646864.6656665.4646464.0
CFP28545755.2545756.1545655.0
CFP29919693.5939493.7929493.0
CFP30374037.4384039.1373837.5
CFP31525553.4535453.5525252.0
CFP32687168.8697270.6686968.6
CFP33939593.6959595.0949695.0
CFP34259263261.2259260259.6259259259.0
CFP35909691.6929493.6919191.0
X5.8535.1437.5436.000.9036.0937.5737.042.7435.4336.3735.900.90
Table 8. Computational results achieved by LBLP and HBBO + AS solving the MCDP.
ID | Sopt | LBLP (Best, Worst, Mean, RPD %) | HBBO + AS (Best, Worst, Mean, RPD %)
CFP010000.00.00000.00.00
CFP023353.70.00333.00.00
CFP035565.30.00555.00.00
CFP042232.20.00222.00.00
CFP0588108.80.00888.00.00
CFP064464.80.00444.00.00
CFP077777.00.00777.00.00
CFP087787.10.00777.00.00
CFP0925252826.60.00252525.00.00
CFP100000.00.00000.00.00
CFP110000.00.00000.00.00
CFP127797.90.008119.414.28
CFP1389119.911.769119.612.5
CFP14242725.4293331.2
CFP15171717.0172622.8
CFP16303330.3364238.0
CFP17262826.8333534.2
CFP18424543.4485049.0
CFP19404340.7505452.2
CFP20222.0283933.4
CFP21373937.8566358.8
CFP22000.0424744.0
CFP23101411.1445348.2
CFP24182219.9465249.8
CFP25404541.7616763.2
CFP26596561.4717371.8
CFP27646864.6717271.4
CFP28545755.299112106.2
CFP29919693.5118125122.0
CFP30374037.4646765.0
CFP31525553.4798984.2
CFP32687168.8909793.8
CFP33939593.6155167159.0
CFP34259263261.2386417408.2
CFP35909691.6225234231.4
X5.8535.1437.5436.000.9055.0359.9157.652.06
Table 9. Average Wilcoxon–Mann–Whitney test for MCDP.
Approach              LBLP     Classic SHO   Classic SHO + IRace
LBLP                  -        0.00          ≥0.05
Classic SHO           ≥0.05    -             ≥0.05
Classic SHO + IRace   ≥0.05    0.00          -
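The pairwise p-values in Table 9 come from Wilcoxon–Mann–Whitney tests over repeated runs. For readers who wish to reproduce such a comparison, a minimal U-statistic computation with a large-sample normal approximation might look as follows; this is a generic sketch, not the authors' script, and it omits the tie-correction refinements of a full implementation.

```python
import math

def mann_whitney_u(a, b):
    """U statistic of sample a over sample b (0.5 credit for ties)."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

def approx_p_value(a, b):
    """Two-sided p-value via the large-sample normal approximation."""
    n, m = len(a), len(b)
    u = mann_whitney_u(a, b)
    mu = n * m / 2.0
    sigma = math.sqrt(n * m * (n + m + 1) / 12.0)
    z = (u - mu) / sigma
    # erfc(|z|/sqrt(2)) equals the two-sided standard-normal tail mass.
    return math.erfc(abs(z) / math.sqrt(2.0))
```

For the sample sizes typical of 30-run experimental campaigns, this approximation agrees closely with exact-distribution implementations.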
Table 10. Computational results achieved by LBLP, Classic SHO, and Classic SHO + IRace solving the SCP.
ID | Sopt | LBLP (Best, Mean, RPD %) | Classic SHO (Best, Mean, RPD %) | Classic SHO + IRace (Best, Mean, RPD %)
4.14294294320.004304320.234294310.00
4.25125125170.005285283.085125130.00
4.35165165210.005325323.055185180.39
4.44944945030.005055062.204985000.81
4.55125145170.395145140.395145160.39
4.65605605600.005605620.005605610.00
4.74304304320.004304300.004304320.00
4.84924924950.005035032.214924930.00
4.96456456480.006696693.656556571.54
4.105145175260.585185180.785175190.58
5.12532532550.002572571.572532550.00
5.23023093092.293123143.263103122.61
5.32262262300.002342343.482262260.00
5.42422422450.002422420.002422440.00
5.52112112130.002112110.002112110.00
5.62132132130.002162171.402142160.47
5.72932973011.362962961.022972991.36
5.82882882910.002912921.042892910.35
5.92792802810.362802810.362802820.36
5.102652652670.002712712.242672680.75
6.11381421442.861401401.441411432.15
6.21461461500.001461460.001461460.00
6.31451451480.001481482.051461470.69
6.41311311330.001331331.521321330.76
6.51611611610.001651662.451631631.23
A.12532532560.002562561.182542560.39
A.22522562571.572592592.742572581.96
A.32322332350.432392392.972352351.28
A.42342352390.432352350.432352360.43
A.52362362370.002362360.002362370.00
B.16969710.0072724.2670721.44
B.27676780.0081816.3778782.60
B.38080800.0080800.0080820.00
B.47979800.0081822.5080811.26
B.57272740.0072720.0072730.00
C.12272292320.882332342.612312331.75
C.22192212250.912232231.812222221.36
C.32432432560.002512513.242462461.23
C.42192192220.002252252.702212210.91
C.52152152190.002152150.002152170.00
D.16060610.0060600.0060600.00
D.26666660.0068682.9967671.50
D.37273771.3876765.4174762.74
D.46263641.6062620.0063641.60
D.56161620.0061610.0061630.00
E.12929300.0029290.0029290.00
E.23032326.4531313.2832336.45
E.32728293.6427270.0028283.64
E.42829303.5129293.5129303.51
E.52828300.0028280.0028300.00
F.11414150.0014140.0014140.00
F.21515150.0015150.0015150.00
F.314161613.33171819.35161813.33
F.41414160.0015166.9014140.00
F.51314157.4114167.4114167.41
G.11761761780.001781801.131771780.57
G.21541581632.561581592.561581602.56
G.31661691701.791701722.381691701.79
G.41681701711.181681680.001681680.00
G.51681681700.001701701.181681690.00
H.16366674.6568707.6366664.65
H.26365683.1365663.1365673.13
H.35962654.9662654.9662634.96
H.45859601.7158580.0059591.71
H.55556611.8058605.3157583.57
X196.40197.31199.751.09199.85200.312.24197.95199.051.42
Table 11. Average Wilcoxon-Mann-Whitney test for SCP.

| Approach | LBLP | Classic SHO | Classic SHO + IRace |
|---|---|---|---|
| LBLP | - | 0.00 | ≥0.05 |
| Classic SHO | ≥0.05 | - | ≥0.05 |
| Classic SHO + IRace | ≥0.05 | 0.00 | - |
Table 12. Computational results achieved by LBLP, Classic SHO, and Classic SHO + IRace solving the MKP.

| ID | Test Problem | Sopt | LBLP Best | Worst | Mean | RPD (%) | Classic SHO Best | Worst | Mean | RPD (%) | SHO + IRace Best | Worst | Mean | RPD (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| mknapcb1 | 5.100.00 | 24,381 | 24,381 | 24,301 | 24,360 | 0.00 | 24,381 | 22,431 | 23,796 | 0.00 | 24,381 | 22,674 | 24,279 | 0.00 |
| | 5.100.01 | 24,274 | 24,274 | 24,274 | 24,274 | 0.00 | 24,274 | 24,274 | 24,274 | 0.00 | 24,274 | 24,274 | 24,274 | 0.00 |
| | 5.100.02 | 23,551 | 23,551 | 23,538 | 23,546 | 0.00 | 23,551 | 21,196 | 22,868 | 0.00 | 23,551 | 22,138 | 23,311 | 0.00 |
| | 5.100.03 | 23,534 | 23,534 | 23,288 | 23,473 | 0.00 | 23,534 | 21,181 | 22,828 | 0.00 | 23,534 | 21,887 | 23,188 | 0.00 |
| | 5.100.04 | 23,991 | 23,991 | 23,947 | 23,980 | 0.00 | 23,991 | 21,832 | 23,624 | 0.00 | 23,991 | 23,031 | 23,809 | 0.00 |
| mknapcb2 | 5.250.00 | 59,312 | 59,312 | 58,473 | 58,934 | 0.00 | 59,312 | 55,160 | 58,066 | 0.00 | 59,312 | 55,753 | 58,814 | 0.00 |
| | 5.250.01 | 61,472 | 61,472 | 60,692 | 61,324 | 0.00 | 61,472 | 55,325 | 60,857 | 0.00 | 61,472 | 59,628 | 61,361 | 0.00 |
| | 5.250.02 | 62,130 | 62,130 | 61,702 | 61,997 | 0.00 | 60,266 | 56,650 | 59,398 | 3.00 | 62,130 | 57,781 | 61,695 | 0.00 |
| | 5.250.03 | 59,463 | 59,463 | 55,164 | 56,901 | 0.00 | 59,463 | 53,517 | 57,739 | 0.00 | 59,463 | 57,084 | 59,082 | 0.00 |
| | 5.250.04 | 58,951 | 58,082 | 57,550 | 57,789 | 1.47 | 56,003 | 53,203 | 55,611 | 5.00 | 58,361 | 56,611 | 58,204 | 1.00 |
| mknapcb3 | 5.500.00 | 120,148 | 120,148 | 119,978 | 120,121 | 0.00 | 120,148 | 108,133 | 118,826 | 0.00 | 119,908 | 113,912 | 118,769 | 0.20 |
| | 5.500.01 | 117,879 | 115,634 | 112,821 | 114,143 | 1.90 | 117,879 | 109,627 | 116,971 | 0.00 | 117,879 | 111,985 | 116,523 | 0.00 |
| | 5.500.02 | 121,131 | 121,131 | 119,156 | 120,499 | 0.00 | 120,525 | 114,499 | 118,778 | 0.50 | 121,131 | 112,652 | 120,368 | 0.00 |
| | 5.500.03 | 120,804 | 119,124 | 115,828 | 117,311 | 1.39 | 120,200 | 114,190 | 119,479 | 0.50 | 120,804 | 114,764 | 119,415 | 0.00 |
| | 5.500.04 | 122,319 | 122,319 | 117,242 | 119,153 | 0.00 | 121,707 | 111,971 | 119,079 | 0.50 | 122,074 | 113,529 | 120,024 | 0.20 |
| mknapcb4 | 10.100.00 | 23,064 | 23,064 | 22,905 | 22,981 | 0.00 | 21,911 | 20,815 | 21,659 | 5.00 | 23,064 | 22,372 | 22,974 | 0.00 |
| | 10.100.01 | 22,801 | 22,801 | 22,630 | 22,775 | 0.00 | 22,801 | 21,205 | 22,434 | 0.00 | 22,801 | 21,205 | 22,625 | 0.00 |
| | 10.100.02 | 22,131 | 22,131 | 22,131 | 22,131 | 0.00 | 22,131 | 21,024 | 21,976 | 0.00 | 22,131 | 20,803 | 22,065 | 0.00 |
| | 10.100.03 | 22,772 | 22,772 | 22,052 | 22,283 | 0.00 | 22,772 | 21,633 | 22,556 | 0.00 | 22,772 | 21,178 | 22,629 | 0.00 |
| | 10.100.04 | 22,751 | 22,751 | 22,417 | 22,647 | 0.00 | 22,751 | 21,158 | 22,273 | 0.00 | 22,751 | 21,613 | 22,535 | 0.00 |
| mknapcb5 | 10.250.00 | 59,187 | 58,476 | 57,530 | 58,164 | 1.20 | 56,820 | 51,138 | 55,569 | 4.00 | 58,003 | 56,263 | 57,777 | 2.00 |
| | 10.250.01 | 58,781 | 57,937 | 56,490 | 57,286 | 1.44 | 55,842 | 51,933 | 54,865 | 5.00 | 58,781 | 56,430 | 58,287 | 0.00 |
| | 10.250.02 | 58,097 | 58,097 | 57,062 | 57,921 | 0.00 | 55,773 | 51,311 | 54,435 | 4.00 | 58,097 | 54,611 | 57,226 | 0.00 |
| | 10.250.03 | 61,000 | 61,000 | 60,326 | 60,650 | 0.00 | 61,000 | 56,730 | 60,402 | 0.00 | 59,170 | 54,436 | 58,271 | 3.00 |
| | 10.250.04 | 58,092 | 56,276 | 56,204 | 56,259 | 3.13 | 55,768 | 50,749 | 55,066 | 4.00 | 57,511 | 54,636 | 56,965 | 1.00 |
| mknapcb6 | 10.500.00 | 117,821 | 117,779 | 117,736 | 117,754 | 0.04 | 117,232 | 110,198 | 116,317 | 0.50 | 117,585 | 111,706 | 116,703 | 0.20 |
| | 10.500.01 | 119,249 | 119,206 | 119,164 | 119,179 | 0.04 | 118,534 | 107,865 | 115,867 | 0.60 | 119,249 | 109,709 | 117,627 | 0.00 |
| | 10.500.02 | 119,215 | 119,215 | 119,129 | 119,162 | 0.00 | 118,619 | 110,316 | 116,294 | 0.50 | 118,857 | 114,103 | 118,620 | 0.30 |
| | 10.500.03 | 118,829 | 118,813 | 118,761 | 118,777 | 0.01 | 117,878 | 109,627 | 116,558 | 0.80 | 118,710 | 112,775 | 117,345 | 0.10 |
| | 10.500.04 | 116,530 | 116,509 | 116,445 | 116,470 | 0.02 | 115,365 | 106,136 | 112,781 | 1.00 | 116,297 | 108,156 | 115,157 | 0.20 |
| X | | 67,455.33 | 67,179 | 66,298 | 66,741 | 0.35 | 66,730 | 61,834 | 65,708 | 1.16 | 67,268 | 63,590 | 66,664 | 0.27 |
Table 13. Computational results achieved by LBLP and LMPB solving the MKP.

| ID | Test Problem | Sopt | LBLP Best | Worst | Mean | RPD (%) | LMPB Best | Worst | Mean | RPD (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| mknapcb1 | 5.100.00 | 24,381 | 24,381 | 24,301 | 24,360 | 0.00 | 24,381 | 17,595 | 18,193 | 0.00 |
| | 5.100.01 | 24,274 | 24,274 | 24,274 | 24,274 | 0.00 | 24,274 | 17,401 | 17,674 | 0.00 |
| | 5.100.02 | 23,551 | 23,551 | 23,538 | 23,546 | 0.00 | 23,551 | 17,692 | 17,861 | 0.00 |
| | 5.100.03 | 23,534 | 23,534 | 23,288 | 23,473 | 0.00 | 23,534 | 19,685 | 19,692 | 0.00 |
| | 5.100.04 | 23,991 | 23,991 | 23,947 | 23,980 | 0.00 | 23,991 | 17,744 | 17,863 | 0.00 |
| mknapcb2 | 5.250.00 | 59,312 | 59,312 | 58,473 | 58,934 | 0.00 | 59,312 | 46,049 | 46,588 | 0.00 |
| | 5.250.01 | 61,472 | 61,472 | 60,692 | 61,324 | 0.00 | 61,472 | 46,890 | 47,299 | 0.00 |
| | 5.250.02 | 62,130 | 62,130 | 61,702 | 61,997 | 0.00 | 62,130 | 49,237 | 49,262 | 0.00 |
| | 5.250.03 | 59,463 | 59,463 | 55,164 | 56,901 | 0.00 | 59,463 | 42,804 | 46,365 | 0.00 |
| | 5.250.04 | 58,951 | 58,082 | 57,550 | 57,789 | 1.47 | 58,951 | 46,870 | 47,005 | 0.00 |
| mknapcb3 | 5.500.00 | 120,148 | 120,148 | 119,978 | 120,121 | 0.00 | 101,980 | 73,168 | 88,110 | 15.12 |
| | 5.500.01 | 117,879 | 115,634 | 112,821 | 114,143 | 1.90 | 99,901 | 71,265 | 90,507 | 15.25 |
| | 5.500.02 | 121,131 | 121,131 | 119,156 | 120,499 | 0.00 | 102,559 | 74,678 | 91,014 | 15.33 |
| | 5.500.03 | 120,804 | 119,124 | 115,828 | 117,311 | 1.39 | 100,864 | 74,715 | 91,769 | 16.5 |
| | 5.500.04 | 122,319 | 122,319 | 117,242 | 119,153 | 0.00 | 102,520 | 74,537 | 91,772 | 16.18 |
| mknapcb4 | 10.100.00 | 23,064 | 23,064 | 22,905 | 22,981 | 0.00 | 23,064 | 17,298 | 22,276 | 0.00 |
| | 10.100.01 | 22,801 | 22,801 | 22,630 | 22,775 | 0.00 | 22,801 | 17,352 | 21,296 | 0.00 |
| | 10.100.02 | 22,131 | 22,131 | 22,131 | 22,131 | 0.00 | 22,131 | 15,699 | 20,487 | 0.00 |
| | 10.100.03 | 22,772 | 22,772 | 22,052 | 22,283 | 0.00 | 22,772 | 18,817 | 19,796 | 0.00 |
| | 10.100.04 | 22,751 | 22,751 | 22,417 | 22,647 | 0.00 | 22,751 | 17,564 | 22,604 | 0.00 |
| mknapcb5 | 10.250.00 | 59,187 | 58,476 | 57,530 | 58,164 | 1.20 | 59,187 | 48,086 | 55,819 | 0.00 |
| | 10.250.01 | 58,781 | 57,937 | 56,490 | 57,286 | 1.44 | 58,781 | 43,173 | 55,303 | 0.00 |
| | 10.250.02 | 58,097 | 58,097 | 57,062 | 57,921 | 0.00 | 58,097 | 45,538 | 52,908 | 0.00 |
| | 10.250.03 | 61,000 | 61,000 | 60,326 | 60,650 | 0.00 | 61,000 | 47,587 | 57,342 | 0.00 |
| | 10.250.04 | 58,092 | 56,276 | 56,204 | 56,259 | 3.13 | 58,092 | 47,703 | 55,037 | 0.00 |
| mknapcb6 | 10.500.00 | 117,821 | 117,779 | 117,736 | 117,754 | 0.04 | 103,226 | 74,746 | 93,309 | 12.38 |
| | 10.500.01 | 119,249 | 119,206 | 119,164 | 119,179 | 0.04 | 105,088 | 76,531 | 96,824 | 11.87 |
| | 10.500.02 | 119,215 | 119,215 | 119,129 | 119,162 | 0.00 | 104,870 | 74,620 | 96,152 | 12.03 |
| | 10.500.03 | 118,829 | 118,813 | 118,761 | 118,777 | 0.01 | 104,308 | 74,845 | 95,339 | 12.22 |
| | 10.500.04 | 116,530 | 116,509 | 116,445 | 116,470 | 0.02 | 101,380 | 74,441 | 92,260 | 13 |
| X | | 67,455.33 | 67,179 | 66,298 | 66,741 | 0.35 | 61,881.03333 | 46,144.33333 | 54,591 | 4.66 |
Table 14. Average Wilcoxon-Mann-Whitney test for MKP.

| Approach | LBLP | Classic SHO | Classic SHO + IRace |
|---|---|---|---|
| LBLP | - | 0.00 | ≥0.05 |
| Classic SHO | ≥0.05 | - | ≥0.05 |
| Classic SHO + IRace | ≥0.05 | 0.00 | - |
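The pairwise entries in Tables 9, 11, and 14 are p-values: a value below 0.05 indicates a statistically significant difference between the row and column approaches, while "≥0.05" indicates none. A stdlib-only sketch of the underlying two-sided Mann-Whitney (Wilcoxon rank-sum) test, using the normal approximation; the function name and the simple tie handling are illustrative assumptions, and in practice a tie-corrected library routine such as SciPy's `mannwhitneyu` would be preferred:

```python
import math

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via the normal approximation.

    Returns (U, p) for two samples, e.g., per-instance results of two
    metaheuristics. No tie correction is applied to the variance, so this
    is a sketch rather than a production implementation.
    """
    n1, n2 = len(a), len(b)
    pooled = sorted(list(a) + list(b))
    # Assign 1-based ranks; tied values share their average rank.
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2.0  # average of ranks i+1 .. j
        i = j
    r1 = sum(rank_of[v] for v in a)       # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2.0         # U statistic for the first sample
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u1 - mu) / sigma
    # Two-sided p-value from the standard normal tail.
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u1, min(p, 1.0)

# Hypothetical per-instance costs for two approaches: clearly separated
# samples give a small p; identical samples give p = 1.
u, p = mann_whitney_u([429, 512, 516, 494], [460, 549, 555, 530])
```

For identical samples the statistic equals its expected value (z = 0), so the test reports p = 1.0, which would appear as "≥0.05" in the tables above.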
Vega, E.; Lemus-Romani, J.; Soto, R.; Crawford, B.; Löffler, C.; Peña, J.; Talbi, E.-G. Autonomous Parameter Balance in Population-Based Approaches: A Self-Adaptive Learning-Based Strategy. Biomimetics 2024, 9, 82. https://doi.org/10.3390/biomimetics9020082