Article

Combining a Population-Based Approach with Multiple Linear Models for Continuous and Discrete Optimization Problems

1 Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Valparaíso 2362807, Chile
2 Departamento de Informática, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(16), 2920; https://doi.org/10.3390/math10162920
Submission received: 27 June 2022 / Revised: 4 August 2022 / Accepted: 8 August 2022 / Published: 13 August 2022
(This article belongs to the Section Mathematics and Computer Science)

Abstract: Population-based approaches have provided new search strategies and ideas for solving optimization problems. These methods typically rely on a finite number of agents which, through their interactions, evolve and explore the entire search space. It is also well known that a proper choice of parameter values can positively impact the performance and behavior of this kind of method. In this context, the present work focuses on the design of a hybrid architecture which smartly balances the population size at run-time. In order to do so, a modular approach, named Linear Modular Population Balancer (LMPB), is proposed. The main ideas behind the designed architecture include the solving strategy of a population-based metaheuristic, learning components based on multiple statistical modeling methods which transform the dynamic data generated during the search into knowledge, and the ability to tackle both discrete and continuous optimization problems. In this regard, three modules are proposed for LMPB, which handle tasks such as the management of the population-based algorithm, parameter setting, probabilities, learning methods, and the selection mechanism for the population size to employ. In order to test the viability and effectiveness of our proposed approach, we solve a set of well-known benchmark functions and the multidimensional knapsack problem (MKP). We report promising solving results, compare them against state-of-the-art methods which have proven to be good options for solving optimization problems, and give solid arguments for future work on the necessity of continuing to evolve this type of architecture.
MSC:
90C27; 90C59; 90C15

1. Introduction

The transformation of data into knowledge has been a trending strategy in modern approaches, which are usually designed through the interdisciplinary interaction of components such as learning techniques, solving strategies, and mathematical ideas. In this context, data-driven approaches have several objectives, such as identifying key features, identifying redundant data, and influencing the decision-making process [1,2,3]. In the optimization field, it is well known that approximate methods try to find solutions as close as possible to the optimum with considerably less usage of resources, which has been a trend for years. A classic family of such methods is Metaheuristics (MH) [4]: algorithms that follow a pre-designed solving strategy, can be applied to several optimization problems, and generate a massive amount of data in the process [5]. Thus, they have been the objective of multiple works in which these attributes are exploited in order to generate knowledge for decision-making processes.
In this paper, we propose a novel approach, named Linear Modular Population Balancer (LMPB), as a modular hybrid architecture. We aim to contribute an optimization tool capable of tackling both discrete and continuous optimization problems through the interaction of MH and Machine Learning (ML). In this context, LMPB was designed to work under a population-based strategy, which can be seen as a finite number of agents that smartly explore and evolve while searching the solution space. The main motivation for incorporating ML into the search process is that population-based MH generate a massive amount of dynamic data while traversing the solution space. Thus, we aim to take advantage of this feature by means of statistical modeling methods [6,7,8], profit from the knowledge generated, and provide adaptability during the search by modifying the number of agents performing at run-time. The proposed LMPB can be described as the interaction of three modules, described as follows. Firstly, module 1 concerns the management of the search algorithm, carrying out intensification and diversification. In this regard, we employ the movement operators from the Spotted Hyena Optimizer (SHO) [9], a population-based algorithm which has proved to be effective in solving optimization problems [10,11]. Module 2 includes multiple tasks concerning internal management in the architecture. The first task focuses on the management of the values employed as population sizes by the architecture, which in the design are presented as schemes. The schemes correspond to different numbers of agents which can be selected to be employed during a certain period of time. In this context, the selection process is carried out by a Monte Carlo probabilistic roulette mechanism.
Thus, at the beginning, each scheme is assigned an equal probability of being selected, which is then modified as a function of the knowledge generated by the learning-based models. The motivation behind such a modification is to achieve a possible improvement in performance in the next period of time. The second and third tasks concern the balance of the resulting population size performing the search. The second task controls the generation of new randomly generated populations in two scenarios: when the selected scheme has a higher number of agents than the one performing, and when generating the initial population at the beginning of the search. The third task controls the removal of agents from the population when a newly selected scheme has a smaller population size than the one currently performing. The last task concerns the management of the parameters needed by the proposed architecture, such as those required by the movement operators, probabilities, and the learning thresholds employed in the search. Module 3 includes the learning methods designed to process the dynamic data and generate knowledge throughout the search. This module is based on 6 different learning-based methods organized into two groups: 5 statistical modeling methods predicting which scheme has the highest probability of achieving a good performance, and a statistical modeling method selecting which of the 5 resulting prognostics should be employed in a given period of time during the search. The latter group concerns a single learning method designed by means of logistic regression; following an analogous design, the 5 predictor methods are based on Lasso, GammaRegressor, Bayesian, Ridge, and ElasticNet regressions.
In this work, in order to test the viability and competitiveness of the proposed LMPB, multi-domain experimentation stages are designed. In the first stage, we solve a set of well-known continuous benchmark functions, organized as unimodal, multimodal, and multimodal with fixed dimension; in the second stage, a set of instances of the multidimensional knapsack problem is solved. We analyze, discuss, and compare against reported results from state-of-the-art (SOTA) methods. Moreover, a detailed comparison is carried out against the classic implementation of SHO, Tabu Search (TS) [12], Simulated Annealing (SA) [13], and a SHO assisted by IRace, all implemented by us. We highlight the good performance achieved by the proposed approach; proper statistical analysis is carried out to support the presented results, which proved to be competitive against reported SOTA.
The main contributions can be illustrated as follows.
  • Robust hybrid architecture to tackle discrete and continuous optimization problems.
  • A key issue in population-based approaches is tackled: adapting the population size at run-time.
  • Scalability (module 1): multiple movement operators from different algorithms can be employed in order to carry out intensification and diversification.
  • Scalability (module 3): incorporation of multiple machine learning methods in order to carry out regression and guide the search.
The rest of this paper is organized as follows. The related work is introduced in the next section. The proposed hybrid approach is explained in Section 3. Section 4 illustrates the experimental results. Finally, we conclude and suggest some lines of future research.

2. Related Work

The design of combined optimization tools has been a trend in recent years; the usage of multiple methods has proved to be an effective approach to tackling different issues in problem-solving procedures [14]. In this context, a well-known example of synergy is the combined usage of optimization techniques and machine learning. These are two fields based on artificial intelligence, and their interaction has brought great improvements to both [15,16]. The proposed architecture can be classified as an optimization method assisted by machine learning, where the solving procedure is given by a population-based MH assisted by learning methods.
In the literature, preliminary approaches were designed through the interaction between data mining and evolutionary algorithms [17]; the main objective was the analysis of large amounts of data in order to discover patterns, attributes, and so on. The topics developed by this line of work include fitness approximation [18], parameter setting [19], initial solutions [20], and population management [21,22]. Regarding this last topic, works focused on the application of association rules, where the strategy was to find patterns in elite solutions in order to influence the population and raise the probability of creating higher-quality agents. Through the years, a constant evolution of this interaction has been reported [23,24,25]. For instance, it is well known that the parameter values employed are highly related to the performance achieved by a MH [26]; thus, indispensable components have been developed by the scientific community to further improve on this complementary work. In this regard, the authors of [27] propose an approach based on Tabu Search (TS) and Support Vector Machine (SVM) to successfully solve problems such as the Knapsack Problem, the Set Covering Problem, and the Travelling Salesman Problem. The general process includes the management of decision rules extracted from a randomly generated corpus of solutions, which are used to predict high-quality solutions for a given instance and to fine-tune and guide the search performed by TS. However, the authors themselves describe the proposition as a high-complexity approach, a consequence of its design and implementation: they highlight the time consumed, the knowledge required, the process of building the corpus, and the extraction of the classification rules. Also, in [28], the authors proposed a modular approach to tackle parameter tuning, where the model iterates by sampling different configurations.
The results obtained are used by a regression model based on linear regression, quantile regression, and ridge or lasso regression, among others. The output of the model is subjected to perturbations, resulting in new configuration outputs. Finally, all results obtained are optimized and iteratively tested by the model until a stopping criterion is met. However, a major issue is an exposed consequence of the sampling strategy (usually present in off-line learning) designed in the approach: over-fitting of parameters. In contrast, the proposed LMPB works under an online learning strategy, and all the data is included, classified (for each scheme), and processed.
Regarding hybrids related to the population size, to the best of our knowledge, the literature is scarce. In [29], the authors propose a cross-entropy–Lagrangian hybrid algorithm for the multi-item capacitated lot-sizing problem. In this approach, response surface methodology is employed to tune the cross-entropy parameter values (population size and quantile size) in order to detect a correlation between the assigned values and the heuristic solutions. Also, in [30,31], the authors propose a hybrid based on a MH assisted by Autonomous Search (AS) in order to modify the population size when stagnation in the performance is detected. In this context, the performance reported by small samples of agents is observed during the search; when no better fitness is found or the achieved values stagnate, the population sizes of Human behavior-based optimization (HBBO) and SHO are modified. This modification in size is static: a predefined number of agents is added to or removed from the population.

3. Proposed Hybrid Approach

In this section, the design of our proposed approach is described and discussed in detail. We illustrate the main ideas behind the proposed modules and learning methods. In Section 3.1 we present a general description of the proposed approach. In Section 3.2, we describe each component, covering objectives, functionalities, and main ideas. Lastly, we present an overview of the process performed by the architecture in order to carry out the search.

3.1. General Description

The proposed architecture focuses on balancing the population size at run-time. In order to achieve this objective, specially designed modules and mechanisms are proposed (Figure 1). The general performance of LMPB, illustrated in Figure 2, has module 1 at its core, performing population-based tasks such as intensification and diversification. All dynamic data generated at run-time is managed by module 3, which generates knowledge as output; this output is employed by module 2 in order to carry out the key mechanisms which give adaptability to the search. In this context, two mechanisms rule over the search process: the learning mechanism and the population balance mechanism. The employed learning process can be described as the greedy learning mechanism proposed in [11]; it is based on constant feedback of knowledge between the learning model and the decision-making component in order to influence and guide the search. The process is as follows: at certain times (learning seasons) while carrying out the search, a previously configured threshold value ( α ) defines the seasonal learning in which the dynamic data is transformed, generating knowledge as feedback to the architecture. This learning strategy is suitable when searching an unknown solution space; the constant feedback given to LMPB ends up generating a better response through the iterations. This quick and constant knowledge can be a key consideration in the design of hybrid approaches in order to keep a continuously smart performance throughout the search. Regarding the mechanism balancing the population size, the modification process is ruled by the threshold β, which defines the number of iterations to perform before modifying the number of agents carrying out the search. The configured values are defined as schemes, which are multiple previously defined population sizes.
The scheme selection process is ruled by probabilities, which are initially assigned equal values. These probabilities are modified at run-time when threshold α is met, and the values employed are based on the knowledge generated at that instant. For instance, priority (a higher assigned probability) is granted to the scheme with the best prognostic achieved; thus, the component always searches for a better-fitted configuration in order to improve performance. In Figure 3 we illustrate a graphic example of how the proposed thresholds are applied through the search.
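To make the selection mechanism concrete, the Monte Carlo roulette described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the scheme sizes shown are hypothetical values.

```python
import random

def roulette_select(probabilities, rng=random):
    """Monte Carlo roulette: draw r in [0, 1) and walk the cumulative
    probability mass until it exceeds r; return the selected index."""
    r = rng.random()
    cumulative = 0.0
    for index, p in enumerate(probabilities):
        cumulative += p
        if r < cumulative:
            return index
    return len(probabilities) - 1  # guard against rounding error

# Four schemes with initially equal selection probabilities, as in LMPB.
schemes = [10, 20, 30, 40]          # hypothetical population sizes
probs = [0.25, 0.25, 0.25, 0.25]
chosen_size = schemes[roulette_select(probs)]
```

As the learning seasons update `probs`, schemes with better prognostics accumulate more roulette mass and are drawn more often.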
The general process can be described as follows:
Step 1: Set initial parameters for the population-based method.
Step 2: Set population sizes to be used as schemes.
Step 3: Set the initial selection probability for each scheme.
Step 4: Select a scheme to perform and generate the initial population.
Step 5: Perform SHO: diversification movement operators.
Step 6: All the dynamic data generated in Step 5 is stored and sorted.
Step 7: Perform SHO: intensification movement operators.
Step 8: All the dynamic data generated in Step 7 is stored and sorted.
Step 9: If β iterations have been carried out, the selection mechanism chooses the next scheme to perform.
Step 10: If α iterations have been carried out, the data is processed, knowledge is generated, and probabilities are updated based on the learning-model feedback.
Step 11: If the termination criteria are not met, return to Step 5.
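The process above can be condensed into a loop skeleton driven by the two thresholds. The following sketch is a simplification under assumed values of α and β, with the SHO movements, learning model, and scheme rebalancing reduced to placeholder callables.

```python
def lmpb_loop(total_iters, alpha, beta, move, learn, rebalance):
    """Skeleton of the LMPB main loop: every beta iterations a new scheme
    is selected (Step 9); every alpha iterations a learning season runs
    and probabilities are updated (Step 10)."""
    history = []                       # dynamic data stored and sorted
    for it in range(1, total_iters + 1):
        history.append(move(it))       # Steps 5-8: movements + data capture
        if it % beta == 0:             # Step 9: scheme selection / balance
            rebalance(it)
        if it % alpha == 0:            # Step 10: learning season
            learn(history)
    return history

# Count how often each mechanism triggers over 20 iterations (beta=5, alpha=10).
events = {"rebalance": 0, "learn": 0}
lmpb_loop(20, alpha=10, beta=5,
          move=lambda it: it,
          learn=lambda h: events.__setitem__("learn", events["learn"] + 1),
          rebalance=lambda it: events.__setitem__("rebalance", events["rebalance"] + 1))
```

With these values the population is rebalanced four times and two learning seasons occur, matching the schedule illustrated in Figure 3.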

3.2. Proposed Modules

As mentioned before, the proposed architecture includes three modules, which are described as follows.

3.2.1. Module 1: Movements

The solving methodology employed in the proposed architecture is, as mentioned before, a population-based strategy. In this regard, multiple agents are generated in order to search the solution space; they evolve through the interaction between the environment and themselves. This interaction is usually defined and structured by the movement operators of the algorithm. In this work, we instantiate the SHO algorithm and employ its four movement operators in order to solve the optimization problems.
Regarding the description of the movement operators, encircling prey is employed first; its objective corresponds to the position update of each agent towards the current best candidate solution (the agent with the best solution among the population in that iteration). The perturbation of each agent is carried out by Equations (1) and (2). In (1), $D_h$ is the distance between the current agent being updated ($P$) and the current best agent in the population ($P_p$); in Equation (2), each agent is updated. In both equations, $B$ and $E$ correspond to coefficient values, defined in Equations (3) and (4), where $rd_1$ and $rd_2$ are random values in [0, 1]. In Equation (5), $CI$ corresponds to the current iteration and $TI$ to the total number of iterations.

$D_h = | B \cdot P_p(x) - P(x) |$ (1)

$P(x+1) = P_p(x) - E \cdot D_h$ (2)

$B = 2 \cdot rd_1$ (3)

$E = 2h \cdot rd_2 - h$ (4)

$h = 5 - CI \cdot (5 / TI)$ (5)
The second movement concerns hunting; we apply Equations (6)–(8) to the population. In (6) and (7), $D_h$ represents the distance, $P_h$ the current best agent in the population, and $P_k$ the current agent being updated, while $B$ and $E$ correspond to coefficient values. In (8), $N$ indicates the number of agents.

$D_h = | B \cdot P_h - P_k |$ (6)

$P_k = P_h - E \cdot D_h$ (7)

$C_h = P_k + P_{k+1} + \cdots + P_{k+N}$ (8)

Attacking the prey is the third movement and is concerned with exploitation of the search space. In (9), the cluster $C_h$ obtained in (8) is used to update each agent. The last movement, named search for prey, exclusively concerns passive exploration: coefficients $B$ and $E$ take random values which force the agents to move far away from the current best agent in the population, improving the global search capability of the approach.

$P(x+1) = C_h / N$ (9)
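The encircling and attacking operators translate directly to code. The sketch below follows Equations (1)–(5) and (9) per dimension; the random draws $rd_1$ and $rd_2$ are passed in explicitly (rather than drawn internally) so the behavior is reproducible, which is an implementation choice of this sketch, not of the original SHO.

```python
def h_coeff(current_iter, total_iters):
    # Equation (5): h decays linearly from 5 to 0 over the run.
    return 5 - current_iter * (5 / total_iters)

def encircle(agent, prey, h, rd1, rd2):
    """Equations (1)-(4): move an agent toward the current best (prey).
    rd1, rd2 play the role of the random values in [0, 1]."""
    B = 2 * rd1                # Equation (3)
    E = 2 * h * rd2 - h        # Equation (4)
    distance = [abs(B * p - a) for a, p in zip(agent, prey)]   # Equation (1)
    return [p - E * d for p, d in zip(prey, distance)]          # Equation (2)

def attack(cluster):
    """Equation (9): the new position is the centroid of the cluster C_h."""
    n = len(cluster)
    dim = len(cluster[0])
    return [sum(agent[d] for agent in cluster) / n for d in range(dim)]
```

Note that when $h = 0$ (end of the run) $E$ vanishes and the agent lands exactly on the prey, i.e., pure intensification; early on, large $|E|$ pushes agents far away, i.e., diversification.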

3.2.2. Module 2: Management

This module has three main objectives. The first objective concerns population manipulation, where two tasks are performed. The first task is the elimination of agents from the current population: the agents with the currently worst performance are removed in the scenario where the size of the newly selected scheme is smaller. The second task focuses on the addition of new randomly generated agents to the current population, which is carried out when a new scheme is selected and its size value is bigger. The second objective concerns the scheme selection mechanism, which follows a Monte Carlo roulette strategy, where the selection is made through the probability assigned to each scheme. The population sizes are configured values for each scheme employed by the architecture, as illustrated in Table 1. The third objective is closely related to objective 2: it focuses on the management of probabilities. As mentioned before, each scheme has a certain probability of being selected, which defines the next population size to perform (Table 2). At the beginning, the probabilities are uniformly defined as follows.
$P_{scheme_1} + P_{scheme_2} + P_{scheme_3} + P_{scheme_4} = 1$, with each $P_{scheme_i} = \frac{1}{4}$ initially,

where these probabilities will be modified by the output of the learning-based models; the evaluation is carried out as follows.

$W(scheme_i) = \min ( y_{scheme_i}, y_{scheme_{i+1}}, \ldots, y_{scheme_n} )$

where $W(scheme_i)$ represents the scheme with the highest possibility of achieving better performance in the next β iterations. For instance, Table 3 illustrates a case in which scheme 3 has won and is given a higher probability of being selected.
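The winner-takes-priority update can be sketched as follows. The paper specifies only that the winning scheme receives a higher probability; the `boost` amount and redistribution rule below are illustrative assumptions.

```python
def update_probabilities(probs, predictions, boost=0.4):
    """Grant priority to the scheme with the best (minimum) prognostic:
    the winner receives `boost` extra probability mass and the remaining
    mass is shared uniformly, so the result still sums to 1."""
    winner = min(range(len(predictions)), key=predictions.__getitem__)
    share = (1.0 - boost) / len(probs)
    new_probs = [share] * len(probs)
    new_probs[winner] += boost
    return winner, new_probs

# Scheme 3 (index 2) delivers the best predicted fitness, so it wins.
winner, new_probs = update_probabilities([0.25] * 4, [5.2, 3.1, 1.4, 4.8])
```

Starting from the uniform distribution, the winning scheme here ends with probability 0.55 and the rest with 0.15 each, mirroring the situation shown in Table 3.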

3.2.3. Module 3: Learning-Based Methods

The objective behind this module is diagnosis generation, which takes into consideration the performance achieved given the scheme employed. In other words, the general idea is the processing of dynamic data into knowledge and its posterior feedback to module 2. In the design of this module, multiple features were considered, such as the accuracy of the methods, the complexity of data management, computational cost, and implementation complexity. The data transformation process is carried out by 6 different learning methods [8,32], which focus on 2 tasks: administration and prediction.
Firstly, the objective behind the administrator corresponds to the smart selection of a predictor: deciding the most suited regression method to perform when α is met. This method is defined by means of logistic regression which, despite its name, corresponds to a linear model for classification [33]. The values associated with $y$, which are the targets to be predicted, take only a small number of discrete values, and the fitted function can be illustrated as the following equation.

$\frac{1}{1 + e^{-z}}$, where $z = \beta_0 + \beta_1 x_1 + \cdots + \beta_n x_n$

Thus, on each learning season (given by the transition of α iterations), this administrator carries out 5 different regressions and decides which method is most suited to give a proper prognostic about future performance. The independent variables ($x_i$) employed can be described as the percentage with which the prognostic output of each method has been employed and the quality of that output, given by the accuracy of the prediction. This quality was defined as the percentage of accuracy, i.e., detecting improvements in the performance when the method is successfully selected as the fittest to perform.
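The administrator's decision can be sketched as scoring each predictor with the logistic function and keeping the highest-scoring one. The $z$-values below are hypothetical linear combinations of the usage and accuracy features; they are not taken from the paper.

```python
import math

def sigmoid(z):
    # Logistic function 1 / (1 + e^(-z)) used by the administrator.
    return 1.0 / (1.0 + math.exp(-z))

def choose_predictor(scores):
    """Pick the predictor whose logistic score is highest. `scores` maps
    method name -> z = b0 + b1*x1 + ..., built from each method's usage
    percentage and past prediction accuracy."""
    return max(scores, key=lambda name: sigmoid(scores[name]))

# Hypothetical z-values for the five predictor methods.
z_values = {"lasso": 0.4, "ridge": 1.2, "gamma": -0.3,
            "bayesian": 0.9, "elasticnet": 0.1}
best = choose_predictor(z_values)
```

Since the sigmoid is monotonic, this reduces to an argmax over the linear scores; the sigmoid matters when the scores must also be interpreted as selection probabilities.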
Regarding the predictors, they are learning-based methods which aim to give a diagnosis of a possible improved performance given different population configurations (schemes) and performance metrics. They follow the definition of linear models, which can be expressed as a linear combination of multiple features; the fitted function can be described as follows.

$y(\beta, x) = \beta_0 + \beta_1 x_1 + \cdots + \beta_n x_n$

The classic approach is tackled by the minimization of the residual sum of squares, which can be described as follows.

$\min_{\beta} \| X\beta - y \|_2^2$

where $\beta = (\beta_0, \beta_1, \ldots, \beta_n)$.
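For a single feature, the least-squares objective above has the textbook closed form obtained from the normal equations. The following self-contained sketch fits $y = \beta_0 + \beta_1 x$ by minimizing the residual sum of squares.

```python
def fit_ols_1d(xs, ys):
    """Minimize ||X*beta - y||_2^2 for one feature plus intercept using
    the closed-form solution: beta1 = Sxy / Sxx, beta0 = mean residual."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    beta1 = sxy / sxx
    beta0 = mean_y - beta1 * mean_x
    return beta0, beta1

# Noise-free data on the line y = 1 + 2x is recovered exactly.
b0, b1 = fit_ols_1d([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

The regularized variants described next (Lasso, Ridge, ElasticNet) modify this same objective by adding penalty terms on $\beta$.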
Firstly, the proposed methods include the employment of Lasso, GammaRegressor, Bayesian, Ridge, and ElasticNet regressions [8]. In this regard, when threshold α is met, each method performs over all the configured schemes, as illustrated in Figure 4. The dynamic data obtained through the search, such as the feasible/infeasible solutions, the best solution, and the respective scheme which produced the data, is employed to forecast a possible future fitness value. Thus, every predictor method will have its best prognostic, and the final word is given to the administrator, which decides the scheme to be employed based on the best overall prognostic delivered by the predictors.
The description of each method employed is as follows. Lasso and Ridge regressions [34,35] involve adding penalties to the regression functions. They are types of regularization techniques, usually used to deal with over-fitting in the model. Lasso performs L1 regularization, which adds the sum of the absolute values of the coefficients to the optimization objective. Thus, lasso regression optimizes the following equation.

$\min_{\beta} \frac{1}{2 n_{samples}} \| X\beta - y \|_2^2 + \alpha \|\beta\|_1$
where the penalty applied corresponds to $\alpha \|\beta\|_1$, with $\alpha$ a constant value which impacts the magnitude of the coefficients. Ridge regression performs L2 regularization, which adds a factor of the sum of squares of the coefficients to the optimization objective. Thus, the optimization goes as follows.

$\min_{\beta} \| X\beta - y \|_2^2 + \alpha \|\beta\|_2^2$
Here, it is important to highlight relevant differences: ridge takes major advantage of the shrinkage of the coefficients, reducing model complexity while including all or none of the features in the model, whereas Lasso performs both shrinkage of the coefficients and feature selection. GammaRegressor [36] can be included as a generalized linear regression in which a gamma distribution is applied as the probability density function. The generalized linear model can be mathematically described as follows.

$\min_{\beta} \frac{1}{2 n_{samples}} \sum_{i} d(y_i, \hat{y}_i) + \frac{\alpha}{2} \|\beta\|_2^2$
Here, the gamma distribution defines the target domain $y \in (0, \infty)$, where the unit deviance $d(y_i, \hat{y}_i)$ is defined as $2 \left( \log\frac{\hat{y}_i}{y_i} + \frac{y_i}{\hat{y}_i} - 1 \right)$. ElasticNet [37] is defined as a linear regression that trains with both L1 and L2 regularization of the coefficients. Thus, it combines features of Lasso and Ridge and is fitted to be employed when several features are correlated with each other. The objective function to be minimized is described as follows.

$\min_{\beta} \frac{1}{2 n_{samples}} \| X\beta - y \|_2^2 + \alpha \rho \|\beta\|_1 + \frac{\alpha (1 - \rho)}{2} \|\beta\|_2^2$
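The qualitative difference between the L1 and L2 penalties noted above is visible in their one-dimensional reductions: ridge shrinks a coefficient toward zero but never exactly to zero, while lasso's soft-thresholding can zero it out. The sketch below assumes a single centered feature with no intercept and the standard textbook forms (including a 1/2 factor in the lasso loss); it is not the paper's implementation.

```python
def ridge_1d(xs, ys, alpha):
    """min_b sum (y - b*x)^2 + alpha*b^2 has the closed form
    b = sum(x*y) / (sum(x^2) + alpha): shrinks but stays nonzero."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + alpha)

def lasso_1d(xs, ys, alpha):
    """min_b 0.5*sum (y - b*x)^2 + alpha*|b| is solved by soft-thresholding:
    coefficients with |sum(x*y)| <= alpha are driven exactly to zero."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    if sxy > alpha:
        return (sxy - alpha) / sxx
    if sxy < -alpha:
        return (sxy + alpha) / sxx
    return 0.0
```

With `alpha = 0` both recover ordinary least squares; as `alpha` grows, ridge shrinks the slope gradually while lasso eventually eliminates the feature entirely, which is exactly the feature-selection behavior described above.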
Lastly, Bayesian linear regression [38] is a statistical analysis that employs Bayesian inference, which is distinguished by the usage of probabilities to express all forms of uncertainty. The main features of this model include adaptation to the data at hand and the possibility of adding regularization parameters to the statistical work. The probabilistic model can be described as follows.

$p(y \mid X, w, \alpha) = \mathcal{N}(y \mid Xw, \alpha)$

Here, the output $y$ is assumed to be Gaussian distributed around $Xw$, and $\alpha$ is treated as a stochastic variable to be estimated from the data (a disadvantage in the time-consuming inference task).

3.3. Proposed Algorithm

The proposed general search process is illustrated in Algorithm 1. In this regard, the solving structure follows a traditional population-based method, where the search is developed iteratively and the movement operators of SHO are applied sequentially to each agent in the population. Finally, Algorithm 2 presents the process in which the learning components carry out their work.
Algorithm 1 Proposed Architecture
1: SHO: Set initial parameters
2: Set the size values to perform as schemes
3: Select a new scheme to perform
4: Generate initial population based on the scheme selected
5: while (stopping criterion is not met) do
6:    SHO: Perform movement operators
7:    Dynamic data stored and sorted
8:    if β iterations have been carried out then
9:       Select a new scheme to perform
10:      Balance the population based on the scheme selected
11:   end if
12:   if α iterations have been carried out then
13:      Call Algorithm 2: Learning Model
14:      Check $MIN(output_{Algorithm 2})$
15:      Update data structures with probabilities
16:   end if
17: end while
Algorithm 2 Learning Model
1: Data processed: percentage of feasible solutions generated over α iterations
2: Data processed: percentage of infeasible solutions generated over α iterations
3: Data processed: best solutions generated over α iterations
4: Perform predictor: Lasso
5: Historical performance data stored and sorted
6: Perform predictor: Ridge
7: Historical performance data stored and sorted
8: Perform predictor: GammaRegressor
9: Historical performance data stored and sorted
10: Perform predictor: Bayesian
11: Historical performance data stored and sorted
12: Perform administrator: logistic regression
13: Check most suited results to be employed as diagnosis

4. Experimental Results

This section illustrates the experimental design and the results achieved by the proposed approach. The experimentation is carried out in two phases: solving continuous optimization functions, and solving a well-known discrete optimization problem, the multidimensional knapsack problem. Each phase describes the optimization problem tackled in detail and compares performance against reported SOTA results. The same configuration of LMPB parameters was employed in both phases, as illustrated in Table 4.

4.1. Continuous Optimization Problem

In this work, in order to test the performance on continuous optimization problems, a set of 15 continuous functions, illustrated in Table 5, is selected to be tackled by LMPB. They cover three main categories: unimodal [39], multimodal [40], and fixed-dimension multimodal [39,40]. The unimodal functions comprise $f_1$ to $f_4$ and correspond to the Sphere, Schwefel No. 2.22, Schwefel No. 1.2, and Generalised Rosenbrock functions. The detailed description is as follows.
$f_1(x) = f(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{n} x_i^2$

$f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$

$f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$

$f_4(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$
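The four unimodal benchmarks translate directly to code; a short sketch, useful as a sanity check against the definitions above (each has its global minimum of 0 at the usual optimum).

```python
def f1_sphere(x):
    return sum(v * v for v in x)

def f2_schwefel222(x):
    s, p = 0.0, 1.0
    for v in x:
        s += abs(v)
        p *= abs(v)
    return s + p

def f3_schwefel12(x):
    total = 0.0
    partial = 0.0
    for v in x:                 # partial holds the running sum x_1 + ... + x_i
        partial += v
        total += partial ** 2
    return total

def f4_rosenbrock(x):
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))
```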
Regarding multimodal functions, they include f 5 to f 9 and correspond to Generalised Schwefel No.2.26, Generalised Rastrigin, Ackley, Generalised Griewank, and Generalised Penalised Functions. The detailed description is as follows.
$f_5(x) = \sum_{i=1}^{n} -x_i \sin\left( \sqrt{|x_i|} \right)$

$f_6(x) = 10n + \sum_{i=1}^{n} \left( x_i^2 - 10 \cos(2 \pi x_i) \right)$

$f_7(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2 \pi x_i) \right) + 20 + e$

$f_8(x) = 1 + \sum_{i=1}^{n} \frac{x_i^2}{4000} - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right)$

$f_9(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$
where $u(x_i, a, k, m)$ is the penalty term

$u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m & \text{if } x_i > a \\ 0 & \text{if } -a \le x_i \le a \\ k (-x_i - a)^m & \text{if } x_i < -a \end{cases}$

and $y_i = 1 + \frac{1}{4}(x_i + 1)$.
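A useful property of several multimodal benchmarks is that the global minimum value 0 is attained at the origin, which gives a quick correctness check for any implementation; a sketch for $f_6$–$f_8$.

```python
import math

def f6_rastrigin(x):
    n = len(x)
    return 10 * n + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def f7_ackley(x):
    n = len(x)
    term1 = -20 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
    term2 = -math.exp(sum(math.cos(2 * math.pi * v) for v in x) / n)
    return term1 + term2 + 20 + math.e

def f8_griewank(x):
    s = sum(v * v for v in x) / 4000
    p = 1.0
    for i, v in enumerate(x, start=1):   # 1-based index under the sqrt
        p *= math.cos(v / math.sqrt(i))
    return 1 + s - p
```

At the origin, Ackley's two exponentials cancel against the constants ($-20 - e + 20 + e = 0$), which is why the `+ 20 + e` offset appears in the definition.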
Regarding multimodal functions with fixed-dimension, they include f 10 to f 15 and correspond to Shekel’s Foxholes, Six-hump Camel Back, Branin, Goldstein-Price, Hartman No.1, and Hartman No.2 functions. The detailed description is as follows.
$f_{10}(x) = \left[ \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{i,j})^6} \right]^{-1}$

where:

$(a_{i,j}) = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$
$$f_{11}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$$

$$f_{12}(x) = \left( x_2 - \frac{5.1 x_1^2}{4 \pi^2} + \frac{5 x_1}{\pi} - 6 \right)^2 + 10 \left( 1 - \frac{1}{8 \pi} \right) \cos(x_1) + 10$$

$$f_{13}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 \left( 19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2 \right) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 \left( 18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2 \right) \right]$$
$$f_{14}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$$
where the values of a, c, and p are tabulated in Table 6.
$$f_{15}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$$
where the values of a, c, and p are tabulated in Table 7.
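Two of the fixed-dimension functions can be sketched directly from their closed forms (function names are our own):

```python
def six_hump_camel(x1, x2):
    """f11: Six-Hump Camel Back; minimum about -1.0316."""
    return (4 * x1**2 - 2.1 * x1**4 + x1**6 / 3
            + x1 * x2 - 4 * x2**2 + 4 * x2**4)

def goldstein_price(x1, x2):
    """f13: Goldstein-Price; minimum 3 at (0, -1)."""
    a = 1 + (x1 + x2 + 1)**2 * (19 - 14*x1 + 3*x1**2 - 14*x2 + 6*x1*x2 + 3*x2**2)
    b = 30 + (2*x1 - 3*x2)**2 * (18 - 32*x1 + 12*x1**2 + 48*x2 - 36*x1*x2 + 27*x2**2)
    return a * b
```

Evaluating them at the optima listed in Table 5 recovers the reported optimum values.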

4.1.1. Algorithms Used and Results Comparison

Regarding the results achieved, we carry out multiple comparisons in order to evaluate the current performance, possible short-term improvements, and long-term evolutions in the design. Firstly, in Table 8, Table 9 and Table 10, we illustrate results reported by SOTA MH which have proved to reach good performance on this set of functions [41,42,43]. They include particle swarm optimization (PSO) [44], the gravitational search algorithm (GSA) [45], differential evolution (DE) [46], the whale optimization algorithm (WOA) [41], vapor–liquid equilibrium (VLE) [42], and INMDA, a hybrid of the Nelder–Mead algorithm and the dragonfly algorithm specifically designed for this type of benchmark [47]. Some general observations can be made. For instance, most of the reported standard deviations (StdDev) are small, which can be interpreted as stagnation in local optima. Also, INMDA outperforms the others on most of the reported average (Avg) values, which suggests interesting ideas about hybridizing stochastic features into an exact method. In this phase, we can observe LMPB achieving competitive results; however, the computed standard deviations (StdDev) show high values on f4, f5, f6, and f7, which indicates potential room for improvement, for instance, by increasing the number of generations LMPB works on.
Secondly, in Table 11 and Table 12, we illustrate detailed results achieved by the implementations of a hybrid framework that has also been specifically designed for this type of benchmark, named the learning-based linear balancer (LB2) [11], and a classic implementation of SHO assisted by IRace. The three approaches present competitive performance; however, some key elements need to be highlighted and discussed. The proposed LMPB reaches better best values (Best) than LB2 and SHO-IRace on the benchmark functions. After applying the Mann-Whitney test, LMPB maintains a difference on function f13 compared to LB2, and on f3, f6, f9, f10, and f13 compared to SHO-IRace. These differences in performance are statistically significant (p-values < 0.05); thus, LMPB is statistically superior on those functions. Also, contrary to the observed StdDev values computed for LB2, LMPB and SHO-IRace do not constantly stagnate in local optima. However, if we observe the average times (Avg time) achieved by the approaches, LB2 is clearly superior on all the functions solved. This issue can be explained by the design and complexity behind both architectures: the learning-based component employed by LB2 uses a simple linear model, whereas LMPB follows a more complex design in which multiple linear models work in parallel, which is the main reason for the extra computation time needed to meet the termination criterion. As a consequence, new ideas can be proposed, such as improving the termination criterion by implementing a learning-based component that smartly ends the search when no further improvement in the results can be achieved.
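The statistical comparison above relies on the Mann-Whitney U test applied to the per-run results of two algorithms. A self-contained sketch of the test, using the normal approximation without tie or continuity corrections (adequate for the independent runs collected per function), is:

```python
import math

def _avg_ranks(values):
    """Average 1-based ranks, with tied values sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def mann_whitney_u(sample_a, sample_b):
    """Two-sided Mann-Whitney U test (normal approximation).
    Returns (U statistic, p-value)."""
    n1, n2 = len(sample_a), len(sample_b)
    ranks = _avg_ranks(list(sample_a) + list(sample_b))
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2.0
    u = min(u1, n1 * n2 - u1)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    p_value = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided tail probability
    return u, p_value
```

A p-value below 0.05, as reported in the tables, rejects the hypothesis that the two samples come from the same distribution.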
Lastly, we can confirm that the interaction between machine learning and MH outperforms the classic approach: profiting from the dynamically generated data can improve the adaptiveness and performance of the methods employed.

4.1.2. Overall Discussion

The comparison against SOTA illustrates that no method is capable of tackling every optimization problem better than all the others; this also implies that it is highly difficult to design a perfect component that keeps a suitable balance in the solving strategy across all optimization problems. On the other hand, after carefully analyzing the results achieved and the performance displayed by the proposed approach in continuous spaces, positive prospects for future research can be highlighted. Firstly, achieving competitive performance on the continuous benchmark means that LMPB successfully balanced intensification and diversification and avoided local optima.
  • Exploitation analysis: unimodal functions are suitable for benchmarking this aspect; the good results achieved can be interpreted as LMPB performing successfully in terms of exploiting optimum values.
  • Exploration analysis: multimodal functions are suitable for benchmarking this aspect; the competitive performance proves the merits of LMPB in terms of exploration and local minima avoidance.
It has been shown that the interaction of multiple optimization tools brings new possibilities for solving hard optimization problems. The complexity of the initial design step makes it an arduous task: the aim is to select certain useful (problem-related) methods, identify their potential drawbacks, and mitigate those drawbacks with another method. However, the positive and negative features of every method involved need to be clearly understood, which makes this a task for experienced researchers. On the other hand, the incorporation of several methods can increase the usage of computational resources, which is closely related to the complexity of the framework/architecture design. In this experimental test, compared to other approaches, the average solving time was higher, and the complexity of the implementation is an issue. In this regard, it is well known that there is no assurance that a technique will perform equally on different problems; thus, experimentation with several methods would open major new challenges. Also, in order to tackle the increase in computational time, interesting ideas can be pursued, such as improving the termination criterion or employing sophisticated techniques at the implementation level. Regarding the optimization of continuous problems, two challenging topics will be considered: tackling more complex functions, such as IEEE CEC composite functions and higher-dimensional ones, and tackling real-world problems.

4.2. Discrete Optimization Problem

In this work, in order to test the performance of the proposed approach on discrete optimization problems, the Multidimensional Knapsack Problem (MKP) was selected. In this regard, six different instance sets from Beasley's OR-Library were employed. The details of the solved benchmark are illustrated in Table 13.
The Multidimensional Knapsack Problem (MKP) is an NP-hard problem and can be considered the generalized form of the classic Knapsack Problem (KP). The main objective of the MKP is to search for a subset of given objects that maximizes the total profit while satisfying all resource constraints. The KP is a well-known optimization problem that has been applied in multiple real-world fields, such as cryptography, allocation problems, scheduling, and production [48,49]. The model can be stated as follows.
$$\text{Maximize} \quad \sum_{j=1}^{n} c_j x_j$$

$$\text{Subject to} \quad \sum_{j=1}^{n} a_{ij} x_j \le b_i, \quad i \in M = \{1, 2, \ldots, m\}$$

$$x_j \in \{0, 1\}, \quad j \in N = \{1, 2, \ldots, n\}$$
where n is the number of items and m is the number of knapsack constraints with capacities b_i. Each item j consumes a_ij units of resource in the ith knapsack and yields c_j units of profit upon inclusion. The goal is to find a subset of items that yields maximum profit without exceeding the resource capacities. Additionally, since SHO was initially designed to work in a continuous space, a transformation of the domain is needed to tackle problems such as the MCDP, SCP, and MKP. In this work, this task is performed by applying binarization strategies, where each strategy is composed of a transfer function [50] and a discretization method. In this regard, we follow the strategy proposed in [51], which employs the V4 transfer function together with elitist discretization.
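A minimal sketch of this binarization step is given below. It is one plausible implementation, not the exact code of [51]: the V4 member of the V-shaped transfer family maps a continuous component to a probability, and the elitist rule copies the corresponding bit from the best agent with that probability (function names are our own):

```python
import math
import random

def v4_transfer(x):
    """V4 transfer function from the V-shaped family [50]:
    maps a continuous component to a value in [0, 1)."""
    return abs((2.0 / math.pi) * math.atan((math.pi / 2.0) * x))

def elitist_binarize(continuous_position, best_binary, rng=random):
    """Elitist discretization (a sketch of the strategy in [51]):
    each bit is copied from the best agent with probability
    V4(x_j), and set to 0 otherwise."""
    return [best_binary[j] if rng.random() < v4_transfer(x_j) else 0
            for j, x_j in enumerate(continuous_position)]
```

Components with large magnitude are thus likely to inherit the best agent's bit, while components near zero are reset, which keeps the search biased toward the incumbent solution.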

4.2.1. Algorithms Used and Results Comparison

Regarding the results achieved, we carry out multiple comparisons in order to evaluate the current performance, possible short-term improvements, and long-term evolutions in the design. The reported approaches employed for the comparison include the filter-and-fan heuristic (F&F) [52], a binary version of the PSO algorithm (3R-BPSO) [53], and a hybrid quantum particle swarm optimization (QPSO) algorithm [54]. These methods were designed to solve the MKP efficiently. For instance, the 3R-BPSO algorithm employs three repair operators to fix infeasible solutions generated at run-time. Table 14 illustrates the performance reported by the SOTA methods, where the RPD value represents the relative percentage deviation, computed as follows.
$$\mathrm{RPD} = \frac{(S - S_{opt})}{S_{opt}} \times 100$$
The RPD value helps us understand the distance between the best value reached (Best) and the optimum (Opt) value for each instance. If we observe the illustrated results, the proposed LMPB achieves equal or better performance than the SOTA when solving the instances mknapcb1, mknapcb2, mknapcb4, and mknapcb5. In Table 15, we illustrate the results achieved by the proposed LMPB versus the implemented SHO assisted by IRace. Regarding the general performance, if we observe the RPD values, the proposed approach is clearly superior, achieving 20 optimum values versus 0 reached by SHO-IRace. This is confirmed after applying the Mann-Whitney test, which shows statistical significance (p-values < 0.05) for the performance achieved on instance 5.100.04 from mknapcb1 and on all instances from mknapcb2, mknapcb4, and mknapcb5 in comparison to SHO-IRace. In Table 16, Table 17 and Table 18, we illustrate the results achieved by the proposed LMPB versus the classic implementations of SHO, TS, and SA. Regarding the general performance, if we observe the RPD values, the three classic versions fall behind the proposed approach. This is also confirmed after applying the Mann-Whitney test, which shows statistical significance (p-values < 0.05) on all instances in comparison to SHO, TS, and SA. We highlight that none of the approaches seems to stagnate in local optima (StdDev), which shows how solid and well balanced these methods were initially defined (especially the classic ones). Also, SHO, TS, SA, and SHO-IRace reported considerably better solving times than LMPB on almost all instances. It is a fact that hybrid approaches are leading the current optimization field and are a better answer to complex problems where adaptability in the search space is needed, a key issue to consider in the future.
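For reference, the RPD formula above is a one-line helper (the function name is ours):

```python
def rpd(reached, optimum):
    """Relative percentage deviation of a reached objective value
    from the known optimum; 0 means the optimum was matched."""
    return (reached - optimum) / optimum * 100.0
```

For a maximization problem such as the MKP, reached values below the optimum give a negative RPD, and a magnitude close to 0 indicates near-optimal performance.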
The issue with hybrids is the selection of algorithms. For instance, it is well known that the no-free-lunch theorem applies to MH: there is no certainty that an MH will achieve equal performance on different kinds of problems. An analogous situation holds for ML techniques, where performance is not guaranteed and the complexity of a supervised learning technique versus a deep learning technique is hard to measure.

4.2.2. Overall Discussion

In this experimental test solving discrete optimization problems, the good overall performance has given us new ideas for fully tackling this domain. Firstly, we observed the same phenomena illustrated in the continuous experimentation: competitive performance was achieved against specifically designed approaches. Regarding the performance of SHO, TS, and SA, the achieved results illustrate great deficiencies in comparison to the optimum values; also, slightly better values were reached with the assistance of IRace. In addition to the observations presented in Section 4.1.2, it is interesting to highlight the change of domain applied to the movement operators of SHO. In the literature, several studies have highlighted the good performance of continuous MH solving discrete optimization problems in comparison to discrete MH [50,51,55]. In this work, we employed the binarization strategy based on the V4 transfer function and elitist discretization; however, several other combinations could be tested in order to possibly achieve better performance. This binarization issue is a challenging candidate to be tackled by a smart component that would offer multiple domain-transformation options to the search at run-time.

5. Conclusions and Future Work

In this paper, a competitive learning-based architecture is proposed, and well-known methods and techniques are employed to design a novel hybrid approach capable of tackling discrete and continuous optimization problems. The main objective behind the proposed design is the interaction between MH and machine learning, where LMPB follows a population-based solving strategy assisted by multiple linear models that profit from the dynamic data generated at run-time. Regarding the performance observed through the experimentation phase, LMPB achieved competitive results on both discrete and continuous optimization problems. In this regard, the proposed architecture was compared against specially designed methods which have proved to perform well on such problems, while LMPB employed a single configuration set of parameters for both cases, which makes the development of this approach an attractive topic worth researching. Nevertheless, it is important to highlight issues observed in the testing which are potential paths for future improvement. Firstly, the complexity of implementing the architecture involves two topics: the MH algorithm and the learning method employed. In this first attempt at proposing LMPB, we instantiated SHO as a potential alternative; however, it is possible to instantiate multiple algorithms in order to define a more complex component of the architecture. Also, the learning model employed is a key issue, which impacts the solving time needed to meet the termination criterion. In this regard, the linear model proved to work for LMPB; however, several other learning methods are suitable for regression. Thus, multiple experiments need to be carried out in order to find better options to improve the performance and adaptiveness of the architecture. Concerning the increase in solving time, the complexity behind the architecture and the mechanisms employed are the key issues.
Thus, as the results improve, it is worth working on this optimization issue (the termination criterion). Regarding future scope, the focus is on improving modules 1 and 3. In module 1, we want to implement new population-based MH in order to have more options for applying intensification and diversification. For instance, a possible idea is illustrated in Figure 5, where module 1 would manage two large groups of movement operators from SHO, the Crow Search Algorithm (CSA), and the Shuffle Frog Leaping Algorithm (SFLA), which are modern population-based MH. On the other hand, as mentioned in Section 4.2.2, we plan to add the capability to try several binarization strategies in order to smartly guide the transformation of the variables' domain. In module 3, the aim is to implement other regression methods, such as SVM, deep learning approaches, and so on. The final objective is to achieve rich adaptability given the most fitted method to perform prognostics at run-time.

Author Contributions

Formal analysis, E.V., P.C., J.P. and R.S.; investigation, E.V., P.C., J.P., R.S., B.C. and C.C.; resources, R.S.; software, E.V. and P.C.; validation, B.C. and C.C.; writing—original draft, E.V., R.S. and B.C.; writing—review and editing, E.V. and R.S. All authors have read and agreed to the published version of the manuscript.

Funding

Ricardo Soto is supported by Grant CONICYT/FONDECYT/REGULAR/1190129. Broderick Crawford is supported by Grant ANID/FONDECYT/REGULAR/1210810, and Emanuel Vega is supported by National Agency for Research and Development ANID/Scholarship Program/DOCTORADO NACIONAL/2020-21202527.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analysed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Yang, Y.; Zhang, Y.; Meng, X. A data-driven approach for optimizing the EV charging stations network. IEEE Access 2020, 8, 118572–118592.
  2. Wu, Z.; Hu, J.; Ai, X.; Yang, G. Data-driven approaches for optimizing EV aggregator power profile in energy and reserve market. Int. J. Electr. Power Energy Syst. 2021, 129, 106808.
  3. Wei, Y.; Zhang, X.; Shi, Y.; Xia, L.; Pan, S.; Wu, J.; Han, M.; Zhao, X. A review of data-driven approaches for prediction and classification of building energy consumption. Renew. Sustain. Energy Rev. 2018, 82, 1027–1047.
  4. Khajehzadeh, M.; Taha, M.R.; El-Shafie, A.; Eslami, M. A survey on meta-heuristic global optimization algorithms. Res. J. Appl. Sci. Eng. Technol. 2011, 3, 569–578.
  5. Stork, J.; Eiben, A.E.; Bartz-Beielstein, T. A new taxonomy of global optimization algorithms. Nat. Comput. 2020, 21, 1–24.
  6. Searle, S.R.; Gruber, M.H. Linear Models; John Wiley & Sons: Hoboken, NJ, USA, 2016.
  7. Hastie, T.J.; Pregibon, D. Generalized linear models. In Statistical Models in S; Routledge: London, UK, 2017; pp. 195–247.
  8. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
  9. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70.
  10. Luo, Q.; Li, J.; Zhou, Y.; Liao, L. Using spotted hyena optimizer for training feedforward neural networks. Cogn. Syst. Res. 2021, 65, 1–16.
  11. Vega, E.; Soto, R.; Crawford, B.; Peña, J.; Castro, C. A learning-based hybrid framework for dynamic balancing of exploration-exploitation: Combining regression analysis and metaheuristics. Mathematics 2021, 9, 1976.
  12. Glover, F. Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 1986, 13, 533–549.
  13. Kirkpatrick, S. Optimization by simulated annealing: Quantitative studies. J. Stat. Phys. 1984, 34, 975–986.
  14. Talbi, E.G. Combining metaheuristics with mathematical programming, constraint programming and machine learning. Ann. Oper. Res. 2016, 240, 171–215.
  15. Song, H.; Triguero, I.; Özcan, E. A review on the self and dual interactions between machine learning and optimisation. Prog. Artif. Intell. 2019, 8, 143–165.
  16. Talbi, E.G. Machine learning into metaheuristics: A survey and taxonomy. ACM Comput. Surv. 2021, 54, 1–32.
  17. Jourdan, L.; Dhaenens, C.; Talbi, E.G. Using datamining techniques to help metaheuristics: A short survey. In International Workshop on Hybrid Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2006; pp. 57–69.
  18. Jin, Y. A comprehensive survey of fitness approximation in evolutionary computation. Soft Comput. 2005, 9, 3–12.
  19. Hong, T.P.; Wang, H.S.; Chen, W.C. Simultaneously applying multiple mutation operators in genetic algorithms. J. Heuristics 2000, 6, 439–455.
  20. Ramsey, C.L.; Grefenstette, J.J. Case-Based Initialization of Genetic Algorithms. In Proceedings of the 5th International Conference on Genetic Algorithms, Urbana-Champaign, IL, USA, 1 June 1993; pp. 84–91.
  21. Dalboni, F.L.; Ochi, L.S.; Drummond, L.M.A. On improving evolutionary algorithms by using data mining for the oil collector vehicle routing problem. In Proceedings of the International Network Optimization Conference, Evry/Paris, France, 27–29 October 2003; pp. 182–188.
  22. Santos, H.G.; Ochi, L.S.; Marinho, E.H.; Drummond, L.M.D.A. Combining an evolutionary algorithm with data mining to solve a single-vehicle routing problem. Neurocomputing 2006, 70, 70–77.
  23. Calvet, L.; de Armas, J.; Masip, D.; Juan, A.A. Learnheuristics: Hybridizing metaheuristics with machine learning for optimization with dynamic inputs. Open Math. 2017, 15, 261–280.
  24. Jong, K.D. Parameter setting in EAs: A 30 year perspective. In Parameter Setting in Evolutionary Algorithms; Springer: Berlin/Heidelberg, Germany, 2007; pp. 1–18.
  25. Karimi-Mamaghan, M.; Mohammadi, M.; Meyer, P.; Karimi-Mamaghan, A.M.; Talbi, E.G. Machine Learning at the service of Meta-heuristics for solving Combinatorial Optimization Problems: A state-of-the-art. Eur. J. Oper. Res. 2022, 296, 393–422.
  26. Talbi, E.G. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009; Volume 74.
  27. Zennaki, M.; Ech-Cherif, A. A new machine learning based approach for tuning metaheuristics for the solution of hard combinatorial optimization problems. J. Appl. Sci. 2010, 10, 1991–2000.
  28. Trindade, Á.R.; Campelo, F. Tuning metaheuristics by sequential optimisation of regression models. Appl. Soft Comput. 2019, 85, 105829.
  29. Caserta, M.; Rico, E.Q. A cross entropy-Lagrangean hybrid algorithm for the multi-item capacitated lot-sizing problem with setup times. Comput. Oper. Res. 2009, 36, 530–548.
  30. Soto, R.; Crawford, B.; Vega, E.; Gómez, A.; Gómez-Pulido, J.A. Solving the Set Covering Problem Using Spotted Hyena Optimizer and Autonomous Search. In Advances and Trends in Artificial Intelligence. From Theory to Practice; IEA/AIE 2019; Wotawa, F., Friedrich, G., Pill, I., Koitz-Hristov, R., Ali, M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; Volume 11606.
  31. Soto, R.; Crawford, B.; González, F.; Vega, E.; Castro, C.; Paredes, F. Solving the Manufacturing Cell Design Problem Using Human Behavior-Based Algorithm Supported by Autonomous Search. IEEE Access 2019, 7, 132228–132239.
  32. Egwim, C.N.; Egunjobi, O.O.; Gomes, A.; Alaka, H. A Comparative Study on Machine Learning Algorithms for Assessing Energy Efficiency of Buildings. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Online, 13–17 September 2021; Springer: Cham, Switzerland, 2021; pp. 546–566.
  33. Menard, S. Applied Logistic Regression Analysis; Sage: Newcastle upon Tyne, UK, 2002; Volume 106.
  34. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288.
  35. McDonald, G.C. Ridge regression. Wiley Interdiscip. Rev. Comput. Stat. 2009, 1, 93–100.
  36. Akwimbi, J. Modelling The Growth of Pension Funds Using Generalized Linear Model (Gamma Regression). Ph.D. Thesis, University of Nairobi, Nairobi, Kenya, 2014.
  37. Yu, L.; Ma, X.; Wu, W.; Wang, Y.; Zeng, B. A novel elastic net-based NGBMC (1, n) model with multi-objective optimization for nonlinear time series forecasting. Commun. Nonlinear Sci. Numer. Simul. 2021, 96, 105696.
  38. Gelman, A.; Carlin, J.B.; Stern, H.S.; Rubin, D.B. Bayesian Data Analysis; Chapman and Hall/CRC: Boca Raton, FL, USA, 1995.
  39. Digalakis, J.; Margaritis, K. On benchmarking functions for genetic algorithms. Int. J. Comput. Math. 2001, 77, 481–506.
  40. Yang, X. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84.
  41. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  42. Cortés-Toro, E.M.; Crawford, B.; Gómez-Pulido, J.A.; Soto, R.; Lanza-Gutiérrez, J.M. A New Metaheuristic Inspired by the Vapour-Liquid Equilibrium for Continuous Optimization. Appl. Sci. 2018, 8, 2080.
  43. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  44. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
  45. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248.
  46. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
  47. Xu, J.; Yan, F. Hybrid Nelder–Mead algorithm and dragonfly algorithm for function optimization and the training of a multilayer perceptron. Arab. J. Sci. Eng. 2019, 44, 3473–3487.
  48. Pisinger, D. The quadratic knapsack problem—A survey. Discret. Appl. Math. 2007, 155, 623–648.
  49. Horowitz, E.; Sahni, S. Computing partitions with applications to the knapsack problem. J. ACM 1974, 21, 277–292.
  50. Mirjalili, S.; Lewis, A. S-shaped versus v-shaped transfer functions for binary particle swarm optimization. Swarm Evol. Comput. 2013, 9, 1–14.
  51. Lanza-Gutierrez, J.M.; Crawford, B.; Soto, R.; Berrios, N.; Gomez-Pulido, J.A.; Paredes, F. Analyzing the effects of binarization techniques when solving the set covering problem through swarm optimization. Expert Syst. Appl. 2017, 70, 67–82.
  52. Khemakhem, M.; Haddar, B.; Chebil, K.; Hanafi, S. A Filter-and-Fan Metaheuristic for the 0–1 Multidimensional Knapsack Problem. Int. J. Appl. Metaheuristic Comput. 2012, 3, 43–63.
  53. Chih, M. Three pseudo-utility ratio-inspired particle swarm optimization with local search for multidimensional knapsack problem. Swarm Evol. Comput. 2018, 39, 279–296.
  54. Haddar, B.; Khemakhem, M.; Hanafi, S.; Wilbaut, C. A hybrid quantum particle swarm optimization for the multidimensional knapsack problem. Eng. Appl. Artif. Intell. 2016, 55, 1–13.
  55. Lemus-Romani, J.; Becerra-Rozas, M.; Crawford, B.; Soto, R.; Cisternas-Caneo, F.; Vega, E.; García, J. A novel learning-based binarization scheme selector for swarm algorithms solving combinatorial problems. Mathematics 2021, 9, 2887.
Figure 1. Graphic illustration of the proposed components for LMPB.
Figure 2. Graphic illustration of the proposed components for LMPB.
Figure 3. Graphic illustration of the applied thresholds through the search.
Figure 4. Graphic learning-based models.
Figure 5. Graphic illustration of the proposed improvement to be carried out in module 1.
Table 1. Population sizes employed as schemes.
ID         Amount of Agents
scheme 1   20
scheme 2   30
scheme 3   40
scheme 4   50
Table 2. Probabilities initially assigned to each scheme.
ID         Probability to Be Selected
scheme 1   0.25
scheme 2   0.25
scheme 3   0.25
scheme 4   0.25
Table 3. Modified probabilities for each scheme to be selected.
ID         Probability to Be Selected
scheme 1   0.20
scheme 2   0.20
scheme 3   0.40
scheme 4   0.20
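The probability-driven scheme selection illustrated in Tables 2 and 3 amounts to a roulette-wheel draw over the population sizes of Table 1. A minimal sketch (function name is ours):

```python
import random

def select_scheme(schemes, probabilities, rng=random):
    """Roulette-wheel selection of a population-size scheme.
    `schemes` and `probabilities` mirror Tables 1-3."""
    r = rng.random()
    cumulative = 0.0
    for scheme, p in zip(schemes, probabilities):
        cumulative += p
        if r < cumulative:
            return scheme
    return schemes[-1]  # guard against floating-point round-off
```

With the updated probabilities of Table 3, for instance, scheme 3 (40 agents) is drawn about twice as often as each of the other schemes.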
Table 4. Configuration parameters defined for LMPB.
Parameters               Values
Search agents            Scheme (20, 30, 40, 50)
Control parameter (h)    [5, 0]
M constant               [0.5, 1]
Number of generations    5000
α                        50
β                        1000
Table 5. Optimum values reported for the benchmark functions in the literature, with their corresponding solutions, and search subsets.
Function   Search Subsets                       Opt          Sol
f1(x)      [−100, 100]^30                       0            [0]^30
f2(x)      [−10, 10]^30                         0            [0]^30
f3(x)      [−100, 100]^30                       0            [0]^30
f4(x)      [−30, 30]^30                         0            [1]^30
f5(x)      [−500, 500]^30                       −12569.487   [420.9687]^30
f6(x)      [−5.12, 5.12]^30                     0            [0]^30
f7(x)      [−32, 32]^30                         0            [0]^30
f8(x)      [−600, 600]^30                       0            [0]^30
f9(x)      [−50, 50]^30                         0            [−1]^30
f10(x)     [−65.536, 65.536]^2                  1            [−32]^2
f11(x)     [−5, 5]^2                            −1.0316285   (0.08983, −0.7126) and (−0.08983, 0.7126)
f12(x)     [−5, 10] for x1 and [0, 15] for x2   0.397887     (−3.142, 12.275), (3.142, 2.275), and (9.425, 2.425)
f13(x)     [−2, 2]^2                            3            (0, −1)
f14(x)     [0, 1]^3                             −3.86        (0.114, 0.556, 0.852)
f15(x)     [0, 1]^6                             −3.32        (0.201, 0.150, 0.477, 0.275, 0.311, 0.657)
Table 6. Values of a i j , c i , and p i j for function f 14 (x); n = 3 and j = 1, 2, 3.
i    a_i1   a_i2   a_i3   c_i    p_i1      p_i2     p_i3
1    3      10     30     1      0.3689    0.1170   0.2673
2    0.1    10     35     1.2    0.4699    0.4387   0.7470
3    3      10     30     3      0.1091    0.8732   0.5547
4    0.1    10     30     3.2    0.03815   0.5743   0.8828
Table 7. Values of a i j , c i , and p i j for function f 15 (x); n = 6 and j = 1, 2, …, 6.
i    a_i1   a_i2   a_i3   a_i4   a_i5   a_i6   c_i    p_i1    p_i2    p_i3    p_i4    p_i5    p_i6
1    10     3      17     3.5    1.7    8      1      0.131   0.169   0.556   0.012   0.828   0.588
2    0.05   10     17     0.1    8      14     1.2    0.232   0.413   0.830   0.373   0.100   0.999
3    3      3.5    1.7    10     17     8      3      0.234   0.141   0.352   0.288   0.304   0.665
4    17     8      0.05   10     0.1    14     3.2    0.404   0.882   0.873   0.574   0.109   0.038
Table 8. Results comparison in unimodal benchmark functions.
F     LMPB Avg/StdDev       WOA Avg/StdDev                       DE Avg/StdDev                            GSA Avg/StdDev                      PSO Avg/StdDev                          VLE Avg/StdDev                        INMDA Avg/StdDev
f1    0.0907 / 2.0386       0.0000 / 0.0000                      8.2000 × 10^−14 / 5.9000 × 10^−14        2.5300 × 10^−16 / 0.0000            1.3600 × 10^−4 / 2.0200 × 10^−4         4.4989 × 10^−7 / 1.413 × 10^−6        0.0000 / 0.0000
f2    0.0346 / 0.5293       0.0000 / 0.0000                      1.5000 × 10^−9 / 9.9000 × 10^−10         5.5655 × 10^−2 / 0.1941             4.2144 × 10^−2 / 4.5421 × 10^−2         3.0840 × 10^−6 / 6.0498 × 10^−6       0.0000 / 0.0000
f3    0.0000 / 0.0000       5.3900 × 10^−7 / 2.9300 × 10^−6      6.8000 × 10^−11 / 7.4000 × 10^−11        8.9353 × 10^2 / 3.1896 × 10^2       70.126 / 22.119                         5.2020 / 0.7986                       0.0000 / 0.0000
f4    28.5342 / 70.0454     27.866 / 0.7636                      0.0000 / 0.0000                          67.543 / 62.2259                    96.718 / 60.1167                        9.1993 / 7.400                        0.0000 / 0.0000
Table 9. Results comparison in multimodal benchmark functions.
| F | LMPB Avg | LMPB StdDev | WOA Avg | WOA StdDev | DE Avg | DE StdDev | GSA Avg | GSA StdDev | PSO Avg | PSO StdDev | VLE Avg | VLE StdDev | INMDA Avg | INMDA StdDev |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| f5 | −914.1975 | 4974.5174 | −5.0808 × 10^3 | 6.9580 × 10^2 | −1.1080 × 10^4 | 5.7470 × 10^2 | −2.8211 × 10^3 | 4.9304 × 10^2 | −4.8413 × 10^3 | 1.1528 × 10^3 | −1.2566 × 10^4 | 68.705 | −2245.1500 | 2.8400 |
| f6 | 0.1865 | 5.2889 | 0.0000 | 0.0000 | 69.200 | 38.800 | 25.968 | 7.470 | 46.704 | 11.629 | 34.5830 | 17.8860 | 0.0000 | 0.0000 |
| f7 | 7.6581 | 9.7217 | 7.4043 | 9.8976 | 9.7000 × 10^−8 | 4.2000 × 10^−8 | 6.2087 × 10^−2 | 0.23628 | 0.27602 | 0.50901 | 3.1704 | 3.9211 | 0.0000 | 1.6200 × 10^−16 |
| f8 | 0.0056 | 0.1538 | 2.8900 × 10^−4 | 1.5860 × 10^−3 | 0.0000 | 0.0000 | 27.702 | 5.0403 | 9.2150 × 10^−3 | 7.7240 × 10^−3 | 0.5074 | 0.5041 | 0.0000 | 0.0000 |
| f9 | 1.8286 | 1.5985 × 10^−9 | 0.3397 | 0.2149 | 7.9000 × 10^−15 | 8.0000 × 10^−15 | 1.7996 | 0.95114 | 6.9170 × 10^−3 | 2.6301 × 10^−2 | 0.2369 | 0.2877 | 0.0000 | 0.0000 |
Table 10. Results comparison in multimodal benchmark functions with fixed-dimension.
| F | LMPB Avg | LMPB StdDev | WOA Avg | WOA StdDev | DE Avg | DE StdDev | GSA Avg | GSA StdDev | PSO Avg | PSO StdDev | VLE Avg | VLE StdDev | INMDA Avg | INMDA StdDev |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| f10 | 11.6858 | 7.8237 | 2.1120 | 2.4986 | 0.99800 | 3.3000 × 10^−16 | 5.8598 | 3.8313 | 3.6272 | 2.5608 | 0.99800 | 2.5294 × 10^−7 | N/A | N/A |
| f11 | 0.0001 | 0.0022 | −1.0316 | 4.2000 × 10^−7 | −1.0316 | 3.1000 × 10^−13 | −1.0316 | 4.8800 × 10^−16 | −1.0316 | 6.2500 × 10^−16 | −1.0315 | 1.8408 × 10^−4 | N/A | N/A |
| f12 | −1.3549 | 0.2814 | 0.39791 | 2.7000 × 10^−5 | 0.39789 | 9.9000 × 10^−9 | 0.39789 | 0.0000 | 0.39789 | 0.0000 | 0.39815 | 4.5697 × 10^−4 | N/A | N/A |
| f13 | 0.0001 | 0.0022 | 3.0000 | 4.2200 × 10^−15 | 3.0000 | 2.0000 × 10^−15 | 3.0000 | 4.1700 × 10^−15 | 3.0000 | 1.3300 × 10^−15 | 3.0097 | 1.6256 × 10^−2 | N/A | N/A |
| f14 | −1.4299 | 0.7508 | −3.8562 | 2.7060 × 10^−3 | N/A | N/A | −3.8628 | 2.2900 × 10^−15 | −3.8628 | 2.5800 × 10^−15 | −3.8628 | 6.6880 × 10^−5 | N/A | N/A |
| f15 | −0.8621 | 0.4242 | −2.9811 | 0.37665 | N/A | N/A | −3.3178 | 2.3081 × 10^−2 | −3.2663 | 6.0516 × 10^−2 | −3.3179 | 2.1311 × 10^−2 | N/A | N/A |
Table 11. Detailed result comparison between the proposed LMPB and LB2.
| F | Opt | LMPB Best | LMPB Worst | LMPB Avg | LMPB StdDev | LMPB Avg Time (s) | LB2 Best | LB2 Worst | LB2 Avg | LB2 StdDev | LB2 Avg Time (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| f1 | 0 | 0 | 28.2786 | 0.0907 | 2.0386 | 150.1993 | 0 | 0 | 0 | 0 | 50.2377 |
| f2 | 0 | 0 | 14.7092 | 0.0346 | 0.5293 | 190.9703 | 0 | 0 | 0 | 0 | 80.7524 |
| f3 | 0 | 0 | 0 | 0 | 0 | 986.8423 | 0 | 0 | 0 | 0 | 96.3627 |
| f4 | 0 | 0 | 29.4957 | 28.5342 | 70.0454 | 296.1747 | 1.59197 × 10^−7 | 1.2262 × 10^−6 | 6.7549 × 10^−7 | 5.4204 × 10^−7 | 71.0024 |
| f5 | −12569.487 | −12569.487 | 9016.3258 | −914.1975 | 4974.5174 | 250.7817 | −1.2570 × 10^4 | −1.2567 × 10^4 | −1.2569 × 10^4 | 0.0014 | 110.3354 |
| f6 | 0 | 0 | 1.8934 | 0.1865 | 1.2189 | 217.4014 | 0 | 0 | 0 | 0 | 60.6482 |
| f7 | 0 | 0 | 20.0001 | 7.6581 | 9.7217 | 427.1252 | 4.4408 × 10^−16 | 4.4409 × 10^−16 | 4.4409 × 10^−16 | 0 | 24.9122 |
| f8 | 0 | 0 | 7.3880 | 0.0056 | 0.1538 | 255.7067 | 0 | 0 | 0 | 0 | 21.7758 |
| f9 | 0 | 1.8290 | 1.8290 | 1.8290 | 0 | 2223.4575 | 1.8285 | 1.8286 | 1.8286 | 1.5985 × 10^−9 | 24.9172 |
| f10 | 1 | 6.9407 | 12.7187 | 11.6858 | 7.8237 | 901.5922 | 1 | 1 | 1 | 0 | 17.5661 |
| f11 | −1.0316 | 0 | 0.0233 | 0.0001 | 0.0022 | 142.0010 | 0 | 0 | 0 | 0 | 7.5244 |
| f12 | 0.3979 | −1.1395 | −1.5122 | −1.3549 | 0.2814 | 23.0392 | 1.1905 | 2.0325 | 1.5436 | 0.4223 | 4.5528 |
| f13 | 3 | 0.0012 | 0 | 0.0001 | 0.0022 | 129.0010 | 32.6845 | 32.6845 | 32.6845 | 1.4854 × 10^−8 | 3.6846 |
| f14 | −3.86 | −2.0080 | −0.0554 | −1.4299 | 0.7508 | 229.6161 | −2.0081 | −2.0080 | −2.0081 | 5.0800 × 10^−10 | 7.1120 |
| f15 | −3.32 | −1.1676 | −0.0056 | −0.8621 | 0.4242 | 330.3406 | −2.1676 | −2.1676 | −2.1676 | 0 | 8.1145 |
Table 12. Detailed result comparison between the proposed LMPB and SHO-IRace.
| F | Opt | LMPB Best | LMPB Worst | LMPB Avg | LMPB StdDev | LMPB Avg Time (s) | SHO-IRace Best | SHO-IRace Worst | SHO-IRace Avg | SHO-IRace StdDev | SHO-IRace Avg Time (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| f1 | 0 | 0 | 28.2786 | 0.0907 | 2.0386 | 150.1993 | 0 | 86.4729 | 0.1002 | 2.1974 | 130.2574 |
| f2 | 0 | 0 | 14.7092 | 0.0346 | 0.5293 | 190.9703 | 0 | 22.2119 | 0.0362 | 0.5566 | 181.1410 |
| f3 | 0 | 0 | 0 | 0 | 0 | 986.8423 | 0 | 2118.0295 | 97.4849 | 352.2949 | 882.2675 |
| f4 | 0 | 0 | 29.4957 | 28.5342 | 70.0454 | 296.1747 | 0 | 188.6322 | 28.5221 | 69.3946 | 271.6308 |
| f5 | −12569.487 | −12569.487 | 9016.3258 | −914.1975 | 4974.5174 | 250.7817 | −12569.4862 | 9016.3365 | −925.5051 | 4981.0787 | 229.1431 |
| f6 | 0 | 0 | 1.8934 | 0.1865 | 1.2189 | 217.4014 | 0 | 2382.5545 | 0.2687 | 13.1434 | 163.7028 |
| f7 | 0 | 0 | 20.0001 | 7.6581 | 9.7217 | 427.1252 | 4.4408 × 10^−16 | 22.2358 | 7.3976 | 9.6549 | 325.4619 |
| f8 | 0 | 0 | 7.3880 | 0.0056 | 0.1538 | 255.7067 | 0 | 3.4690 | 0.0593 | 0.4755 | 195.3925 |
| f9 | 0 | 1.8290 | 1.8290 | 1.8290 | 0 | 2223.4575 | 35.5837 | 1766.7315 | 526.3003 | 410.1304 | 2060.7682 |
| f10 | 1 | 6.9407 | 12.7187 | 11.6858 | 7.8237 | 901.5922 | 12.7186 | 498.9434 | 13.1147 | 9.6306 | 855.9498 |
| f11 | −1.0316 | 0 | 0.0233 | 0.0001 | 0.0022 | 142.0010 | 0 | 0.1745 | 0.0001 | 0.0021 | 120.5488 |
| f12 | 0.3979 | −1.1395 | −1.5122 | −1.3549 | 0.2814 | 23.0392 | −1.1395 | −1.6328 | −1.4191 | 0.2372 | 21.7540 |
| f13 | 3 | 0.0012 | 0 | 0.0001 | 0.0022 | 129.0010 | 32.6846 | 635.1801 | 255.2925 | 237.1631 | 204.8439 |
| f14 | −3.86 | −2.0080 | −0.0554 | −1.4299 | 0.7508 | 229.6161 | −2.0080 | 0.0467 | −1.2319 | 0.7848 | 183.3399 |
| f15 | −3.32 | −1.1676 | −0.0056 | −0.8621 | 0.4242 | 330.3406 | −2.0080 | −1.6155 | −0.8480 | 0.4529 | 313.5484 |
Table 13. Configuration details from MKP instances employed in this work.
Table 13. Configuration details from MKP instances employed in this work.
IDTest ProblemOptimal Solutionnm
mknapcb15.100.00243811005
5.100.01242741005
5.100.02235511005
5.100.03235341005
5.100.04239911005
mknapcb25.250.00593122505
5.250.01614722505
5.250.02621302505
5.250.03594632505
5.250.04589512505
mknapcb35.500.001201485005
5.500.011178795005
5.500.021211315005
5.500.031208045005
5.500.041223195005
mknapcb410.100.002306410010
10.100.012280110010
10.100.022213110010
10.100.032277210010
10.100.042275110010
mknapcb510.250.005918725010
10.250.015878125010
10.250.025809725010
10.250.036100025010
10.250.045809225010
mknapcb610.500.0011782150010
10.500.0111924950010
10.500.0211921550010
10.500.0311882950010
10.500.0411653050010
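The RPD (%) columns in the result tables below are consistent with the usual relative percentage deviation for maximization problems, RPD = 100 · (Opt − Best) / Opt; for example, on instance 5.500.00 (Opt = 120148, LMPB Best = 101980) this yields 15.12, exactly as reported. A minimal sketch of that computation (the formula is inferred from the tabulated values, not quoted from the paper):

```python
def rpd(opt, best):
    # Relative Percentage Deviation for a maximization problem:
    # how far (in %) the best value found falls short of the known optimum.
    return 100.0 * (opt - best) / opt

# Instance 5.500.00: Opt = 120148, LMPB Best = 101980.
print(round(rpd(120148, 101980), 2))  # 15.12
```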
Table 14. Computational results achieved by LMPB and state-of-the-art approaches solving the MKP.
| Set | Test Problem | Opt | LMPB Best | LMPB Avg | LMPB RPD (%) | QPSO Best | QPSO Avg | QPSO RPD (%) | 3R-PSO Best | 3R-PSO Avg | 3R-PSO RPD (%) | F&F Best | F&F Avg | F&F RPD (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| mknapcb1 | 5.100.00 | 24381 | 24381 | 18193.2647 | 0.00 | 24381 | 24381 | 0.00 | 24381 | 24381 | 0.00 | 24381 | N/A | 0.00 |
| mknapcb1 | 5.100.01 | 24274 | 24274 | 17674.1159 | 0.00 | 24274 | 24274 | 0.00 | 24274 | 24274 | 0.00 | 24274 | N/A | 0.00 |
| mknapcb1 | 5.100.02 | 23551 | 23551 | 17860.9433 | 0.00 | 23551 | 23551 | 0.00 | 23538 | 23538 | 0.06 | 23551 | N/A | 0.00 |
| mknapcb1 | 5.100.03 | 23534 | 23534 | 19692.4754 | 0.00 | 23534 | 23534 | 0.00 | 23534 | 23508 | 0.00 | 23534 | N/A | 0.00 |
| mknapcb1 | 5.100.04 | 23991 | 23991 | 17863.3812 | 0.00 | 23991 | 23991 | 0.00 | 23991 | 23961 | 0.00 | 23991 | N/A | 0.00 |
| mknapcb2 | 5.250.00 | 59312 | 59312 | 46587.9561 | 0.00 | 59312 | 59312 | 0.00 | N/A | N/A | N/A | 59312 | N/A | 0.00 |
| mknapcb2 | 5.250.01 | 61472 | 61472 | 47299.2074 | 0.00 | 61472 | 61470 | 0.00 | N/A | N/A | N/A | 61468 | N/A | 0.01 |
| mknapcb2 | 5.250.02 | 62130 | 62130 | 49261.7206 | 0.00 | 62130 | 62130 | 0.00 | N/A | N/A | N/A | 62130 | N/A | 0.00 |
| mknapcb2 | 5.250.03 | 59463 | 59463 | 46365.1888 | 0.00 | 59427 | 59427 | 0.06 | N/A | N/A | N/A | 59436 | N/A | 0.05 |
| mknapcb2 | 5.250.04 | 58951 | 58951 | 47005.2385 | 0.00 | 58951 | 58951 | 0.00 | N/A | N/A | N/A | 58951 | N/A | 0.00 |
| mknapcb3 | 5.500.00 | 120148 | 101980 | 88110.0778 | 15.12 | 120130 | 120105 | 0.01 | 120141 | 102101 | 0.01 | 120134 | N/A | 0.01 |
| mknapcb3 | 5.500.01 | 117879 | 99901 | 90506.6091 | 15.25 | 117844 | 117834 | 0.03 | 117864 | 117825 | 0.01 | 117864 | N/A | 0.01 |
| mknapcb3 | 5.500.02 | 121131 | 102559 | 91014.0520 | 15.33 | 121112 | 121092 | 0.02 | 121129 | 121103 | 0.00 | 121131 | N/A | 0.00 |
| mknapcb3 | 5.500.03 | 120804 | 100864 | 91796.0122 | 16.50 | 120804 | 120740 | 0.00 | 120804 | 120722 | 0.00 | 120794 | N/A | 0.01 |
| mknapcb3 | 5.500.04 | 122319 | 102520 | 91771.7789 | 16.18 | 122319 | 122300 | 0.00 | 122319 | 122310 | 0.00 | 122319 | N/A | 0.00 |
| mknapcb4 | 10.100.00 | 23064 | 23064 | 22275.5321 | 0.00 | 23064 | 23064 | 0.00 | 23064 | 23050 | 0.00 | 23064 | N/A | 0.00 |
| mknapcb4 | 10.100.01 | 22801 | 22801 | 21295.6074 | 0.00 | 22801 | 22801 | 0.00 | 22801 | 22752 | 0.00 | 22801 | N/A | 0.00 |
| mknapcb4 | 10.100.02 | 22131 | 22131 | 20486.6556 | 0.00 | 22131 | 22131 | 0.00 | 22131 | 22119 | 0.00 | 22131 | N/A | 0.00 |
| mknapcb4 | 10.100.03 | 22772 | 22772 | 18785.5884 | 0.00 | 22772 | 22772 | 0.00 | 22772 | 22744 | 0.00 | 22772 | N/A | 0.00 |
| mknapcb4 | 10.100.04 | 22751 | 22751 | 22604.2587 | 0.00 | 22751 | 22751 | 0.00 | 22751 | 22651 | 0.00 | 22751 | N/A | 0.00 |
| mknapcb5 | 10.250.00 | 59187 | 59187 | 55818.9961 | 0.00 | 59182 | 59173 | 0.01 | N/A | N/A | N/A | 59164 | N/A | 0.04 |
| mknapcb5 | 10.250.01 | 58781 | 58781 | 55302.6930 | 0.00 | 58781 | 58733 | 0.00 | N/A | N/A | N/A | 58693 | N/A | 0.15 |
| mknapcb5 | 10.250.02 | 58097 | 58097 | 52907.7982 | 0.00 | 58097 | 58096 | 0.00 | N/A | N/A | N/A | 58094 | N/A | 0.01 |
| mknapcb5 | 10.250.03 | 61000 | 61000 | 57342.3073 | 0.00 | 61000 | 60986 | 0.00 | N/A | N/A | N/A | 60972 | N/A | 0.05 |
| mknapcb5 | 10.250.04 | 58092 | 58092 | 55037.2680 | 0.00 | 58092 | 58092 | 0.00 | N/A | N/A | N/A | 58092 | N/A | 0.00 |
| mknapcb6 | 10.500.00 | 117821 | 103226 | 93309.3655 | 12.38 | 117744 | 117733 | 0.07 | 117790 | 117699 | 0.03 | 117734 | N/A | 0.07 |
| mknapcb6 | 10.500.01 | 119249 | 105088 | 96823.8780 | 11.87 | 119177 | 119148 | 0.06 | 119155 | 119125 | 0.08 | 119181 | N/A | 0.06 |
| mknapcb6 | 10.500.02 | 119215 | 104870 | 96151.9076 | 12.03 | 119215 | 119146 | 0.00 | 119211 | 119094 | 0.00 | 119194 | N/A | 0.02 |
| mknapcb6 | 10.500.03 | 118829 | 104308 | 95338.5665 | 12.22 | 118775 | 118747 | 0.05 | 118813 | 118754 | 0.01 | 118784 | N/A | 0.04 |
| mknapcb6 | 10.500.04 | 116530 | 101380 | 92260.2844 | 13.00 | 116502 | 116449 | 0.02 | 116470 | 116509 | 0.05 | 116471 | N/A | 0.05 |
Table 15. Computational results achieved by LMPB and SHO-IRace solving the MKP.
| ID | Opt | LMPB Best | LMPB Worst | LMPB Avg | LMPB StdDev | LMPB RPD (%) | LMPB Avg Time (s) | SHO-IRace Best | SHO-IRace Worst | SHO-IRace Avg | SHO-IRace StdDev | SHO-IRace RPD (%) | SHO-IRace Avg Time (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5.100.00 | 24381 | 24381 | 17595 | 18193.2647 | 689.3522 | 0.00 | 4669.6999 | 20661 | 17595 | 18269.0889 | 719.6834 | 15.25 | 5872.1652 |
| 5.100.01 | 24274 | 24274 | 17401 | 17674.1159 | 522.1186 | 0.00 | 5697.6984 | 19792 | 17401 | 17680.8992 | 536.9595 | 18.46 | 5553.4974 |
| 5.100.02 | 23551 | 23551 | 17692 | 17860.9433 | 395.5785 | 0.00 | 4292.5007 | 20119 | 17692 | 17956.3902 | 485.5376 | 14.57 | 4467.5135 |
| 5.100.03 | 23534 | 23534 | 19685 | 19692.4754 | 49.3286 | 0.00 | 5347.2370 | 20703 | 19685 | 19692.8931 | 74.2092 | 12.02 | 3854.0709 |
| 5.100.04 | 23991 | 23991 | 17744 | 17863.3812 | 320.1172 | 0.00 | 5747.0107 | 19525 | 17744 | 17840.1698 | 265.9275 | 18.61 | 4897.6560 |
| 5.250.00 | 59312 | 59312 | 46049 | 46587.9561 | 858.5338 | 0.00 | 8670.5223 | 50256 | 46049 | 46612.2596 | 903.5159 | 15.26 | 6656.1823 |
| 5.250.01 | 61472 | 61472 | 46890 | 47299.2074 | 749.7909 | 0.00 | 7810.9763 | 51527 | 46890 | 47277.6690 | 738.8178 | 16.17 | 6568.5947 |
| 5.250.02 | 62130 | 62130 | 49237 | 49261.7206 | 163.3191 | 0.00 | 5671.1701 | 50292 | 49237 | 49257.9839 | 117.6427 | 19.05 | 4843.4766 |
| 5.250.03 | 59463 | 59463 | 42804 | 46365.1888 | 2137.5436 | 0.00 | 16606.7606 | 50890 | 42804 | 46275.6829 | 2190.8037 | 14.41 | 15333.6760 |
| 5.250.04 | 58951 | 58951 | 46870 | 47005.2385 | 369.0429 | 0.00 | 6987.2142 | 49893 | 46870 | 46979.8645 | 348.9194 | 15.36 | 6414.6560 |
| 5.500.00 | 120148 | 101980 | 73168 | 88110.0778 | 11544.9826 | 15.12 | 31594.7054 | 101400 | 73168 | 89634.3614 | 10969.3236 | 15.60 | 40985.2208 |
| 5.500.01 | 117879 | 99901 | 71265 | 90506.6091 | 11400.2546 | 15.25 | 41155.1138 | 99123 | 71265 | 90470.8571 | 11432.0737 | 15.91 | 41596.6308 |
| 5.500.02 | 121131 | 102559 | 74678 | 91014.0520 | 12735.6287 | 15.33 | 33245.1504 | 103579 | 74678 | 94113.1442 | 11512.8562 | 14.49 | 39693.0396 |
| 5.500.03 | 120804 | 100864 | 74715 | 91769.0122 | 10609.5044 | 16.50 | 44675.9107 | 101572 | 74715 | 91395.0128 | 10851.0576 | 15.92 | 39272.9026 |
| 5.500.04 | 122319 | 102520 | 74537 | 91771.7789 | 10591.1422 | 16.18 | 42645.1608 | 102057 | 74537 | 90647.5024 | 11272.3193 | 16.56 | 43738.2077 |
| 10.100.00 | 23064 | 23064 | 17298 | 22275.5321 | 670.6074 | 0.00 | 7179.9602 | 19751 | 17298 | 17766.0012 | 587.7123 | 14.36 | 8278.5790 |
| 10.100.01 | 22801 | 22801 | 17352 | 21295.5074 | 44.2336 | 0.00 | 6618.0995 | 19081 | 17352 | 17470.8750 | 284.2832 | 16.31 | 4660.8592 |
| 10.100.02 | 22131 | 22131 | 15699 | 20486.6556 | 948.5033 | 0.00 | 8081.3328 | 19342 | 15699 | 16531.9192 | 901.2227 | 12.60 | 5975.8820 |
| 10.100.03 | 22772 | 22772 | 18817 | 19795.5884 | 469.0794 | 0.00 | 6866.3064 | 20017 | 18817 | 18861.1892 | 148.7656 | 12.09 | 5070.9132 |
| 10.100.04 | 22751 | 22751 | 17564 | 22604.2587 | 436.9923 | 0.00 | 6945.8575 | 19667 | 17564 | 17804.0787 | 443.9254 | 13.55 | 5626.2527 |
| 10.250.00 | 59187 | 59187 | 48086 | 55818.9961 | 11675.8756 | 0.00 | 9550.5818 | 52250 | 48086 | 48545.8764 | 815.5280 | 11.72 | 7242.5197 |
| 10.250.01 | 58781 | 58781 | 43173 | 55302.6930 | 5750.7501 | 0.00 | 13587.1938 | 50869 | 43173 | 46824.4194 | 3789.4850 | 13.46 | 8701.9378 |
| 10.250.02 | 58097 | 58097 | 45538 | 52907.7982 | 10827.5062 | 0.00 | 15849.1611 | 50261 | 45538 | 46420.7704 | 1018.7772 | 13.48 | 13069.1670 |
| 10.250.03 | 61000 | 61000 | 47587 | 57342.3073 | 10802.1653 | 0.00 | 11107.4894 | 52286 | 47587 | 48855.7066 | 1996.1527 | 14.28 | 6072.3390 |
| 10.250.04 | 58092 | 58092 | 47703 | 55037.2680 | 11251.2648 | 0.00 | 9075.8829 | 51403 | 47703 | 48273.0614 | 868.3146 | 11.51 | 7042.7040 |
| 10.500.00 | 117821 | 103226 | 74746 | 93309.3655 | 13265.1931 | 12.38 | 33763.7203 | 103608 | 74746 | 91656.5522 | 13723.0371 | 12.06 | 30478.6163 |
| 10.500.01 | 119249 | 105088 | 76531 | 96823.8780 | 12237.0902 | 11.87 | 38343.9976 | 104996 | 76531 | 97534.9325 | 11834.3923 | 11.95 | 42585.3414 |
| 10.500.02 | 119215 | 104870 | 74620 | 96151.9076 | 11857.6879 | 12.03 | 46075.8874 | 105329 | 74620 | 95092.7730 | 12464.6680 | 11.64 | 37875.6117 |
| 10.500.03 | 118829 | 104308 | 74845 | 95338.5665 | 11119.6133 | 12.22 | 47983.9497 | 103663 | 74845 | 94803.7957 | 11431.0257 | 12.76 | 43169.6069 |
| 10.500.04 | 116530 | 101380 | 74441 | 92260.2844 | 10578.1152 | 13.00 | 43098.1306 | 101869 | 74441 | 92366.4123 | 10647.4900 | 12.58 | 43326.2896 |
Table 16. Computational results achieved by LMPB and SHO solving the MKP.
| ID | Opt | LMPB Best | LMPB Worst | LMPB Avg | LMPB StdDev | LMPB RPD (%) | LMPB Avg Time (s) | SHO Best | SHO Worst | SHO Avg | SHO StdDev | SHO RPD (%) | SHO Avg Time (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5.100.00 | 24381 | 24381 | 17595 | 18193.2647 | 689.3522 | 0.00 | 4669.6999 | 17950 | 16391 | 17109.1000 | 658.0524 | 26.36 | 210.4372 |
| 5.100.01 | 24274 | 24274 | 17401 | 17674.1159 | 522.1186 | 0.00 | 5697.6984 | 17854 | 16486 | 17055.0500 | 541.3379 | 26.44 | 181.5390 |
| 5.100.02 | 23551 | 23551 | 17692 | 17860.9433 | 395.5785 | 0.00 | 4292.5007 | 17886 | 16256 | 17297.0000 | 639.4370 | 24.05 | 150.7572 |
| 5.100.03 | 23534 | 23534 | 19685 | 19692.4754 | 49.3286 | 0.00 | 5347.2370 | 18445 | 17889 | 17963.2500 | 161.9119 | 21.62 | 190.4938 |
| 5.100.04 | 23991 | 23991 | 17744 | 17863.3812 | 320.1172 | 0.00 | 5747.0107 | 17678 | 17430 | 17528.4000 | 105.7115 | 26.31 | 140.3210 |
| 5.250.00 | 59312 | 59312 | 46049 | 46587.9561 | 858.5338 | 0.00 | 8670.5223 | 44891 | 44453 | 44596.0000 | 143.7212 | 24.31 | 1230.0471 |
| 5.250.01 | 61472 | 61472 | 46890 | 47299.2074 | 749.7909 | 0.00 | 7810.9763 | 45928 | 44306 | 45047.0000 | 480.1600 | 25.28 | 848.3955 |
| 5.250.02 | 62130 | 62130 | 49237 | 49261.7206 | 163.3191 | 0.00 | 5671.1701 | 42563 | 42520 | 42522.1500 | 9.3716 | 31.49 | 1292.4814 |
| 5.250.03 | 59463 | 59463 | 42804 | 46365.1888 | 2137.5436 | 0.00 | 16606.7606 | 46782 | 46038 | 46257.8500 | 272.5830 | 21.32 | 15333.6760 |
| 5.250.04 | 58951 | 58951 | 46870 | 47005.2385 | 369.0429 | 0.00 | 6987.2142 | 45445 | 43815 | 44565.4000 | 446.6804 | 22.91 | 1076.0837 |
| 5.500.00 | 120148 | 101980 | 73168 | 88110.0778 | 11544.9826 | 15.12 | 31594.7054 | 91110 | 89807 | 90131.7000 | 417.4011 | 24.16 | 2191.6924 |
| 5.500.01 | 117879 | 99901 | 71265 | 90506.6091 | 11400.2546 | 15.25 | 41155.1138 | 91701 | 89479 | 90880.9500 | 521.3980 | 22.20 | 2157.1687 |
| 5.500.02 | 121131 | 102559 | 74678 | 91014.0520 | 12735.6287 | 15.33 | 33245.1504 | 92436 | 91702 | 91753.8500 | 169.8229 | 23.68 | 2873.9049 |
| 5.500.03 | 120804 | 100864 | 74715 | 91769.0122 | 10609.5044 | 16.50 | 44675.9107 | 93638 | 91512 | 93040.6500 | 487.9807 | 22.48 | 2986.8021 |
| 5.500.04 | 122319 | 102520 | 74537 | 91771.7789 | 10591.1422 | 16.18 | 42645.1608 | 90328 | 87825 | 90077.7000 | 750.9000 | 26.15 | 2664.4909 |
| 10.100.00 | 23064 | 23064 | 17298 | 22275.5321 | 670.6074 | 0.00 | 7179.9602 | 19626 | 18043 | 19071.1000 | 576.5332 | 14.90 | 112.5756 |
| 10.100.01 | 22801 | 22801 | 17352 | 21295.5074 | 44.2336 | 0.00 | 6618.0995 | 17546 | 16036 | 17085.5500 | 377.4207 | 23.04 | 99.3756 |
| 10.100.02 | 22131 | 22131 | 15699 | 20486.6556 | 948.5033 | 0.00 | 8081.3328 | 18057 | 17012 | 17309.7000 | 337.4800 | 18.40 | 120.5140 |
| 10.100.03 | 22772 | 22772 | 18817 | 19795.5884 | 469.0794 | 0.00 | 6866.3064 | 20024 | 18755 | 19178.6000 | 401.2019 | 12.06 | 99.6616 |
| 10.100.04 | 22751 | 22751 | 17564 | 22604.2587 | 436.9923 | 0.00 | 6945.8575 | 18651 | 18099 | 18185.0000 | 171.6164 | 18.02 | 97.3623 |
| 10.250.00 | 59187 | 59187 | 48086 | 55818.9961 | 11675.8756 | 0.00 | 9550.5818 | 45143 | 44493 | 44914.5000 | 310.0302 | 23.72 | 566.9247 |
| 10.250.01 | 58781 | 58781 | 43173 | 55302.6930 | 5750.7501 | 0.00 | 13587.1938 | 48090 | 47356 | 47735.0500 | 297.7889 | 18.18 | 590.0571 |
| 10.250.02 | 58097 | 58097 | 45538 | 52907.7982 | 10827.5062 | 0.00 | 15849.1611 | 47536 | 45938 | 47088.3000 | 421.6500 | 18.17 | 492.0703 |
| 10.250.03 | 61000 | 61000 | 47587 | 57342.3073 | 10802.1653 | 0.00 | 11107.4894 | 47968 | 46884 | 47176.6500 | 330.7825 | 21.36 | 589.1354 |
| 10.250.04 | 58092 | 58092 | 47703 | 55037.2680 | 11251.2648 | 0.00 | 9075.8829 | 47139 | 44895 | 46559.7500 | 854.3255 | 18.85 | 933.7649 |
| 10.500.00 | 117821 | 103226 | 74746 | 93309.3655 | 13265.1931 | 12.38 | 33763.7203 | 90995 | 89690 | 90281.8000 | 381.1162 | 22.76 | 2665.9732 |
| 10.500.01 | 119249 | 105088 | 76531 | 96823.8780 | 12237.0902 | 11.87 | 38343.9976 | 90207 | 87691 | 89507.1500 | 602.0926 | 24.35 | 3015.1351 |
| 10.500.02 | 119215 | 104870 | 74620 | 96151.9076 | 11857.6879 | 12.03 | 46075.8874 | 94196 | 91359 | 92369.0500 | 615.9669 | 20.98 | 2569.8929 |
| 10.500.03 | 118829 | 104308 | 74845 | 95338.5665 | 11119.6133 | 12.22 | 47983.9497 | 94549 | 91796 | 93328.1000 | 614.4442 | 20.44 | 2707.8142 |
| 10.500.04 | 116530 | 101380 | 74441 | 92260.2844 | 10578.1152 | 13.00 | 43098.1306 | 91234 | 89336 | 90872.6500 | 450.8905 | 21.70 | 2465.6711 |
Table 17. Computational results achieved by LMPB and TS solving the MKP.
| ID | Opt | LMPB Best | LMPB Worst | LMPB Avg | LMPB StdDev | LMPB RPD (%) | LMPB Avg Time (s) | TS Best | TS Worst | TS Avg | TS StdDev | TS RPD (%) | TS Avg Time (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5.100.00 | 24381 | 24381 | 17595 | 18193.2647 | 689.3522 | 0.00 | 4669.6999 | 17920 | 14646 | 17200.2660 | 441.6150 | 26.50 | 9.2408 |
| 5.100.01 | 24274 | 24274 | 17401 | 17674.1159 | 522.1186 | 0.00 | 5697.6984 | 17895 | 15281 | 17209.6480 | 764.0746 | 26.28 | 8.1997 |
| 5.100.02 | 23551 | 23551 | 17692 | 17860.9433 | 395.5785 | 0.00 | 4292.5007 | 17557 | 15473 | 16471.3200 | 537.5266 | 25.45 | 9.0440 |
| 5.100.03 | 23534 | 23534 | 19685 | 19692.4754 | 49.3286 | 0.00 | 5347.2370 | 18153 | 14953 | 18068.7160 | 764.7306 | 22.86 | 6.9277 |
| 5.100.04 | 23991 | 23991 | 17744 | 17863.3812 | 320.1172 | 0.00 | 5747.0107 | 17599 | 15722 | 17760.1780 | 245.7550 | 26.64 | 8.1933 |
| 5.250.00 | 59312 | 59312 | 46049 | 46587.9561 | 858.5338 | 0.00 | 8670.5223 | 45431 | 41916 | 45392.1620 | 297.6204 | 23.40 | 51.9413 |
| 5.250.01 | 61472 | 61472 | 46890 | 47299.2074 | 749.7909 | 0.00 | 7810.9763 | 44651 | 39048 | 42666.3920 | 1629.9833 | 27.36 | 90.8240 |
| 5.250.02 | 62130 | 62130 | 49237 | 49261.7206 | 163.3191 | 0.00 | 5671.1701 | 44587 | 42244 | 43400.5440 | 710.5724 | 28.24 | 55.6142 |
| 5.250.03 | 59463 | 59463 | 42804 | 46365.1888 | 2137.5436 | 0.00 | 16606.7606 | 46510 | 40376 | 46108.0680 | 1191.6083 | 21.78 | 73.4146 |
| 5.250.04 | 58951 | 58951 | 46870 | 47005.2385 | 369.0429 | 0.00 | 6987.2142 | 43622 | 41511 | 43578.5660 | 235.7575 | 26.00 | 83.7176 |
| 5.500.00 | 120148 | 101980 | 73168 | 88110.0778 | 11544.9826 | 15.12 | 31594.7054 | 89365 | 85199 | 88040.6580 | 1055.1352 | 25.62 | 657.7096 |
| 5.500.01 | 117879 | 99901 | 71265 | 90506.6091 | 11400.2546 | 15.25 | 41155.1138 | 91192 | 87326 | 90738.1740 | 1051.4631 | 22.64 | 380.6708 |
| 5.500.02 | 121131 | 102559 | 74678 | 91014.0520 | 12735.6287 | 15.33 | 33245.1504 | 92155 | 87168 | 90280.1500 | 2329.0822 | 23.92 | 448.3687 |
| 5.500.03 | 120804 | 100864 | 74715 | 91769.0122 | 10609.5044 | 16.50 | 44675.9107 | 92344 | 88444 | 91129.6820 | 993.0370 | 23.56 | 417.6390 |
| 5.500.04 | 122319 | 102520 | 74537 | 91771.7789 | 10591.1422 | 16.18 | 42645.1608 | 86955 | 80832 | 85634.6120 | 985.2763 | 28.91 | 326.0826 |
| 10.100.00 | 23064 | 23064 | 17298 | 22275.5321 | 670.6074 | 0.00 | 7179.9602 | 19365 | 17117 | 19292.6880 | 607.9159 | 16.04 | 4.2649 |
| 10.100.01 | 22801 | 22801 | 17352 | 21295.5074 | 44.2336 | 0.00 | 6618.0995 | 18535 | 16420 | 17955.4980 | 714.6238 | 18.71 | 5.3840 |
| 10.100.02 | 22131 | 22131 | 15699 | 20486.6556 | 948.5033 | 0.00 | 8081.3328 | 17523 | 14835 | 16785.3360 | 484.2500 | 20.82 | 3.3187 |
| 10.100.03 | 22772 | 22772 | 18817 | 19795.5884 | 469.0794 | 0.00 | 6866.3064 | 18229 | 18179 | 18190.5000 | 21.0416 | 19.95 | 4.3792 |
| 10.100.04 | 22751 | 22751 | 17564 | 22604.2587 | 436.9923 | 0.00 | 6945.8575 | 18833 | 17619 | 18463.4220 | 277.1007 | 17.22 | 4.9251 |
| 10.250.00 | 59187 | 59187 | 48086 | 55818.9961 | 11675.8756 | 0.00 | 9550.5818 | 44135 | 40025 | 43711.1320 | 933.0496 | 25.43 | 39.0509 |
| 10.250.01 | 58781 | 58781 | 43173 | 55302.6930 | 5750.7501 | 0.00 | 13587.1938 | 46438 | 42427 | 45226.7400 | 943.0535 | 21.00 | 47.0854 |
| 10.250.02 | 58097 | 58097 | 45538 | 52907.7982 | 10827.5062 | 0.00 | 15849.1611 | 44080 | 41890 | 43428.4520 | 463.4027 | 24.13 | 40.9750 |
| 10.250.03 | 61000 | 61000 | 47587 | 57342.3073 | 10802.1653 | 0.00 | 11107.4894 | 46377 | 45074 | 46255.3360 | 258.9354 | 23.97 | 43.8730 |
| 10.250.04 | 58092 | 58092 | 47703 | 55037.2680 | 11251.2648 | 0.00 | 9075.8829 | 43049 | 38232 | 42366.8760 | 981.8458 | 25.90 | 42.2721 |
| 10.500.00 | 117821 | 103226 | 74746 | 93309.3655 | 13265.1931 | 12.38 | 33763.7203 | 90919 | 89123 | 90331.2340 | 447.7161 | 22.83 | 178.1260 |
| 10.500.01 | 119249 | 105088 | 76531 | 96823.8780 | 12237.0902 | 11.87 | 38343.9976 | 91968 | 85869 | 91923.8820 | 1797.3181 | 22.88 | 231.8760 |
| 10.500.02 | 119215 | 104870 | 74620 | 96151.9076 | 11857.6879 | 12.03 | 46075.8874 | 95984 | 92567 | 95468.6580 | 1811.0270 | 19.49 | 270.4680 |
| 10.500.03 | 118829 | 104308 | 74845 | 95338.5665 | 11119.6133 | 12.22 | 47983.9497 | 91297 | 85080 | 90911.8560 | 1131.8605 | 23.17 | 197.9920 |
| 10.500.04 | 116530 | 101380 | 74441 | 92260.2844 | 10578.1152 | 13.00 | 43098.1306 | 92792 | 88027 | 93015.9780 | 1423.4816 | 20.37 | 342.9408 |
Table 18. Computational results achieved by LMPB and SA solving the MKP.
| ID | Opt | LMPB Best | LMPB Worst | LMPB Avg | LMPB StdDev | LMPB RPD (%) | LMPB Avg Time (s) | SA Best | SA Worst | SA Avg | SA StdDev | SA RPD (%) | SA Avg Time (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5.100.00 | 24381 | 24381 | 17595 | 18193.2647 | 689.3522 | 0.00 | 4669.6999 | 16645 | 15750 | 16488.2857 | 257.5163 | 31.73 | 15.5006 |
| 5.100.01 | 24274 | 24274 | 17401 | 17674.1159 | 522.1186 | 0.00 | 5697.6984 | 16732 | 15574 | 16061.4381 | 560.7617 | 31.07 | 15.0110 |
| 5.100.02 | 23551 | 23551 | 17692 | 17860.9433 | 395.5785 | 0.00 | 4292.5007 | 14663 | 13380 | 14398.9333 | 378.2078 | 37.74 | 9.1421 |
| 5.100.03 | 23534 | 23534 | 19685 | 19692.4754 | 49.3286 | 0.00 | 5347.2370 | 17033 | 14747 | 16594.0540 | 730.5726 | 27.62 | 10.4877 |
| 5.100.04 | 23991 | 23991 | 17744 | 17863.3812 | 320.1172 | 0.00 | 5747.0107 | 17106 | 16307 | 16974.1016 | 296.6305 | 28.70 | 12.3591 |
| 5.250.00 | 59312 | 59312 | 46049 | 46587.9561 | 858.5338 | 0.00 | 8670.5223 | 44861 | 43230 | 44563.9048 | 518.9806 | 24.36 | 76.2560 |
| 5.250.01 | 61472 | 61472 | 46890 | 47299.2074 | 749.7909 | 0.00 | 7810.9763 | 41902 | 41321 | 41646.1333 | 249.8855 | 31.84 | 65.4760 |
| 5.250.02 | 62130 | 62130 | 49237 | 49261.7206 | 163.3191 | 0.00 | 5671.1701 | 43316 | 40636 | 42807.8381 | 791.1798 | 30.28 | 54.3458 |
| 5.250.03 | 59463 | 59463 | 42804 | 46365.1888 | 2137.5436 | 0.00 | 16606.7606 | 48112 | 41941 | 46223.6159 | 2115.7624 | 19.09 | 72.0230 |
| 5.250.04 | 58951 | 58951 | 46870 | 47005.2385 | 369.0429 | 0.00 | 6987.2142 | 44235 | 42284 | 44005.2921 | 447.8486 | 24.96 | 91.7240 |
| 5.500.00 | 120148 | 101980 | 73168 | 88110.0778 | 11544.9826 | 15.12 | 31594.7054 | 91226 | 87931 | 90928.0222 | 669.9465 | 24.07 | 333.4513 |
| 5.500.01 | 117879 | 99901 | 71265 | 90506.6091 | 11400.2546 | 15.25 | 41155.1138 | 90749 | 88213 | 90514.8825 | 512.7554 | 23.02 | 365.0060 |
| 5.500.02 | 121131 | 102559 | 74678 | 91014.0520 | 12735.6287 | 15.33 | 33245.1504 | 88397 | 86003 | 87795.4984 | 701.4480 | 27.02 | 219.4263 |
| 5.500.03 | 120804 | 100864 | 74715 | 91769.0122 | 10609.5044 | 16.50 | 44675.9107 | 89615 | 88855 | 89479.8889 | 290.5674 | 25.82 | 289.9401 |
| 5.500.04 | 122319 | 102520 | 74537 | 91771.7789 | 10591.1422 | 16.18 | 42645.1608 | 87974 | 84700 | 87449.4000 | 952.0907 | 28.08 | 393.7943 |
| 10.100.00 | 23064 | 23064 | 17298 | 22275.5321 | 670.6074 | 0.00 | 7179.9602 | 18645 | 17245 | 18052.5873 | 629.0033 | 19.16 | 7.8250 |
| 10.100.01 | 22801 | 22801 | 17352 | 21295.5074 | 44.2336 | 0.00 | 6618.0995 | 18841 | 17515 | 18615.5016 | 423.9138 | 17.37 | 10.6280 |
| 10.100.02 | 22131 | 22131 | 15699 | 20486.6556 | 948.5033 | 0.00 | 8081.3328 | 17465 | 16575 | 17456.6349 | 70.7870 | 21.08 | 8.9347 |
| 10.100.03 | 22772 | 22772 | 18817 | 19795.5884 | 469.0794 | 0.00 | 6866.3064 | 18152 | 15786 | 17972.5873 | 402.4176 | 20.29 | 8.8359 |
| 10.100.04 | 22751 | 22751 | 17564 | 22604.2587 | 436.9923 | 0.00 | 6945.8575 | 18705 | 17431 | 18372.9206 | 553.1250 | 17.78 | 8.9342 |
| 10.250.00 | 59187 | 59187 | 48086 | 55818.9961 | 11675.8756 | 0.00 | 9550.5818 | 43280 | 39946 | 42544.8889 | 940.8290 | 26.88 | 31.3592 |
| 10.250.01 | 58781 | 58781 | 43173 | 55302.6930 | 5750.7501 | 0.00 | 13587.1938 | 46785 | 43999 | 46371.5143 | 752.3758 | 20.41 | 56.4902 |
| 10.250.02 | 58097 | 58097 | 45538 | 52907.7982 | 10827.5062 | 0.00 | 15849.1611 | 43558 | 42288 | 43386.3968 | 320.3015 | 25.03 | 46.1420 |
| 10.250.03 | 61000 | 61000 | 47587 | 57342.3073 | 10802.1653 | 0.00 | 11107.4894 | 42822 | 40426 | 42461.4381 | 766.1158 | 29.80 | 51.4840 |
| 10.250.04 | 58092 | 58092 | 47703 | 55037.2680 | 11251.2648 | 0.00 | 9075.8829 | 41685 | 40537 | 41224.6794 | 456.9968 | 28.24 | 82.5832 |
| 10.500.00 | 117821 | 103226 | 74746 | 93309.3655 | 13265.1931 | 12.38 | 33763.7203 | 90741 | 87278 | 90169.9238 | 720.1538 | 22.98 | 139.6827 |
| 10.500.01 | 119249 | 105088 | 76531 | 96823.8780 | 12237.0902 | 11.87 | 38343.9976 | 89316 | 87726 | 89057.6190 | 400.3709 | 25.10 | 171.5081 |
| 10.500.02 | 119215 | 104870 | 74620 | 96151.9076 | 11857.6879 | 12.03 | 46075.8874 | 91262 | 89985 | 91043.4254 | 445.0307 | 23.45 | 205.4551 |
| 10.500.03 | 118829 | 104308 | 74845 | 95338.5665 | 11119.6133 | 12.22 | 47983.9497 | 90655 | 89157 | 90110.2825 | 511.5386 | 23.71 | 175.9941 |
| 10.500.04 | 116530 | 101380 | 74441 | 92260.2844 | 10578.1152 | 13.00 | 43098.1306 | 91587 | 88839 | 91338.3778 | 528.3976 | 21.40 | 349.4433 |

Vega, E.; Soto, R.; Contreras, P.; Crawford, B.; Peña, J.; Castro, C. Combining a Population-Based Approach with Multiple Linear Models for Continuous and Discrete Optimization Problems. Mathematics 2022, 10, 2920. https://doi.org/10.3390/math10162920
