
A Learning-Based Hybrid Framework for Dynamic Balancing of Exploration-Exploitation: Combining Regression Analysis and Metaheuristics

1 Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Valparaíso 2362807, Chile
2 Departamento de Informática, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(16), 1976; https://doi.org/10.3390/math9161976
Submission received: 21 April 2021 / Revised: 20 July 2021 / Accepted: 11 August 2021 / Published: 18 August 2021
(This article belongs to the Special Issue Mathematical Methods for Operations Research Problems)

Abstract: Hybrid approaches have become a powerful strategy for tackling several complex optimisation problems. In this regard, the present work contributes a novel optimisation framework, named learning-based linear balancer (LB²). A regression model is designed with the objective of predicting better movements for the approach and improving its performance. The main idea is to balance the intensification and diversification performed by the hybrid model in an online fashion. In this paper, we employ the movement operators of the spotted hyena optimiser, a modern algorithm which has proved to yield good results in the literature. In order to test the performance of our hybrid approach, we solve 15 benchmark functions, composed of unimodal, multimodal, and multimodal functions with fixed dimension. Additionally, regarding competitiveness, we carry out a comparison against state-of-the-art algorithms and the sequential parameter optimisation procedure, which is part of multiple successful tuning methods proposed in the literature. Finally, we compare against the traditional implementation of the spotted hyena optimiser and a neural network approach, and the respective statistical analysis is carried out. The experimental results show interesting performance and robustness, which allows us to conclude that our hybrid approach is a competitive alternative in the optimisation field.

1. Introduction

In recent years, the constant increase in the complexity of the problems to be solved in industry and academia has raised the need to further improve and evolve new techniques. In this context, hybrid approaches have been a standard and the focus of multiple works. They have proved to be the most successful strategy in terms of solving capacity when tackling hard optimisation problems [1]. Among modern approaches, the use of randomised optimisation methods has been a focus of the scientific community; a well-known example is metaheuristics. They have been successfully used to solve large instances of complex and difficult optimisation problems, being useful when exact methods are unable to provide solutions in a reasonable amount of time [2]. Usually, behind the design of an algorithm, we find multiple complex items which are in charge of carrying out the work to solve optimisation problems [3]. Inherent features, like intensification and diversification [4,5,6], control how the approach exploits and explores the search space, respectively. Additionally, parameters and search components, such as population, probabilities, search operators, initial solutions, and so on, comprise an important family of items in the operation of an approach. In order to be intelligent, an agent which works in a changing environment should have the ability to learn [7]. If the approach can learn and adapt, we do not need to foresee and provide solutions for all possible situations which may appear at run-time. Machine learning, as part of artificial intelligence, encompasses a number of algorithms with the aim of optimising a performance criterion using example data or past experience [8,9,10]. A well-known style of learning is supervised learning, which is mainly composed of learning functions with the aim of predicting values; its classical objectives include regression and classification [11].
In this paper, we examine whether a formal relationship exists between an effective balance of intensification and diversification, influenced by a regression model, and a classic configuration of a metaheuristic, and whether it is sufficiently strong to be exploited by an automated framework. Most metaheuristics operate in a sequential, iterative, and previously designed manner, but the environment where they operate usually has a dynamic nature. Additionally, they are stochastic algorithms, which comprise deterministic and random components. The stochastic components can take many forms, such as simple randomisation by randomly sampling the search space or by random walks. Thus, the randomness brings a certain degree of uncertainty into the search. For instance, if an agent has just finished performing an intensification movement, and the next step in the process performs a diversification movement, it has no certainty of reaching a better neighbourhood. In Figure 1, we illustrate a graphic example of a situation where a white agent needs to make a move; we aim to help the agent have a higher probability of reaching a green dot (possible best solution) or a yellow dot (less possible best solution) than a red dot (bad solution). The objective in the design of this framework is to let the approach learn how to orchestrate the work performed by the agents in every iteration; hence, we reinforce the decision making of the approach and make it learn from previous iterations at run-time. In this regard, two components are designed: movement operators, for which in this work we employ movements from the spotted hyena optimiser (SHO) algorithm [12], and a regression model. First, SHO is an interesting modern metaheuristic, which has proved to yield good results in solving optimisation problems [13,14].
It is mainly based on the grouping behaviour of a special type of hyena, where the strong point of the algorithm is the clustering behaviour of the agents searching in the solution space. On the other hand, the learning model is where the central axis of the work is completed, by linear regression analysis. The work proceeds as follows: the dynamic data generated by the agents through the iterations is managed by the learning model. In this context, each time a threshold number of iterations is met, a regression analysis is carried out by the learning model. Thus, the search will be influenced by the knowledge resulting from the previous learning process.
The efficiency of the LB² proposed in this research is evaluated in three phases by solving 15 well-known mathematical optimisation problems. The employed benchmark concerns unimodal, multimodal, and multimodal functions with fixed dimension. Additionally, these continuous functions comprise multiple features, such as being convex, non-differentiable, unconstrained, and so on. Regarding the experimentation phases, we compare our results with state-of-the-art optimisation methods, such as particle swarm optimisation (PSO) [15], the gravitational search algorithm (GSA) [16], differential evolution (DE) [17], the whale optimisation algorithm (WOA) [18], vapour–liquid equilibrium (VLE) [19], and a hybrid between the Nelder–Mead algorithm and the dragonfly algorithm (INMDA) [20]. In the second phase, we compare against sequential parameter optimisation (SPO) [21]. The key work in SPO is performed by a prediction model, bringing improvements in the parameter values and algorithm performance in an iterative scheme. Third, we carry out a statistical evaluation of the results obtained by the traditional implementation of SHO, a neural network (NN) [22], the sine cosine algorithm (SCA) [23], and our proposed hybrid approach. Finally, we illustrate interesting experimental results, where the proposed hybrid approach achieves good performance, proving to be a competitive option to tackle continuous optimisation problems.
The rest of this paper is organised as follows. The related work is introduced in the next section. The proposed hybrid approach is explained in Section 3. Section 4 illustrates the experimental results. Finally, we conclude and suggest some lines of future research.

2. Related Work

This work proposes a learning-based hybrid approach, where the main feature is the capability to influence the search performed by the agents ruled by the movements of SHO. Therefore, following the taxonomy illustrated by Talbi in [24], our proposed work can be described as a low-level teamwork hybridisation. Concerning the works reported in the literature combining machine learning and metaheuristics [8,25], it is well known that this relationship is not a one-way street: we do not only have approaches where machine learning techniques assist and enhance metaheuristics, but also the other way around; machine learning models improved by metaheuristics form a well-consolidated group in the hybridisation field [26,27,28,29,30]. This paper is concerned with the first group, where novel approaches have been proposed, such as [31], where a diversification-based learning (DBL) framework is presented. DBL is designed around families of components introduced in the fields of metaheuristics and machine learning that have broad applications in optimisation. Additionally, a novel approach based on two well-known components is presented in [32]: a hybrid between intelligent guided adaptive search (IGAS) and a path-relinking algorithm, named IGASPR. The main learning phase is ruled by means of growing neural gas (GNG); the objective is to influence the construction of solutions by controlling the features of the best solutions in each iteration. Concerning works proposed under the influence of regression analysis, [33] illustrates a data-mining-based approach for PSO. The main idea behind that contribution is that the parameter selection task can appropriately be addressed by a data-mining-based approach. The designed model employs a regression analysis by means of non-linear regression models; the main objective is to learn suitable parameter values for PSO from previous moves at run-time.
In this field, this type of scheme is also known as specifically-located hybridisation, and it is concerned with parameter control strategies. In the literature, [34] also employs this type of hybridisation. The authors propose a hybrid employing Tabu search (TS) and support vector machines (SVM). The proposed approach is designed to tackle hard combinatorial optimisation problems, such as the knapsack problem, the set covering problem, and the travelling salesman problem. The main task concerns the selection of decision rules from a corpus of randomly generated solutions, which are used to predict high-quality solutions for a given instance and to fine-tune and guide the search performed by TS. However, the authors state that the complexity of the approach is a key factor; they highlight the time consumed, the knowledge needed for the implementation, the process of building the corpus, and the extraction of the classification rules. On the other hand, regarding hybrids specialising in intensification and diversification, to the best of our knowledge there is none under the influence of a regression model. However, in [25], the feasible options for intensification employing clustering [35] and frequent itemsets using association rules [36,37] are illustrated. Regarding diversification, the use of clustering [35], self-organising maps (SOM) [38], and binary space-partitioning (BSP) trees [39] has proved to be a good option for balancing this issue in different approaches.
The LB² proposed in this work draws inspiration from the following arguments. Firstly, the literature on machine learning, mainly regression models, assisting metaheuristics is scarce. Second, most approaches are problem-dependent; for instance, in [32], the problem tackled by the regression model is the selection of the best-fitted parameters for PSO in order to improve its performance. It is a good exploratory and pioneering approach, considering that this attempt works at run-time. However, the uncertainty in extrapolating this specifically-located implementation to other approaches is high, especially taking into account the “no free lunch” theorem. Therefore, our proposition focuses on two major issues in the design of a global search method: diversification and intensification. If we analyse the metaheuristic field, these are general features which are always present. Third, the selected technique is a highly relevant issue. As stated and explained by the authors in [33], the level of complexity is an issue to take into account in the design of the hybrid. Thus, we think this issue may have an impact when replicating the results and extrapolating the implementation to an unknown environment. In this context, we employ classic techniques, where the novel mechanism constitutes the clear advantage provided by our proposed hybrid approach.

3. Proposed Hybrid Approach

In this section, we present the proposed LB² framework; we discuss the main ideas in the design, motivations, and inspiration behind the proposed approach. Firstly, in order to carry out the search in the solution space, the strategy employed is inspired by population-based metaheuristics. The main idea is to perform the search using a set of agents which evolve under the influence of multiple equations, known as movement operators. In this regard, they are usually classified as intensification and diversification, concerning the work performed: exploitation or exploration of the search space. In this work we employ SHO and its four movement operators, where each hyena acts as an agent in the framework.
The second design decision concerns the component in charge of the regression analysis. In this first attempt to design LB², the main concern was the complexity of the employed technique [40,41]. In this regard, multiple techniques and methods exist to carry out a regression analysis [42], such as linear models, SVM, and decision-tree-based models. A linear model was selected because it is the most commonly used, and all other regression methods build upon an understanding of how linear regression works [43,44]. Nevertheless, the regression model can potentially evolve into a more complex component; a more detailed explanation is presented in Section 5.
The global conceptualisation of the proposed hybrid is illustrated in Figure 2. It is based on the behaviour of multiple agents with the same attributes, also known as a population. They are controlled by the movements of SHO, influenced and balanced by the learning-based model. A general description is presented in Section 3.1. The methodology and a detailed explanation of the proposed approach are given in Section 3.2. In Section 3.3, the population-based metaheuristic is presented, and the proposed algorithm is illustrated in Section 3.4.

3.1. General Description

The proposed LB² follows a population-based strategy, which concerns multiple agents evolving in the solution space; intensification and diversification are performed, and the process is terminated when a threshold number of iterations is met. Dynamically adjusting the configuration and behaviour is an important topic that continues to be of growing interest. In order to carry out the search, this work proposes two components: the scheme and β. Firstly, the scheme is concerned with the amount of intensification and diversification to be performed in each iteration by the population. Regarding β, it is a parameter employed as the threshold at which the learning model carries out the regression analysis. The knowledge generated is used to influence the selection mechanism, which manages the scheme to be performed. In this regard, the selection dynamically rules the work of each agent, indicating the amount of exploration and exploitation to carry out in the search space. The steps of the proposed LB² are described as follows:
Step 1: Set parameters concerning the population-based algorithm: B,E,h, termination criteria for the search.
Step 1.1: Set termination criteria for the search: set the number of iterations to perform LB².
Step 2: Set parameters concerning the learning model: scheme, probabilities, β .
Step 2.1: Set schemes for intensification and diversification.
Step 2.2: Set the probabilities for each scheme to be selected by the selection mechanism.
Step 2.3: Set the value for threshold β .
Step 3: Generation of the initial population size to perform in the search.
Step 4: While the termination criterion is not met:
Step 5: For each agent:
Step 5.1: Selection mechanism on intensification: the scheme is selected and the exploitation is carried out.
Step 5.2: Management of dynamic data generated.
Step 5.3: Selection mechanism on diversification: the scheme is selected and the exploration is carried out.
Step 5.4: Management of dynamic data generated.
Step 6: Update parameters concerning the population-based algorithm: B,E,h.
Step 7: Check if the threshold β has been met.
Step 7.1: Perform regression analysis.
Step 7.2: Management of the knowledge generated: update scheme probabilities.
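The steps above can be sketched in Python as follows. This is a minimal illustration of the control flow only; the operator stubs, helper names, and data layout are our own assumptions, not the actual implementation.

```python
import random

# Illustrative three-level schemes (Table 1); names are assumptions.
SCHEMES = ["soft", "medium", "hard"]

def select_scheme(probs):
    """Monte Carlo roulette selection over the three schemes (Steps 5.1/5.3)."""
    r, acc = random.random(), 0.0
    for scheme, p in zip(SCHEMES, probs):
        acc += p
        if r <= acc:
            return scheme
    return SCHEMES[-1]

def lb2(objective, population, max_iter, beta, intensify, diversify, regress):
    """Skeleton of the LB^2 loop (Steps 1-7). `intensify`/`diversify` stand in
    for the SHO movement operators; `regress` stands in for Algorithm 2."""
    p_i = [1 / 3, 1 / 3, 1 / 3]   # Step 2.2: intensification scheme probabilities
    p_d = [1 / 3, 1 / 3, 1 / 3]   # diversification scheme probabilities
    history = []                  # dynamic data consumed by the regression model
    best = min(population, key=objective)
    for it in range(1, max_iter + 1):            # Step 4
        for k, agent in enumerate(population):   # Step 5
            s_i = select_scheme(p_i)             # Step 5.1
            agent = intensify(agent, s_i)
            history.append((s_i, "i", objective(agent)))   # Step 5.2
            s_d = select_scheme(p_d)             # Step 5.3
            agent = diversify(agent, s_d)
            history.append((s_d, "d", objective(agent)))   # Step 5.4
            population[k] = agent
            if objective(agent) < objective(best):
                best = agent
        if it % beta == 0:                       # Step 7
            p_i, p_d = regress(history)          # Steps 7.1-7.2
            history = []
    return best
```

A toy run on a one-dimensional sphere function with stub operators shows the intended usage; any real instantiation would plug in the SHO operators and the regression model.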

3.2. Methodology

Firstly, we need to define the schemes to perform through the search. In this first attempt at designing LB², three levels were proposed, illustrated in Table 1. They define the amount of work that needs to be performed in each iteration; the selection issue is tackled by means of probabilities, and they are defined as follows:
for intensification: $p_{IS_{soft}} + p_{IS_{medium}} + p_{IS_{hard}} = 1$
for diversification: $p_{DS_{soft}} + p_{DS_{medium}} + p_{DS_{hard}} = 1$
where the probabilities $p_i$ and $p_d$ will be modified by the learning model every β iterations. This model is in charge of the regression analysis, ruled by means of linear regression, where the fitted function is of the form:
$y = wx + b$
where y corresponds to the dependent variable, which is the fitness and the value we want to predict, and x represents the independent variable, which corresponds to the scheme performed. This simple linear regression model captures the close relationship between the fitness and its convergence and the balance of intensification and diversification performed. Regarding our proposed learning model, we define three fitted functions, one per scheme, for intensification, and three for diversification. They are represented as follows:
For intensification:
$y_i^{soft} = w_i x_i^{soft} + b_i$
$y_i^{medium} = w_i x_i^{medium} + b_i$
$y_i^{hard} = w_i x_i^{hard} + b_i$
For diversification:
$y_d^{soft} = w_d x_d^{soft} + b_d$
$y_d^{medium} = w_d x_d^{medium} + b_d$
$y_d^{hard} = w_d x_d^{hard} + b_d$
In order to carry out the analysis, we employ the well-known least squares method. We evaluate the degree of relationship between the work performed by the agents, in terms of the amount of intensification and diversification, and the best fitness values reached. The model makes its decision as follows:
$W(x_i) = \mathrm{MIN}(y_i^{soft}, y_i^{medium}, y_i^{hard})$ and
$W(x_d) = \mathrm{MIN}(y_d^{soft}, y_d^{medium}, y_d^{hard})$
where $W(x_i)$ and $W(x_d)$ represent the schemes with the highest probability of achieving better performance in the next β iterations. The regression model modifies the selection probabilities of each scheme. Thus, when the threshold is met, the process of selection, carried out in a Monte Carlo roulette fashion, is influenced. Additionally, we highlight that all benchmark functions are minimisation problems, which is aligned with our proposed function MIN.
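The per-scheme least-squares fit of y = wx + b and the MIN-based decision can be sketched as follows. The data layout (per-scheme lists of (iteration, best fitness) samples), the prediction horizon, and the helper names are assumptions for illustration.

```python
def least_squares_fit(xs, ys):
    """Ordinary least-squares fit of y = w*x + b for one scheme's data.
    Assumes at least two distinct x values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b = my - w * mx
    return w, b

def pick_winning_scheme(data, horizon):
    """`data` maps scheme name -> list of (iteration, best fitness) samples.
    Returns the scheme whose fitted line predicts the lowest fitness
    `horizon` iterations ahead, i.e. W(x) = MIN(y_soft, y_medium, y_hard)."""
    predictions = {}
    for scheme, samples in data.items():
        xs, ys = zip(*samples)
        w, b = least_squares_fit(xs, ys)
        predictions[scheme] = w * (max(xs) + horizon) + b
    return min(predictions, key=predictions.get)
```

For a minimisation problem, the scheme with the steepest downward trend in best fitness wins and is rewarded with a higher selection probability.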
Regarding the threshold β, important issues need to be considered, such as the total number of iterations, computing capacity, population size, and the number of schemes in the approach. In this work, small tests were carried out with β values of 200, 500, and 1000. We concluded that the best performance was achieved with a value of 1000.
A practical example can be described as follows. At the beginning, in each iteration, the approach selects a scheme using a probabilistic roulette for intensification and for diversification. Thus, for a three-way scheme, as displayed in Table 1, the initial probabilities for each scheme to be selected are in a 33.3%–33.3%–33.3% ratio. Additionally, the regression model is always storing and sorting the fitness values and agents at run-time. When the threshold β is met, the model performs a computing process corresponding to the regression analysis. Thus, it decides which scheme has the highest chance of achieving high performance over the next β iterations. To do so, the probability values of each scheme for intensification and diversification are updated, giving the winning scheme a higher probability of being chosen. For instance, we designed a 60%–20%–20% ratio; a graphic example is illustrated in Figure 3. Here, the scheme marked in blue yielded the minimum value in the resulting regression analysis compared with the other two schemes; in this case, the winner is assigned a higher probability of being selected, and so on.
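The probability update in this example can be sketched in a few lines. The function name is hypothetical, and splitting the remaining probability mass evenly among the losers is an assumption consistent with the 60%–20%–20% ratio described above.

```python
def update_probabilities(schemes, winner, win_p=0.6):
    """After a regression analysis, assign the winning scheme a higher
    selection probability (e.g. 60%) and split the remainder evenly
    among the other schemes (20%-20% for a three-way scheme)."""
    rest = (1.0 - win_p) / (len(schemes) - 1)
    return {s: (win_p if s == winner else rest) for s in schemes}
```

The resulting dictionary can feed the Monte Carlo roulette directly, since the values always sum to 1.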

3.3. Spotted Hyena Optimiser

In this paper, we instantiate SHO as the means to carry out the search for solutions to optimisation problems. The movement operators are organised as illustrated in Table 2 to be employed by LB². The main feature of SHO is the cohesive clustering of its population [12]. The mathematical model comprises three diversification methods, encircling prey, hunting, and search for prey, and one intensification method, attacking prey. They are described as follows:
  • Encircling prey: Each hyena takes the current best candidate solution as the target prey and tries to move towards this best position.
    $\vec{D}_h = |\vec{B} \cdot \vec{P}_p(x) - \vec{P}(x)|$
    $\vec{P}(x+1) = \vec{P}_p(x) - \vec{E} \cdot \vec{D}_h$
    where $\vec{D}_h$ is the distance between the current spotted hyena and the prey, x indicates the current iteration, $\vec{B}$ and $\vec{E}$ are coefficient vectors, $\vec{P}_p$ is the position of the prey, and $\vec{P}$ is the position of the spotted hyena. The vectors $\vec{B}$ and $\vec{E}$ are defined as follows:
    $\vec{B} = 2 \cdot \vec{rnd}_1$
    $\vec{E} = 2\vec{h} \cdot \vec{rnd}_2 - \vec{h}$
    $\vec{h} = 5 - (Iteration \times (5 / Max_{iteration}))$
    where Iteration = 1, 2, 3, …, $Max_{iteration}$, and $\vec{rnd}_1$ and $\vec{rnd}_2$ are random vectors in [0, 1].
  • Hunting: The hyenas form a cluster towards the best agent so far in order to update their positions. The equations are proposed as follows:
    $\vec{D}_h = |\vec{B} \cdot \vec{P}_h - \vec{P}_k|$
    $\vec{P}_k = \vec{P}_h - \vec{E} \cdot \vec{D}_h$
    $\vec{C}_h = \vec{P}_k + \vec{P}_{k+1} + \cdots + \vec{P}_{k+N}$
    where $\vec{P}_h$ is the best spotted hyena in the population and $\vec{P}_k$ indicates the positions of the other spotted hyenas. Here, N is the number of spotted hyenas, which is computed as follows:
    $N = count_{nos}(\vec{P}_h, \vec{P}_{h+1}, \vec{P}_{h+2}, \ldots, (\vec{P}_h + \vec{M}))$
    Here, $\vec{M}$ is a random vector in [0.5, 1], $count_{nos}$ counts the number of solutions (all candidate solutions plus $\vec{M}$), and $\vec{C}_h$ is a cluster of N optimal solutions.
  • Attacking prey: SHO works around the cluster, forcing the spotted hyenas to move towards the prey. The following equation was proposed:
    $\vec{P}(x+1) = \vec{C}_h / N$
    Here, $\vec{P}(x+1)$ updates the position of each spotted hyena according to the position of the best search agent and saves the best solution.
  • Search for prey: The agents mostly search for the prey based on the position of the cluster of spotted hyenas, which resides in vector $\vec{C}_h$. SHO makes use of the coefficient vectors $\vec{E}$ and $\vec{B}$ with random values to force the search agents to move far away from the prey. This mechanism allows the algorithm to search globally.
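A minimal one-dimensional sketch of the encircling and attacking movements may help illustrate the update rules; the cluster-size rule used here is an assumption for illustration, not the exact SHO formulation.

```python
import random

def sho_step(positions, best, it, max_it):
    """One hedged sketch of SHO's encircling/attacking moves in one dimension.
    `positions`: hyena positions; `best`: best position (prey) found so far."""
    h = 5 - it * (5.0 / max_it)                 # h decays linearly from 5 to 0
    new_positions = []
    for p in positions:
        rnd1, rnd2 = random.random(), random.random()
        B = 2 * rnd1                            # B = 2 * rnd1
        E = 2 * h * rnd2 - h                    # E = 2h * rnd2 - h
        D_h = abs(B * best - p)                 # distance hyena-prey
        new_positions.append(best - E * D_h)    # encircling: P(x+1) = P_p - E*D_h
    # Attacking prey: the cluster of the N hyenas closest to the prey
    # collapses to its centroid, P(x+1) = C_h / N. Taking N as half the
    # population is an illustrative assumption.
    N = max(1, len(new_positions) // 2)
    cluster = sorted(new_positions, key=lambda q: abs(q - best))[:N]
    centroid = sum(cluster) / N
    return new_positions, centroid
```

As h shrinks towards 0 in late iterations, |E| becomes small and the agents land progressively closer to the prey, which is how SHO shifts from exploration to exploitation.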

3.4. Proposed Algorithm

In this subsection, we illustrate the designed algorithm. Algorithm 1 depicts the general framework of our proposed approach, where the operators of SHO perform intensification and diversification under the influence and balance of our regression model. Algorithm 2 presents the work in charge of the regression model: the regression analysis is performed, and the vectors with control values are modified.
Algorithm 1 Proposed LB²
1: Set initial parameters for SHO
2: Set initial parameters for regression model
3: Generate initial population
4: while (i ≤ MaximumIteration) do
5:  for each agent in the population do
6:   StandardIntensification = Select-scheme-by-Roulette
7:   while (StandardIntensification) do
8:    Perform intensification operators
9:   end while
10:   if a best value was reached using StandardIntensification then
11:    Update data structures with best values reached
12:   end if
13:   StandardDiversification = Select-scheme-by-Roulette
14:   while (StandardDiversification) do
15:    Perform diversification operators
16:   end while
17:   if a best value was reached using StandardDiversification then
18:    Update data structures with best values reached
19:   end if
20:  end for
21:  if threshold β is met then
22:   Call Algorithm 2: Regression Model
23:  end if
24: end while
Algorithm 2 Regression Model
1: while review of dynamic data for all x_i^soft do
2:  Manage dataframe with dynamic data
3: end while
4: Compute statistical modelling method: y_i^soft
5: while review of dynamic data for all x_i^medium do
6:  Manage dataframe with dynamic data
7: end while
8: Compute statistical modelling method: y_i^medium
9: while review of dynamic data for all x_i^hard do
10:  Manage dataframe with dynamic data
11: end while
12: Compute statistical modelling method: y_i^hard
13: while review of dynamic data for all x_d^soft do
14:  Manage dataframe with dynamic data
15: end while
16: Compute statistical modelling method: y_d^soft
17: while review of dynamic data for all x_d^medium do
18:  Manage dataframe with dynamic data
19: end while
20: Compute statistical modelling method: y_d^medium
21: while review of dynamic data for all x_d^hard do
22:  Manage dataframe with dynamic data
23: end while
24: Compute statistical modelling method: y_d^hard
25: Update data structures with the regression analysis
26: Check MIN(y_i^soft, y_i^medium, y_i^hard)
27: Check MIN(y_d^soft, y_d^medium, y_d^hard)
28: Update probabilities for the intensification schemes
29: Update probabilities for the diversification schemes

4. Experimental Results

This section describes the experimentation process to evaluate the performance of our proposed LB². In this work we make use of 15 standard benchmark test functions. These benchmarks are described in Section 4.1, the experimental setup is described in Section 4.2, and the respective analysis is given in Section 4.3.

4.1. Benchmark Test Functions

In order to test the performance and demonstrate the efficiency of our proposed hybrid approach, we applied the 15 well-known benchmark functions listed in Table 3. These functions are divided into three main categories: unimodal [45], represented in Equations (11)–(14) and Figure 4 and Figure 5; multimodal [46], represented in Equations (15)–(19) and Figure 6 and Figure 7; and fixed-dimension multimodal [45,46], Equations (20)–(25) and Figure 8 and Figure 9. Regarding the features of these functions, $f_1$ to $f_9$ are high-dimensional problems, while $f_{10}$ to $f_{15}$ comprise low-dimensional problems. Additionally, all test functions reflect different degrees of complexity: $f_1$ to $f_4$ are convex; $f_7$, $f_{11}$, and $f_{13}$ are non-convex; and $f_5$, $f_6$, and $f_8$ are non-linear functions. Regarding the justification behind the selection of this set of functions, $f_1$ to $f_4$ have only one global optimum and no local optima, which makes this first group highly appropriate to study the convergence rate and intensification ability of our proposed approach. Additionally, $f_5$ to $f_{15}$ concern large search spaces and multiple local solutions besides the global optimum. Thus, they are useful for evaluating how efficiently the approach avoids local optima, together with its diversification abilities. Additionally, it is well known that the functions of the second group, $f_5$ to $f_9$, correspond to problems that are very difficult for optimisation algorithms to solve, with difficulty increasing exponentially in the number of dimensions [47]. Finally, all these functions are minimisation problems.
Unimodal functions:
Sphere Function
$f_1(x) = f(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{n} x_i^2$
Schwefel’s Function No. 2.22
$f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$
Schwefel’s Function No. 1.2
$f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$
Generalised Rosenbrock’s Function
$f_4(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (1 - x_i)^2 \right]$
Multimodal functions:
Generalised Schwefel’s Function No. 2.26
$f_5(x) = -\sum_{i=1}^{n} x_i \sin\left(\sqrt{|x_i|}\right)$
Generalised Rastrigin’s Function
$f_6(x) = 10n + \sum_{i=1}^{n} \left( x_i^2 - 10 \cos(2\pi x_i) \right)$
Ackley’s Function
$f_7(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i) \right) + 20 + \exp(1)$
Generalised Griewank’s Function
$f_8(x) = 1 + \sum_{i=1}^{n} \frac{x_i^2}{4000} - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right)$
Generalised Penalised Function
$f_9(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$
where
$u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m & x_i > a \\ 0 & -a \le x_i \le a \\ k (-x_i - a)^m & x_i < -a \end{cases}$
and
$y_i = 1 + \frac{1}{4} (x_i + 1)$
Multimodal functions with fixed dimensions:
Shekel’s Foxholes Function
$f_{10}(x) = \left[ \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right]^{-1}$
where:
$a_{ij} = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}$
Six-hump Camel Back Function
$f_{11}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$
Branin’s Function
$f_{12}(x) = \left( x_2 - \frac{5.1 x_1^2}{4 \pi^2} + \frac{5 x_1}{\pi} - 6 \right)^2 + 10 \left( 1 - \frac{1}{8 \pi} \right) \cos(x_1) + 10$
Goldstein-Price Function
$f_{13}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 \left( 19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2 \right) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 \left( 18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2 \right) \right]$
Hartman’s Function No.1
$f_{14}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$
where the values of a, c, and p are tabulated in Table 4
Hartman’s Function No.2
$f_{15}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$
where the values of a, c and p are tabulated in Table 5.
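For reference, a few of these benchmark functions can be implemented directly from their standard definitions. This is a minimal sketch for illustration, not the experimental code used in this work.

```python
import math

def sphere(x):
    """f1: Sphere function; global minimum 0 at the origin."""
    return sum(v * v for v in x)

def rastrigin(x):
    """f6: Generalised Rastrigin's function, 10n + sum(x_i^2 - 10cos(2*pi*x_i));
    global minimum 0 at the origin, with many local optima elsewhere."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def ackley(x):
    """f7: Ackley's function; global minimum 0 at the origin."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e
```

All three are minimisation problems, consistent with the rest of the benchmark set, so an optimiser's quality can be read directly from how close the returned value is to zero.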

4.2. Algorithms Used for Comparison and Experimental Setup

In order to compare the results obtained, we designed this step in three phases. In the first phase, we compare against state-of-the-art optimisation methods reported in [18,19,48], such as particle swarm optimisation (PSO) [15], the gravitational search algorithm (GSA) [16], differential evolution (DE) [17], the whale optimisation algorithm (WOA) [18], vapour–liquid equilibrium (VLE) [19], and a hybrid between the Nelder–Mead algorithm and the dragonfly algorithm (INMDA) [20]. In the second phase, we compare against SPO [21], which is a heuristic that combines classical and modern statistical techniques to improve the performance of search algorithms. Finally, we take a closer look at the performance achieved by the traditional SHO, a neural network (NN) [22], and the sine cosine algorithm (SCA) [23] solving the benchmark functions, in comparison with our proposed approach. Regarding the implementation of the traditional SHO, the number of search agents was set to 30, the control parameter h takes values in the range [5, 0], the constant M lies in the range [0.5, 1], and the number of generations was 10,000. Regarding the neural network, the design was defined as follows: for each benchmark function, a total of one million randomly generated solutions were created. We designed a multi-layer perceptron whose main components comprise an input node, 7 hidden layers of 50 nodes each, and an output layer whose size equals the number of dimensions of each function. The training was carried out employing the gradient descent method [49] over 1000 iterations for each randomly generated solution. The main objective behind the proposed NN is the prediction of better function values at run-time. The implementation was written in Python 3.7 and run on a 64-bit Windows 10 environment with an Intel Core i5 processor at 2.40 GHz and 8 GB of memory. Finally, regarding the experimentation phase, each algorithm was run 30 independent times on each benchmark function.

4.3. Performance Comparison

In this subsection, we illustrate and demonstrate the performance of our proposed L B 2 on the benchmark functions described in Section 4.1.

4.3.1. First Experimentation Phase

First, all results obtained by SHO and L B 2 were rounded to four decimals. The results published for PSO, GSA, DE, WOA, and VLE were rounded in Table 6, Table 7 and Table 8 to four decimals, using scientific notation, for presentation purposes only; all computations were carried out with the decimals reported by the respective authors. Regarding the performance on unimodal functions, Table 6 illustrates the results and comparison on f 1 to f 4 ; the average (Avg) and standard deviation (StdDev) are presented and compared, and we highlight in bold the best values reached. It is well known that unimodal functions help to measure the exploitation capabilities of an approach, and in this regard L B 2 performed remarkably well: it was the second most efficient algorithm on this set of benchmark functions, just behind INMDA. Additionally, the small StdDev values show that it is a very consistent algorithm. Concerning the multimodal functions and the multimodal functions with fixed dimensions, both sets help to evaluate the exploration capabilities of an algorithm. Table 7 and Table 8 illustrate the results and comparison on functions f 5 to f 9 and f 10 to f 15 , respectively; again, the average (Avg) and standard deviation (StdDev) are presented, and we highlight in bold the best values reached. L B 2 once more attained very good results with small Avg and StdDev values, proving to be a competitive approach for continuous problems. Given its relatively good performance across these three sets of benchmark test functions, we can conclude that this first version of L B 2 has the potential to be a competitive approach.

4.3.2. Second Experimentation Phase

In this subsection, we compare against SPO, which has proved to be a competitive option in the field of parameter tuning. Specifically, we compare the results obtained by our L B 2 against the work reported in [21], where the authors implemented a PSO and a PSO + SPO approach and solved 4 benchmark test functions. Table 9 illustrates the best values reached: the first column, named Problem, lists the 4 functions solved (2 unimodal and 2 multimodal), while columns 2, 3, and 4 report the best values achieved by PSO, PSO + SPO (both implemented by the authors of [21]), and our proposed L B 2 . The superiority of our approach is clear, as it reaches all 4 optimum values. Nevertheless, as future work aimed at improving the hybrid methodology proposed here, we intend to implement and compare the SPO and F-Race approaches in order to provide a more detailed and larger comparison among multiple optimisation and tuning tools.

4.3.3. Third Experimentation Phase

In this subsection, we present a comparison of L B 2 against the traditional SHO and a neural network approach, both implemented by us. Table 10, Table 11 and Table 12 illustrate a summary of the values achieved in the experimentation phase. Column F corresponds to the test function solved, and Opt depicts the global optimum of the given function. Columns Best, Worst, Avg, StdDev, and Avg Time report the best value reached, the worst value reached, the mean, the standard deviation, and the average solving time over the 30 executions.
Regarding Table 10, L B 2 achieved 7 optimum values ( f 1 – f 3 , f 5 , f 6 , f 8 , and f 10 ), compared with the 5 optimum values achieved by SHO ( f 1 – f 3 , f 6 , f 8 ). Additionally, regarding non-optimum values, our proposed approach is superior in 5 functions ( f 4 , f 5 , f 9 , f 13 , and f 15 ), whereas SHO is superior in f 11 , f 12 , and f 14 . Regarding Table 11, L B 2 achieves better values than the implemented NN approach, although in functions f 13 and f 14 the performance of our proposal falls behind considerably. Regarding Table 12, only small differences can be observed; L B 2 reached 1 more optimum value. The biggest difference concerns the robustness of the overall performance, illustrated in columns Avg and StdDev, which can be observed in functions f 4 , f 5 , and f 10 .
Regarding the average times reported in the three tables, significant differences can be observed between NN, SCA, and L B 2 . On the hardest test functions, the multimodal and multimodal fixed-dimension sets, NN falls significantly behind SCA and L B 2 in solving time, which is precisely the strong point of these types of algorithms. Additionally, we highlight the main drawback of an NN approach: the costly process of training and tuning the model. Nevertheless, the objective behind this test, presented in Table 11, concerns the future incorporation of new learning methods into L B 2 .
Regarding the room for improvement observed in the performance, the values achieved in column StdDev for f 5 , f 11 , f 14 , and f 15 can be interpreted as the approach becoming trapped in local optima. The discussion points to two possible issues: the value employed as β and the scheme values for the diversification process. Firstly, the proposed threshold β is static throughout the search; as a consequence, the approach cannot receive balanced and timely feedback from the learning model, so that when a local optimum is detected a proper answer can be delivered at run-time. To tackle this issue, we will propose the incorporation of a learning-based component that manages a dynamic β value at run-time. On the other hand, regarding the scheme values for diversification, the employment of static values throughout the search can be a critical issue. The amount and frequency with which diversification is carried out will be our next focus, as a balanced exploration of the search space needs to be performed.
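As a purely illustrative sketch of the dynamic β discussed above (future work, not the implemented method), one simple rule would relax the threshold after several generations without improvement, so diversification fires more often, and drift it back up otherwise. All names and constants below are our own assumptions.

```python
def update_beta(beta, stagnation_count, patience=5, step=0.05,
                beta_min=0.1, beta_max=0.9):
    """Hypothetical run-time adjustment of the threshold beta: after
    `patience` generations without improvement of the best fitness, lower
    beta to trigger diversification more often; otherwise move it back
    towards its upper bound to favour intensification. Values are clamped
    to [beta_min, beta_max]."""
    if stagnation_count >= patience:
        return max(beta_min, beta - step)
    return min(beta_max, beta + step)
```

The specific patience/step constants are placeholders; in practice they would themselves be candidates for the learning-based tuning the paper proposes.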
In order to further analyse and demonstrate the improvement in performance brought by the hybridisation of optimisation tools, a statistical analysis is carried out. To this end, we compare convergence and analyse the 30 executions performed for each function through the Kolmogorov–Smirnov–Lilliefors (Lilliefors 1967) [50] and Wilcoxon–Mann–Whitney rank-sum (Mann and Whitney 1947) [51] statistical tests. Both tests were conducted using the RStudio software.
The process is as follows: the samples were first tested for normality using the Kolmogorov–Smirnov–Lilliefors test; since normality could not be assumed, the non-parametric Mann–Whitney test was subsequently used to compare the quality of the SHO and L B 2 results. We consider the following two hypotheses:
H 0 : μ SHO = μ LB2
H 1 : μ LB2 ≠ μ SHO
where μ SHO and μ LB2 are the medians of the fitness values achieved by SHO and our proposed L B 2 , respectively. The significance level is again set to 0.05; thus, p-values smaller than 0.05 indicate that H 0 cannot be assumed. In this regard, Table 13 illustrates the comparison between the two implementations, and we highlight in bold the values for which there is a statistically significant winner.
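The decision rule can be sketched in code. The analysis reported in the paper was performed in RStudio; the pure-Python Mann–Whitney (Wilcoxon rank-sum) test below, using the normal approximation without tie correction, is only an illustration of how the p-values in Table 13 are compared against the 0.05 level.

```python
import math

def mann_whitney_u(sample_a, sample_b):
    """Two-sided Mann-Whitney (Wilcoxon rank-sum) test via the normal
    approximation, without tie correction -- an illustrative sketch of
    the decision rule, not the RStudio routine used in the paper."""
    combined = sorted((v, i) for i, v in
                      enumerate(list(sample_a) + list(sample_b)))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):          # assign average ranks to tied values
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j + 1) / 2.0  # ranks are 1-based
        for k in range(i, j):
            ranks[combined[k][1]] = avg_rank
        i = j
    n_a, n_b = len(sample_a), len(sample_b)
    r_a = sum(ranks[:n_a])                       # rank sum of sample_a
    u = r_a - n_a * (n_a + 1) / 2.0              # U statistic
    mu = n_a * n_b / 2.0
    sigma = math.sqrt(n_a * n_b * (n_a + n_b + 1) / 12.0)
    z = (u - mu) / sigma
    # two-sided p-value from the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# H0 (equal medians) is rejected when the returned p-value is below 0.05.
```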

5. Conclusions and Future Work

In this paper, a novel learning-based framework was proposed. Well-known methods and techniques were employed to design a competitive hybrid approach capable of tackling optimisation problems. The proposed framework follows a population-based strategy, in which multiple agents explore, learn, and evolve in the search space. Two main components were employed: a population-based algorithm, the spotted hyena optimiser, and a learning model based on a statistical modelling method.
Regarding the results achieved on the benchmark functions, L B 2 proved to be a competitive method and a promising alternative for tackling optimisation problems. However, some issues remain and improvements can be proposed. Firstly, L B 2 needs to be tested on benchmark functions of higher difficulty; in this regard, we are considering more complex functions with higher dimensionality, such as the CEC 2021 composite functions. Additionally, hard optimisation problems such as the set covering problem (SCP) and the manufacturing cell design problem (MCDP) are being considered as future testing objectives. On the other hand, the results illustrated in the third experimentation phase can be interpreted as L B 2 being trapped in local optima for certain functions; improvements can be carried out to tackle this issue, and new learning-based components will be proposed. The main objective is to dynamically adjust parameters, such as the threshold β and the scheme for diversification and intensification, so as to keep the feedback of dynamic data and knowledge generated between the population and the learning model balanced at run-time. Finally, new learning methods will be implemented, with a focus on the viability, certainty, and confidence of the generated knowledge. Thus, a more complex component will be designed to measure the profit behind the knowledge, enabling better decision making throughout the search.

Author Contributions

Formal analysis, E.V., J.P., and R.S.; investigation, E.V., J.P., R.S., B.C., and C.C.; resources, R.S.; software, E.V. and J.P.; validation, B.C. and C.C.; writing–original draft, E.V., R.S., and B.C.; writing–review and editing, E.V. and R.S. All authors have read and agreed to the published version of the manuscript.

Funding

Ricardo Soto is supported by Grant CONICYT/FONDECYT/REGULAR/1190129. Broderick Crawford is supported by Grant ANID/FONDECYT/REGULAR/1210810, and Emanuel Vega is supported by National Agency for Research and Development ANID/Scholarship Program/DOCTORADO NACIONAL/2020-21202527.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analysed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Talbi, E.G. Combining metaheuristics with mathematical programming, constraint programming and machine learning. Ann. Oper. Res. 2016, 240, 171–215. [Google Scholar] [CrossRef]
  2. Gendreau, M.; Potvin, J.Y. Handbook of Metaheuristics, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  3. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y. On the exploration and exploitation in popular swarm-based metaheuristic algorithms. Neural Comput. Appl. 2019, 31, 7665–7683. [Google Scholar] [CrossRef]
  4. Chu, X.; Wu, T.; Weir, J.D.; Shi, Y.; Niu, B.; Li, L. Learning–interaction–diversification framework for swarm intelligence optimizers: A unified perspective. Neural Comput. Appl. 2020, 32, 1789–1809. [Google Scholar] [CrossRef]
  5. Boussaïd, I.; Lepagnot, J.; Siarry, P. A survey on optimization metaheuristics. Inf. Sci. 2013, 237, 82–117. [Google Scholar] [CrossRef]
  6. Tapia, D.; Crawford, B.; Soto, R.; Cisternas-Caneo, F.; Lemus-Romani, J.; Castillo, M.; García, J.; Palma, W.; Paredes, F.; Misra, S. A Q-Learning Hyperheuristic Binarization Framework to Balance Exploration and Exploitation. In International Conference on Applied Informatics; Springer: Cham, Switzerland, 2020; pp. 14–28. [Google Scholar]
  7. Parsons, S. Introduction to Machine Learning by Ethem Alpaydin. In The Knowledge Engineering Review; MIT Press: Cambridge, MA, USA, 2005; Volume 20, pp. 432–433. [Google Scholar]
  8. Song, H.; Triguero, I.; Özcan, E. A review on the self and dual interactions between machine learning and optimisation. Prog. Artif. Intell. 2019, 8, 143–165. [Google Scholar] [CrossRef] [Green Version]
  9. Barber, D. Bayesian Reasoning and Machine Learning; Cambridge University Press: New York, NY, USA, 2012. [Google Scholar]
  10. Lantz, B. Machine Learning with R; Packt Publishing: Birmingham, UK, 2013. [Google Scholar]
  11. Dietterich, T. Machine Learning. ACM Comput. Surv. 1996, 28, 3. [Google Scholar] [CrossRef]
  12. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70. [Google Scholar] [CrossRef]
  13. Soto, R.; Crawford, B.; Vega, E.; Gómez, A.; Gómez-Pulido, J.A. Solving the Set Covering Problem Using Spotted Hyena Optimizer and Autonomous Search. Advances and Trends in Artificial Intelligence. From Theory to Practice. In IEA/AIE 2019; Springer: Cham, Switzerland, 2019; Volume 11606. [Google Scholar]
  14. Luo, Q.; Li, J.; Zhou, Y.; Liao, L. Using spotted hyena optimizer for training feedforward neural networks. Cogn. Syst. Res. 2021, 65, 1–16. [Google Scholar] [CrossRef]
  15. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  16. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  17. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  18. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  19. Cortés-Toro, E.M.; Crawford, B.; Gómez-Pulido, J.A.; Soto, R.; Lanza-Gutiérrez, J.M. A New Metaheuristic Inspired by the Vapour-Liquid Equilibrium for Continuous Optimization. Appl. Sci. 2018, 8, 2080. [Google Scholar] [CrossRef] [Green Version]
  20. Xu, J.; Yan, F. Hybrid Nelder–Mead algorithm and dragonfly algorithm for function optimization and the training of a multilayer perceptron. Arab. J. Sci. Eng. 2019, 44, 3473–3487. [Google Scholar] [CrossRef]
  21. Bartz-Beielstein, T.; Lasarczyk, C.W.G.; Preuss, M. Sequential parameter optimization. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; Volume 1, pp. 773–780. [Google Scholar]
  22. Wang, R.L.; Tang, Z.; Cao, Q.P. A learning method in Hopfield neural network for combinatorial optimization problem. Neurocomputing 2002, 48, 1021–1024. [Google Scholar] [CrossRef]
  23. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  24. Talbi, E.G. Combining metaheuristics with mathematical programming, constraint programming and machine learning. 4OR Q. J. Belg. Fr. Ital. Oper. Res. Soc. 2013, 11, 101–150. [Google Scholar] [CrossRef]
  25. Talbi, E.G. Machine Learning into Metaheuristics: A Survey and Taxonomy of Data-Driven Metaheuristics, Working Paper or Preprint. June 2020.
  26. Escalante, H.J.; Ponce-López, V.; Escalera, S.; Baró, X.; Morales-Reyes, A.; Martínez-Carranza, J. Evolving weighting schemes for the bag of visual words. Neural Comput. Appl. 2016, 28, 925–939. [Google Scholar] [CrossRef]
  27. Stein, G.; Chen, B.; Wu, A.S.; Hua, K.A. Decision tree classifier for network intrusion detection with GA-based feature selection. In Proceedings of the 43rd Annual Southeast Regional Conference, Kennesaw, GA, USA, 18 March 2005; Volume 2, pp. 136–141. [Google Scholar]
  28. Sörensen, K.; Janssens, G.K. Data mining with genetic algorithms on binary trees. Eur. J. Oper. Res. 2003, 151, 253–264. [Google Scholar] [CrossRef]
  29. Fernández Caballero, J.C.; Martinez, F.J.; Hervas, C.; Gutierrez, P.A. Sensitivity versus accuracy in multiclass problems using memetic pareto evolutionary neural networks. IEEE Trans. Neural Netw. 2010, 21, 750–770. [Google Scholar] [CrossRef]
  30. Huang, C.L.; Wang, C.J. A GA-based feature selection and parameters optimization for support vector machines. Expert Syst. Appl. 2006, 31, 231–240. [Google Scholar] [CrossRef]
  31. Glover, F.; Hao, J.K. Diversification-based learning in computing and optimization. J. Heuristics 2019, 25, 521–537. [Google Scholar] [CrossRef] [Green Version]
  32. Máximo, V.R.; Nascimento, M.C. Intensification, learning and diversification in a hybrid metaheuristic: An efficient unification. J. Heuristics 2019, 25, 539–564. [Google Scholar] [CrossRef]
  33. Lessmann, S.; Caserta, M.; Arango, I.M. Tuning metaheuristics: A data mining based approach for particle swarm optimization. Expert Syst. Appl. 2011, 38, 12826–12838. [Google Scholar] [CrossRef]
  34. Zennaki, M.; Ech-Cherif, A. A new machine learning based approach for tuning metaheuristics for the solution of hard combinatorial optimization problems. J. Appl. Sci. 2010, 10, 1991–2000. [Google Scholar] [CrossRef]
  35. Porumbel, D.C.; Hao, J.K.; Kuntz, P. A search space “cartography” for guiding graph coloring heuristics. Comput. Oper. Res. 2010, 37, 769–778. [Google Scholar] [CrossRef] [Green Version]
  36. Ribeiro, M.H.; Plastino, A.; Martins, S.L. Hybridization of GRASP metaheuristic with data mining techniques. J. Math. Model. Algorithms 2006, 5, 23–41. [Google Scholar] [CrossRef]
  37. Dalboni, F.L.; Ochi, L.S.; Drummond, L.M.A. On improving evolutionary algorithms by using data mining for the oil collector vehicle routing problem. In Proceedings of the International Network Optimization Conference, Rio de Janeiro, Brazil, 22 April 2003; pp. 182–188. [Google Scholar]
  38. Amor, H.B.; Rettinger, A. Intelligent exploration for genetic algorithms: Using self-organizing maps in evolutionary computation. In Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, Washington DC, USA, 25–29 June 2005; pp. 1531–1538. [Google Scholar]
  39. Yuen, S.Y.; Chow, C.K. A genetic algorithm that adaptively mutates and never revisits. IEEE Trans. Evol. Comput. 2008, 13, 454–472. [Google Scholar] [CrossRef]
  40. Dhaenens, C.; Jourdan, L. Metaheuristics for Big Data; Wiley: Hoboken, NJ, USA, 2016; ISBN 9781119347606. [Google Scholar]
  41. Yang, L.; Shami, A. On Hyperparameter Optimization of Machine Learning Algorithms: Theory and Practice. Neurocomputing 2020, 415, 295–316. [Google Scholar] [CrossRef]
  42. Caruana, R.; Niculescu-Mizil, A. An empirical comparison of supervised learning algorithms. ACM Int. Conf. Proc. Ser. 2006, 148, 161–168. [Google Scholar]
  43. Schneider, A.; Hommel, G.; Blettner, M. Linear Regression Analysis. Dtsch. Ärzteblatt Int. 2010, 107, 776–782. [Google Scholar]
  44. Almeida, A.M.D.; Castel-Branco, M.M.; Falcao, A.C. Linear regression for calibration lines revisited: Weighting schemes for bioanalytical methods. J. Chromatogr. B 2002, 774, 215–222. [Google Scholar] [CrossRef]
  45. Digalakis, J.; Margaritis, K. On benchmarking functions for genetic algorithms. Int. J. Comput. Math 2001, 77, 481–506. [Google Scholar] [CrossRef]
  46. Yang, X. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  47. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar]
  48. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  49. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  50. Lilliefors, H. On the kolmogorov–smirnov test for normality with mean and variance unknown. J. Am. Stat. Assoc. 1967, 62, 399–402. [Google Scholar] [CrossRef]
  51. Mann, H.; Whitney, D. On a test of whether one of two random variables is stochastically larger than the other. Ann. Math. Stat. 1947, 18, 50–60. [Google Scholar] [CrossRef]
Figure 1. Graphic example of the search space illustrating green dots possible best solutions, yellow dots less possible best solutions, and red dots bad solutions.
Figure 2. Graphic example of the search process.
Figure 3. Graphic example of the modification of probabilities by the model.
Figure 4. Unimodal benchmark mathematical functions f 1 and f 2 in a 3D view.
Figure 5. Unimodal benchmark mathematical functions f 3 and f 4 in a 3D view.
Figure 6. Multimodal benchmark mathematical functions f 5 and f 6 in a 3D view.
Figure 7. Multimodal benchmark mathematical functions f 8 and f 9 in a 3D view.
Figure 8. Multimodal functions with fixed dimensions f 10 and f 11 in a 3D view.
Figure 9. Multimodal functions with fixed dimensions f 12 and f 13 in a 3D view.
Table 1. Example of the standard work to be completed by the approach.
Scheme | Amount of Intensification | Amount of Diversification
Soft   | 1 | 1
Medium | 2 | 2
Hard   | 3 | 3
Table 2. Organisation example of the pool of movement operators from metaheuristics.
Pool of Operators
Intensification          | Diversification
Exploitation movement 1  | Exploration movement 1
Exploitation movement 2  | Exploration movement 2
...                      | ...
Table 3. Optimum values reported for the benchmark functions in the literature, with their corresponding solutions and search subsets.
Function | Search Subsets | Opt | Sol
f 1 (x)  | [−100, 100]^30 | 0 | [0]^30
f 2 (x)  | [−10, 10]^30 | 0 | [0]^30
f 3 (x)  | [−100, 100]^30 | 0 | [0]^30
f 4 (x)  | [−30, 30]^30 | 0 | [1]^30
f 5 (x)  | [−500, 500]^30 | −12,569.487 | [420.9687]^30
f 6 (x)  | [−5.12, 5.12]^30 | 0 | [0]^30
f 7 (x)  | [−32, 32]^30 | 0 | [0]^30
f 8 (x)  | [−600, 600]^30 | 0 | [0]^30
f 9 (x)  | [−50, 50]^30 | 0 | [1]^30
f 10 (x) | [−65.536, 65.536]^2 | 1 | [−32]^2
f 11 (x) | [−5, 5]^2 | −1.0316285 | (0.08983, −0.7126) and (−0.08983, 0.7126)
f 12 (x) | [−5, 10] for x 1 and [0, 15] for x 2 | 0.397887 | (−3.142, 12.275), (3.142, 2.275), and (9.425, 2.425)
f 13 (x) | [−2, 2]^2 | 3 | (0, −1)
f 14 (x) | [0, 1]^3 | −3.86 | (0.114, 0.556, 0.852)
f 15 (x) | [0, 1]^6 | −3.32 | (0.201, 0.150, 0.477, 0.275, 0.312, 0.657)
Table 4. Values of a_ij, c_i, and p_ij for function f 14 (x); n = 3 and j = 1, 2, 3.
i | a_ij          | c_i | p_ij
1 | 3    10  30   | 1   | 0.3689  0.1170  0.2673
2 | 0.1  10  35   | 1.2 | 0.4699  0.4387  0.7470
3 | 3    10  30   | 3   | 0.1091  0.8732  0.5547
4 | 0.1  10  30   | 3.2 | 0.03815 0.5743  0.8828
Table 5. Values of a_ij, c_i, and p_ij for function f 15 (x); n = 6 and j = 1, 2, ..., 6.
i | a_ij                          | c_i | p_ij
1 | 10   3    17   3.5  1.7  8    | 1   | 0.131 0.169 0.556 0.012 0.828 0.588
2 | 0.05 10   17   0.1  8    14   | 1.2 | 0.232 0.413 0.830 0.373 0.100 0.999
3 | 3    3.5  1.7  10   17   8    | 3   | 0.234 0.141 0.352 0.288 0.304 0.665
4 | 17   8    0.05 10   0.1  14   | 3.2 | 0.404 0.882 0.873 0.574 0.109 0.038
Table 6. Results comparison in unimodal benchmark functions.
F | SHO Avg | SHO StdDev | LB2 Avg | LB2 StdDev | WOA Avg | WOA StdDev | DE Avg | DE StdDev | GSA Avg | GSA StdDev | PSO Avg | PSO StdDev | VLE Avg | VLE StdDev | INMDA Avg | INMDA StdDev
f 1 | 0.0006 | 0.0005 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 8.2000 × 10^−14 | 5.9000 × 10^−14 | 2.5300 × 10^−16 | 0.0000 | 1.3600 × 10^−4 | 2.0200 × 10^−4 | 4.4989 × 10^−7 | 1.413 × 10^−6 | 0.0000 | 0.0000
f 2 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 1.5000 × 10^−9 | 9.9000 × 10^−10 | 5.5655 × 10^−2 | 0.1941 | 4.2144 × 10^−2 | 4.5421 × 10^−2 | 3.0840 × 10^−6 | 6.0498 × 10^−6 | 0.0000 | 0.0000
f 3 | 0.0007 | 0.0005 | 0.0000 | 0.0000 | 5.3900 × 10^−7 | 2.9300 × 10^−6 | 6.8000 × 10^−11 | 7.4000 × 10^−11 | 8.9353 × 10^2 | 3.1896 × 10^2 | 70.126 | 22.119 | 5.2020 | 0.7986 | 0.0000 | 0.0000
f 4 | 2.7511 | 0.0502 | 6.7549 × 10^−7 | 5.4204 × 10^−7 | 27.866 | 0.7636 | 0.0000 | 0.0000 | 67.543 | 62.225 | 96.718 | 60.116 | 79.199 | 37.400 | 0.0000 | 0.0000
Table 7. Results comparison in multimodal benchmark functions.
F | SHO Avg | SHO StdDev | LB2 Avg | LB2 StdDev | WOA Avg | WOA StdDev | DE Avg | DE StdDev | GSA Avg | GSA StdDev | PSO Avg | PSO StdDev | VLE Avg | VLE StdDev | INMDA Avg | INMDA StdDev
f 5 | −1.0867 × 10^4 | 0.5059 | −1.2569 × 10^4 | 0.0014 | −5.0808 × 10^3 | 6.9580 × 10^2 | −1.1080 × 10^4 | 5.7470 × 10^2 | −2.8211 × 10^3 | 4.9304 × 10^2 | −4.8413 × 10^3 | 1.1528 × 10^3 | −1.2566 × 10^4 | 68.705 | −2245.1500 | 2.8400
f 6 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 69.200 | 38.800 | 25.968 | 7.4701 | 46.704 | 11.629 | 34.5830 | 17.8860 | 0.0000 | 0.0000
f 7 | 4.4408 × 10^−16 | 0.0000 | 4.4409 × 10^−16 | 0.0000 | 7.4043 | 9.8976 | 9.7000 × 10^−8 | 4.2000 × 10^−8 | 6.2087 × 10^−2 | 0.23628 | 0.27602 | 0.50901 | 3.1704 | 3.9211 | 0.0000 | 1.6200 × 10^−16
f 8 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 2.8900 × 10^−4 | 1.5860 × 10^−3 | 0.0000 | 0.0000 | 27.702 | 5.0403 | 9.2150 × 10^−3 | 7.7240 × 10^−3 | 0.5074 | 0.5041 | 0.0000 | 0.0000
f 9 | 1.906 | 0.0865 | 1.8286 | 1.5985 × 10^−9 | 0.3397 | 0.2149 | 7.9000 × 10^−15 | 8.0000 × 10^−15 | 1.7996 | 0.95114 | 6.9170 × 10^−3 | 2.6301 × 10^−2 | 0.2369 | 0.2877 | 0.0000 | 0.0000
Table 8. Results comparison in multimodal benchmark functions with fixed-dimension.
F | SHO Avg | SHO StdDev | LB2 Avg | LB2 StdDev | WOA Avg | WOA StdDev | DE Avg | DE StdDev | GSA Avg | GSA StdDev | PSO Avg | PSO StdDev | VLE Avg | VLE StdDev | INMDA Avg | INMDA StdDev
f 10 | 2.1326 × 10^−8 | 5.0161 × 10^−10 | 1.0000 | 0.0000 | 2.1120 | 2.4986 | 0.99800 | 3.3000 × 10^−16 | 5.8598 | 3.8313 | 3.6272 | 2.5608 | 0.99800 | 2.5294 × 10^−7 | N/A | N/A
f 11 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | −1.0316 | 4.2000 × 10^−7 | −1.0316 | 3.1000 × 10^−13 | −1.0316 | 4.8800 × 10^−16 | −1.0316 | 6.2500 × 10^−16 | −1.0315 | 1.8408 × 10^−4 | N/A | N/A
f 12 | 0.8718 | 0.0502 | 1.5436 | 0.4223 | 0.39791 | 2.7000 × 10^−5 | 0.39789 | 9.9000 × 10^−9 | 0.39789 | 0.0000 | 0.39789 | 0.0000 | 0.39815 | 4.5697 × 10^−4 | N/A | N/A
f 13 | 36.0716 | 4.1607 | 32.6845 | 1.4854 × 10^−8 | 3.0000 | 4.2200 × 10^−15 | 3.0000 | 2.0000 × 10^−15 | 3.0000 | 4.1700 × 10^−15 | 3.0000 | 1.3300 × 10^−15 | 3.0097 | 1.6256 × 10^−2 | N/A | N/A
f 14 | −2.1211 | 0.1284 | −2.0081 | 5.0800 × 10^−10 | −3.8562 | 2.7060 × 10^−3 | N/A | N/A | −3.8628 | 2.2900 × 10^−15 | −3.8628 | 2.5800 × 10^−15 | −3.8628 | 6.6880 × 10^−5 | N/A | N/A
f 15 | −0.8515 | 0.3541 | −1.5870 | 0.5016 | −2.9811 | 0.37665 | N/A | N/A | −3.3178 | 2.3081 × 10^−2 | −3.2663 | 6.0516 × 10^−2 | −3.3179 | 2.1311 × 10^−2 | N/A | N/A
Table 9. Comparison results of SPO against L B 2 proposed.
Problem    | PSO            | PSO + SPO       | LB2
Sphere     | 2.82 × 10^−9   | 1.66 × 10^−21   | 0
Rosenbrock | 148.84         | 4.20            | 0
Rastrigin  | 10.43          | 0.98            | 0
Griewangk  | 0.12           | 0.07            | 0
Table 10. Results comparison of SHO vs. L B 2 .
F | Opt | SHO Best | SHO Worst | SHO Avg | SHO StdDev | SHO Avg Time (s) | LB2 Best | LB2 Worst | LB2 Avg | LB2 StdDev | LB2 Avg Time (s)
f 1 | 0 | 0 | 0.0021 | 0.0006 | 0.0005 | 51.6432 | 0 | 0 | 0 | 0 | 50.2377
f 2 | 0 | 0 | 0 | 0 | 0 | 78.9275 | 0 | 0 | 0 | 0 | 80.7524
f 3 | 0 | 0 | 0.0009 | 0.0007 | 0.0005 | 95.5684 | 0 | 0 | 0 | 0 | 96.3627
f 4 | 0 | 2.7091 | 2.9351 | 2.7511 | 0.0502 | 75.8810 | 1.59197 × 10^−7 | 1.2262 × 10^−6 | 6.7549 × 10^−7 | 5.4204 × 10^−7 | 71.0024
f 5 | −12,569.487 | −1.1318 × 10^4 | −0.9653 × 10^4 | −1.0867 × 10^4 | 0.5059 | 121.3511 | −1.2570 × 10^4 | −1.2567 × 10^4 | −1.2569 × 10^4 | 0.0014 | 110.3354
f 6 | 0 | 0 | 0 | 0 | 0 | 172.9312 | 0 | 0 | 0 | 0 | 60.6482
f 7 | 0 | 4.4408 × 10^−16 | 4.4408 × 10^−16 | 4.4408 × 10^−16 | 0 | 256.8700 | 4.4408 × 10^−16 | 4.4409 × 10^−16 | 4.4409 × 10^−16 | 0 | 24.9122
f 8 | 0 | 0 | 0 | 0 | 0 | 198.8546 | 0 | 0 | 0 | 0 | 21.7758
f 9 | 0 | 1.8290 | 2.5642 | 1.906 | 0.0865 | 256.8707 | 1.8285 | 1.8286 | 1.8286 | 1.5985 × 10^−9 | 24.9172
f 10 | 1 | 2.1745 × 10^−8 | 2.0745 × 10^−8 | 2.1326 × 10^−8 | 5.0161 × 10^−10 | 130.3552 | 1 | 1 | 1 | 0 | 17.5661
f 11 | −1.0316 | 0 | 0 | 0 | 0 | 29.1582 | 0 | 0 | 0 | 0 | 7.5244
f 12 | 0.3979 | 0.8298 | 0.9523 | 0.8718 | 0.0502 | 22.5778 | 1.1905 | 2.0325 | 1.5436 | 0.4223 | 4.5528
f 13 | 3 | 32.6845 | 44.4562 | 36.0716 | 4.1607 | 35.7789 | 32.6845 | 32.6845 | 32.6845 | 1.4854 × 10^−8 | 3.6846
f 14 | −3.86 | −2.4301 | −2.0081 | −2.211 | 0.1284 | 53.2235 | −2.0081 | −2.0080 | −2.0081 | 5.0800 × 10^−10 | 7.1120
f 15 | −3.32 | −1.1676 | −0.4676 | −0.8515 | 0.3541 | 80.4755 | −2.1676 | −2.1676 | −2.1676 | 0 | 8.1145
Table 11. Results comparison of NN vs. L B 2 .
F | Opt | NN Best | NN Worst | NN Avg | NN StdDev | NN Avg Time (s) | LB2 Best | LB2 Worst | LB2 Avg | LB2 StdDev | LB2 Avg Time (s)
f 1 | 0 | 0.0639 | 0.2223 | 0.1068 | 0.0435 | 347.4073 | 0 | 0 | 0 | 0 | 50.2377
f 2 | 0 | 1.2426 | 5.8827 | 4.4004 | 0.8284 | 375.2112 | 0 | 0 | 0 | 0 | 80.7524
f 3 | 0 | 0.0001 | 0.0379 | 0.0103 | 0.0108 | 377.0420 | 0 | 0 | 0 | 0 | 96.3627
f 4 | 0 | 211.4253 | 3037.6363 | 1376.6472 | 1041.5627 | 375.6543 | 1.59197 × 10^−7 | 1.2262 × 10^−6 | 6.7549 × 10^−7 | 5.4204 × 10^−7 | 71.0024
f 5 | −12,569.487 | −1.2557 × 10^4 | −1.7363 × 10^4 | −1.2057 × 10^4 | 2056.9973 | 357.8577 | −1.2570 × 10^4 | −1.2567 × 10^4 | −1.2569 × 10^4 | 0.0014 | 110.3354
f 6 | 0 | 1.8672 | 7.8028 | 4.2664 | 1.5939 | 357.7703 | 0 | 0 | 0 | 0 | 60.6482
f 7 | 0 | 0.2687 | 0.5169 | 0.3905 | 0.0689 | 350.7136 | 4.4408 × 10^−16 | 4.4409 × 10^−16 | 4.4409 × 10^−16 | 0 | 24.9122
f 8 | 0 | 0.0416 | 1.1238 | 0.8017 | 0.1986 | 354.9282 | 0 | 0 | 0 | 0 | 21.7758
f 9 | 0 | 29,752,063.66 | 29,800,464.52 | 29,792,019.73 | 9855.6445 | 357.1411 | 1.8285 | 1.8286 | 1.8286 | 1.5985 × 10^−9 | 24.9172
f 10 | 1 | 0.0160 | 495.8931 | 214.7364 | 200.8041 | 343.9354 | 1 | 1 | 1 | 0 | 17.5661
f 11 | −1.0316 | −0.0079 | 0.0103 | 0.0005 | 0.0041 | 344.4559 | 0 | 0 | 0 | 0 | 7.5244
f 12 | 0.3979 | 10.0004 | 16.3393 | 12.2766 | 1.3941 | 369.1712 | 1.1905 | 2.0325 | 1.5436 | 0.4223 | 4.5528
f 13 | 3 | 3.0227 | 5.4062 | 5.3044 | 1.5586 | 369.8705 | 32.6845 | 32.6845 | 32.6845 | 1.4854 × 10^−8 | 3.6846
f 14 | −3.86 | −3.8417 | −3.5163 | −3.7177 | 0.0913 | 376.6538 | −2.0081 | −2.0080 | −2.0081 | 5.0800 × 10^−10 | 7.1120
f 15 | −3.32 | −1.4809 | −0.7560 | −1.0409 | 0.2019 | 342.6122 | −2.1676 | −2.1676 | −2.1676 | 0 | 8.1145
Table 12. Results comparison of SCA vs. L B 2 .
F | Opt | SCA Best | SCA Worst | SCA Avg | SCA StdDev | SCA Avg Time (s) | LB2 Best | LB2 Worst | LB2 Avg | LB2 StdDev | LB2 Avg Time (s)
f 1 | 0 | 0 | 0 | 0 | 0 | 10.4656 | 0 | 0 | 0 | 0 | 50.2377
f 2 | 0 | 0 | 0 | 0 | 0 | 17.3875 | 0 | 0 | 0 | 0 | 80.7524
f 3 | 0 | 0 | 0 | 0 | 0 | 74.3906 | 0 | 0 | 0 | 0 | 96.3627
f 4 | 0 | 1.33 × 10^−10 | 29 | 17.4000 | 14.9755 | 13.4296 | 1.59197 × 10^−7 | 1.2262 × 10^−6 | 6.7549 × 10^−7 | 5.4204 × 10^−7 | 71.0024
f 5 | −12,569.487 | 2.51 × 10^−7 | 0.0696 | 0.0181 | 0.0251 | 9.5828 | −1.2570 × 10^4 | −1.2567 × 10^4 | −1.2569 × 10^4 | 0.0014 | 110.3354
f 6 | 0 | 0 | 0 | 0 | 0 | 11.2296 | 0 | 0 | 0 | 0 | 60.6482
f 7 | 0 | 4.44 × 10^−16 | 4.44 × 10^−16 | 4.44 × 10^−16 | 0 | 15.1140 | 4.4408 × 10^−16 | 4.4409 × 10^−16 | 4.4409 × 10^−16 | 0 | 24.9122
f 8 | 0 | 0 | 0 | 0 | 0 | 13.3500 | 0 | 0 | 0 | 0 | 21.7758
f 9 | 0 | 1.8285 | 1.8416 | 1.8304 | 0.0042 | 78.2046 | 1.8285 | 1.8286 | 1.8286 | 1.5985 × 10^−9 | 24.9172
f 10 | 1 | 0.0003 | 4.9301 | 1.0010 | 2.0705 | 28.9609 | 1 | 1 | 1 | 0 | 17.5661
f 11 | −1.0316 | 1.0316 | 1.0316 | 1.0316 | 2.3406 × 10^−16 | 2.7093 | 0 | 0 | 0 | 0 | 7.5244
f 12 | 0.3979 | 0.1555 | 3.9503 | 1.1009 | 1.3759 | 2.9765 | 1.1905 | 2.0325 | 1.5436 | 0.4223 | 4.5528
f 13 | 3 | 29.6845 | 30.0547 | 29.7444 | 0.1229 | 4.1765 | 32.6845 | 32.6845 | 32.6845 | 1.4854 × 10^−8 | 3.6846
f 14 | −3.86 | 1.8519 | 1.8519 | 1.8519 | 0 | 7.0265 | −2.0081 | −2.0080 | −2.0081 | 5.0800 × 10^−10 | 7.1120
f 15 | −3.32 | 2.1523 | 2.1524 | 2.1524 | 0 | 9.5578 | −2.1676 | −2.1676 | −2.1676 | 0 | 8.1145
Table 13. Exact p values obtained on the benchmark test functions.
F    | p-Value (SHO vs. LB2) | p-Value (LB2 vs. SHO)
f 1  | >0.05 | >0.05
f 2  | >0.05 | >0.05
f 3  | >0.05 | >0.05
f 4  | 2.35 × 10^−18 | >0.05
f 5  | 6.611 × 10^−7 | >0.05
f 6  | >0.05 | >0.05
f 7  | >0.05 | >0.05
f 8  | >0.05 | >0.05
f 9  | 7.01 × 10^−7 | >0.05
f 10 | 1.1 × 10^−7 | >0.05
f 11 | 0.02067 | >0.05
f 12 | >0.05 | 0.04
f 13 | >0.05 | >0.05
f 14 | 1.395 × 10^−6 | >0.05
f 15 | 1.863 × 10^−9 | >0.05
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Vega, E.; Soto, R.; Crawford, B.; Peña, J.; Castro, C. A Learning-Based Hybrid Framework for Dynamic Balancing of Exploration-Exploitation: Combining Regression Analysis and Metaheuristics. Mathematics 2021, 9, 1976. https://doi.org/10.3390/math9161976
