Article

Efficient Subpopulation Based Parallel TLBO Optimization Algorithms

by Alejandro García-Monzó 1, Héctor Migallón 1,*, Antonio Jimeno-Morenilla 2, José-Luis Sánchez-Romero 2, Héctor Rico 2 and Ravipudi Venkata Rao 3

1 Department of Physics and Computer Architecture, Miguel Hernández University, E-03202 Alicante, Spain
2 Department of Computer Technology, University of Alicante, E-03071 Alicante, Spain
3 Sardar Vallabhbhai National Institute of Technology, Surat 395 007, Gujarat State, India
* Author to whom correspondence should be addressed.
Electronics 2019, 8(1), 19; https://doi.org/10.3390/electronics8010019
Submission received: 27 November 2018 / Revised: 11 December 2018 / Accepted: 21 December 2018 / Published: 23 December 2018
(This article belongs to the Section Computer Science & Engineering)

Abstract

A number of optimization algorithms based on heuristic techniques have been proposed in recent years. Most of them are based on phenomena in nature and require the correct tuning of parameters that are specific to the algorithm. Heuristic algorithms allow problems to be solved more quickly than deterministic methods. The computational time required to obtain the optimum (or near-optimum) value of a cost function is a critical aspect of scientific applications in countless fields of knowledge. Therefore, we propose efficient parallel algorithms based on the Teaching-Learning-Based Optimization (TLBO) algorithm, which is both efficient and free of algorithm-specific parameters to be tuned. The parallel proposals were designed with two levels of parallelization, one for shared memory platforms and the other for distributed memory platforms, obtaining good parallel performance on both types of parallel architecture and on heterogeneous memory parallel platforms.

1. Introduction

The purpose of optimization algorithms is to find the optimal value of a particular cost function. Depending on the application, cost functions can be highly complex, may need to be optimized repeatedly, and may involve different numbers of parameters (or design variables). Moreover, if a cost function has local minima, the search for the global optimum becomes more complicated.
When deterministic methods are applied to obtain the optimal value of a function, a sequence of points tending to the global optimum is generated by exploiting the analytical properties of the problem under consideration. In other words, the search for the optimum is treated as a problem of linear algebra, often based on the gradient of the function. The optimal value of a cost function, or a value very close to it, can be obtained using deterministic methods (see [1]). In some cases, however, the effort involved can be considerable, for example in non-convex or large-scale optimization problems. When deterministic methods can be applied, the results obtained are unequivocal and replicable, but the computational cost can make them impractical. Several heuristic methods have been proposed to address these drawbacks, many of them based on phenomena found in nature, leading to acceptable solutions while reducing the required effort. Two main groups of this type of algorithm, evolutionary algorithms and swarm intelligence, include the major heuristic algorithms.
On the one hand, metaheuristic methods are able to accelerate convergence even when local minima exist; on the other, they can be used on functions whose characteristics prevent the use of deterministic methods, for example non-differentiable functions. In most cases, metaheuristic methods employ guided search techniques in which some random processes are involved, although it cannot be formally proven that the value obtained is the optimal solution to the problem. In particular, the Teaching-Learning-Based Optimization (TLBO) algorithm, presented in [2], has proven its effectiveness in a wide range of applications. For example, in [3], it is used for the optimal coordination of directional overcurrent relays in a looped power system; in [4], a multi-objective TLBO is used to solve the optimal location of automatic voltage regulators in distribution systems in the presence of distributed generators; in [5], an improved multi-objective TLBO is applied to optimize an assembly line producing large-sized high-volume products such as cars, trucks and engineering machinery; in [6], a load shedding algorithm for alleviating line overloads employs a TLBO algorithm; in [7], a TLBO algorithm is used to optimize the feedback gains and the switching vector of an output feedback sliding mode controller for a multi-area multi-source interconnected power system; in [8], the TLBO method is used to train and accelerate the learning rate of a model designed to forecast wind power generation both for Ireland and for a single wind farm, in order to demonstrate the effectiveness of the proposed method; in [9], Cetane number estimation of biodiesel based on its fatty acid methyl ester composition is performed using a hybrid optimization method that includes a TLBO algorithm; in [10], a residential demand side management scheme based on electricity cost and peak-to-average ratio alleviation with maximum user satisfaction is proposed using a hybrid technique based on TLBO and enhanced differential evolution (EDE) algorithms; and in [11], a TLBO algorithm is used in Transmission Expansion Planning (TEP), which involves determining whether and how transmission lines should be added to the power grid, considering power generation costs, power loss and line construction costs, among others.
Among the well-known metaheuristic optimization algorithms based on natural phenomena, it is worth mentioning: Particle Swarm Optimization (PSO) and its variants, Artificial Bee Colony (ABC), Shuffled Frog Leaping (SFL), Ant Colony Optimization (ACO), Evolutionary Strategy (ES), Evolutionary Programming (EP), Genetic Programming (GP), the Fire Fly (FF) algorithm, the Gravitational Search Algorithm (GSA), Biogeography-Based Optimization (BBO), the Grenade Explosion Method (GEM), Genetic Algorithms (GA) and their variants, Differential Evolution (DE) and its variants, the Simulated Annealing (SA) algorithm and the Tabu Search (TS) algorithm.
In most of these algorithms, one or more algorithm-specific parameters must first be adjusted. For example, GA needs the crossover probability, mutation probability, selection operator, etc. to be set correctly; the SA algorithm needs the initial annealing temperature and cooling schedule to be tuned; PSO's specific parameters are the inertia weight and the social and cognitive parameters; the Harmony Search Algorithm (HSA) needs the harmony memory consideration rate, the number of improvisations, etc. to be set correctly; and the immigration rate, emigration rate, etc. need to be tuned for BBO. The population-based heuristic algorithm used in this work, Teaching-Learning-Based Optimization (TLBO) [12], overcomes the problem of tuning algorithm-specific parameters: it only requires general parameters to be set, such as the number of iterations, the population size and the stopping criterion.
Some recent works have applied parallelization techniques to the TLBO algorithm. For example, the authors in [13] implemented a TLBO algorithm on a multicore processor within an OpenMP environment. The OpenMP strategy emulated the sequential TLBO algorithm exactly, so the calculation of fitness, calculation of the mean, calculation of the best and comparison of fitness functions remained the same, while small changes were introduced to achieve better results. A set of 10 test functions was evaluated running the algorithm on a single-core architecture, and then compared on architectures ranging from 2 to 32 cores. Average speed-up values of 4.9× and 6.4× were obtained with 16 and 32 processors, respectively, corresponding to efficiencies of 30% and 20%. In [14], the authors propose a parallel TLBO procedure for automatic heliostat aiming, obtaining good speed-up values for this extremely expensive problem using up to 32 processes; parallel performance, however, worsened for functions that were not so computationally expensive.
Other parallel proposals for different heuristic optimization algorithms have also been made. For example, the authors in [15] implemented the Dual Population Genetic Algorithm (DPGA) on a parallel architecture, obtaining average speed-up values of 1.64× using both 16 and 32 processors. The authors in [16] propose a parallel version of the ACO metaheuristic, obtaining a maximum speed-up of 5.94× using 8 processors, which drops to 5.45× when using 16 processors. In addition, other proposals use hardware accelerators. For example, in [17], the PSO algorithm is accelerated using FPGAs, and in [18], the Jaya algorithm is accelerated through the use of GPUs.
In Section 2, we present the TLBO optimization algorithm and describe the parallel algorithms in Section 3. In Section 4, we analyse the latter in terms of parallel performance and optimization behaviour, and some conclusions are drawn in Section 5.

2. The TLBO Algorithm

The Teaching-Learning-Based Optimization (TLBO) algorithm, like all evolutionary and swarm intelligence-based algorithms, requires common controlling parameters but no algorithm-specific control parameters. Like those algorithms, TLBO is a population-based, probabilistic algorithm; it therefore only needs the population size and the number of generations to be set.
The TLBO algorithm is based on the common teaching and learning processes of a group of students, whose learning is influenced both by the teacher and by interactions within the group. Each source of knowledge advancement (which allows the solution of the problem to be approached) is associated with a different phase of the TLBO algorithm: the first phase is the teacher phase and the second is the learner phase.
As mentioned previously, TLBO is a population-based heuristic algorithm, so the first step is the creation of the initial population (line 1 of Algorithm 1). A population is a set of m individuals; each individual is composed of k variables (design variables), and the value of k depends on the cost function (F_cost) to be optimized. Each individual in the initial population is created as shown in Equation (1), where r_{i,j} are uniformly distributed random numbers, and minVar_j and maxVar_j specify the domain of each variable.
$X_{i,j} = minVar_j + (maxVar_j - minVar_j) \cdot r_{i,j}$   (1)
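A minimal C sketch of this initialization follows; the helper rand01() and the row-major array layout are illustrative assumptions, not the paper's code:

#include <stdlib.h>

/* Uniform random number in [0, 1] */
static double rand01(void) { return (double)rand() / RAND_MAX; }

/* Population stored row-major: X[i*k + j] is variable j of individual i */
void create_population(double *X, int m, int k,
                       const double *minVar, const double *maxVar)
{
    for (int i = 0; i < m; i++)
        for (int j = 0; j < k; j++)
            X[i * k + j] = minVar[j] + (maxVar[j] - minVar[j]) * rand01();
}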
Once the population has been created, the teacher phase begins by identifying the individual that will act as the teacher (line 6 of Algorithm 1). The teacher is the individual possessing the greatest amount of knowledge, i.e., the individual whose solution is the best in the population. In the teacher phase, the teacher tries to improve the students' knowledge. To model this interaction, the mean of each design variable (M_j) is calculated over all individuals in the population, and the interaction is performed considering these mean values, the teacher (X_teacher), the teaching factor (T_F) and the random factors (r_j). The teaching factor is a random integer in the range [1, 2], while each random factor is a random real value in the range [0, 1]. In other words, the teaching factor is an integer equal to 1 or 2 that is randomly chosen for each teacher phase, i.e., it is not a parameter to be tuned, while r_j are k floating-point random numbers uniformly distributed between 0 and 1.
Each individual is influenced by the teacher (line 12 of Algorithm 1). If the influence is positive, i.e., if it improves the student, the new student replaces the previous one in the population. Note that line 14 of Algorithm 1 assumes a minimization problem. The population resulting at the end of the teacher phase is the initial population used in the learner phase (Y_{i,j} in Algorithm 2).
Algorithm 1 Teacher phase of TLBO algorithm.
1: Create Initial Population: X_{i,j}
2:  (i identifies the individual, i = 1..m)
3:  (j identifies the design variable, j = 1..k)
4: Teacher phase:
5: {
6:  Identify the best individual or teacher (X_teacher)
7:  Compute the mean of all design variables (M_j)
8:  Compute the teaching factor (T_F)
9:  Compute the random factors (r_j)
10: for i = 1 to m do
11:   for j = 1 to k do
12:     X'_{i,j} = X_{i,j} + r_j (X_{teacher,j} − T_F × M_j)
13:   end for
14:   if F_cost(X'_i) < F_cost(X_i) then
15:     Replace X_i by X'_i
16:   end if
17: end for
18: }
Algorithm 2 Learner phase of TLBO algorithm.
1: Initial population in the learner phase: Y_{i,j}
2:  (i identifies the individual, i = 1..m)
3:  (j identifies the design variable, j = 1..k)
4: Learner phase:
5: {
6: for i = 1 to m do
7:   Randomly identify another student with whom to interact (p)
8:   if F_cost(Y_i) < F_cost(Y_p) then
9:     for j = 1 to k do
10:      Y'_{i,j} = Y_{i,j} + r_{i,j} (Y_{i,j} − Y_{p,j})
11:    end for
12:  else
13:    for j = 1 to k do
14:      Y'_{i,j} = Y_{i,j} + r_{i,j} (Y_{p,j} − Y_{i,j})
15:    end for
16:  end if
17:  if F_cost(Y'_i) < F_cost(Y_i) then
18:    Z_i = Y'_i
19:  else
20:    Z_i = Y_i
21:  end if
22: end for
23: Output population in learner phase: Z_{i,j}
24:  (i identifies the individual, i = 1..m)
25:  (j identifies the design variable, j = 1..k)
26: }
In the second stage, the learner phase, the students' knowledge can improve through the influence of the students themselves, i.e., through interaction between them. In the learner phase, shown in Algorithm 2, each student (or individual) interacts with another, randomly chosen, student. Note that the initial population (Y_{i,j}) is the population resulting at the end of the teacher phase. Once both students have been identified, the interaction between them depends on the most learned student, i.e., on the evaluation of the cost function for the two interacting students (lines 8–16 of Algorithm 2). The result of this interaction is an individual who is evaluated and compared with the initial individual, and the better of the two is transferred to the population resulting from the learner phase (Z_{i,j}). As in the teacher phase, a minimization problem is assumed (line 17 of Algorithm 2).
The teacher and learner phases are repeated until the stop criterion is met. The number of repetitions (determined by the "Iterations" parameter) specifies the number of generations to be created. Significantly, the population resulting from the learner phase (Z_{i,j}) is the initial population for the teacher phase in the next iteration. All random numbers used in Algorithms 1 and 2 (r_j and r_{i,j}) are uniformly distributed in the range [0, 1].
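To make the two phases concrete, the following C sketch runs one TLBO generation for a minimization problem. It is a minimal illustration under stated assumptions, not the authors' implementation: the row-major layout and helper names are our own, and for brevity the random factors are drawn per element rather than once per phase as in Algorithm 1.

#include <stdlib.h>

static double rand01(void) { return (double)rand() / RAND_MAX; }

/* One TLBO generation (teacher + learner phase) for minimization.
   X is an m x k row-major population; fcost evaluates one individual. */
void tlbo_generation(double *X, int m, int k,
                     double (*fcost)(const double *, int))
{
    /* --- Teacher phase --- */
    int best = 0;
    for (int i = 1; i < m; i++)                 /* teacher = best individual */
        if (fcost(&X[i*k], k) < fcost(&X[best*k], k)) best = i;

    double M[k];                                /* mean of each design variable */
    for (int j = 0; j < k; j++) {
        M[j] = 0.0;
        for (int i = 0; i < m; i++) M[j] += X[i*k + j];
        M[j] /= m;
    }

    int TF = 1 + rand() % 2;                    /* teaching factor: 1 or 2 */
    double Xnew[k];
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < k; j++)
            Xnew[j] = X[i*k + j] + rand01() * (X[best*k + j] - TF * M[j]);
        if (fcost(Xnew, k) < fcost(&X[i*k], k)) /* keep only improvements */
            for (int j = 0; j < k; j++) X[i*k + j] = Xnew[j];
    }

    /* --- Learner phase --- */
    for (int i = 0; i < m; i++) {
        int p = rand() % m;                     /* random interaction partner */
        if (p == i) p = (p + 1) % m;
        int i_better = fcost(&X[i*k], k) < fcost(&X[p*k], k);
        for (int j = 0; j < k; j++) {
            double d = X[i*k + j] - X[p*k + j]; /* move towards the better one */
            Xnew[j] = X[i*k + j] + rand01() * (i_better ? d : -d);
        }
        if (fcost(Xnew, k) < fcost(&X[i*k], k))
            for (int j = 0; j < k; j++) X[i*k + j] = Xnew[j];
    }
}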

3. Parallel Approaches

We propose hybrid OpenMP/MPI parallel algorithms to exploit heterogeneous memory platforms. The whole sequential TLBO algorithm is shown in Algorithm 3. The "Runs" parameter corresponds to the number of independent executions performed; therefore, in line 21 of Algorithm 3, "Runs" different solutions are evaluated. In each independent execution, both the teacher and learner phases are repeated "Iterations" times. The parallel approach for distributed memory platforms is applied to the independent executions (line 5 of Algorithm 3), while the parallel approaches for shared memory platforms use subpopulations in the teacher and learner phases as well as in the duplicate removal phase. The elimination of duplicates is necessary to avoid premature convergence.
Algorithm 3 Skeleton of the sequential TLBO algorithm.
1: Define function to minimize (F_cost)
2: Set Runs parameter
3: Set Iterations parameter
4: Set m parameter (used in Algorithms 1 and 2)
5: for l = 1 to Runs do
6:   Create New Population (Z_1)
7:   for q = 1 to Iterations do
8:     Teacher phase:
9:       (Input: Population Z_q)
10:      (Output: Population Y_q)
11:    Learner phase:
12:      (Input: Population Y_q)
13:      (Output: Population Z_{q+1})
14:    Duplicate removal phase:
15:      (Input: Population Z_{q+1})
16:      (Output: Population Z_{q+1})
17:  end for
18:  Store Solution
19:  Delete Population
20: end for
21: Obtain Best Solution and Statistical Data
We developed two parallel proposals to exploit shared memory platforms. Both distribute the workload associated with the teacher and learner phases by considering subpopulations. The size of the whole population is m; if the number of parallel threads (or processes) is nt, we consider nt subpopulations of size m_nt = m/nt. In the first proposal, called SPG_ParTLBO, the whole population is partitioned into subpopulations (SP) that are stored in global (G) memory, while in the second, called SPP_ParTLBO, the whole population is also partitioned into subpopulations (SP), but these are stored in private (P) memory.
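A sketch of how each OpenMP thread might locate its subpopulation inside the shared population array, assuming m is divisible by the number of threads (function and variable names are illustrative):

#include <omp.h>

/* Partition the m x k population X into nt contiguous subpopulations,
   one per thread. Assumes m % num_threads == 0. */
void process_subpopulations(double *X, int m, int k)
{
    #pragma omp parallel
    {
        int nt    = omp_get_num_threads();
        int s     = omp_get_thread_num();
        int size  = m / nt;                   /* subpopulation size m_nt      */
        double *sub = &X[(s * size) * k];     /* first individual of thread s */
        /* teacher/learner phases operate on sub[0 .. size*k - 1] */
        (void)sub;
    }
}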
Algorithm 4 shows the parallel teacher phase of the SPG_ParTLBO algorithm. In line 5, all threads compute their initial subpopulation and store it in global memory; in line 12, the best individual of each subpopulation is identified, and the teacher (the globally best individual) is identified sequentially in line 14. Following a similar strategy, the means of the design variables of each subpopulation are calculated in line 16, and in line 18 the global means are obtained sequentially. Finally, the influence of the teacher is applied to each individual in parallel, introducing those who have improved their knowledge into the population (line 27). The parallel teacher phase shown in Algorithm 4 does not modify the optimization procedure of the sequential algorithm shown in Algorithm 1.
Algorithm 4 Teacher phase of SPG_TLBO algorithm.
1: Set population size parameter (m)
2: Obtain the number of parallel threads (nt)
3: Compute the size of subpopulations (m_nt)
4: In parallel s = 1 to nt do
5:   Create Initial Subpopulation: X_{i_s}
6: end for
7: {Whole Population is: X_{i,j}}
8: Teacher phase:
9: {
10: Compute the teaching factor (T_F)
11: In parallel s = 1 to nt do
12:   Identify the best individual of the subpopulation (X_{best_s})
13: end for
14: Compute the global teacher: X_teacher = Best_of(X_{best_s})
15: In parallel s = 1 to nt do
16:   Compute the partial mean of all design variables (M_{j_s})
17: end for
18: Compute the global mean of all design variables (M_j)
19: In parallel s = 1 to nt do
20:   Compute the random factors (r_{j_s})
21: end for
22: In parallel s = 1 to nt do
23:   for i = 1 to m_nt do
24:     for j = 1 to k do
25:       X'_{i_s,j} = X_{i_s,j} + r_{j_s} (X_{teacher,j} − T_F × M_j)
26:     end for
27:     if F_cost(X'_{i_s}) < F_cost(X_{i_s}) then
28:       Replace X_{i_s} by X'_{i_s}
29:     end if
30:   end for
31: end for
32: }
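The two-step reduction of lines 11–18 of Algorithm 4 could be realized with OpenMP as in the following sketch: per-thread partial results are combined sequentially inside a single region. This is an illustrative sketch under the same divisibility assumption as above, not the authors' code.

#include <omp.h>
#include <string.h>

/* Each thread finds the best individual and the partial variable sums of
   its subpopulation; the global teacher and means are then combined by
   one thread (lines 14 and 18 of Algorithm 4). Assumes m % threads == 0. */
void spg_teacher_reduction(const double *X, int m, int k,
                           double (*fcost)(const double *, int),
                           int *teacher, double *M)
{
    int nt_max = omp_get_max_threads();
    int best_s[nt_max];                 /* best individual per thread  */
    double partM[nt_max][k];            /* partial variable sums       */

    #pragma omp parallel
    {
        int nt   = omp_get_num_threads();
        int s    = omp_get_thread_num();
        int size = m / nt, first = s * size;

        int b = first;
        memset(partM[s], 0, k * sizeof(double));
        for (int i = first; i < first + size; i++) {
            if (fcost(&X[i*k], k) < fcost(&X[b*k], k)) b = i;
            for (int j = 0; j < k; j++) partM[s][j] += X[i*k + j];
        }
        best_s[s] = b;

        #pragma omp barrier
        #pragma omp single              /* sequential combination step */
        {
            *teacher = best_s[0];
            for (int t = 1; t < nt; t++)
                if (fcost(&X[best_s[t]*k], k) < fcost(&X[(*teacher)*k], k))
                    *teacher = best_s[t];
            for (int j = 0; j < k; j++) {
                double sum = 0.0;
                for (int t = 0; t < nt; t++) sum += partM[t][j];
                M[j] = sum / m;
            }
        }
    }
}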
Algorithm 5 shows the parallel learner phase of the SPG_ParTLBO algorithm. Each process, for each student in its subpopulation, randomly chooses another student with whom to interact; this student can be located in any subpopulation, since the whole population is stored in global memory (line 5). The rest of the code (lines 6–20) remains unchanged with respect to the sequential algorithm shown in Algorithm 2.
Algorithm 5 Learner phase of SPG_TLBO algorithm.
1: Learner phase:
2: {
3: In parallel s = 1 to nt do
4:   for i = 1 to m_nt do
5:     Randomly identify another student with whom to interact (p ∈ [1, m])
6:     if F_cost(Y_i) < F_cost(Y_p) then
7:       for j = 1 to k do
8:         Y'_{i,j} = Y_{i,j} + r_{i,j} (Y_{i,j} − Y_{p,j})
9:       end for
10:    else
11:      for j = 1 to k do
12:        Y'_{i,j} = Y_{i,j} + r_{i,j} (Y_{p,j} − Y_{i,j})
13:      end for
14:    end if
15:    if F_cost(Y'_i) < F_cost(Y_i) then
16:      Z_i = Y'_i
17:    else
18:      Z_i = Y_i
19:    end if
20:  end for
21: end for
22: }
The duplicate removal phase of the SPG_ParTLBO algorithm, shown in Algorithm 6, performs the sequential procedure in parallel. Note that when a duplicate is found, a randomly chosen design variable is modified.
Algorithm 6 Duplicate removal phase of SPG_TLBO algorithm.
1: Duplicate removal phase:
2: {
3: In parallel s = 1 to nt do
4:   for i = 1 to m_nt do
5:     for j = i + 1 to m do
6:       if Z_i = Z_j then
7:         Randomly select one design variable (v ∈ [1, k])
8:         Randomly change Z'_{i,v}
9:       end if
10:    end for
11:  end for
12: end for
13: Z = Z'
14: }
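A minimal C sketch of this duplicate removal step; the bitwise comparison via memcmp and the domain arrays are illustrative assumptions:

#include <stdlib.h>
#include <string.h>

static double rand01(void) { return (double)rand() / RAND_MAX; }

/* Re-randomize one variable of Z[i] whenever it exactly duplicates a later
   individual. The comparison range depends on the variant: the whole
   population (m) for SPG, the subpopulation size for SPP. */
void remove_duplicates(double *Z, int first, int size, int range_end, int k,
                       const double *minVar, const double *maxVar)
{
    for (int i = first; i < first + size; i++)
        for (int j = i + 1; j < range_end; j++)
            if (memcmp(&Z[i*k], &Z[j*k], k * sizeof(double)) == 0) {
                int v = rand() % k;   /* random design variable */
                Z[i*k + v] = minVar[v] + (maxVar[v] - minVar[v]) * rand01();
            }
}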
To increase parallel efficiency, we developed the second proposal, called SPP_ParTLBO, in which subpopulations are stored in the private memory of each thread. However, the subpopulations are not isolated structures. Algorithm 7 shows the parallel teacher phase of the SPP_ParTLBO algorithm. As can be seen, after the best individual (i.e., the teacher) has been identified, the thread that holds it in its subpopulation copies it into global memory, so that all threads use the same teacher (lines 11–17). In contrast, the means of the design variables used in each subpopulation are obtained only from the individuals of that subpopulation (line 20). The rest of the teacher phase remains unchanged.
Algorithm 7 Teacher phase of SPP_TLBO algorithm.
1: Set population size parameter (m)
2: Obtain the number of parallel threads (nt)
3: Compute the size of subpopulations (m_nt)
4: In parallel s = 1 to nt do
5:   Create Initial Subpopulation: X_{i_s}
6: end for
7: {Whole Population is: X_{i,j}}
8: Teacher phase:
9: {
10: Compute the teaching factor (T_F)
11: In parallel s = 1 to nt do
12:   Identify the best individual of the subpopulation (X_{best_s})
13: end for
14: Calculate the best global individual (teacher) and its owner thread
15: if owner of the teacher then
16:   Copy teacher to global memory (Best_global)
17: end if
18: Sync BARRIER
19: In parallel s = 1 to nt do
20:   Compute the mean of the design variables in the subpopulation (M_{j_s})
21: end for
22: In parallel s = 1 to nt do
23:   Compute the random factors (r_{j_s})
24: end for
25: In parallel s = 1 to nt do
26:   for i = 1 to m_nt do
27:     for j = 1 to k do
28:       X'_{i_s,j} = X_{i_s,j} + r_{j_s} (Best_{global,j} − T_F × M_{j_s})
29:     end for
30:     if F_cost(X'_{i_s}) < F_cost(X_{i_s}) then
31:       Replace X_{i_s} by X'_{i_s}
32:     end if
33:   end for
34: end for
35: }
The parallel learner phase of the SPP_ParTLBO algorithm is shown in Algorithm 8. In this algorithm, the search range for the student with whom each student interacts is restricted to the subpopulation rather than the entire population (line 5), while the rest of the learner phase remains unchanged.
Algorithm 8 Learner phase of SPP_TLBO algorithm.
1: Learner phase:
2: {
3: In parallel s = 1 to nt do
4:   for i = 1 to m_nt do
5:     Randomly identify another student with whom to interact (p ∈ [1, m_nt])
6:     if F_cost(Y_i) < F_cost(Y_p) then
7:       for j = 1 to k do
8:         Y'_{i,j} = Y_{i,j} + r_{i,j} (Y_{i,j} − Y_{p,j})
9:       end for
10:    else
11:      for j = 1 to k do
12:        Y'_{i,j} = Y_{i,j} + r_{i,j} (Y_{p,j} − Y_{i,j})
13:      end for
14:    end if
15:    if F_cost(Y'_i) < F_cost(Y_i) then
16:      Z_i = Y'_i
17:    else
18:      Z_i = Y_i
19:    end if
20:  end for
21: end for
22: }
In the SPP_ParTLBO algorithm, the duplicate removal phase, shown in Algorithm 9, changes with respect to the sequential procedure by restricting the search to the subpopulation, which is stored in private memory.
Algorithm 9 Duplicate removal phase of SPP_TLBO algorithm.
1: Duplicate removal phase:
2: {
3: In parallel s = 1 to nt do
4:   for i = 1 to m_nt do
5:     for j = i + 1 to m_nt do
6:       if Z_i = Z_j then
7:         Randomly select one design variable (v ∈ [1, k])
8:         Randomly change Z'_{i,v}
9:       end if
10:    end for
11:  end for
12: end for
13: Z = Z'
14: }
To use heterogeneous memory platforms (clusters), we needed to develop a hybrid memory model algorithm. As explained in Section 2, and as can be seen in Algorithm 3, the TLBO algorithm performs several fully independent executions ("Runs"). Therefore, we developed a parallel algorithm, at a higher level, for distributed memory platforms, in which load balancing is a key aspect. This high-level parallel algorithm had to include load balancing mechanisms and be able to incorporate the previously described parallel algorithms developed for shared memory platforms.
The high-level parallel TLBO algorithm exploits the fact that all iterations of line 5 of Algorithm 3 are independent executions. The total number of executions ("Runs") to be performed is therefore divided among the np available processes, taking into account that, since execution times may vary, the work cannot be distributed statically. The high-level parallel algorithm was designed for distributed memory platforms using MPI. On the one hand, we had to develop a load balancing procedure; on the other, a final data gathering process (data collection from all processes) must be performed.
The developed hybrid MPI/OpenMP algorithm is shown in Algorithm 10. In this algorithm, if the number of desired worker processes is np, the total number of distributed memory processes is np + 1. This is because one distributed memory process is in charge of distributing the computing work among the np available worker processes; we call this process the work dispatcher. Although the work dispatcher process is critical, it can run on one of the nodes hosting worker processes, because it introduces no significant overhead into the overall parallel performance. The work dispatcher waits to receive a work request signal from an idle worker process; when a worker requests new work (an independent execution), the dispatcher either assigns a new independent execution or sends an end-of-work signal.
Algorithm 10 Heterogeneous memory parallel TLBO algorithm.
1: np: number of distributed memory worker processes
2: Dispatcher process:
3: {
4: for l = 1 to Runs do
5:   Receive idle signal
6:   Send work signal
7: end for
8: for l = 1 to np do
9:   Receive idle signal
10:  Send end-of-work signal
11: end for
12: }
13: Worker processes:
14: {
15: while true do
16:   Send idle signal
17:   if end-of-work signal then
18:     Break while
19:   else
20:     nt: number of threads
21:     Compute 1 run of SPG_TLBO or SPP_TLBO algorithm
22:     Store Solution
23:   end if
24: end while
25: }
26: Collect all the solutions and obtain Best Solution
The computational load of the dispatcher process is negligible, as can be observed in lines 4 to 11 of Algorithm 10. In line 21, one of the two parallel proposals of the TLBO algorithm is used, i.e., SPG_TLBO or SPP_TLBO. The total number of processes is tp = np × nt, where np is the number of distributed memory worker processes (MPI processes) and nt is the number of shared memory processes (OpenMP processes or threads).
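A hedged C/MPI sketch of the dispatcher/worker protocol of Algorithm 10 follows; the message tags and dummy payloads are our own choices, and the actual implementation may differ:

#include <mpi.h>

#define TAG_IDLE 1
#define TAG_WORK 2
#define TAG_END  3

/* Rank 0 hands out 'runs' independent executions, one per idle request,
   then releases all np workers with an end-of-work signal. */
void dispatcher(int runs, int np)
{
    MPI_Status st;
    int dummy = 0;
    for (int l = 0; l < runs; l++) {
        MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, TAG_IDLE,
                 MPI_COMM_WORLD, &st);
        MPI_Send(&dummy, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK, MPI_COMM_WORLD);
    }
    for (int w = 0; w < np; w++) {
        MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, TAG_IDLE,
                 MPI_COMM_WORLD, &st);
        MPI_Send(&dummy, 1, MPI_INT, st.MPI_SOURCE, TAG_END, MPI_COMM_WORLD);
    }
}

/* Workers request runs until the end-of-work signal arrives; each run
   executes SPG_TLBO or SPP_TLBO (with OpenMP threads inside). */
void worker(void (*run_tlbo)(void))
{
    MPI_Status st;
    int dummy = 0;
    while (1) {
        MPI_Send(&dummy, 1, MPI_INT, 0, TAG_IDLE, MPI_COMM_WORLD);
        MPI_Recv(&dummy, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
        if (st.MPI_TAG == TAG_END) break;
        run_tlbo();
    }
}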

4. Numerical Results

In this section, we analyse the parallel TLBO algorithms presented in Section 3. To perform the tests, we implemented the reference algorithm presented in [2] and the parallel algorithms in the C language, compiled with GCC v4.8.5 [19]. We chose MPI v2.2 [20] for the high-level parallel approach and the OpenMP API v3.1 [21] for the shared memory parallel algorithms. The parallel platform was composed of HP ProLiant SL390 G7 nodes, each equipped with two Intel Xeon X5660 processors; each X5660 provides six processing cores at 2.8 GHz, and QDR InfiniBand was used as the communication network. Performance was analysed using the 30 unconstrained functions listed and described in Table 1 and Table 2.
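As a usage illustration (the binary and source file names are hypothetical), a hybrid MPI/OpenMP program of this kind is typically built and launched as follows, with nt set through OMP_NUM_THREADS and np + 1 MPI processes in total:

mpicc -fopenmp -O3 -o tlbo tlbo.c    # MPI compiler wrapper with OpenMP enabled
export OMP_NUM_THREADS=2             # nt: OpenMP threads per MPI process
mpirun -np 16 ./tlbo                 # 15 workers plus the dispatcher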
We now analyse the parallel behaviour of the SPG_TLBO algorithm, described in Algorithms 4–6, i.e., the shared memory parallel algorithm that stores the whole population in shared (global) memory. Table 3 shows the parallel efficiencies for all functions of the benchmark test, using between 2 and 10 threads (NoT). As the table shows, good efficiencies are obtained for almost all functions using up to 6 threads. For very low computational cost functions, however, efficiency decreases considerably as the number of threads increases. In such cases, to increase the number of processes efficiently, the heterogeneous memory parallel TLBO algorithm should be used.
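Here and in the following tables, efficiency is assumed to follow the standard definition, i.e., speed-up divided by the number of processing elements:

$E(n_t) = \frac{T_{seq}}{n_t \cdot T_{par}(n_t)} \times 100\%$

where $T_{seq}$ is the sequential execution time and $T_{par}(n_t)$ the parallel execution time with $n_t$ threads.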
Table 4 and Table 5 show the parallel efficiencies of the heterogeneous memory parallel TLBO algorithm using SPG_TLBO, setting the total number of processes (NoTP) to 4 and 10; when the number of MPI processes (NoP) equals 1, the plain SPG_TLBO algorithm is used. We compare the SPG_TLBO parallel algorithm with the hybrid MPI/OpenMP algorithm using the same total number of processes (NoTP). Since the MPI level is independent of the OpenMP level, the same behaviour is obtained when the SPP_TLBO algorithm is used instead of SPG_TLBO.
As can be seen, the hybrid MPI/OpenMP algorithm significantly increases the scalability of the parallel algorithm. Table 6 shows the efficiencies for the highest computational cost functions, increasing the total number of processes (NoTP) to 30 and the number of iterations to 10,000; the good behaviour of the efficiency can be verified.
Table 7 shows the parallel efficiencies of the SPP_TLBO algorithm using the same sequential reference algorithm as Table 3, i.e., the sequential TLBO algorithm. Note that the parallel algorithm SPP_TLBO does not emulate the sequential TLBO algorithm literally. The use of subpopulations in the SPP_TLBO algorithm modifies some procedures, such as the calculation of the means of the variables, and reduces the working population in others, such as the detection of duplicates. As a result, on the one hand, efficiency generally improves with respect to the SPG_TLBO algorithm; on the other, in some cases the efficiency exceeds the theoretical upper limit that applies when comparing exactly the same algorithms. In particular, the duplicate removal procedure becomes a very important part of the overall cost of the algorithm for very low computational cost functions.
As shown in Table 3, Table 4, Table 5 and Table 7, the proposed parallel methods obtain good efficiencies. The authors' parallel proposal in [14] for the particular problem under study also achieves very good efficiencies, the cost function having a high computational cost. In Table 8, we compare the method proposed in [14] with both proposed methods, SPG_TLBO and SPP_TLBO, for the first function of the benchmark test (provided by the reference software at https://gitlab.hpca.ual.es/ncc911/ParallelTLBO), i.e., the Sphere function, using between 2 and 10 threads (NoT), i.e., OpenMP processes. The results in Table 8 were obtained by running the reference code on the same parallel platform used for the SPG_TLBO and SPP_TLBO results. As shown, the efficiencies of both proposed algorithms, SPG_TLBO and SPP_TLBO, improve on those obtained by the reference algorithm, especially as the number of threads increases. Note that the TLBO parallel proposal presented in [13] obtains efficiencies of only 30% and 20% for 16 and 32 processes, respectively, and other parallel proposals applied to the state-of-the-art algorithms DPGA and ACO obtain worse efficiency results and show serious scalability problems.
Finally, the optimization effectiveness, especially of the SPP_TLBO algorithm, should be checked, since it modifies the procedure of the sequential TLBO algorithm, whereas the SPG_TLBO algorithm performs processing analogous to the sequential one. Table 9 shows the number of iterations (N. It.) needed to reach the optimal value with an error of less than $10^{-3}$, both for the original sequentially executed TLBO algorithm and for the parallel SPP_TLBO algorithm. Please note that the numbers of iterations shown in Table 9 are averages over the 30 values obtained from the 30 independent runs performed, both for the sequential and the parallel algorithms. Both the subpopulation size in the parallel algorithm and the population size in the sequential algorithm are equal to 120. Note that the number of iterations when using the SPG_TLBO parallel algorithm is similar to that of the sequential reference algorithm, since the sequential procedure is not modified. The numbers of iterations in Table 9 for the SPP_TLBO parallel algorithm, however, show that our parallel proposal outperforms the sequential TLBO algorithm, i.e., convergence is accelerated. Therefore, the strategy used in the SPP_TLBO algorithm, subpopulations connected through the best global individual, offers improvements both at the computational level and in convergence speed. Table 9 does not include the functions with the fastest convergence.

5. Conclusions

The TLBO heuristic optimization algorithm is an effective optimization algorithm that, though recent, has been extensively tested and compared. In this work, we presented efficient parallel TLBO algorithms for heterogeneous parallel platforms. We proposed a hybrid MPI/OpenMP algorithm that exploits the inherent parallelism at different levels. Moreover, we proposed two different algorithms for shared memory architectures, using OpenMP, called SPG_TLBO and SPP_TLBO. The first is an efficient parallel implementation of the sequential TLBO algorithm without any changes to the sequential procedure. In the second, SPP_TLBO, we proposed a different strategy that improves both computational performance and optimization behaviour. Significantly, the parallel proposals achieved good parallel performance regardless of the intrinsic characteristics of the functions to be optimized, in particular their computational cost. Moreover, the high-level parallel proposal includes an intrinsic load balancing mechanism that allows the use of non-dedicated computing platforms.

Author Contributions

H.M., A.J.-M., J.-L.S.-R. and R.V.R. conceived the parallel algorithms; R.V.R. conceived the sequential algorithm; A.G.-M. and H.M. designed the parallel algorithms; H.M. and A.G.-M. implemented the parallel algorithms; A.G.-M. and H.R. performed the numerical experiments; H.M. and A.J.-M. analyzed the data; H.M. and J.-L.S.-R. wrote the paper.

Funding

This research was supported by the Spanish Ministry of Economy and Competitiveness under Grants TIN2015-66972-C5-4-R and TIN2017-89266-R, co-financed by FEDER funds (MINECO/FEDER/UE).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lin, M.H.; Tsai, J.F.; Yu, C.S. A Review of Deterministic Optimization Methods in Engineering and Management. Math. Probl. Eng. 2012, 2012, 756023. [Google Scholar] [CrossRef]
  2. Rao, R.V.; Patel, V. Comparative performance of an elitist teaching-learning-based optimization algorithm for solving unconstrained optimization problems. Int. J. Ind. Eng. Comput. 2013, 4, 29–50. [Google Scholar] [CrossRef]
  3. Singh, M.; Panigrahi, B.; Abhyankar, A. Optimal coordination of directional over-current relays using Teaching Learning-Based Optimization (TLBO) algorithm. Int. J. Electr. Power Energy Syst. 2013, 50, 33–41. [Google Scholar] [CrossRef]
  4. Niknam, T.; Azizipanah-Abarghooee, R.; Narimani, M.R. A new multi objective optimization approach based on TLBO for location of automatic voltage regulators in distribution systems. Eng. Appl. Artif. Intell. 2012, 25, 1577–1588. [Google Scholar] [CrossRef]
  5. Li, D.; Zhang, C.; Shao, X.; Lin, W. A multi-objective TLBO algorithm for balancing two-sided assembly line with multiple constraints. J. Intell. Manuf. 2016, 27, 725–739. [Google Scholar] [CrossRef]
  6. Arya, L.; Koshti, A. Anticipatory load shedding for line overload alleviation using Teaching learning based optimization (TLBO). Int. J. Electr. Power Energy Syst. 2014, 63, 862–877. [Google Scholar] [CrossRef]
  7. Mohanty, B. TLBO optimized sliding mode controller for multi-area multi-source nonlinear interconnected AGC system. Int. J. Electr. Power Energy Syst. 2015, 73, 872–881. [Google Scholar] [CrossRef]
  8. Yan, J.; Li, K.; Bai, E.; Yang, Z.; Foley, A. Time series wind power forecasting based on variant Gaussian Process and TLBO. Neurocomputing 2016, 189, 135–144. [Google Scholar] [CrossRef]
  9. Baghban, A.; Kardani, M.N.; Mohammadi, A.H. Improved estimation of Cetane number of fatty acid methyl esters (FAMEs) based biodiesels using TLBO-NN and PSO-NN models. Fuel 2018, 232, 620–631. [Google Scholar] [CrossRef]
  10. Javaid, N.; Ahmed, A.; Iqbal, S.; Ashraf, M. Day Ahead Real Time Pricing and Critical Peak Pricing Based Power Scheduling for Smart Homes with Different Duty Cycles. Energies 2018, 11, 1464. [Google Scholar] [CrossRef]
  11. Zakeri, A.S.; Askarian Abyaneh, H. Transmission Expansion Planning Using TLBO Algorithm in the Presence of Demand Response Resources. Energies 2017, 10, 1376. [Google Scholar] [CrossRef]
  12. Rao, R.V.; Savsani, V.; Vakharia, D. Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  13. Umbarkar, A.J.; Rothe, N.M.; Sathe, A. OpenMP Teaching-Learning Based Optimization Algorithm over Multi-Core System. Int. J. Intell. Syst. Appl. 2015, 7, 19–34. [Google Scholar] [CrossRef]
  14. Cruz, N.C.; Redondo, J.L.; Álvarez, J.D.; Berenguel, M.; Ortigosa, P.M. A parallel Teaching–Learning-Based Optimization procedure for automatic heliostat aiming. J. Supercomput. 2017, 73, 591–606. [Google Scholar] [CrossRef]
  15. Umbarkar, A.J.; Joshi, M.S.; Sheth, P.D. OpenMP Dual Population Genetic Algorithm for Solving Constrained Optimization Problems. Int. J. Inf. Eng. Electron. Bus. 2015, 1, 59–65. [Google Scholar] [CrossRef]
  16. Delisle, P.; Krajecki, M.; Gravel, M.; Gagné, C. Parallel implementation of an ant colony optimization metaheuristic with OpenMP. In Proceedings of the 3rd European Workshop on OpenMP, Barcelona, Spain, 8–9 September 2001; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
  17. Lee, H.; Kim, K.; Kwon, Y.; Hong, E. Real-Time Particle Swarm Optimization on FPGA for the Optimal Message-Chain Structure. Electronics 2018, 7, 274. [Google Scholar] [CrossRef]
  18. Jimeno-Morenilla, A.; Sánchez-Romero, J.L.; Migallón, H.; Mora-Mora, H. Jaya optimization algorithm with GPU acceleration. J. Supercomput. 2018. [Google Scholar] [CrossRef]
  19. Free Software Foundation, Inc. GCC, the GNU Compiler Collection. Available online: https://www.gnu.org/software/gcc/index.html (accessed on 2 November 2016).
  20. MPI Forum. MPI: A Message-Passing Interface Standard. Version 2.2. 2009. Available online: http://www.mpi-forum.org (accessed on 15 December 2016).
  21. OpenMP Architecture Review Board. OpenMP Application Program Interface, Version 3.1. 2011. Available online: http://www.openmp.org (accessed on 2 November 2016).
Table 1. Benchmark functions.

Id. | Name | Dim. (V) | Domain (Min, Max)
F1 | Sphere | 30 | (−100, 100)
F2 | SumSquares | 30 | (−10, 10)
F3 | Beale | 2 | (−4.5, 4.5)
F4 | Easom | 2 | (−100, 100)
F5 | Matyas | 2 | (−10, 10)
F6 | Colville | 4 | (−10, 10)
F7 | Trid 6 | 6 | (−V², V²)
F8 | Trid 10 | 10 | (−V², V²)
F9 | Zakharov | 10 | (−5, 10)
F10 | Schwefel problem 1.2 | 30 | (−100, 100)
F11 | Rosenbrock | 30 | (−30, 30)
F12 | Dixon-Price | 5 | (−10, 10)
F13 | Foxholes | 2 | (−2^16, 2^16)
F14 | Branin | 2 | x1: (−5, 10); x2: (0, 15)
F15 | Bohachevsky_1 | 2 | (−100, 100)
F16 | Booth | 2 | (−10, 10)
F17 | Michalewicz_2 | 2 | (0, π)
F18 | Michalewicz_5 | 5 | (0, π)
F19 | Bohachevsky_2 | 2 | (−100, 100)
F20 | Bohachevsky_3 | 2 | (−100, 100)
F21 | GoldStein-Price | 2 | (−2, 2)
F22 | Perm | 4 | (−V, V)
F23 | Hartman_3 | 3 | (0, 1)
F24 | Ackley | 30 | (−32, 32)
F25 | Penalized_2 | 30 | (−50, 50)
F26 | Langermann_2 | 2 | (0, 10)
F27 | Langermann_5 | 5 | (0, 10)
F28 | Langermann_10 | 10 | (0, 10)
F29 | Fletcher-Powell_5 | 5 | x_i, α_i: (−π, π); a_ij, b_ij: (−100, 100)
F30 | Fletcher-Powell_10 | 10 | x_i, α_i: (−π, π); a_ij, b_ij: (−100, 100)
Table 2. Benchmark function definitions.

Id. | Function
F1 | $f = \sum_{i=1}^{V} x_i^2$
F2 | $f = \sum_{i=1}^{V} i x_i^2$
F3 | $f = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$
F4 | $f = -\cos(x_1)\cos(x_2)\exp\left(-(x_1-\pi)^2 - (x_2-\pi)^2\right)$
F5 | $f = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2$
F6 | $f = 100(x_1^2 - x_2)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90(x_3^2 - x_4)^2 + 10.1\left((x_2 - 1)^2 + (x_4 - 1)^2\right) + 19.8(x_2 - 1)(x_4 - 1)$
F7, F8 | $f = \sum_{i=1}^{V}(x_i - 1)^2 - \sum_{i=2}^{V} x_i x_{i-1}$
F9 | $f = \sum_{i=1}^{V} x_i^2 + \left(\sum_{i=1}^{V} 0.5\, i\, x_i\right)^2 + \left(\sum_{i=1}^{V} 0.5\, i\, x_i\right)^4$
F10 | $f = \sum_{i=1}^{V}\left(\sum_{j=1}^{i} x_j\right)^2$
F11 | $f = \sum_{i=1}^{V-1}\left[100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\right]$
F12 | $f = (x_1 - 1)^2 + \sum_{i=2}^{V} i\,(2x_i^2 - x_{i-1})^2$
F13 | $f = \left[\frac{1}{500} + \sum_{j=1}^{25}\frac{1}{j + \sum_{i=1}^{2}(x_i - a_{ij})^6}\right]^{-1}$
F14 | $f = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos x_1 + 10$
F15 | $f = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$
F16 | $f = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$
F17, F18 | $f = -\sum_{i=1}^{V}\sin(x_i)\left[\sin\left(\frac{i x_i^2}{\pi}\right)\right]^{20}$
F19 | $f = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1)\cos(4\pi x_2) + 0.3$
F20 | $f = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1 + 4\pi x_2) + 0.3$
F21 | $f = \left[1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)\right]\left[30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)\right]$
F22 | $f = \sum_{j=1}^{V}\left[\sum_{i=1}^{V}(i^j + \beta)\left(\left(\frac{x_i}{i}\right)^j - 1\right)\right]^2$
F23 | $f = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{3} a_{ij}(x_j - p_{ij})^2\right)$
F24 | $f = -20\exp\left(-0.2\sqrt{\frac{1}{V}\sum_{i=1}^{V} x_i^2}\right) - \exp\left(\frac{1}{V}\sum_{i=1}^{V}\cos(2\pi x_i)\right) + 20 + e$
F25 | $f = 0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{V-1}(x_i - 1)^2\left[1 + \sin^2(3\pi x_{i+1})\right] + (x_V - 1)^2\left[1 + \sin^2(2\pi x_V)\right]\right\} + \sum_{i=1}^{V} u(x_i, 5, 100, 4)$, where $u(x_i, a, k, m) = k(x_i - a)^m$ if $x_i > a$; $0$ if $-a \le x_i \le a$; $k(-x_i - a)^m$ if $x_i < -a$
F26–F28 | $f = -\sum_{i=1}^{5} c_i\left[\exp\left(-\frac{1}{\pi}\sum_{j=1}^{V}(x_j - a_{ij})^2\right)\cos\left(\pi\sum_{j=1}^{V}(x_j - a_{ij})^2\right)\right]$
F29, F30 | $f = \sum_{i=1}^{V}(C_i - D_i)^2$, with $C_i = \sum_{j=1}^{V}(a_{ij}\sin\alpha_j + b_{ij}\cos\alpha_j)$ and $D_i = \sum_{j=1}^{V}(a_{ij}\sin x_j + b_{ij}\cos x_j)$

For F29, the coefficients $a_{ij}, b_{ij}$ are taken from the 5×5 matrix (79 56 62 9 92; 91 9 18 59 99; 38 8 12 73 40; 78 18 49 65 66; 1 43 93 18 76) and $\alpha$ = (2.791, 2.5623, 1.0429, 0.5097, 2.8096). For F30, $a_{ij}, b_{ij}$ are taken from the 10×10 matrix (79 56 62 9 92 48 22 34 39 40; 91 9 18 59 99 45 88 14 29 26; 38 8 12 73 40 26 64 29 82 32; 78 18 49 65 66 40 88 95 57 10; 1 43 93 18 76 68 42 22 46 14; 34 96 26 56 36 85 62 13 93 78; 52 46 69 99 47 72 11 55 55 91; 81 47 35 55 67 13 33 14 83 42; 5 43 45 46 56 94 62 52 66 55; 50 66 47 75 89 16 82 6 85 62) and $\alpha$ = (2.791, 2.5623, 1.0429, 0.5097, 2.8096, 1.1883, 2.0771, 2.9926, 0.0715, 0.4142).
Table 3. Efficiencies for SPG_TLBO, Runs = 30, Population size = 240, Iterations = 1000.

Function | NoT = 2 | NoT = 4 | NoT = 6 | NoT = 8 | NoT = 10
F1 | 80% | 64% | 58% | 52% | 47%
F2 | 82% | 69% | 60% | 55% | 50%
F3 | 71% | 45% | 31% | 22% | 17%
F4 | 70% | 56% | 46% | 37% | 32%
F5 | 76% | 64% | 54% | 42% | 38%
F6 | 63% | 36% | 21% | 16% | 12%
F7 | 68% | 58% | 52% | 49% | 46%
F8 | 64% | 55% | 48% | 45% | 42%
F9 | 73% | 59% | 42% | 36% | 31%
F10 | 90% | 81% | 73% | 70% | 65%
F11 | 81% | 66% | 59% | 54% | 49%
F12 | 63% | 37% | 26% | 19% | 15%
F13 | 82% | 79% | 74% | 70% | 67%
F14 | 66% | 45% | 30% | 21% | 16%
F15 | 65% | 52% | 41% | 33% | 27%
F16 | 65% | 34% | 21% | 14% | 11%
F17 | 77% | 59% | 44% | 33% | 27%
F18 | 74% | 64% | 58% | 55% | 53%
F19 | 63% | 50% | 39% | 31% | 27%
F20 | 67% | 51% | 41% | 33% | 28%
F21 | 71% | 50% | 36% | 26% | 22%
F22 | 93% | 85% | 73% | 65% | 59%
F23 | 70% | 57% | 46% | 42% | 37%
F24 | 66% | 57% | 54% | 50% | 50%
F25 | 67% | 62% | 59% | 49% | 49%
F26 | 77% | 59% | 45% | 36% | 31%
F27 | 76% | 67% | 62% | 55% | 53%
F28 | 65% | 50% | 56% | 60% | 46%
F29 | 96% | 77% | 64% | 62% | 49%
F30 | 93% | 88% | 82% | 81% | 81%
Table 4. Efficiencies for hybrid MPI/OpenMP parallel algorithm, Runs = 30, Population size = 240, Iterations = 1000.

Func. | NoP = 1, NoT = 4 (NoTP = 4) | NoP = 2, NoT = 2 (NoTP = 4) | NoP = 1, NoT = 10 (NoTP = 10) | NoP = 5, NoT = 2 (NoTP = 10)
F1 | 64% | 73% | 47% | 73%
F2 | 69% | 77% | 50% | 74%
F3 | 45% | 64% | 17% | 64%
F4 | 56% | 64% | 32% | 61%
F5 | 53% | 64% | 38% | 58%
F6 | 36% | 66% | 12% | 62%
F7 | 58% | 66% | 46% | 61%
F8 | 55% | 60% | 42% | 59%
F9 | 59% | 62% | 31% | 64%
F10 | 51% | 86% | 65% | 80%
F11 | 66% | 77% | 49% | 73%
F12 | 37% | 55% | 15% | 59%
F13 | 79% | 85% | 67% | 91%
F14 | 45% | 60% | 16% | 63%
F15 | 52% | 62% | 27% | 62%
Table 5. Efficiencies for hybrid MPI/OpenMP parallel algorithm, Runs = 30, Population size = 240, Iterations = 1000.

Func. | NoP = 1, NoT = 4 (NoTP = 4) | NoP = 2, NoT = 2 (NoTP = 4) | NoP = 1, NoT = 10 (NoTP = 10) | NoP = 5, NoT = 2 (NoTP = 10)
F16 | 64% | 73% | 47% | 73%
F17 | 64% | 73% | 47% | 73%
F18 | 64% | 73% | 47% | 73%
F19 | 64% | 73% | 47% | 73%
F20 | 64% | 73% | 47% | 73%
F21 | 64% | 73% | 47% | 73%
F22 | 64% | 73% | 47% | 73%
F23 | 64% | 73% | 47% | 73%
F24 | 64% | 63% | 47% | 73%
F25 | 64% | 73% | 47% | 73%
F26 | 64% | 73% | 47% | 73%
F27 | 64% | 73% | 47% | 73%
F28 | 64% | 73% | 47% | 73%
F29 | 64% | 73% | 47% | 73%
F30 | 64% | 73% | 47% | 73%
Table 6. Efficiencies for hybrid MPI/OpenMP parallel algorithm, Runs = 30, Population size = 240, Iterations = 10,000, NoP = 15, NoT = 2, NoTP = 30.

Function | Efficiency | Function | Efficiency | Function | Efficiency
F4 | 64% | F15 | 64% | F25 | 90%
F5 | 71% | F18 | 71% | F26 | 95%
F7 | 65% | F19 | 66% | F27 | 69%
F8 | 66% | F22 | 93% | F28 | 70%
F10 | 85% | F23 | 68% | F29 | 86%
F13 | 92% | F24 | 63% | F30 | 93%
Table 7. Efficiencies for SPP_TLBO, Runs = 30, Population size = 120, Iterations = 1000.

Function | NoT = 2 | NoT = 4 | NoT = 6 | NoT = 8 | NoT = 10
F1 | 97% | 87% | 80% | 73% | 69%
F2 | 96% | 86% | 81% | 75% | 71%
F3 | 105% | 89% | 78% | 65% | 52%
F4 | 135% | 143% | 139% | 118% | 102%
F5 | 113% | 80% | 63% | 46% | 34%
F6 | 109% | 81% | 67% | 54% | 41%
F7 | 136% | 130% | 132% | 107% | 94%
F8 | 105% | 86% | 78% | 66% | 56%
F9 | 100% | 81% | 72% | 67% | 60%
F10 | 96% | 88% | 84% | 81% | 78%
F11 | 96% | 87% | 81% | 76% | 70%
F12 | 107% | 82% | 69% | 56% | 44%
F13 | 91% | 86% | 85% | 84% | 84%
F14 | 122% | 106% | 96% | 76% | 61%
F15 | 158% | 175% | 166% | 135% | 110%
F16 | 112% | 78% | 64% | 47% | 35%
F17 | 115% | 114% | 107% | 100% | 90%
F18 | 115% | 111% | 113% | 109% | 106%
F19 | 114% | 113% | 109% | 103% | 95%
F20 | 121% | 115% | 108% | 104% | 98%
F21 | 137% | 122% | 107% | 83% | 65%
F22 | 97% | 91% | 86% | 84% | 82%
F23 | 121% | 121% | 111% | 110% | 94%
F24 | 46% | 50% | 73% | 80% | 79%
F25 | 94% | 88% | 83% | 80% | 77%
F26 | 106% | 102% | 100% | 94% | 88%
F27 | 121% | 117% | 111% | 107% | 98%
F28 | 107% | 105% | 102% | 98% | 94%
F29 | 81% | 80% | 74% | 66% | 70%
F30 | 97% | 92% | 84% | 90% | 86%
Table 8. Comparison of SPG_TLBO and SPP_TLBO with respect to the algorithm presented in [14].

Iterations | Pop. Size | NoT | Alg. Ref | SPG_TLBO | SPP_TLBO
1000 | 60 | 2 | 61% | 77% | 89%
 | | 4 | 33% | 60% | 73%
 | | 6 | 22% | 45% | 61%
 | | 8 | 16% | 35% | 47%
 | | 10 | 12% | 28% | 38%
 | 120 | 2 | 65% | 76% | 97%
 | | 4 | 38% | 64% | 87%
 | | 6 | 26% | 54% | 80%
 | | 8 | 19% | 43% | 73%
 | | 10 | 15% | 36% | 64%
 | 240 | 2 | 61% | 80% | 100%
 | | 4 | 34% | 64% | 91%
 | | 6 | 23% | 58% | 87%
 | | 8 | 18% | 52% | 80%
 | | 10 | 14% | 47% | 75%
5000 | 60 | 2 | 61% | 74% | 89%
 | | 4 | 35% | 56% | 68%
 | | 6 | 22% | 42% | 56%
 | | 8 | 17% | 33% | 44%
 | | 10 | 13% | 26% | 36%
 | 120 | 2 | 64% | 77% | 96%
 | | 4 | 37% | 63% | 78%
 | | 6 | 26% | 52% | 68%
 | | 8 | 19% | 42% | 58%
 | | 10 | 15% | 34% | 51%
 | 240 | 2 | 61% | 79% | 107%
 | | 4 | 35% | 64% | 97%
 | | 6 | 24% | 57% | 88%
 | | 8 | 19% | 52% | 80%
 | | 10 | 15% | 47% | 74%
10,000 | 60 | 2 | 62% | 74% | 89%
 | | 4 | 35% | 56% | 67%
 | | 6 | 22% | 42% | 55%
 | | 8 | 17% | 33% | 44%
 | | 10 | 13% | 26% | 36%
 | 120 | 2 | 64% | 77% | 96%
 | | 4 | 38% | 64% | 82%
 | | 6 | 26% | 52% | 67%
 | | 8 | 19% | 42% | 58%
 | | 10 | 15% | 35% | 51%
 | 240 | 2 | 61% | 80% | 111%
 | | 4 | 34% | 64% | 101%
 | | 6 | 23% | 58% | 88%
 | | 8 | 18% | 52% | 80%
 | | 10 | 14% | 47% | 75%
Table 9. Average number of function iterations for SPP_TLBO, Runs = 30, Subpopulation size = 120.

Function | Seq. | NoT = 2 | NoT = 4 | NoT = 6 | NoT = 8 | NoT = 10
F1 | 432 | 474 | 426 | 398 | 388 | 387
F2 | 507 | 443 | 443 | 474 | 455 | 447
F4 | 32 | 25 | 39 | 33 | 33 | 26
F6 | 285 | 269 | 117 | 106 | 101 | 96
F7 | 41 | 38 | 34 | 37 | 35 | 33
F8 | 282 | 262 | 206 | 307 | 146 | 176
F9 | 209 | 176 | 164 | 173 | 158 | 148
F10 | 2001 | 2333 | 1992 | 2289 | 1770 | 2096
F11 | 14,059 | 10,943 | 12,685 | 11,232 | 13,834 | 14,259
F12 | 54 | 47 | 44 | 32 | 44 | 33
F13 | 277 | 268 | 234 | 398 | 285 | 285
F15 | 22 | 17 | 20 | 18 | 18 | 15
F18 | 54 | 132 | 74 | 66 | 91 | 75
F19 | 16 | 19 | 18 | 16 | 18 | 16
F22 | 947 | 538 | 368 | 219 | 219 | 427
F24 | 300 | 269 | 281 | 262 | 247 | 255
F25 | 427 | 333 | 347 | 300 | 324 | 279
F26 | 35 | 10 | 20 | 18 | 8 | 6
F27 | 62 | 69 | 45 | 40 | 41 | 35
F29 | 67 | 58 | 73 | 45 | 41 | 103
F30 | 657 | 216 | 410 | 176 | 221 | 155
