Article

A Cyclical Non-Linear Inertia-Weighted Teaching–Learning-Based Optimization Algorithm

1 School of Computer Science, Xianyang Normal University, Xianyang 712000, China
2 School of Information Engineering, Xizang Minzu University, Xianyang 712000, China
* Author to whom correspondence should be addressed.
Algorithms 2019, 12(5), 94; https://doi.org/10.3390/a12050094
Submission received: 6 March 2019 / Revised: 17 April 2019 / Accepted: 23 April 2019 / Published: 3 May 2019

Abstract:
Since the teaching–learning-based optimization (TLBO) algorithm was proposed, many improved algorithms have been presented in recent years that simulate the teaching–learning process of a classroom to solve global optimization problems effectively. In this paper, a cyclical non-linear inertia-weighted teaching–learning-based optimization (CNIWTLBO) algorithm is presented. The algorithm introduces a cyclical non-linear inertia weight factor into the basic TLBO to control the memory rate of learners, and uses a non-linear mutation factor to control random mutation of learners during the learning process. To demonstrate the performance of the proposed algorithm, it is tested on a set of classical benchmark functions, and comparative results are provided against the basic TLBO, several TLBO variants and other well-known optimization algorithms. The experimental results show that the proposed algorithm has better global search ability and higher search accuracy than the compared algorithms, escapes local minima more easily, and maintains a fast convergence rate.

1. Introduction

As is well known, research and applications of swarm intelligence optimization focus mostly on nature-inspired algorithms. Over the past decades, many classical population-based, nature-inspired optimization algorithms have been proposed. These algorithms have proven effective in solving global optimization problems and specific types of engineering optimization problems; examples include the GA (genetic algorithm) [1,2], ACO (ant colony optimization) [3,4], PSO (particle swarm optimization) [5,6], ABC (artificial bee colony) [7,8,9] and DE (differential evolution) [10,11,12]. However, every algorithm has its own merits and demerits when solving diverse problems. To overcome shortcomings such as being easily trapped in local optima and slow convergence, numerous improved swarm intelligence algorithms have been presented, including variants of the above algorithms and their hybrids. In general, the quality of an optimization algorithm depends on three basic factors: the ability to obtain the true global optimum, a fast convergence speed and a minimum of control parameters. Therefore, the ultimate aim in practical optimization applications is an algorithm with high calculation accuracy, fast convergence and a minimum of control parameters for ease of use in programs.
The teaching–learning-based optimization (TLBO) algorithm is a swarm intelligence algorithm that simulates the teaching and learning process in a class. It was first proposed by Rao et al. [13,14]. TLBO is a parameter-less algorithm [15]: it requires only the common control parameters, such as population size and number of generations, and needs no algorithm-specific control parameters. Therefore, there is no burden of tuning control parameters, and the TLBO algorithm is simple, effective and computationally inexpensive. Because TLBO can achieve good results at a faster convergence speed than the algorithms mentioned above, it has been successfully applied in many diverse optimization fields [16,17,18,19,20]. Of course, TLBO also has some disadvantages. Although it has high search accuracy and fast convergence speed, its exploitation ability is poor: it easily falls into local optima and exhibits premature convergence on multimodal functions. Therefore, many improved TLBO algorithms have been presented. Rao et al. proposed the ETLBO (elitist TLBO) algorithm [15] for solving complex constrained optimization problems, and applied a modified TLBO algorithm [17] to multi-objective optimization problems. Aiming at neural network training in portable AI (artificial intelligence) devices, Yang et al. [21] proposed the CTLBO (compact teaching–learning-based optimization) algorithm for global continuous problems, which reduces the memory requirement while maintaining high performance. Wang et al. [22] presented an improved TLBO algorithm in which, to balance diversity and convergence, an efficient subpopulation is employed in the teacher phase and a ranking differential vector is used in the learner phase. To improve convergence, Kumar Shukla et al. [23] introduced neighbour learning and differential mutation into the basic TLBO. Combining TLBO and ABC, Chen et al. [24] proposed a hybrid teaching–learning-based artificial bee colony (TLABC) to solve parameter estimation problems of solar photovoltaic models. Some other improved TLBO algorithms [25,26,27,28] have also been presented for global function optimization. We previously proposed a non-linear inertia weighted teaching–learning-based optimization algorithm (NIWTLBO) [29], which has a fast convergence rate and high accuracy; however, its exploitation ability is weak.
To enhance the exploitation ability of NIWTLBO and avoid premature convergence, we propose a new TLBO variant, the cyclical non-linear inertia weighted teaching–learning-based optimization algorithm (CNIWTLBO). The algorithm replaces the old inertia weight with a cyclical non-linear inertia weight factor to control the memory rate of learners, and employs a non-linear mutation factor to control random mutation of learners in the teacher and learner phases. Experiments are conducted on 21 well-known benchmark problems, and the simulation results of CNIWTLBO are compared with the original TLBO and several improved TLBO variants. The results show that CNIWTLBO improves the exploitation ability and achieves not only a faster convergence speed than the basic TLBO but also higher search accuracy on most of these benchmark problems.
The rest of this paper is organized as follows. The basic TLBO algorithm is briefly introduced in Section 2. In Section 3, the proposed CNIWTLBO algorithm will be described in detail. Section 4 provides the simulation results and discussions demonstrating the performance of CNIWTLBO in comparison with other optimization algorithms. Finally, the conclusion and future work are summarized in Section 5.

2. Teaching–Learning-Based Optimization

The basic TLBO is divided into a "Teacher Phase" and a "Learner Phase". Learning from the teacher, which moves the students' knowledge level closer to the teacher's, is termed the "Teacher Phase", and learning through interaction with other learners to increase one's knowledge is called the "Learner Phase". In TLBO, the population is described as a group of learners, and each learner is considered an individual of the evolutionary algorithm. The subjects offered to the learners correspond to the different design variables, which are the input parameters of the objective function in the optimization problem. The fitness value of the objective function is the learner's total result. The learner with the best solution of the optimization problem is considered the teacher of the entire population.
The class $\{X_1, X_2, \ldots, X_{NP}\}$ is composed of one teacher and the learners, where $X_i = (X_{i,1}, \ldots, X_{i,j}, \ldots, X_{i,D})$ (i = 1, 2, …, NP) denotes the i-th learner, NP is the number of learners (i.e., the population size), and D represents the number of major subjects in the class (i.e., the dimension of the design variables). $X_{i,j}$ represents the result of the i-th learner on the j-th subject. The class X (i.e., the population) is initialized as an NP × D matrix whose entries are generated randomly between the lower and upper bounds of the corresponding design variables.
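For readers who prefer code, a minimal sketch of this initialization is shown below (written in Python/NumPy rather than the Matlab used by the authors; the function name and scalar bounds are our own illustrative choices):

```python
import numpy as np

def initialize_class(NP, D, lower, upper, rng=None):
    """Randomly initialize NP learners (rows) on D subjects (columns)
    within the variable bounds, as described for the basic TLBO."""
    rng = np.random.default_rng() if rng is None else rng
    # X[i, j] is the result of learner i on subject j, drawn uniformly
    # between the lower and upper bound of that design variable.
    return lower + (upper - lower) * rng.random((NP, D))

# Example: a class of 40 learners with 30 design variables in [-100, 100]
X = initialize_class(40, 30, -100.0, 100.0)
```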

2.1. Teacher Phase

In the teacher phase, the teacher provides knowledge to the learners to increase the mean result of the class. The learner with the best fitness in the current generation is considered the teacher $X_{teacher}$, and the mean result of the learners on a particular subject j (j = 1, 2, …, D) is $M_j = \frac{1}{NP}\sum_{i=1}^{NP} X_{i,j}$. The mean result of the class may thus increase from a low level towards the teacher's level. However, due to individual differences and forgetfulness, it is impossible for the learners to gain all of the teacher's knowledge and reach the teacher's level. The solution of each learner is updated by Equation (1), with the teaching factor given by Equation (2):
$$X_{i,j}^{new} = X_{i,j}^{old} + r\,(X_{teacher,j} - T_F M_j) \qquad (1)$$
$$T_F = \mathrm{round}\left[1 + rand(0,1)\{2-1\}\right] \qquad (2)$$
where $X_{teacher,j}$ is the result of the teacher in subject j, r is a random number in the range [0, 1], and $T_F$ is the teaching factor, which decides the value of the mean to be changed and takes the value 1 or 2. The values of r and $T_F$ are generated randomly within the algorithm and are not supplied as input parameters.
Since the optimization problem considered here is a minimization problem, the goal is to find the minimum of the objective function f. If the new value gives a better function value in an iteration, the old value $X_{i,j}^{old}$ is replaced with the new value $X_{i,j}^{new}$. The update rule is given by Equation (3):
$$\text{if } f(X_i^{new}) < f(X_i^{old}) \text{ then } X_{i,j}^{old} = X_{i,j}^{new} \qquad (3)$$
where $f(X_i^{new})$ and $f(X_i^{old})$ represent the new and old total result of the i-th learner, respectively. All the new values accepted at the end of the teacher phase become the input to the learner phase.
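The teacher phase described by Equations (1)–(3) can be sketched as follows (an illustrative Python/NumPy reading of the update rules, not the authors' implementation; a scalar random number r is used, following Equation (1) as written):

```python
import numpy as np

def teacher_phase(X, fitness, f, rng=None):
    """One teacher phase of the basic TLBO, following Equations (1)-(3).
    X is the NP x D population, fitness holds f(X[i]) for every learner,
    and f is the objective function to be minimized."""
    rng = np.random.default_rng() if rng is None else rng
    teacher = X[np.argmin(fitness)]            # best learner acts as teacher
    M = X.mean(axis=0)                         # mean result per subject
    for i in range(X.shape[0]):
        r = rng.random()                       # random number in [0, 1]
        TF = int(round(1 + rng.random()))      # teaching factor, 1 or 2 (Eq. (2))
        X_new = X[i] + r * (teacher - TF * M)  # Eq. (1)
        f_new = f(X_new)
        if f_new < fitness[i]:                 # greedy acceptance (Eq. (3))
            X[i], fitness[i] = X_new, f_new
    return X, fitness
```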

2.2. Learner Phase

During this phase, the learners increase their knowledge through mutual interaction. A learner interacts randomly with other learners to acquire new knowledge and thereby raise his or her knowledge level. In the learner phase, let the i-th learner be $X_i$ and let another randomly chosen learner be $X_q$, with i ≠ q. The new result of the i-th learner is given by Equation (4):
$$X_{i,j}^{new} = \begin{cases} X_{i,j}^{old} + r\,(X_{i,j} - X_{q,j}) & \text{if } f(X_i) < f(X_q) \\ X_{i,j}^{old} + r\,(X_{q,j} - X_{i,j}) & \text{otherwise} \end{cases} \qquad (4)$$
where r is a random number between 0 and 1, and $f(X_i)$ and $f(X_q)$ are the fitness values of learners $X_i$ and $X_q$, respectively. If the new value yields a better fitness of the objective function, it is accepted; the learner is then updated using Equation (3) in the same way as in the teacher phase.
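A corresponding sketch of the learner phase of Equation (4) is given below (again Python/NumPy, with names of our own choosing; not the authors' code):

```python
import numpy as np

def learner_phase(X, fitness, f, rng=None):
    """One learner phase of the basic TLBO, following Equation (4):
    each learner interacts with a randomly chosen partner and moves
    towards the better of the two."""
    rng = np.random.default_rng() if rng is None else rng
    NP = X.shape[0]
    for i in range(NP):
        q = rng.integers(NP)
        while q == i:                          # partner must differ from learner i
            q = rng.integers(NP)
        r = rng.random()
        if fitness[i] < fitness[q]:
            X_new = X[i] + r * (X[i] - X[q])   # learner i is better: move away from q
        else:
            X_new = X[i] + r * (X[q] - X[i])   # learner q is better: move towards q
        f_new = f(X_new)
        if f_new < fitness[i]:                 # greedy acceptance, as in Eq. (3)
            X[i], fitness[i] = X_new, f_new
    return X, fitness
```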

3. Cyclical Non-Linear Inertia-Weighted Teaching–Learning-Based Optimization (CNIWTLBO) Algorithm

3.1. Algorithm Description

In the basic TLBO algorithm, the teacher tries to shift the mean of the learners towards himself or herself by teaching in the teacher phase, and the learners improve their knowledge through interaction in the learner phase. The learners improve their level by accumulating knowledge during the learning process, i.e., they learn new knowledge on the basis of their existing knowledge. The teacher hopes that the students will reach his or her knowledge level as soon as possible, but this is impossible because of the students' tendency to forget.
In NIWTLBO, we described the phenomenon that a student usually forgets part of his or her existing knowledge due to the physiological characteristics of the brain [29], and introduced the learning and forgetting curves presented by Ebbinghaus. As we know, new knowledge must be learned several times before it is firmly remembered, and over time we forget part of what we have learned. Therefore, we need to review old knowledge again and again to maintain our level of knowledge periodically. To simulate this learning process, a cyclical memory weight factor is applied to the existing knowledge of the student. This weight factor is a non-linear inertia weight that controls the memory rate of learners cyclically. We therefore introduce the cyclical non-linear inertia weight factor wc into Equations (1) and (4) of the basic TLBO; it scales the existing knowledge of the learner when calculating the new value. Compared with the TLBO algorithm, the previous knowledge accumulation of a learner is weighted by the factor wc before it is used to calculate new values.
Let T be the number of iterations of the algorithm in one learning cycle. The cyclical non-linear inertia weight factor is then defined as Equation (5):
$$w_c = 1 - \exp\!\left(-\frac{\left(\mathrm{mod}(iter, T)\right)^2}{2 \times (T/8)^2}\right) \times (1 - w_{cmin}), \qquad T \le MAXITER \qquad (5)$$
where iter is the current iteration number of the algorithm and MAXITER is the maximum number of allowable iterations, which is an integral multiple of T. The parameter wcmin is the minimum value of the cyclical non-linear inertia weight factor wc and should lie between 0.5 and 1. The value of wcmin should not be too small; otherwise the individuals become worse because they remember too little existing knowledge in one iteration, and the algorithm finds it difficult to converge to the true global optimal solution. In our experiments the value is 0.6. The factor wc is called the cyclical memory rate, and its curve is shown in Figure 1. The cyclical non-linear inertia weight factor wc is applied in the new update equations, Equations (7) and (8). During a learning cycle of this improved TLBO, individuals attempt to search diverse areas of the search space at an early stage; in the later stage, individuals move within a small range to adjust the trial solution slightly so as to explore a relatively small local space. This learning cycle is then repeated over and over again.
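A small sketch of Equation (5), in the reconstructed form given above, shows how wc resets to wcmin at the start of each cycle and rises towards 1 as the cycle progresses (the function name and vectorized use are our own):

```python
import numpy as np

def cyclical_weight(iteration, T, wc_min=0.6):
    """Cyclical non-linear inertia weight of the reconstructed Equation (5):
    wc starts at wc_min at the beginning of each T-iteration cycle and
    rises towards 1 as the cycle progresses."""
    phase = np.mod(iteration, T)
    return 1.0 - np.exp(-phase ** 2 / (2.0 * (T / 8.0) ** 2)) * (1.0 - wc_min)

# Example: the memory-rate curve over two learning cycles of T = 250 iterations
wc_curve = cyclical_weight(np.arange(500), T=250)
```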
In order to obtain a new set of better learners (i.e., individuals), the difference between the existing mean result and the corresponding result of the teacher is added to the existing learners in the teacher phase. Similarly, in the learner phase, the difference between the existing result of a learner and the corresponding result of another randomly selected learner is added to the existing learner. As Equations (1) and (4) show, the value added to the existing learner is formed from the difference of results and the random number r, so the difference value is largely determined by r in both phases. In our proposed method, the random number r of the basic TLBO is modified as follows:
$$r' = \frac{1 + rand(0,1)}{2} \qquad (6)$$
where rand(0, 1) is a uniformly distributed random number in the range [0, 1]. Equation (6) generates a random number in the range [0.5, 1], similar to the method proposed by Satapathy [27]; r′ was called the dynamic inertia weight by Eberhart [30]. Thus, the mean value of the original random number r is increased from 0.5 to 0.75, so the probability of stochastic variation is increased and the difference value added to the existing learners is enlarged. Meanwhile, wc increases from small to large within one learning cycle. Under the combined effect of wc and r′, the proposed algorithm does not suffer premature convergence; instead, it improves population diversity, avoids prematurity in the search process and increases the ability of the basic TLBO to escape from local optima, which enhances the algorithm's performance. On the surface of some multimodal functions, the original random number r may cause part of the population to cluster near a local optimum. With the new dynamic inertia weight r′, the population has more chances to escape from local optima and to keep moving towards the global optimum point until the true global optimum is reached.
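The modified random number of Equation (6) is trivial to generate; a one-line sketch (the helper name is our own) is:

```python
import numpy as np

def dynamic_inertia_weight(rng=None):
    """Dynamic inertia weight r' of Equation (6): a uniform random number
    mapped from [0, 1] into [0.5, 1], so its mean is 0.75 instead of 0.5."""
    rng = np.random.default_rng() if rng is None else rng
    return (1.0 + rng.random()) / 2.0
```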
Now, the cyclical non-linear inertia weight factor and the dynamic inertia weight factor are applied to the basic TLBO algorithm. In the teacher phase, the new set of improved learners can be expressed by the equation as follows:
$$X_{i,j}^{new} = w_c X_{i,j}^{old} + r'\,(X_{teacher,j} - T_F M_j) \qquad (7)$$
And in the learner phase, the new set of improved learners can be expressed by the equation as follows:
$$X_{i,j}^{new} = \begin{cases} w_c X_{i,j}^{old} + r'\,(X_{i,j} - X_{q,j}) & \text{if } f(X_i) < f(X_q) \\ w_c X_{i,j}^{old} + r'\,(X_{q,j} - X_{i,j}) & \text{otherwise} \end{cases} \qquad (8)$$
where $w_c$ is given by Equation (5) and $r'$ is given by Equation (6).
In order to keep the diversity of the algorithm and enhance the global searching ability at the beginning of each iteration cycle, the two individuals with the worst solutions are randomly mutated into new individuals. The probability of mutation is expressed by Equation (9): if Pc > rand(0, 1) in an iteration, the two worst individuals are randomly mutated. The mutation process is very simple: the design variables of the two individuals are re-initialized randomly within the search space.
$$P_c = 0.5\exp\!\left(-\frac{iter^2}{2 \times (MAXITER/8)^2}\right) \qquad (9)$$
In this way, the diversity of the population is expanded and premature convergence of the algorithm is restrained.
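The mutation step can be sketched as below, assuming the reconstructed form of Equation (9); re-initializing the two worst learners uniformly in the search space follows the description above (the helper name and bound arguments are our own):

```python
import numpy as np

def mutate_worst(X, fitness, f, iteration, max_iter, lower, upper, rng=None):
    """Mutation step of CNIWTLBO, assuming the reconstructed Equation (9):
    with probability Pc, which decays over the run, the two learners with
    the worst fitness are re-initialized uniformly in the search space."""
    rng = np.random.default_rng() if rng is None else rng
    Pc = 0.5 * np.exp(-iteration ** 2 / (2.0 * (max_iter / 8.0) ** 2))
    if Pc > rng.random():
        worst = np.argsort(fitness)[-2:]       # indices of the two worst learners
        X[worst] = lower + (upper - lower) * rng.random((2, X.shape[1]))
        fitness[worst] = np.apply_along_axis(f, 1, X[worst])
    return X, fitness
```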

3.2. Behavior Parameter Analysis

The CNIWTLBO algorithm has two parameters, wcmin and T. The parameter wcmin is the minimum value of the cyclical non-linear inertia weight factor wc; its value lies in the range [0.5, 1], so wc is between 0.5 and 1. As demonstrated in the experiments, the algorithm performs better when wcmin is 0.6 (i.e., wc increases from 0.6 to 1 within one learning cycle). If wcmin is set to a very small value, the initial value of wc in each learning cycle becomes small, so the individuals remember too little existing knowledge in the beginning phase; in this case it is difficult for the algorithm to converge to the true global optimal solution. On the other hand, because the mean value of r′ is increased from 0.5 to 0.75, choosing a large value of wcmin can lead to premature convergence.
The parameter T is the number of iterations of the algorithm in one learning cycle. A complex function that is difficult to converge on needs at least 4 learning cycles; there is no strict limit for simple functions. The value of T depends on the complexity of the function: the higher the complexity, the greater the value of T. The maximum number of allowable iterations must be an integral multiple of T. A large number of experiments showed that setting T to around 200 gives better results.

3.3. Framework of CNIWTLBO

The framework of the CNIWTLBO algorithm is described as follows (Algorithm 1):
Algorithm 1 The Framework of CNIWTLBO
Step 1:
Initialize the parameters of the algorithm. Set the number of population (NP, i.e., the number of students), the dimension of decision variables (D, i.e., the number of subjects), the generation number iter = 1, and the maximum number of iterations (Maxiter, i.e., the maximum generation number).
Step 2:
Initialize the population. Generate a random population of NP solutions within the specified ranges, $P = \{X_1, X_2, \ldots, X_{NP}\}$, and calculate the fitness value of the objective function f(x) for each individual.
Step 3:
Calculate the cyclical non-linear inertia weight factor $w_c$ and the dynamic inertia weight $r'$ according to Equations (5) and (6), respectively.
Step 4:
Choose the individual with the best fitness in the population as the teacher Xteacher and calculate the mean result Mj of each subject.
Step 5:
Execute the teacher phase. Calculate the new marks of the learners using Equation (7); Evaluate all learners by calculating the fitness value of the objective function and update the old values of the individuals using Equation (3).
Step 6:
Execute the learner phase. Calculate the new values of the learners using Equation (8); recalculate the fitness value of the objective function and update the old values of the individuals according to Equation (3) in the same way.
Step 7:
Execute mutation strategy. Calculate the probability of variation Pc using Equation (9). If Pc > rand(0, 1), then the two individuals with the worst solution will be randomly mutated into new individuals.
Step 8:
Algorithm termination: if the terminating condition is satisfied, i.e., iter > Maxiter, stop the algorithm and output the best solution; otherwise, go to Step 3.
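Putting the steps together, the following is a compact, self-contained sketch of Algorithm 1 in Python/NumPy (the paper's implementation is in Matlab; boundary clipping and other minor details are our own assumptions, since they are not specified above):

```python
import numpy as np

def cniwtlbo(f, D, lower, upper, NP=40, max_iter=1000, T=250, wc_min=0.6, seed=None):
    """Compact sketch of the CNIWTLBO framework (Algorithm 1), assuming the
    reconstructed Equations (5)-(9). Bound handling by clipping is our own
    assumption; the paper does not specify it."""
    rng = np.random.default_rng(seed)
    X = lower + (upper - lower) * rng.random((NP, D))     # Step 2: random class
    fit = np.apply_along_axis(f, 1, X)
    for it in range(1, max_iter + 1):
        # Step 3: cyclical weight wc (Eq. (5)); r' (Eq. (6)) is drawn per learner below
        wc = 1.0 - np.exp(-np.mod(it, T) ** 2 / (2.0 * (T / 8.0) ** 2)) * (1.0 - wc_min)
        # Step 4: teacher (best learner) and subject-wise mean
        teacher = X[np.argmin(fit)]
        M = X.mean(axis=0)
        for i in range(NP):
            # Step 5: teacher phase, Eq. (7)
            r1 = (1.0 + rng.random()) / 2.0
            TF = int(round(1 + rng.random()))
            Xn = np.clip(wc * X[i] + r1 * (teacher - TF * M), lower, upper)
            fn = f(Xn)
            if fn < fit[i]:
                X[i], fit[i] = Xn, fn
            # Step 6: learner phase, Eq. (8)
            q = rng.integers(NP)
            while q == i:
                q = rng.integers(NP)
            r2 = (1.0 + rng.random()) / 2.0
            if fit[i] < fit[q]:
                Xn = wc * X[i] + r2 * (X[i] - X[q])
            else:
                Xn = wc * X[i] + r2 * (X[q] - X[i])
            Xn = np.clip(Xn, lower, upper)
            fn = f(Xn)
            if fn < fit[i]:
                X[i], fit[i] = Xn, fn
        # Step 7: mutate the two worst learners with probability Pc, Eq. (9)
        Pc = 0.5 * np.exp(-it ** 2 / (2.0 * (max_iter / 8.0) ** 2))
        if Pc > rng.random():
            worst = np.argsort(fit)[-2:]
            X[worst] = lower + (upper - lower) * rng.random((2, D))
            fit[worst] = np.apply_along_axis(f, 1, X[worst])
    best = np.argmin(fit)                                  # Step 8: return best learner
    return X[best], fit[best]

# Example run on the 30-dimensional Sphere function (f1 in Table 1)
best_x, best_f = cniwtlbo(lambda x: np.sum(x ** 2), D=30, lower=-100.0, upper=100.0,
                          NP=40, max_iter=500, T=250, seed=1)
```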

4. Benchmark Tests

In this section, CNIWTLBO is tested on a set of benchmark functions to evaluate its performance in comparison with the basic TLBO and NIWTLBO, as well as with other optimization algorithms reported in the literature. All algorithms are coded in the Matlab programming language and run in the Matlab 2017a environment on a laptop with an Intel Core i7 2.60 GHz processor and 8 GB RAM.
In the experiments, 21 well-known benchmark functions with unimodal/multimodal characteristics are adopted. The details of these functions are shown in Table 1, where "C" denotes the characteristic of the function, "D" is its dimension, "Range" is the boundary of the variables, and "MinFunVal" is the theoretical global minimum.
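For reference, a few of the Table 1 benchmarks can be written as NumPy functions in their standard forms (our own sketch, not taken from the paper's code):

```python
import numpy as np

def sphere(x):                      # f1, unimodal, minimum 0 at the origin
    return np.sum(x ** 2)

def rastrigin(x):                   # f18, multimodal, minimum 0 at the origin
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def ackley(x):                      # f15, multimodal, minimum 0 at the origin
    d = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

def griewank(x):                    # f19, multimodal, minimum 0 at the origin
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1
```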

4.1. CNIWTLBO vs. Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Differential Evolution (DE) and Teaching–Learning-Based Optimization (TLBO)

In order to assess the ability of the proposed algorithm to reach the global optimum, 20 of the benchmark functions in Table 1 are tested using the PSO, ABC, DE, TLBO and CNIWTLBO algorithms. To keep the comparison consistent, all algorithms are run with the same maximum number of function evaluations (FEs) and the same values of the common control parameters, such as population size. In general, the algorithm that requires fewer function evaluations to reach the same best solution can be considered better. It should be noted that the counting of FEs differs between TLBO variants and other meta-heuristic algorithms, as discussed in [28]; therefore, 2 FEs are counted in each iteration for the original TLBO, CNIWTLBO and most TLBO variants. In this experiment, the maximum number of fitness function evaluations is 80,000 for all benchmark functions, and the other specific parameters of the algorithms are given in Table 2.
Each benchmark function in Table 1 is run independently 30 times with the PSO, ABC, DE, TLBO and CNIWTLBO algorithms and the results are compared. Each algorithm is terminated after 80,000 FEs, or earlier if it reaches the global minimum value. The results, in the form of the mean value and standard deviation (SD) of the objective function over the 30 independent runs, are shown in Table 3.
Moreover, the number of fitness evaluations needed by each algorithm to reach the true global optimum solution, in the form of mean value and standard deviation, is reported in Table 4.
It is observed from the comparative results in Table 3 that CNIWTLBO outperforms PSO, ABC, DE and TLBO on functions f1–f7, f15, f16, f18 and f20. On functions f9–f14, the performance of CNIWTLBO, PSO, ABC, DE and TLBO is very similar, and all algorithms obtain the global optimal value. Moreover, TLBO is better than PSO, ABC and DE on functions f1–f7 and f15. The performance of CNIWTLBO, DE and TLBO is similar and better than that of PSO and ABC on f17 (Multimod) and f19 (Griewank). On f8 (Rosenbrock), ABC is better than the others. On f21 (Weierstrass), CNIWTLBO and TLBO perform alike and outperform PSO, ABC and DE.
From Table 4 it can be seen that a smaller number of fitness evaluations indicates that the algorithm reaches the true global optimum more quickly; in other words, the smaller the number of fitness evaluations, the faster the convergence rate. It is obvious that the CNIWTLBO algorithm requires fewer function evaluations than the basic TLBO, PSO, ABC and DE to reach the true global optimum for most of the benchmark functions. Therefore, the convergence rate of the CNIWTLBO algorithm is faster than that of TLBO, PSO, ABC and DE for most of the benchmark functions in Table 1, except f13 (Six Hump Camel Back) and f14 (Goldstein-Price).

4.2. CNIWTLBO vs. the Variants of PSO

In order to compare the ability of the CNIWTLBO algorithm to obtain the global optimal value with that of PSO variants such as PSO-w [31], PSO-cf [32], CPSO-H [33] and CLPSO [34], 8 unimodal and multimodal benchmark functions from Table 1 are tested in this experiment. For consistency, the CNIWTLBO algorithm and the PSO variants use the same maximum number of function evaluations (30,000 FEs) and dimension (10D). As before, the CNIWTLBO algorithm is run independently 30 times on each benchmark function. The comparative results over the 30 independent runs, in the form of mean value and standard deviation, are shown in Table 5. In this experiment, the results of all algorithms except CNIWTLBO are taken from the literature [28,35], and the population size of each algorithm is 10.
From the results in Table 5, it can be observed that CNIWTLBO and TLBO outperform the PSO-w, PSO-cf, CPSO-H and CLPSO algorithms on f1 (Sphere), f15 (Ackley) and f19 (Griewank). The performance of CNIWTLBO and CLPSO is similar on Rastrigin, NCRastrigin and Weierstrass. On Rosenbrock and Schwefel 2.26, the CNIWTLBO algorithm does not perform well compared with the other algorithms.

4.3. CNIWTLBO vs. the Variants of ABC, DE

The experiment in this section compares the ability of the CNIWTLBO algorithm to reach the global optimum with that of ABC and DE variants on 7 benchmark functions from Table 1. The ABC variants are the gbest-guided artificial bee colony (GABC) algorithm [36] and the improved artificial bee colony (IABC) algorithm [37]; the DE variants are SaDE and JADE. For a fair comparison, the parameters of the algorithms are the same as in the literature [27], where the population size is 20 and the dimension is 30. Like the other algorithms, TLBO and CNIWTLBO are tested with the same numbers of function evaluations listed in Table 6. The comparative results, in the form of mean value and standard deviation, are listed in Table 6. The results of GABC, IABC, SaDE and JADE are taken directly from the literature [27], while the results of TLBO and CNIWTLBO are obtained after 30 independent runs on each benchmark function in the same way as before.
It can be observed from the results that CNIWTLBO performs much better than GABC, SaDE and JADE on all the benchmark functions in Table 6, and outperforms the IABC algorithm on f1 (Sphere), f4 (Schwefel 1.2), f5 (Schwefel 2.22), f6 (Schwefel 2.21) and f15 (Ackley). Furthermore, CNIWTLBO is better than TLBO on f15 (Ackley) and f18 (Rastrigin). This indicates that the CNIWTLBO algorithm performs well.

4.4. CNIWTLBO vs. the Variants of TLBO in Different Dimensions

In order to further assess the performance of CNIWTLBO, experiments are carried out to compare it with other TLBO variants in different dimensions. The adopted variants are WTLBO [25], ITLBO [26], I-TLBO [28] and NIWTLBO [29]. In the experiments, 9 unimodal and multimodal benchmark functions from Table 1 are used to evaluate the algorithms. For fair comparison, CNIWTLBO and all adopted TLBO variants use the same parameters. In this work, evolutionary generations are used to evaluate the performance of CNIWTLBO and the TLBO variants: the population size is set to 30 and the number of evolutionary generations is set to 2000. In the I-TLBO algorithm, the number of teachers is 4, and the learning cycle T in CNIWTLBO is 500. CNIWTLBO and the TLBO variants are tested on the 9 benchmark functions with 20, 50 and 100 dimensions, respectively. To eliminate the impact of randomness on the results, 30 independent runs are conducted for each algorithm. The experimental results, in the form of the mean solution, are reported in Table 7.
From Table 7, it can be observed that TLBO and its variants perform similarly on functions f1, f5, f8, f15, f16 and f18–f21. The performance of CNIWTLBO is better than that of the other algorithms on f5 (Schwefel 2.22), f15 (Ackley) and f16 (Schwefel 2.26) in 20, 50 and 100 dimensions. Moreover, it outperforms the other algorithms on f20 (NCRastrigin) in 100 dimensions and on f21 (Weierstrass) in 50 and 100 dimensions. On f1 (Sphere), f18 (Rastrigin) and f19 (Griewank), the performance of CNIWTLBO and NIWTLBO is similar. However, CNIWTLBO performs worse than the other algorithms on the f8 (Rosenbrock) function.

5. Conclusions

A modified TLBO algorithm called CNIWTLBO has been proposed in this paper for solving global optimization problems. Two learning factors are introduced: a cyclical non-linear inertia weight factor that controls the memory rate of learners, and a non-linear mutation factor that controls random mutation of learners during the learning process. With these modifications to the basic TLBO, the CNIWTLBO algorithm has stronger exploration capability, is effectively prevented from falling into local minima and maintains search accuracy. In the experiments, 21 classical benchmark functions are used to evaluate the performance of CNIWTLBO, and the results are compared with those of other meta-heuristic algorithms and their variants available in the literature, as well as with other TLBO variants. The experimental results show that the performance of CNIWTLBO in solving global optimization problems is satisfactory.
In future work, the CNIWTLBO algorithm will be tested on engineering benchmarks; to verify its efficiency, the proposed method will be applied to constrained engineering optimization problems. Furthermore, hybrid methods combining the proposed algorithm with other classical intelligent algorithms will be investigated to further improve the performance of the TLBO algorithm.

Author Contributions

Methodology, Z.W.; software, Z.W.; validation, R.X.; data curation, Z.W.; writing—original draft preparation, Z.W.; writing—review and editing, R.X.

Funding

This research was funded by the Natural Science Foundation in Xizang Province of China (No. XZ2018ZR G-64), the Natural Science Foundation of Shaanxi Province of China (No. 17JK0826), the Science Basic Research Program in Xianyang Normal University of China (No. XSYK18012) and the Education Scientific Program of the 13th Five-year Plan in Shaanxi Province of China (No. SGH18H373).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Thakur, M. A new genetic algorithm for global optimization of multimodal continuous functions. J. Comput. Sci. 2014, 5, 298–311. [Google Scholar] [CrossRef]
  2. Hussein, H.A.; Demiroglu, I.; Johnston, R.L. Application of a parallel genetic algorithm to the global optimization of medium-sized Au–Pd sub-nanometre clusters. Eur. Phys. J. B 2018, 91, 34. [Google Scholar] [CrossRef] [Green Version]
  3. Chandra Mohan, B.; Baskaran, R. A survey: Ant colony optimization based recent research and implementation on several engineering domain. Expert Syst. Appl. 2012, 39, 4618–4627. [Google Scholar] [CrossRef]
  4. Chen, Z.; Wang, R.L. Ant colony optimization with different crossover schemes for global optimization. Clust. Comput. 2017, 20, 1247–1257. [Google Scholar] [CrossRef]
  5. Ali, M.M.; Kaelo, P. Improved particle swarm algorithms for global optimization. Appl. Math. Comput. 2008, 196, 578–593. [Google Scholar] [CrossRef]
  6. Liu, P.; Jing, L. Multi-leader PSO (MLPSO): A new PSO variant for solving global optimization problems. Appl. Soft Comput. 2017, 61, 256–263. [Google Scholar] [CrossRef]
  7. Kang, F.; Li, J.; Ma, Z. Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions. Inf. Sci. 2011, 181, 3508–3531. [Google Scholar] [CrossRef]
  8. Karaboga, D.; Gorkemli, B.; Ozturk, C.; Karaboga, N. A comprehensive survey: Artificial bee colony (ABC) algorithm and applications. Artif. Intell. Rev. 2014, 42, 21–57. [Google Scholar] [CrossRef]
  9. You, X.; Ma, Y.; Liu, Z.; Xie, M. An ABC Algorithm with Recombination. Int. J. Comput. Commun. Control 2018, 13, 590–601. [Google Scholar] [CrossRef]
  10. Mohamed, A.K.; Mohamed, A.W.; Elfeky, E.Z.; Saleh, M. Solving constrained non-linear integer and mixed-integer global optimization problems using enhanced directed differential evolution algorithm. In Machine Learning Paradigms: Theory and Application; Springer: Cham, Switzerland, 2019. [Google Scholar]
  11. Mohamed, A.W. Solving large-scale global optimization problems using enhanced adaptive differential evolution algorithm. Complex Intell. Syst. 2017, 3, 205–231. [Google Scholar] [CrossRef]
  12. Nouioua, M.; Li, Z. Using differential evolution strategies in chemical reaction optimization for global numerical optimization. Appl. Intell. 2017, 47, 935–961. [Google Scholar] [CrossRef]
  13. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  14. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1–15. [Google Scholar] [CrossRef]
  15. Rao, R.V.; Patel, V. An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems. Int. J. Ind. Eng. Comput. 2012, 3, 535–560. [Google Scholar] [CrossRef]
  16. Venkata Rao, R.; Kalyankar, V.D. Parameter optimization of modern machining processes using teaching–learning-based optimization algorithm. Eng. Appl. Artif. Intell. 2013, 26, 524–531. [Google Scholar] [CrossRef]
  17. Rao, R.V.; Patel, V. Multi-objective optimization of heat exchangers using a modified teaching-learning-based optimization algorithm. Appl. Math. Model. 2013, 37, 1147–1162. [Google Scholar] [CrossRef]
  18. Shabanpour-Haghighi, A.; Seifi, A.R.; Niknam, T. A modified teaching–learning based optimization for multi-objective optimal power flow problem. Energy Convers. Manag. 2014, 77, 597–607. [Google Scholar] [CrossRef]
  19. Sultana, S.; Roy, P.K. Multi-objective quasi-oppositional teaching learning based optimization for optimal location of distributed generator in radial distribution systems. Int. J. Electr. Power Energy Syst. 2014, 63, 534–545. [Google Scholar] [CrossRef]
  20. Ghasemi, M.; Ghavidel, S.; Gitizadeh, M.; Akbari, E. An improved teaching–learning-based optimization algorithm using Lévy mutation strategy for non-smooth optimal power flow. Int. J. Electr. Power Energy Syst. 2015, 65, 375–384. [Google Scholar] [CrossRef]
  21. Yang, Z.; Li, K.; Guo, Y.; Ma, H.; Zheng, M. Compact real-valued teaching-learning based optimization with the applications to neural network training. Knowl.-Based Syst. 2018, 159, 51–62. [Google Scholar] [CrossRef]
  22. Wang, B.C.; Li, H.X.; Feng, Y. An Improved teaching-learning-based optimization for constrained evolutionary optimization. Inf. Sci. 2018, 456, 131–144. [Google Scholar] [CrossRef]
  23. Kumar Shukla, A.; Singh, P.; Vardhan, M. Neighbour teaching learning based optimization for global optimization problems. J. Intell. Fuzzy Syst. 2018, 34, 1583–1594. [Google Scholar] [CrossRef]
  24. Chen, X.; Xu, B.; Mei, C.; Ding, Y.; Li, K. Teaching–learning–based artificial bee colony for solar photovoltaic parameter estimation. Appl. Energy 2018, 212, 1578–1588. [Google Scholar] [CrossRef]
  25. Satapathy, S.C. Weighted Teaching-learning-based optimization for global function optimization. Appl. Math. 2013, 4, 429–439. [Google Scholar] [CrossRef]
  26. Chen, D.; Zou, F.; Li, Z.; Wang, J.; Li, S. An improved teaching-learning-based optimization algorithm for solving global optimization problem. Inf. Sci. 2015, 297, 171–190. [Google Scholar] [CrossRef]
  27. Satapathy, S.C. Improved teaching learning based optimization for global function optimization. Decis. Sci. Lett. 2013, 2, 23–34. [Google Scholar] [CrossRef] [Green Version]
  28. Rao, R.V.; Patel, V. An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems. Sci. Iran. 2013, 20, 710–720. [Google Scholar] [CrossRef]
  29. Wu, Z.-S.; Fu, W.-P.; Xue, R. Nonlinear Inertia weighted teaching-learning-based optimization for solving global optimization problem. Comput. Intell. Neurosci. 2015, 2015, 292576. [Google Scholar] [CrossRef]
  30. Eberhart, R.C.; Yuhui, S. Tracking and optimizing dynamic systems with particle swarms. In Proceedings of the 2001 Congress on Evolutionary Computation, Seoul, Korea, 27–30 May 2001; pp. 94–100. [Google Scholar]
  31. Shi, Y.; Eberhart, R.C. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation Proceedings, IEEE World Congress on Computational Intelligence, Anchorage, AK, USA, 4–9 May 1998; pp. 69–73. [Google Scholar]
  32. Clerc, M.; Kennedy, J. The particle swarm—Explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evolut. Comput. 2002, 6, 58–73. [Google Scholar] [CrossRef]
  33. Frans, V.D.B.; Engelbrecht, A.P. A Cooperative approach to particle swarm optimization. IEEE Trans. Evolut. Comput. 2004, 8, 225–239. [Google Scholar]
  34. Liang, J.J.; Suganthan, P.N.; Qin, A.K.; Baska, S. Comprehensive Learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evolut. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  35. Akay, B.; Karaboga, D. A modified artificial bee colony algorithm for real-parameter optimization. Inf. Sci. 2012, 192, 120–142. [Google Scholar] [CrossRef]
  36. Zhu, G.P.; Kwong, S. Gbest-guided artificial bee colony algorithm for numerical function. Appl. Soft Comput. 2010, 10, 445–456. [Google Scholar] [CrossRef]
  37. Gao, W.; Liu, S. Improved artificial bee colony algorithm for global optimization. Inf. Proc. Lett. 2011, 111, 871–882. [Google Scholar] [CrossRef]
Figure 1. The cyclical memory rate curve.
Table 1. The benchmark functions adopted in the paper.
No. | Function | Formulation | C | D | Range | MinFunVal
f1 | Sphere | $f(x)=\sum_{i=1}^{D} x_i^2$ | U | 30 | [−100, 100] | 0
f2 | SumSquares | $f(x)=\sum_{i=1}^{D} i x_i^2$ | U | 30 | [−100, 100] | 0
f3 | Tablet | $f(x)=10^6 x_1^2+\sum_{i=2}^{D} x_i^2$ | U | 30 | [−100, 100] | 0
f4 | Schwefel 1.2 | $f(x)=\sum_{i=1}^{D}\left(\sum_{j=1}^{i} x_j\right)^2$ | U | 30 | [−100, 100] | 0
f5 | Schwefel 2.22 | $f(x)=\sum_{i=1}^{D}|x_i|+\prod_{i=1}^{D}|x_i|$ | U | 30 | [−10, 10] | 0
f6 | Schwefel 2.21 | $f(x)=\max_i\{|x_i|\},\ 1\le i\le D$ | U | 30 | [−100, 100] | 0
f7 | Zakharov | $f(x)=\sum_{i=1}^{D} x_i^2+\left(\sum_{i=1}^{D} 0.5 i x_i\right)^2+\left(\sum_{i=1}^{D} 0.5 i x_i\right)^4$ | U | 30 | [−5, 10] | 0
f8 | Rosenbrock | $f(x)=\sum_{i=1}^{D-1}\left[100(x_{i+1}-x_i^2)^2+(1-x_i)^2\right]$ | U | 30 | [−4, 4] | 0
f9 | Schaffer | $f(x)=\dfrac{\sin^2(\sqrt{x_1^2+x_2^2})-0.5}{\left(1+0.001(x_1^2+x_2^2)\right)^2}-0.5$ | M | 2 | [−10, 10] | −1
f10 | Dropwave | $f(x)=-\dfrac{1+\cos(12\sqrt{x_1^2+x_2^2})}{0.5(x_1^2+x_2^2)+2}$ | M | 2 | [−2, 2] | −1
f11 | Bohachevsky1 | $f(x)=x_1^2+2x_2^2-0.3\cos(3\pi x_1)-0.4\cos(4\pi x_2)+0.7$ | M | 2 | [−100, 100] | 0
f12 | Bohachevsky2 | $f(x)=x_1^2+2x_2^2-0.3\cos(3\pi x_1)\cos(4\pi x_2)+0.3$ | M | 2 | [−100, 100] | 0
f13 | Six Hump Camel Back | $f(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6+x_1 x_2-4x_2^2+4x_2^4$ | M | 2 | [−5, 5] | −1.03163
f14 | Goldstein-Price | $f(x)=\left[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\right]\times\left[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\right]$ | M | 2 | [−2, 2] | 3
f15 | Ackley | $f(x)=-20\exp\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}\right)-\exp\left(\frac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\right)+20+e$ | M | 30 | [−32, 32] | 0
f16 | Schwefel 2.26 | $f(x)=-\sum_{i=1}^{D} x_i\sin(\sqrt{|x_i|})$ | M | 30 | [−500, 500] | −837.9658
f17 | Multimod | $f(x)=\sum_{i=1}^{D}|x_i|\prod_{i=1}^{D}|x_i|$ | M | 30 | [−10, 10] | 0
f18 | Rastrigin | $f(x)=\sum_{i=1}^{D}\left(x_i^2-10\cos(2\pi x_i)+10\right)$ | M | 30 | [−5.12, 5.12] | 0
f19 | Griewank | $f(x)=\frac{1}{4000}\sum_{i=1}^{D} x_i^2-\prod_{i=1}^{D}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | M | 30 | [−600, 600] | 0
f20 | NCRastrigin | $f(x)=\sum_{i=1}^{D}\left(y_i^2-10\cos(2\pi y_i)+10\right)$, where $y_i=x_i$ if $|x_i|<0.5$ and $y_i=0.5\,\mathrm{round}(2x_i)$ if $|x_i|\ge 0.5$ | M | 30 | [−5.12, 5.12] | 0
f21 | Weierstrass | $f(x)=\sum_{i=1}^{D}\left(\sum_{k=0}^{k_m}\left[a^k\cos(2\pi b^k(x_i+0.5))\right]\right)-D\sum_{k=0}^{k_m}\left[a^k\cos(2\pi b^k\cdot 0.5)\right]$, where $a=0.5$, $b=3$, $k_m=20$ | M | 30 | [−0.5, 0.5] | 0
U: Unimodal, M: Multimodal.
Table 2. Parameter settings for particle swarm optimization (PSO), artificial bee colony (ABC), differential evolution (DE), teaching–learning-based optimization (TLBO) and cyclical non-linear inertia-weighted teaching–learning-based optimization (CNIWTLBO) algorithms.
Algorithm | Parameter Settings
PSO | Population size NP = 40, cognitive attraction C1 = 2, social attraction C2 = 2, inertia weight w = 0.9
ABC | NP = 40
DE | NP = 40, mutation rate F = 0.5, crossover rate R = 0.4
TLBO | NP = 40
CNIWTLBO | NP = 40, wcmin = 0.6, T = 250
Table 3. Performance comparisons of PSO, ABC, DE, TLBO, CNIWTLBO in terms of fitness value.
No. | PSO (Mean ± SD) | ABC (Mean ± SD) | DE (Mean ± SD) | TLBO (Mean ± SD) | CNIWTLBO (Mean ± SD)
f1 | 8.68E-12 ± 6.15E-12 | 9.53E-16 ± 5.13E-16 | 7.28E-27 ± 2.35E-26 | 3.42E-287 ± 0.00 | 0.00E+00 ± 0.00E+00
f2 | 2.14E-10 ± 1.26E-10 | 7.38E-16 ± 1.32E-16 | 8.52E-26 ± 2.66E-26 | 8.74E-286 ± 0.00 | 0.00E+00 ± 0.00E+00
f3 | 3.26E-08 ± 1.52E-08 | 8.62E-16 ± 1.29E-16 | 3.02E-26 ± 1.38E-26 | 6.28E-285 ± 0.00 | 0.00E+00 ± 0.00E+00
f4 | 2.25E+05 ± 1.16E+05 | 7.52E+03 ± 1.46E+03 | 2.32E+04 ± 2.34E+03 | 2.48E-84 ± 1.29E-84 | 0.00E+00 ± 0.00E+00
f5 | 5.02E-03 ± 3.26E-03 | 2.36E-14 ± 1.22E-14 | 4.57E-16 ± 1.16E-16 | 1.65E-143 ± 1.32E-143 | 4.62E-323 ± 0.00E+00
f6 | 1.23E+00 ± 5.24E-01 | 3.67E+01 ± 1.05E+01 | 1.19E-02 ± 2.15E-03 | 7.68E-120 ± 3.82E-120 | 2.64E-315 ± 0.00E+00
f7 | 1.58E+02 ± 4.72E+01 | 2.45E+02 ± 2.16E+01 | 5.63E+01 ± 6.74E+00 | 6.04E-51 ± 4.62E-51 | 1.82E-319 ± 0.00E+00
f8 | 3.05E+01 ± 2.26E+01 | 1.12E+01 ± 2.43E+00 | 2.51E+01 ± 3.47E+00 | 1.32E+01 ± 4.36E+00 | 1.78E+01 ± 5.13E+00
f9 | −1.00E+00 ± 0.00E+00 | −1.00E+00 ± 0.00E+00 | −1.00E+00 ± 0.00E+00 | −1.00E+00 ± 0.00E+00 | −1.00E+00 ± 0.00E+00
f10 | −1.00E+00 ± 0.00E+00 | −1.00E+00 ± 0.00E+00 | −1.00E+00 ± 0.00E+00 | −1.00E+00 ± 0.00E+00 | −1.00E+00 ± 0.00E+00
f11 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00
f12 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00
f13 | −1.03163 ± 0.00 | −1.03163 ± 0.00 | −1.03163 ± 0.00 | −1.03163 ± 0.00 | −1.03163 ± 0.00
f14 | 3.00 ± 8.23E-15 | 3.00 ± 5.02E-15 | 3.00 ± 1.53E-15 | 3.00 ± 6.18E-16 | 3.00 ± 5.82E-16
f15 | 1.48E+00 ± 3.85E-01 | 3.14E-13 ± 3.06E-14 | 2.56E-14 ± 6.07E-15 | 4.45E-15 ± 2.85E-16 | 8.88E-16 ± 0.00
f16 | −8.79E+03 ± 4.27E+02 | −1.24E+04 ± 1.81E+02 | −1.15E+04 ± 1.58E+03 | −9.18E+03 ± 7.65E+02 | −7.33E+03 ± 1.66E+02
f17 | 7.55E-67 ± 1.63E-66 | 6.83E-19 ± 1.62E-19 | 5.03E-311 ± 0.00 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00
f18 | 1.12E+02 ± 2.24E+01 | 1.32E-13 ± 2.48E-13 | 9.02E+01 ± 8.57E+00 | 7.21E+00 ± 5.78E+00 | 0.00E+00 ± 0.00E+00
f19 | 6.69E-03 ± 5.46E-04 | 7.15E-03 ± 6.86E-04 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00
f20 | 1.78E+02 ± 3.22E+01 | 2.01E-14 ± 1.67E-14 | 6.82E+01 ± 8.75E+00 | 1.48E+01 ± 2.76E+00 | 0.00E+00 ± 0.00E+00
f21 | 6.13E+01 ± 2.12E+01 | 1.36E-02 ± 8.06E-03 | 1.38E+01 ± 5.84E-01 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00
Population size: 40, D: 30 (except f9–f14: 2D), Max. Eval.: 80,000 FEs.
Table 4. Convergence comparisons in terms of fitness evaluations.
No. | PSO (Mean ± SD) | ABC (Mean ± SD) | DE (Mean ± SD) | TLBO (Mean ± SD) | CNIWTLBO (Mean ± SD)
f1 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 60,720 ± 2.15E+02
f2 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 58,920 ± 2.08E+02
f3 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 61,160 ± 2.26E+02
f4 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 72,280 ± 1.38E+02
f5 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00
f6 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00
f7 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00
f8 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00
f9 | 12,315 ± 3.26E+02 | 43,502 ± 3.12E+02 | 8594 ± 2.13E+02 | 9596 ± 2.04E+02 | 2840 ± 2.86E+02
f10 | 11,406 ± 3.18E+01 | 13,749 ± 1.13E+02 | 5412 ± 1.46E+02 | 3002 ± 2.32E+02 | 1215 ± 3.18E+01
f11 | 9356 ± 2.18E+02 | 3178 ± 7.85E+01 | 3916 ± 8.39E+01 | 2258 ± 3.64E+01 | 1232 ± 2.13E+01
f12 | 9503 ± 1.25E+02 | 4685 ± 9.14E+01 | 4221 ± 1.14E+02 | 2556 ± 2.23E+01 | 1284 ± 2.38E+01
f13 | 1915 ± 1.29E+02 | 1357 ± 1.26E+02 | 1735 ± 1.19E+02 | 718 ± 6.02E+01 | 1130 ± 8.16E+01
f14 | 2102 ± 1.32E+02 | 1838 ± 1.51E+02 | 1702 ± 2.13E+02 | 1232 ± 6.67E+01 | 2810 ± 2.05E+02
f15 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00
f16 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00
f17 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 28,215 ± 1.12E+02 | 8040 ± 8.44E+01
f18 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 8120 ± 5.12E+01
f19 | 80,000 ± 0.00 | 80,000 ± 0.00 | 52,326 ± 6.21E+02 | 12,003 ± 8.73E+02 | 8160 ± 3.86E+01
f20 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 8080 ± 1.22E+02
f21 | 80,000 ± 0.00 | 80,000 ± 0.00 | 80,000 ± 0.00 | 12,625 ± 1.13E+02 | 9040 ± 2.48E+02
Population size: 40, D: 30 (except f9–f14: 2D), Max. Eval.: 80,000 FEs.
Table 5. Comparative results of CNIWTLBO and other variants of PSO.
No. | Function | | PSO-w | PSO-cf | CPSO-H | CLPSO | TLBO | CNIWTLBO
f1 | Sphere | Mean | 7.96E-51 | 9.84E-105 | 4.98E-45 | 5.15E-29 | 0.00E+00 | 0.00E+00
| | SD | 3.56E-50 | 4.21E-104 | 1.00E-44 | 2.16E-28 | 0.00E+00 | 0.00E+00
f8 | Rosenbrock | Mean | 3.08E+00 | 6.98E-01 | 1.53E+00 | 2.46E+00 | 4.21E+00 | 6.68E+00
| | SD | 7.69E-01 | 1.46E+00 | 1.70E+00 | 1.70E+00 | 6.83E-01 | 6.32E-01
f15 | Ackley | Mean | 1.58E-14 | 9.18E-01 | 1.49E-14 | 4.32E-10 | 3.55E-15 | 8.44E-16
| | SD | 1.60E-14 | 1.01E+00 | 6.97E-15 | 2.55E-14 | 8.32E-31 | 5.24E-17
f16 | Schwefel 2.26 | Mean | 3.20E+02 | 9.87E+02 | 2.13E+02 | 0.00E+00 | −4.01E+03 | −4.07E+03
| | SD | 1.85E+02 | 2.76E+02 | 1.41E+02 | 0.00E+00 | 1.85E+02 | 1.14E+02
f18 | Rastrigin | Mean | 5.82E+00 | 1.25E+01 | 2.12E+00 | 0.00E+00 | 6.77E-08 | 0.00E+00
| | SD | 2.96E+00 | 5.17E+00 | 1.33E+00 | 0.00E+00 | 3.68E-07 | 0.00E+00
f19 | Griewank | Mean | 9.69E-02 | 1.19E-01 | 4.07E-02 | 4.56E-03 | 0.00E+00 | 0.00E+00
| | SD | 5.01E-02 | 7.11E-02 | 2.80E-02 | 4.81E-03 | 0.00E+00 | 0.00E+00
f20 | NCRastrigin | Mean | 4.05E+00 | 1.20E+01 | 2.00E-01 | 0.00E+00 | 2.65E-08 | 0.00E+00
| | SD | 2.58E+00 | 4.99E+00 | 4.10E-01 | 0.00E+00 | 1.23E-07 | 0.00E+00
f21 | Weierstrass | Mean | 2.28E-03 | 6.69E-01 | 1.07E-15 | 0.00E+00 | 2.42E-05 | 0.00E+00
| | SD | 7.04E-03 | 7.17E-01 | 1.67E-15 | 0.00E+00 | 1.38E-20 | 0.00E+00
Population Size: 10, D: 10, Max. Eval.: 30,000 FEs. Source: Results of algorithms except CNIWTLBO are taken from References [28,35].
Table 6. Comparative results of CNIWTLBO and the variants of ABC, DE.
No. | Function | | GABC | IABC | SaDE | JADE | TLBO | CNIWTLBO
f1 | Sphere (FEs: 1.5 × 10^5) | Mean | 3.6E-63 | 5.34E-178 | 4.5E-20 | 1.8E-60 | 0.00E+00 | 0.00E+00
| | SD | 5.7E-63 | 0 | 1.9E-14 | 8.4E-60 | 0.00E+00 | 0.00E+00
f4 | Schwefel 1.2 (FEs: 5.0 × 10^5) | Mean | 4.3E+02 | 1.78E-65 | 9.0E-37 | 5.7E-61 | 0.00E+00 | 0.00E+00
| | SD | 8.0E+02 | 2.21E-65 | 5.4E-36 | 2.7E-60 | 0.00E+00 | 0.00E+00
f5 | Schwefel 2.22 (FEs: 2.0 × 10^5) | Mean | 4.8E-45 | 8.82E-127 | 1.9E-14 | 1.8E-25 | 0.00E+00 | 0.00E+00
| | SD | 1.4E-45 | 3.49E-126 | 1.1E-14 | 8.8E-25 | 0.00E+00 | 0.00E+00
f6 | Schwefel 2.21 (FEs: 5.0 × 10^5) | Mean | 3.6E-06 | 4.98E-38 | 7.4E-11 | 8.2E-24 | 0.00E+00 | 0.00E+00
| | SD | 7.6E-07 | 8.59E-38 | 1.82E-10 | 4.0E-23 | 0.00E+00 | 0.00E+00
f15 | Ackley (FEs: 5.0 × 10^4) | Mean | 1.8E-09 | 3.87E-14 | 2.7E-03 | 8.2E-10 | 4.44E-15 | 8.46E-16
| | SD | 7.7E-10 | 8.52E-15 | 5.1E-04 | 6.9E-10 | 2.58E-30 | 2.15E-31
f18 | Rastrigin (FEs: 1.0 × 10^5) | Mean | 1.5E-10 | 0.00E+00 | 1.2E-03 | 1.0E-04 | 1.88E+01 | 0.00E+00
| | SD | 2.7E-10 | 0.00E+00 | 6.5E-04 | 6.0E-05 | 4.65E+00 | 0.00E+00
f19 | Griewank (FEs: 5.0 × 10^5) | Mean | 6.0E-13 | 0.00E+00 | 7.8E-04 | 9.9E-08 | 0.00E+00 | 0.00E+00
| | SD | 7.7E-13 | 0.00E+00 | 1.2E-03 | 6.0E-07 | 0.00E+00 | 0.00E+00
Population Size: 20, D: 30, Source: Results of algorithms except TLBO and CNIWTLBO are taken from Reference [27].
Table 7. Comparative results of CNIWTLBO and different variants of TLBO algorithms.
No. | Function | D | TLBO | WTLBO | ITLBO | I-TLBO (NT = 4) | NIWTLBO | CNIWTLBO
f1 | Sphere | 20 | 1.43E-315 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
| | 50 | 1.35E-274 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
| | 100 | 4.13E-251 | 2.85E-316 | 5.38E-312 | 0.00E+00 | 0.00E+00 | 0.00E+00
f5 | Schwefel 2.22 | 20 | 2.67E-158 | 3.72E-231 | 5.36E-182 | 7.48E-287 | 2.50E-322 | 2.47E-323
| | 50 | 9.28E-138 | 5.16E-208 | 2.87E-168 | 3.62E-281 | 4.43E-318 | 1.04E-322
| | 100 | 2.82E-130 | 2.58E-189 | 1.94E-152 | 4.26E-268 | 7.46E-310 | 3.41E-316
f8 | Rosenbrock | 20 | 1.62E+01 | 1.83E+01 | 2.14E+01 | 1.11E+01 | 1.08E+01 | 1.12E+01
| | 50 | 4.75E+01 | 5.02E+01 | 5.13E+01 | 4.39E+01 | 4.52E+01 | 4.43E+01
| | 100 | 9.26E+01 | 9.78E+01 | 1.02E+02 | 9.52E+01 | 9.63E+01 | 9.41E+01
f15 | Ackley | 20 | 4.44E-15 | 3.18E-15 | 3.55E-15 | 9.12E-16 | 8.88E-16 | 8.88E-16
| | 50 | 4.44E-15 | 6.42E-14 | 5.28E-14 | 2.45E-15 | 4.44E-15 | 8.88E-16
| | 100 | 7.99E-15 | 6.35E-15 | 6.87E-15 | 2.44E-15 | 4.44E-15 | 8.88E-16
f16 | Schwefel 2.26 | 20 | −6.49E+03 | −6.63E+03 | −7.52E+03 | −6.21E+03 | −5.74E+03 | −4.67E+03
| | 50 | −2.05E+04 | −1.86E+04 | −2.23E+04 | −2.06E+04 | −1.32E+04 | −1.21E+04
| | 100 | −2.35E+04 | −2.41E+04 | −2.47E+04 | −2.36E+04 | −2.39E+04 | −2.16E+04
f18 | Rastrigin | 20 | 1.98E+00 | 2.26E-251 | 4.12E-203 | 0.00E+00 | 0.00E+00 | 0.00E+00
| | 50 | 2.38E+01 | 5.58E-235 | 1.84E-188 | 0.00E+00 | 0.00E+00 | 0.00E+00
| | 100 | 4.83E+01 | 1.86E-186 | 3.12E-142 | 4.58E-322 | 0.00E+00 | 0.00E+00
f19 | Griewank | 20 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
| | 50 | 0.00E+00 | 0.00E+00 | 4.64E-312 | 0.00E+00 | 0.00E+00 | 0.00E+00
| | 100 | 0.00E+00 | 5.42E-324 | 2.83E-288 | 1.87E-323 | 0.00E+00 | 0.00E+00
f20 | NCRastrigin | 20 | 1.30E+01 | 3.58E-250 | 2.67E-239 | 3.24E-318 | 0.00E+00 | 0.00E+00
| | 50 | 3.12E+01 | 1.86E-238 | 3.18E-225 | 2.55E-301 | 0.00E+00 | 0.00E+00
| | 100 | 5.03E+01 | 2.64E-216 | 1.83E-209 | 3.62E-287 | 1.35E-328 | 0.00E+00
f21 | Weierstrass | 20 | 1.02E+00 | 2.72E-308 | 3.65E-285 | 6.12E-312 | 0.00E+00 | 0.00E+00
| | 50 | 1.53E+01 | 2.37E-282 | 1.83E-256 | 4.63E-288 | 3.62E-323 | 0.00E+00
| | 100 | 5.02E+01 | 4.68E-198 | 2.66E-186 | 2.14E-216 | 2.73E-312 | 0.00E+00
Population Size: 30, Max. generations: 2000, T: 500.
