Settings-Free Hybrid Metaheuristic General Optimization Methods

Several population-based metaheuristic optimization algorithms have been proposed in recent decades, none of which can either outperform all existing algorithms or solve all optimization problems, in accordance with the No Free Lunch (NFL) theorem. Many of these algorithms behave effectively, under a correct setting of the control parameter(s), when solving different engineering problems. The optimization behavior of these algorithms is boosted by applying various strategies, including hybridization and the use of chaotic maps instead of pseudo-random number generators (PRNGs). Hybrid algorithms are suitable for a large number of engineering applications, in which they behave more effectively than the original, non-hybrid optimization algorithms. However, they increase the difficulty of correctly setting the control parameters, and sometimes they are designed to solve particular problems. This paper presents three hybridizations, dubbed HYBPOP, HYBSUBPOP, and HYBIND, of up to seven control-parameter-free algorithms. Each hybrid proposal uses a different strategy to switch the algorithm in charge of generating each new individual. These algorithms are Jaya, the sine cosine algorithm (SCA), Rao's algorithms, teaching-learning-based optimization (TLBO), and chaotic Jaya. The experimental results show that the proposed algorithms perform better than the original algorithms, which amounts to selecting the most suitable algorithm for each problem to be solved. A further advantage of the hybrid algorithms is that no prior process of control-parameter tuning is needed.


Introduction
It is well known that metaheuristic optimization methods are widely used to solve problems in several fields of science and engineering. Population-based metaheuristic methods iteratively generate new populations to increase diversity in the current generation, which increases the probability of reaching the optimum of the considered problem. These algorithms are proposed to replace exact optimization algorithms when the latter are not able to reach an acceptable solution. This inability may be due either to the characteristics of the objective function or to the size of the search space, which renders an exhaustive search impractical. In addition, classical optimization methods, such as greedy-based algorithms, rely on several assumptions that make it hard to solve the considered problem.

Preliminaries
As mentioned above, among the best control-parameter-free algorithms are the Jaya algorithm [16], the SCA algorithm [13], the supply-demand-based optimization (SDO) method [15], Rao's optimization algorithms [18], the Harris hawks optimization (HHO) method [17], and the teaching-learning-based optimization (TLBO) algorithm [14]. Among these proposals, the HHO algorithm is the most complex. It consists of two phases. During the first phase, the elements of the population are replaced without comparing the fitness of the associated solutions, which is an undesirable strategy for hybrid algorithms. In addition, the SDO algorithm, which offers impressive initial results, works with two populations, preventing its integration in our hybrid proposals.
As mentioned earlier, the use of chaotic maps can improve the behavior of some metaheuristic methods. The 2D chaotic map reported in [33] has significantly improved the convergence rate of the Jaya algorithm [33,77]. The generation of the 2D chaotic map is shown in Algorithm 4, where the initial conditions are chA_1 = 0.2, chB_1 = 0.3, k = i, and dimMap = 500. The computed values of chA_i and chB_i lie in [−1, 1]. The chaotic Jaya algorithm (CJaya for short) is shown in Algorithm 5, where ch_x, x ∈ [1..6], are chaotic values randomly extracted from the 2D chaotic map. Other chaotic maps have been applied to Jaya in [32,78]; however, they do not surpass the chaotic behavior of the aforementioned 2D map.
As they present a similar structure, Algorithms 1-5 are used for designing our hybrid algorithms. The shared skeleton first initializes each design variable randomly within its bounds, Pop_m^k = MinValue_k + (MaxValue_k − MinValue_k) · r_1, and computes and stores the fitness F(Pop_m) of every individual. Then, for iterator = 1 to max_ITs, it searches for the current BestPop and WorstPop and updates each individual m = 1 to popSize.

Hybrid Algorithms
The proposed hybrid algorithms are designed using the seven algorithms described in Section 2. These algorithms have been selected both for their performance in solving constrained and unconstrained functions and because they share a similar structure that allows different hybridization strategies to be implemented.
Algorithm 6 shows the skeleton of the proposed hybrid algorithms, which includes all common and algorithm-specific tasks, excluding the procedure that updates the current population. In Algorithms 6-9, AlgSelected determines the algorithm responsible for generating each new individual (see line 17 of Algorithm 6). Since TLBO is a two-phase algorithm, the proposed hybrid algorithms apply its two phases consecutively to each individual. In contrast to the other algorithms, where a single phase is executed, a control variable Phase is used to process the same individual twice when the TLBO algorithm is selected (see lines 24-29 of Algorithm 6).
Given that only algorithms that are free of control parameters have been considered, proposals that require the inclusion of control parameters have been discarded. Following these guidelines, we have designed three hybrid algorithms, an analysis of which is provided in Section 4. The first proposed hybrid algorithm, shown in Algorithm 7, processes the entire population in each iteration using the same algorithm, and is referred to as the HYBPOP algorithm. This is the most straightforward hybridization technique, in which it is not mandatory for all algorithms to follow the structure given by Algorithm 6. In Algorithms 7-9, NumOfAlgorithms is the number of control-parameter-free algorithms involved in the hybrid proposals. In the TLBO branch of Algorithm 7, the teacher phase sets the scaling factor S_F and the teaching factor T_F (a random integer ∈ [1, 2]) and then computes, for each design variable k = 1 to numDesignVars, the population mean AveragePop_k = Σ_m Pop_m^k / popSize.

The second algorithm, named HYBSUBPOP, is described through Algorithm 8. It logically splits the population into sub-populations. During the optimization process, each sub-population is processed by one of the seven algorithms mentioned previously. It is worth noting that the aim of the proposed hybrid algorithms is neither to improve the convergence ratio of each algorithm used separately, nor to perform optimally for a particular problem; it is to show outstanding performance over a large number of problems without adjusting any control parameters of the considered algorithms.

Numerical Experiments
In this section, the performance of the proposed hybrid algorithms is analyzed by solving 28 well-known unconstrained functions (see Table 1), the definitions of which can be found in [77]. Both the hybrid proposals and the original algorithms were implemented in the C language, compiled with GCC v4.4.7 [79], and run on an Intel Xeon E5-2620 v2 processor at 2.1 GHz. C implementations of the original algorithms are not available on the Internet, although their Java/Matlab implementations are commonly available.
The data collected from the experimental analysis are as follows:
• NoR-AI: the total number of replacements of any individual.
• NoR-BI: the total number of replacements of the current best individual.
• NoR-BwT: the total number of replacements of the current best individual with an error of less than 0.001.
• LtI-AI: the last iteration (iterator) in which a replacement of any individual occurs.
• LtI-BI: the last iteration (iterator) in which a replacement of the best individual occurs.
Three of the five analyzed data (NoR-) indicate the number of times the current individual (Pop_m) is replaced by a new individual (newPop_m) that provides a better fitness value (see line 21 of Algorithm 6), while the remaining two (LtI-) refer to the last generation (iterator) in which at least one individual was replaced.
All data given below have been obtained over 50 runs, with 50,000 iterations (max_ITs = 50,000) and two population sizes (popSize = 140 and 210). The maximum values of the analyzed data are listed in Table 2. Tables 3-5 show the data of all the considered algorithms independently, i.e., without hybridization. As expected, the behavior of the different algorithms does not follow a common pattern; moreover, it depends on the objective function. Regarding a global convergence analysis, both TLBO and CJaya behave better, but with a higher order of complexity (see [77,80]). Moreover, when using TLBO, two new individuals are generated in each iteration: one in the teacher phase and the other in the learner phase. The values in brackets in Tables 3-5 are the standard deviations of the data over the 50 runs. Note that heuristic optimization algorithms are partially based on randomness, which leads to high standard deviation values. The average standard deviations are approximately 16%, 22%, 15%, 30%, 23%, 23%, and 22% for Jaya, CJaya, SCA, RAO1, RAO2, RAO3, and TLBO, respectively. An important aspect, not shown in Tables 3-5, is whether the solution obtained by each algorithm is acceptable. In particular, the original algorithms fail to reach a solution error below 0.001 for 3, 8, 2, 4, 7, 5, and 2 functions for Jaya, CJaya, SCA, RAO1, RAO2, RAO3, and TLBO, respectively. Therefore, considering only the original algorithms, there is no algorithm whose behavior is always the best, which justifies the development of a generalist hybrid method that can solve a large number of benchmark functions and engineering problems.
Comparing the quality of the solutions obtained by the proposed hybrid algorithms, it can be concluded that the HYBSUBPOP algorithm is the worst one, because the same original algorithm is always applied to the same sub-population, which degrades performance for a small population. Contrary to HYBSUBPOP, the HYBPOP and HYBIND algorithms apply the selected algorithms to all individuals, which leads to hybridizations with better exploitation. The HYBSUBPOP algorithm fails to reach a solution error below 0.001 for 3 functions (F11, F23, and F27), whereas the HYBPOP and HYBIND algorithms fail for only one function (F27 and F11, respectively). If the population size is increased to 210, the HYBIND algorithm succeeds on all functions; thus, the HYBIND algorithm performs slightly better than HYBPOP.
Local exploration has improved in both the HYBPOP method and, especially, the HYBIND method, as stated above. Figures 1 and 2 show the convergence curves of all the individual methods and the three proposed hybrid methods for the first 1000 and 100 iterations, respectively, for functions F1, F8, F11, and F18. Each point in both figures is the average of the data obtained from 10 runs. As shown in these figures, the curves of the three hybrid methods are similar to the curves of the best single algorithms for each function, which are not always the same. Therefore, global exploitation, while not improving on all methods, behaves similarly to the best single method for each function. Table 6 sorts the algorithms according to the number of iterations required to obtain an error of less than 0.001; if an algorithm is missing from a row, it did not reach an acceptable solution. As seen from this table, no algorithm outperforms all the others. Moreover, a computational cost analysis is necessary to classify them correctly. Table 7 exhibits the computational cost of the different algorithms. This table reveals that the hybrid algorithms are mid-ranked in terms of computational cost, and that HYBIND is computationally less expensive than HYBPOP.
An analysis of the contribution of each algorithm within the HYBPOP and HYBIND algorithms is presented in Tables 8-10. Table 8 indicates the number of times an individual has been replaced by each algorithm. A replacement is accepted when the new individual improves the fitness of the current solution. As seen from Table 8, the HYBIND algorithm performs more replacements of individuals. In addition, the numbers of replacements per individual are nearly equal across the contributing algorithms, except for the RAO1 algorithm, whose contribution to replacements is limited. The standard deviation of each value (over 50 runs) is given in brackets; on average, the standard deviations for the HYBPOP and HYBIND algorithms are both equal to 14%. Table 9 shows the last iteration in which each optimization algorithm replaces an individual in the population, i.e., the point after which it no longer brings improvement to the hybrid algorithm. As can be seen from Table 9, the optimization algorithms, except the RAO1 algorithm, work efficiently in the hybrid algorithms. It is also revealed that the considered algorithms contribute over more generations in the HYBIND algorithm. The mean value of the standard deviation rises to 28% and 23% for HYBPOP and HYBIND, respectively, due to randomness and the lower LtI-AI values. Finally, Table 10 shows the last iteration in which each algorithm obtains a new optimum. A careful analysis of the results in Table 10 reveals that, in the HYBPOP algorithm, the seven algorithms contribute similarly to reaching a better solution as new populations are produced. By contrast, when using the HYBIND algorithm, the most powerful algorithms are CJaya and TLBO. It should be noted that the CJaya algorithm extracts random individuals from the population to generate new individuals, while the TLBO algorithm uses all the individuals of the population to obtain new individuals.
Therefore, these algorithms exploit the results obtained by the rest of the algorithms to converge towards the optimum. This is due to the nature of these algorithms, in which the best solution correctly guides the individuals. The mean value of the standard deviation is high because the LtI-BI is strongly affected by randomness. It has been found that the HYBSUBPOP algorithm does not reach excellent optimization performance because of the lack of harmony between the original algorithms, so it has been left without further analysis. On the other hand, the exploration behavior of the HYBPOP and HYBIND algorithms is similar, whereas the HYBIND algorithm outperforms the HYBPOP one in terms of exploitation.
The hybridization of the original algorithms is implemented at the individual level in the HYBIND algorithm, contrary to the HYBPOP algorithm, in which the hybridization is performed at the population level. Finally, the HYBPOP algorithm may include algorithms that update the population without analyzing the fitness of the associated solutions, while fitness-based replacement is mandatory in the HYBIND algorithm.

Conclusions
This paper has proposed a hybridization strategy for seven well-known algorithms. Three hybrid algorithms free of control-parameter settings, dubbed HYBSUBPOP, HYBPOP, and HYBIND, have been designed. These algorithms are derived from a dynamic skeleton that allows the inclusion of any metaheuristic optimization algorithm that may offer further improvements. The only requirement for merging a new optimization algorithm into the proposed skeleton is to know whether the replacement of an individual in that algorithm is based on the enhancement of the cost function or not. Moreover, both chaotic algorithms and multi-phase algorithms have been employed to design the proposed hybrid algorithms, which proves the versatility of the proposed hybridization skeleton. The experimental results show that the HYBPOP and HYBIND algorithms effectively exploit the capabilities of all the considered algorithms. They present an excellent ability to solve a large number of benchmark functions while improving the quality of the solutions obtained. Generally speaking, hybridization at the individual level is better than hybridization at the population level, which explains why the performance of the HYBSUBPOP algorithm is inferior to that of the other hybrid algorithms. As future work, we intend to integrate more efficient algorithms into the proposed hybridization skeleton, to evaluate new hybridization variants, and to extend the performance analysis of the most promising algorithms to more complex functions and real-world engineering problems.