Simulated Annealing with Exploratory Sensing for Global Optimization

Simulated annealing is a well-known search algorithm that has been used successfully in many search problems. However, the random walk of simulated annealing does not benefit from the memory of visited states, which causes excessive random search with no diversification history. Unlike memory-based search algorithms such as tabu search, the search in simulated annealing depends on the choice of the initial temperature to explore the search space, and it gives little indication of how much exploration has been carried out. This lack of an exploration eye can affect the quality of the found solutions, while the nature of the search in simulated annealing is mainly local. In this work, a two-phase methodology using automatic diversification and intensification based on memory and sensing tools is proposed. The proposed method is called Simulated Annealing with Exploratory Sensing. The computational experiments show the efficiency of the proposed method in ensuring a good exploration while finding good solutions within a similar number of iterations.


Introduction
Metaheuristics aim to find fast and acceptable solutions to complex problems by trial-and-error methods. The problem complexity prevents pursuing every possible solution; therefore, the goal is to acquire a satisfactory and feasible solution within a suitable time. Theoretically, there is no guarantee of finding the best solutions, and it is not known whether the algorithm will converge or not. Hence, designing an efficient and practical algorithm that can mostly find a high-quality solution within a suitable time is crucial. In the past four decades, many applications and studies have applied metaheuristic algorithms, such as tabu search (TS) [1], simulated annealing (SA) [2], genetic algorithms (GAs) [3], and a plentiful number of other metaheuristics that have attracted much attention.
Generally, most of the well-known metaheuristic techniques are memory-less algorithms, meaning that they keep no record of previously found solutions. Thus, based on memory usage, metaheuristics can be classified into algorithms with memory and memory-less ones. The absence of memory in metaheuristics leads to the loss of the information gained in previous iterations and, in many cases, to re-visiting already visited areas of the search region. Without a history/memory, a new search would be carried out in those areas with a chance of repeating old solutions, so the time cost will be high. On the other hand, using memory to build a history of "some" recent solutions in the already visited areas saves computation time. However, few metaheuristics studies have employed memory in their methods. Therefore, in advanced metaheuristics, memory should be considered one of the fundamental elements of an efficient method. In [4], a recent review of memory usage and its effect on the performance of the main swarm intelligence metaheuristics is given. The investigation covers memory-based and memory-less metaheuristics and memory characteristics, especially in swarm intelligence. It has been shown that memory usage is essential to increase the effectiveness of metaheuristics by taking advantage of their previous successful histories.
Simulated annealing (SA) is a well-known search algorithm used successfully in many search and optimization problems. Typically, the random walk of SA does not benefit from the memory of visited states, which can cause excessive random search and redundant behavior, especially with weak configurations of the SA. Moreover, unlike memory-based search algorithms such as tabu search, the search in SA depends on the choice of the initial settings to explore the search space and find an acceptable solution, while there are limited indications of how much exploration has been carried out. Also, the lack of an exploration eye can affect the quality of the final solutions and the termination time, which drives practitioners to use more extended cooling, substantial initial temperatures, and increased numbers of iterations, especially when the minimum cost is unknown.
A few studies use different ways to overcome these issues. For example, in [5], a hybrid simulated annealing with solution memory elements was proposed to solve university course timetabling problems; a memory-based methodology that redirects the search back to unaccepted visited solutions has been used to escape from local minima. The authors in [6] proposed a hybrid algorithm that integrates different heuristic features, including three types of memories (one long-term memory and two short-term memories) and an evolution-based diversification approach. In [7], a memory-based simulated annealing (MSA) algorithm is proposed for the fixed-outline floorplanning of blocks. MSA implements a memory pool to keep some historical best solutions during the search and uses a real-time monitoring strategy to check whether a solution has been trapped in a local optimum. Moreover, as a solution for the slow convergence problem, an adaptive simulated annealing genetic algorithm (ASAGA) based on a mutative-scale chaos optimization strategy is proposed in [8]. The algorithm benefits from the parallel searching structure of the GA and the probabilistic jumping property of the SA, besides implementing adaptive crossover and mutation operators, and the results showed that the global optimum is found more quickly. In [9], an enhanced simulated annealing algorithm was developed by integrating "directional search" and "memory" capabilities; its performance is improved by directing the search based on a better understanding of the current neighborhood of the configuration space. A tabu search with a memory concept and an enhanced annealing method was tested and improved the convergence rate and solution quality. A multi-objective simulated annealing (MOSA) and an adaptive memory procedure (MOAMP) are proposed in [10].
These algorithms are tested by finding a set of non-dominated solutions of hybrid flow shop scheduling problems with respect to the two objectives of total setup time and cost. Without memory, simulated annealing is time-consuming and has difficulty controlling the temperature and the number of transitions. In [11], an annealing framework with a learning memory is implemented; that framework showed reasonable confidence in the solution quality. Moreover, there are several attempts to improve the SA performance through hybridization with different metaheuristics [12][13][14] or through invoking restart strategies [15][16][17].
In this paper, we propose a two-phase methodology using automatic diversification and intensification based on memory and sensing tools. One of the challenging issues in two-phase hybrid algorithms is the timing of the switch between the diversification-oriented algorithm and the local-oriented search algorithm, which leads to the need for incorporating diversity measures [51]. Therefore, an efficient metaheuristic approach for finding a global optimum of optimization problems is presented to achieve a wide and deep search. The proposed method, called simulated annealing with exploratory sensing (SAES), integrates different features of several well-known heuristics. The core of the proposed algorithm is a simulated annealing module integrated with exploration and diversification schemes that enhance the search process using adaptive search memories. In particular, a Gene Matrix (GM) concept [58,59] is used to sample the search space, guide the search process, and accelerate the method's termination. The GM is used as a diversification tool to record a history of the visited partitions of the space. New solutions are created in the non-visited partitions; therefore, the search process is directed to visit individuals in these partitions during the diversification process. After the GM is filled with ones, indicating that most partitions have been visited, an intensification process starts: a more intensive local search is carried out from the best-found solution in its region. Finally, a faster local search is applied to help the SA algorithm quickly target optimal solutions. This can improve the search process outputs since SA-based algorithms are known to reel around optimal solutions in the final stages of the search.
Numerical results of the proposed SAES method verify that the designed procedures and memory elements are efficient, and the proposed methods are competitive with some other types of benchmark methods.
In the rest of this paper, we discuss the related works of the SA module and the concept of sensing search memory in Section 2. Then, our proposed SAES method with its sensing memories and operations is presented in Section 3. The numerical simulations are presented and discussed in Section 4. Finally, Section 5 presents concluding remarks.

Simulated Annealing and Sensing Memory
In this section, the structure of the simulated annealing algorithm is sketched, and the main concept of the sensing search memory is highlighted.

Simulated Annealing Algorithm
Annealing theory is based on condensed matter physics, where the particles of a physical system govern the behavior of the annealing process [2]. Simulated annealing implements the Metropolis algorithm to emulate metal annealing in metallurgy, where heating and controlled cooling reshape the metal particles. Carefully controlling the metal temperature, by increasing it to high values and then decreasing it, arranges the particles so as to minimize the system's energy. The standard SA algorithm is a successful randomized local-search algorithm for finding minima or near-minima of optimization problems [60]. It is a particularly valuable tool for solving high-dimensional problems [61]. Although the SA method can find solutions for an extensive range of problems, its main drawback is the high computational time [60].
Let s be the current state, with alternative states in its neighborhood N(s). One state s′ ∈ N(s) is selected, and the difference between the current state cost and the selected state cost is computed as D = f(s′) − f(s). The Metropolis criterion is used to choose s′ as the current state in two cases:

• If D <= 0, meaning the new state has a smaller or equal cost, then s′ is chosen as the current state, as downhill moves are always accepted.
• If D > 0 and e^(−D/T) > R_nd, where R_nd is a randomly generated number with 0 < R_nd < 1, then s′ is chosen as the current state. Here T is a control parameter known as the temperature.
The temperature is gradually decreased during the search process, making the algorithm greedier as the probability of accepting uphill moves decreases over time. Accepting uphill moves is important for the algorithm to avoid being stuck in local minima.
In the remaining case, where D > 0 and e^(−D/T) <= R_nd, no move is accepted, and s continues to be the current solution. When starting with a high temperature, large deteriorations can be accepted; then, as the temperature decreases, only small deteriorations are accepted, until the temperature approaches zero and no deterioration is accepted at all. Therefore, adequate temperature scheduling is essential to optimize the search. SA can be implemented to find the closest possible optimal value within a finite time, where the cooling schedule is specified by four components [60]:
• The initial value of the temperature.
• A function to decrease the temperature value gradually.
• The length of each homogeneous Markov chain. A Markov chain is a sequence of trials in which the probability of a trial's outcome depends only on the outcome of the previous trial; it is classified as homogeneous when the transition probabilities do not depend on the trial number [62].
• The final value of the temperature, i.e., the stopping condition.
The success of SA depends on how well its parameters are chosen. For instance, the algorithm can be stuck in a local minimum when choosing a small initial temperature, since the exploration process must be carried out during the first stages. On the other hand, a very high temperature could make the convergence very slow. The same effect can be obtained when applying an inappropriate cooling schedule. Also, adjusting the neighborhood range of SA is essential in continuous optimization problems [63]. Not all inputs should have the same step sizes; these steps should be selected according to their effects on the objective function [64]. The authors in [65] proposed a method to identify the step size during the annealing process, which begins with large steps and gradually decreases them. The initial temperature can also be chosen as the standard deviation of the mean cost of several moves [66]. When finite Markov chains are used to model SA mathematically, the temperature is reduced once for each Markov chain, and each chain's length should be related to the size of the neighborhood in the problem [60].
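To make the acceptance and cooling mechanics above concrete, the following is a minimal sketch of a standard SA loop. This is not the authors' implementation; the function names, defaults, and the geometric cooling rule are illustrative assumptions.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0, omega=0.95, n_chains=60, chain_len=100):
    """Minimal SA sketch: Metropolis acceptance with geometric cooling.

    One temperature value is used per Markov chain; the temperature is
    multiplied by the static cooling rate omega after each chain.
    """
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_chains):
        for _ in range(chain_len):
            y = neighbor(x, t)
            fy = f(y)
            d = fy - fx
            # Metropolis criterion: downhill (D <= 0) is always accepted;
            # uphill (D > 0) is accepted with probability e^(-D/T).
            if d <= 0 or math.exp(-d / t) > random.random():
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        t *= omega  # reduce temperature once per Markov chain
    return best, fbest
```

For example, minimizing f(x) = x^2 from x0 = 5 with a Gaussian move whose step size is sqrt(T) drives the best-found cost close to zero.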

Sensing Search Memory
An adaptive data store called the sensing search memory is constructed to collect search data and boost the search process. There are various types of sensing search memories, such as the Gene Matrix (GM) and the Landmarks Matrix (LM) [22,23,58,59]. These memories are maintained by collecting data and solution features from different and diverse search-space regions. The GM, described next, is the sensing search memory applied in this research.

Gene Matrix (GM)
The GM sensing search memory collects data and features from different and diverse search-space regions and is invoked to assess the exploration process during the search. The search methods use coded representations of individuals, where every solution x in the search domain comprises n variables or genes. Specifically, in the GM, each variable's range is divided into m sub-ranges, and each sub-range of the i-th gene is associated with an entry of the i-th row of the gene matrix. Therefore, the GM is initially an n × m zero matrix, whose entries are converted to ones whenever the corresponding sub-ranges are visited during the search process.
The update of a GM entry from 0 to 1 is controlled by a parameter α to ensure that each sub-range is visited at least αm times. As an example, Figure 1 shows a 2-dimensional GM with α = 0.25, where the range of each gene is split into 8 sub-ranges. Two GMs are defined: a simple GM, which requires only one visit to update its entries from 0 to 1, and the advanced GM_0.25, which requires at least αm (= 2) visits to update its entries. Both the GM and the GM_0.25 have zero entries at the third and eighth sub-ranges of gene x_1, since no individual has been generated in these sub-ranges. However, entry (1, 1) in GM_0.25 is equal to 0, as there exists only one individual in the first sub-range; it can be updated to 1 if one extra individual is generated there in subsequent generations. When there are no zero elements in the GM, this indicates an advanced exploration process, and the search can be terminated. Consequently, the GM provides the search process with a reasonable termination criterion. Furthermore, diverse solutions are provided to the search space, as will be explained later.
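The GM bookkeeping described above can be sketched as follows. The class name and interface are illustrative assumptions, not the authors' code; the visit-counter matrix corresponds to the counter C introduced later in the paper.

```python
import numpy as np

class GeneMatrix:
    """Sketch of the GM sensing memory: each of n genes is split into m
    sub-ranges, and an entry flips from 0 to 1 once its sub-range has
    been visited at least ceil(alpha * m) times."""

    def __init__(self, lower, upper, m, alpha=0.25):
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)
        self.n, self.m = len(lower), m
        self.required = max(1, int(np.ceil(alpha * m)))
        self.counts = np.zeros((self.n, m), dtype=int)  # visit counter C
        self.gm = np.zeros((self.n, m), dtype=int)      # the GM itself

    def update(self, x):
        # Map each gene value to the index of its sub-range.
        width = (self.upper - self.lower) / self.m
        idx = np.clip(((np.asarray(x) - self.lower) // width).astype(int),
                      0, self.m - 1)
        for i, j in enumerate(idx):
            self.counts[i, j] += 1
            if self.counts[i, j] >= self.required:
                self.gm[i, j] = 1

    def diversification_index(self):
        return self.gm.mean()       # ratio of visited partitions

    def fully_explored(self):
        return bool(self.gm.all())  # no zeros left: search may terminate
```

With m = 8 and α = 0.25, a sub-range flips to 1 only after two visits, matching the GM_0.25 example above.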

Simulated Annealing with Exploratory Sensing (SAES) Algorithm
The objective of the proposed work is to increase the probability of finding a global optimum by using a memory-based mechanism. The GM works as a sensing search memory in which the visited partitions are recorded to ensure the exploration of the search space. The search walker is then forced to visit non-visited partitions after several iterations (Markov chains), while keeping a record of several best solutions found, as shown in the flowchart in Figure 2. The search is carried out in two phases:

1. The exploration phase. This phase runs over several Markov chains and aims to explore the constrained search space within the lower and upper limits of the solution values without affecting the way SA works in each Markov chain. Instead of starting a new Markov chain from the last accepted state, applying the GM directs the search to visit solutions in non-visited partitions of the search space. The cooling schedule and the acceptance criteria are not affected by applying the GM, and the temperature is changed once in each Markov chain. Therefore, in each Markov chain, an intensification process is carried out in the new partition of the search space, while the search keeps records of the best-found solutions.
The SA settings should be adjusted so that the search space is explored in this first phase. Extensive exploration is maintained in our proposed method by a diversification index: the ratio of the number of visited partitions to the number of all partitions in the GM, measured after each Markov chain. This phase ends when the diversification index reaches a pre-defined ratio γ of visited partitions.
The numerical experiments shown later indicate that an appropriate value for the parameter γ is 0.9, meaning that more than 90% of the GM partitions have been visited. Another parameter, the diversification threshold δ, is applied to control the redirection to un-visited partitions of the search space; this is called diversification sensing. This parameter is a smaller anticipated value of the diversification index. For example, a diversification threshold δ = 0.04 means that when the search in one Markov chain has not improved the diversification index by this threshold, the diversification process is called to move to new search partitions. Otherwise, the diversification process is not used, as the search is already achieving good diversification through its escaping mechanism.
2. The intensification phase. After reaching a specified level of diversification, where most of the search partitions have been visited, the search is directed to start from the best-found state to perform a more local search in that region. Although the diversification is supposed to be achieved mainly in the first phase using the GM, it is essential to ensure that the initial temperature is wisely chosen to avoid getting trapped in local minima early in this phase. We use a static cooling schedule that leaves a sufficient temperature after the exploration phase ends. The search terminates when reaching the maximum number of iterations, which equals the number of Markov chains multiplied by each Markov chain's length. After the two leading search phases are accomplished, a faster local search is called to refine the best solution obtained so far. This can improve the SA algorithm's output, since the algorithm is known to reel around optimal solutions in the final stages of the search [67,68]. It is worth noting that the two phases are executed sequentially, not overlapping, to maintain the main structure of the SA algorithm, especially the cooling schedule, and so that the proposed algorithm does not lose the convergence behavior recognized in the standard SA algorithm.
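The two switching rules of the exploration phase (the target ratio γ and the diversification threshold δ) can be sketched as a small controller; the function name, parameter names, and defaults are illustrative assumptions.

```python
def exploration_controller(prev_index, curr_index, gamma=0.9, delta=0.04):
    """Sketch of the phase-switch logic:
    - switch to the intensification phase once the diversification
      index reaches gamma;
    - call the diversification step when the last Markov chain improved
      the index by less than delta."""
    switch_to_intensification = curr_index >= gamma
    call_diversification = (curr_index - prev_index) < delta
    return switch_to_intensification, call_diversification
```

For instance, a chain that raises the index from 0.50 to 0.52 (an improvement below δ = 0.04) triggers the diversification step, while an index of 0.92 ends the exploration phase.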
The components of the proposed algorithm are presented and explained in the following subsections.

Neighborhood Representation
The neighborhood is defined by randomly moving one or more of the solution parameters in one or more directions according to a defined move class, e.g., by adding a defined step size to one or more of the current parameter values in a specific direction. The move class used here randomly chooses neighboring solutions of the current solution by generating new solutions from a multivariate normal distribution with step sizes equal to the square root of the temperature and a uniformly random direction. This is a typical choice for generating new solutions in SA, as noted in [69]. In addition, the initial temperature is set as the standard deviation of several random moves, as presented in [66], which leads to a temperature value related to the scale of the inputs. This is a good choice when testing tens of different problems with different scales, as it avoids small or large step sizes that could badly affect the search convergence. Each time a new solution is generated, the boundary conditions are checked, and new bounded values are generated when needed. Then, the cost of the new state is evaluated.
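A minimal sketch of such a move class follows. Clipping to the box bounds is one simple way to enforce boundedness; the paper only states that new bounded values are generated when needed, so this choice is an assumption.

```python
import math
import random

def neighbor(x, t, lower, upper):
    """Sketch of the move class: perturb every coordinate with a normal
    step whose standard deviation is sqrt(T), then clip the result to
    the box bounds [lower, upper]."""
    step = math.sqrt(t)
    y = [xi + random.gauss(0.0, step) for xi in x]
    return [min(max(yi, lo), hi) for yi, lo, hi in zip(y, lower, upper)]
```

As the temperature decreases over the run, the step size sqrt(T) shrinks, so moves naturally become more local in the final stages.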

Initial Solution
Simulated annealing requires an initial solution. Typically, the SA performance depends highly on the initial solution settings, especially with relatively small temperatures or fast cooling schedules. However, our methodology is not affected by the initial solution values, as the diversification process should overcome initial-setting traps. Specifically, the quality of the initial solution does not affect the quality of the obtained solutions, due to the proposed diversification mechanism, which redirects the search after each Markov chain of iterations into new areas rather than digging in promising ones. The proposed methodology is concerned with increasing intelligent coverage of the visited solutions and exploiting the search memory through the proposed diversification. Therefore, a random initial solution is used in implementing the SAES method.

Objective Function
The objective function is defined for each benchmark function, as shown in Appendices A and B. In general, we are concerned with the following global optimization problem:

min_{x ∈ X} f(x),

where f is a real-valued function defined on the search space X ⊆ R^n.

Initial Temperature Settings
Setting an appropriate initial temperature and cooling schedule is crucial for an efficient SA method. The initial temperature is set as the standard deviation of the costs of 100 random moves, as presented in [66].

Cooling Schedule
An appropriate cooling schedule is important for the success of the SA algorithm: fast cooling can cause the algorithm to become stuck in a local minimum, while slow cooling can make the convergence very slow. A suitable static or dynamic cooling rate helps the SA converge to a global minimum and avoid getting stuck in a local minimum. We choose the static cooling schedule

T_{k+1} = ω T_k,

where ω is a static cooling rate and k is the Markov chain index. Setting ω close to 1 allows more exploration of the search space, while smaller values give a greedier search that might not explore the whole search space inside each Markov chain. Normally, practitioners use a static cooling rate ω between 0.8 and 0.99 [60]. With this schedule, the current temperature is obtained from the initial temperature T_0 and the current chain index k as T_k = T_0 ω^k; we use ω = 0.95.
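The static schedule above amounts to the following one-liner (the function name is illustrative):

```python
def temperature(t0, k, omega=0.95):
    """Static geometric cooling: the temperature after k Markov chains
    is T_k = T0 * omega**k."""
    return t0 * omega ** k
```

With ω = 0.95, the temperature after 60 chains is still a few percent of T_0, so uphill moves remain possible late in the run rather than being frozen out abruptly.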

Markov Chains Configurations
When Markov chains are used to model the SA iterations mathematically, the temperature is reduced once for each Markov chain. The number of Markov chains chosen for this experiment is 60, which we consider sufficient for this study. The length of each chain should be related to the size of the neighborhood in the problem [60]; in our case, the chain length is the number of optimized parameters multiplied by 40.

The Diversification and Intensification Settings
A matrix of search-space partitions called the Gene Matrix (GM) is used. The SA algorithm uses a real-coded representation of individuals, where an individual x in the search space comprises n variables or genes. Every gene's range is split into m sub-ranges to check the diversity of the gene values. A solution-counter matrix C of size n × m is then built, in which entry c_ij stands for the number of produced solutions whose gene i lies in sub-range j, where i = 1, ..., n and j = 1, ..., m. In the initialization stage, the GM is built as an n × m zero matrix in which every entry of the i-th row indicates a sub-range of the i-th gene. During the search phase, the GM zero values are flipped to ones when new values are created in the corresponding sub-ranges. When there is no zero entry in the GM, this indicates an advanced exploration process, and the search process can be terminated; consequently, the GM provides the search process with a reasonable termination criterion. The diversification threshold is chosen to be 0.04: diversification is not called if the algorithm achieved a diversification improvement above this threshold in the exploration phase. Diversification stops when one of two conditions is observed. The first is reaching a diversification index of 0.9, which implies that at least 90% of the partitions have been visited. The second is finishing 30% of the number of Markov chain iterations.
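The diversification step itself, which moves the walker into unvisited partitions, can be sketched as follows. The interface (a per-gene list of unvisited sub-range indices) is an assumption for illustration, not the authors' data structure.

```python
import random

def diverse_solution(gm_zeros, lower, upper, m):
    """Sketch of the diversification step: for each gene, pick one of its
    still-unvisited sub-ranges (if any) and sample the new gene value
    uniformly inside it; a gene with no unvisited sub-range is sampled
    anywhere in its range. gm_zeros[i] lists the unvisited sub-range
    indices of gene i."""
    x = []
    for i, (lo, hi) in enumerate(zip(lower, upper)):
        width = (hi - lo) / m
        if gm_zeros[i]:
            j = random.choice(gm_zeros[i])
            x.append(lo + (j + random.random()) * width)
        else:
            x.append(lo + random.random() * (hi - lo))
    return x
```

Each call therefore lands the next Markov chain inside a partition that the GM has not yet marked as visited.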

Stopping Criterion
Typical stopping criteria for SA include stopping at small objective function values, stopping at low temperature values, stopping after a sufficient number of iterations or Markov chains, and stopping when the changes of energy in the objective function are sufficiently small. The choice of stopping criterion is difficult, as the optimal objective value is unknown in many cases [70]. Whichever stopping criterion is used, the objectives of the two phases must be monitored so that both the diversification and the intensification objectives are achieved. To obtain good results, we have chosen to stop the search after a certain number of Markov chains representing 30% of the number of all Markov chains.

The SAES Algorithm
Based on the previously explained components, the whole structure of the SAES method and the sequence of its steps are illustrated in Algorithm 1.
2: Set T = T_max, generate an initial solution x, and initialize the GM and x_best.
3: Evaluate the objective function f(x).
4: repeat
5:   repeat
6:     Generate a trial solution y in the neighborhood of x.
7:     Evaluate the objective function f(y).
8:     Accept the trial solution y with probability min{1, e^(−(f(y)−f(x))/T)}.
9:     If y is accepted, then set x = y.
10:    Update the GM and x_best.
11:  until the Markov chain length is reached.
12:  Update the temperature T.
13:  if the diversification index has not increased by at least δ then
14:    Generate a new diverse solution x using the GM, and update the GM and x_best.
15:  end if
16: until the termination criterion of the diversification phase is met.
17: Set x = x_best.
18: repeat
19:   repeat
20:     Generate a trial solution y in the neighborhood of x.
21:     Evaluate the objective function f(y).
22:     Accept the trial solution y with probability min{1, e^(−(f(y)−f(x))/T)}.
23:     If y is accepted, then set x = y.
24:     Update the GM and x_best.
25:   until the Markov chain length is reached.
26:   Update the temperature T.
27: until the termination criterion of the intensification phase is met.
28: Apply local search to improve x_best.

Numerical Simulation
In this section, the implementation settings and experimental results are discussed. The proposed algorithm is programmed in MATLAB. The parameter settings and performance analysis of the SAES are first investigated before presenting the results and comparisons.

Test Functions
Two classes of benchmark test functions have been used in the experimental results to discuss the efficiency of the proposed methods. The first class contains 25 classical test functions f_1-f_25 [71], whose definitions are given in Appendix A. The other class contains 25 hard test functions h_1-h_25 [72,73], known as the CEC2005 test functions and described in Appendix B.

Parameter Setting and Configuration
The parameter values used in the SAES algorithm are set based on typical settings in the literature or determined through our preliminary numerical experiments, as shown in Table 1. Based on the configuration of the SAES control parameters, there are three SA-based versions:
• The SAES without the diversification phase and the final intensification, which is the standard annealing algorithm and is denoted by SA.
• The SAES without the final intensification, denoted by SAES_w.
• The complete SAES method.
In Table 1, there are two groups of parameters. The first is the common parameter setting, which is used in all versions of the proposed method. The other group of parameters is used in the SAES_w and SAES versions, except that the final intensification budget is used only in the latter version. Based on the parameter values shown in Table 1, the cost of the objective function evaluations used in the SAES method can be computed; the total number of function evaluations used by the SAES method is bounded by (2900n + 118) function evaluations. Likewise, the corresponding numbers in the SA and SAES_w versions are bounded by (2400n + 100) and (2400n + 118), respectively.
The local search method used in the final intensification consists of applying the MATLAB functions "fminsearch.m" and "fminunc.m" starting from the best solutions obtained in the previous search stages. Specifically, "fminsearch.m" is first called with half of the local search budget; then "fminunc.m" is called with the same budget to improve the output of the first, as recommended in [74,75].

Statistical Tests
The non-parametric Wilcoxon rank-sum test [76][77][78][79][80] is used to determine the statistical differences between the compared methods. The following statistical terms are used: the positive and negative rank sums, computed from the differences d_i between the ranks of the corresponding results of the compared methods, and the p-value.
Besides these statistical measures, we add the measure "No. of Beats", which contains two numbers: the first is how often the first method obtained better results than the other method, while the second indicates how often the second method prevailed over the first.
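The comparison procedure can be sketched with SciPy's rank-sum test together with the "No. of Beats" counts; the function name and dictionary layout are illustrative assumptions.

```python
from scipy.stats import ranksums

def compare_methods(errors_a, errors_b, alpha=0.05):
    """Sketch of the statistical comparison: a Wilcoxon rank-sum test on
    per-function errors plus the 'No. of Beats' counts (how often each
    method obtained the strictly smaller error)."""
    stat, p = ranksums(errors_a, errors_b)
    beats_a = sum(a < b for a, b in zip(errors_a, errors_b))
    beats_b = sum(b < a for a, b in zip(errors_a, errors_b))
    return {"p_value": p,
            "significant": p < alpha,
            "beats": (beats_a, beats_b)}
```

Note that one method can beat the other on most functions while the rank-sum test still reports no significant difference, which is exactly the situation observed below for SA versus SAES_w.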

Results and Discussion
The proposed method was applied to 50 benchmark problems shown with their details in Appendices A and B. The results of the proposed SA versions are compared to each other to analyze their performance. Moreover, the SAES results are also compared to those of other benchmark methods to assess the proposed method's efficiency.
In the first experiment, the results of the SA, SAES_w, and SAES methods are reported in Tables 2 and 3 and summarized in Table 4. Moreover, the averages of the processing time used by each method are illustrated in Figures 3 and 4 using the classical and hard test functions, respectively. All these results are statistically analyzed in Tables 5 and 6. In general, the results improve with the sophistication of the algorithm: the results of the SAES_w method are better than those of the SA method, and the results of the SAES are better than those of the other two methods, as shown in Tables 2-4. This demonstrates the efficiency of the two main additional components, the diversification phase and the final intensification. The processing time varies according to the problem dimension and the sophistication of the objective function formula, as shown in Figures 3 and 4. The statistical measures in Tables 5 and 6 indicate the following conclusions:

• There are no significant differences between the obtained errors of the SA and SAES_w methods, although the latter beats the former on almost all of the used test functions.
• The SAES is significantly better than the other two methods in terms of obtaining smaller errors.
• The processing time of the SAES method is slightly longer than that of the SA and SAES_w methods, with no significant differences between the methods' processing times except on the hard test functions.
As a final note on analyzing the SAES performance in terms of its ability to reach optimal solutions, this has been studied using the relative error gap.

Figure 3. Averages of processing time in seconds using the classical test functions.

Figure 4. Averages of processing time in seconds using the hard test functions.

In the second experiment, we investigate the performance of the diversification index and how it helps the SA-based algorithms gain better performance. The average values of this index over 25 independent runs are reported in Figures 5 and 6 using the classical and hard test functions, respectively, under the same conditions stated in the first experiment. In most of the used test functions, the diversification phase helped the SA-based algorithms explore the search space more broadly. Figures 7 and 8 show examples of the progress of the best solution and the improvement of the diversification index over generations, showing how the diversification phase improved the quality of the solutions obtained by an SA-based method.

In the last experiment, we compare the proposed SAES with 14 state-of-the-art comparative methods, abbreviated as follows.

Conclusions
A two-phase approach using automated diversification and intensification based on memory and sensing methods is proposed to enhance the SA methodology. The proposed mechanisms equip SA with an exploration eye that guides the search process toward better solutions. Moreover, the random walk of the standard SA algorithm has been modified to benefit from the memory of visited states and the diversification history. Finally, an advanced local search helps the SA algorithm quickly target optimal solutions in the later search stages. The numerical experiments show the efficiency of the proposed method in ensuring good exploration and finding better solutions.

Appendix B. Hard Test Functions
For the hard test functions h_1-h_17, their global minima and the bounds on the variables are listed in Table A1. For more details about these functions, the reader is referred to [72].