Article

Fine-Tuning Meta-Heuristic Algorithm for Global Optimization

by Ziyad T. Allawi ¹, Ibraheem Kasim Ibraheem ² and Amjad J. Humaidi ³,*

¹ College of Engineering, Department of Computer Engineering, University of Baghdad, Al-Jadriyah, Baghdad 10001, Iraq
² College of Engineering, Department of Electrical Engineering, University of Baghdad, Al-Jadriyah, Baghdad 10001, Iraq
³ Department of Control and Systems Engineering, University of Technology, Baghdad 10001, Iraq
* Author to whom correspondence should be addressed.
Processes 2019, 7(10), 657; https://doi.org/10.3390/pr7100657
Submission received: 20 August 2019 / Revised: 5 September 2019 / Accepted: 10 September 2019 / Published: 26 September 2019
(This article belongs to the Special Issue Optimization for Control, Observation and Safety)

Abstract

This paper proposes a novel meta-heuristic optimization algorithm called the fine-tuning meta-heuristic algorithm (FTMA) for solving global optimization problems. In this algorithm, solutions are fine-tuned using the fundamental steps of meta-heuristic optimization, namely exploration, exploitation, and randomization, in such a way that if one step improves the solution, the remaining steps need not be executed. The performance of the proposed FTMA was compared with that of five other optimization algorithms over ten benchmark test functions. Nine of them are well known in the literature, while the tenth is proposed by the authors and introduced in this article. One experiment reports a single trial to illustrate the convergence behavior of each algorithm, and a second runs 30 trials to compare the statistical performance of the proposed algorithm against the others. The results confirm that the proposed FTMA is competitive with its counterparts in terms of convergence speed and evading local minima.

1. Introduction

Meta-heuristic optimization describes a broad spectrum of optimization algorithms that need only the relevant objective function along with key specifications, such as variable boundaries and parameter values. These algorithms can locate the near-optimum, or perhaps the optimum values of that objective function. In general, meta-heuristic algorithms simulate the physical, biological, or even chemical processes that happen in nature. Of the meta-heuristic optimization algorithms, the following are the most widely used:
  • Genetic algorithms (GAs) [1], which simulate Darwin’s theory of evolution;
  • Simulated annealing (SA) [2], which emerged from the thermodynamic argument;
  • Ant colony optimization (ACO) algorithms [3], which mimic the behavior of an ant colony foraging for food;
  • Particle swarm optimization (PSO) algorithms [4], which simulate the behavior of a flock of birds;
  • Artificial bee colony (ABC) algorithms [5], which mimic the behavior of the honeybee colony; and
  • Differential evolution algorithms (DEAs) [6], for solving global optimization problems.
Xing and Gao collected more than 130 state-of-the-art optimization algorithms in their book [7], and such swarm-based optimization algorithms have been applied in many applications and case studies [8,9,10,11,12,13,14]. Some algorithms start from a single point, such as SA, but the majority begin from a population of initial solutions (agents), like GAs, PSO, and DEAs; most of these are referred to as "swarm intelligence" for their mimicry of animal behaviors [15]. In these algorithms, every agent shares its information with the other agents through a system of simple operations. This information sharing improves the algorithm's performance and helps find the optimum or near-optimum solution(s) more quickly [3].
In any meta-heuristic optimization algorithm, there are three significant types of information exchange between a particular agent and the other agents in the population. The first is called exploitation, which is a local search around the latest and best solution found so far. The second is called exploration, which is a global search using another agent existing in the problem space [16]. The third is called randomization, which is used rarely in some algorithms or not at all; this procedure is similar to exploration, but a randomly generated agent is used instead of an existing one. For instance, ABC algorithms use randomization for the scout agent and therefore often succeed in evading many local minima. Many algorithms begin with exploration and gradually shift to exploitation over the generations to avoid falling into local optimum values. Meta-heuristic algorithms thus maintain a trade-off between exploration and exploitation [17]. However, the different types vary in how they perform this trade-off; through it, these algorithms may get close to near-optimum or even optimum solutions.
All agents compete with one another to stay alive inside the population: every agent that improves its performance replaces an agent that failed to improve itself. Therefore, in the fourth stage (i.e., selection), a selection method such as greedy selection or the roulette wheel is used to choose the best agent to replace the worst one [1]. Meta-heuristic algorithms may find near-optimum solutions for some objective functions but fall into local minima for others; this fact will be apparent in the results of this article. To date, an optimization algorithm that offers both superior convergence time and reliable avoidance of local minima across objective functions has yet to be developed. Therefore, the area remains open to improving the existing meta-heuristic algorithms or inventing new ones that fulfill these requirements [18].
In this article, a novel algorithm called the fine-tuning meta-heuristic algorithm (FTMA) is presented. It utilizes information sharing among the population agents in such a way that it finds the global optimum solution faster without falling into local ones; this is accomplished by performing the necessary optimization procedures sequentially. The remainder of this article is organized as follows: Section 2 reviews related meta-heuristic algorithms; Section 3 describes the proposed algorithm in detail; Section 4 presents five well-known optimization algorithms that compete with FTMA over a ten-function benchmark; Section 5 reports the results and discussion; and Section 6 concludes the article.

2. Literature Review

Within the recent trend of nature-inspired meta-heuristic optimization algorithms, ever since genetic algorithms [1] and simulated annealing [2] were presented, a race to invent new algorithms has been underway, fueled by rapid advances in computer speed and efficiency, especially in the new millennium. Among these algorithms, we mention the firefly algorithm (FA) [19], cuckoo search (CS) [20], the bat algorithm (BA) [21], the flower pollination algorithm (FPA) [22], and many others covered in [23].
Many optimization algorithms have been invented over the past five years. Some are new, while others are modifications and enhancements of existing ones. One of the recent and widely used algorithms is grey wolf optimization (GWO) [24], inspired by grey wolves and their hunting behavior in nature; it implements the four-level leadership hierarchy of grey wolves as well as three steps of the prey-hunting strategy. Mirjalili went on to propose several other algorithms. The same author presented moth–flame optimization (MFO) [25], which mimics the navigation method of moths in nature called "transverse orientation": moths travel along a path oriented toward the Moon, but they may fall into a useless spiral path around artificial lights encountered on the way. The ant lion optimizer (ALO) was proposed in [26]; it simulates the hunting mechanism of antlions in nature and implements five main hunting steps. Moreover, the authors of [24] proposed a novel population-based optimization algorithm, the sine–cosine algorithm (SCA) [27], which moves the solution agents toward, or away from, the best solution using a model based on sine and cosine functions; it uses random and adaptive parameters to emphasize the search phases of exploration and exploitation. Another algorithm in the literature is the whale optimization algorithm (WOA) [28], which mimics the social behavior of humpback whales using a hunting strategy called bubble-net feeding, together with three operators that simulate this strategy. All of the algorithms mentioned above have been developed, enhanced, and modified over the years in the hope of making them suitable for every real problem that needs solving. However, the no-free-lunch theorems state that no single universal optimization method can deal with every realistic problem [18].

3. Fine-Tuning Meta-Heuristic Algorithm (FTMA)

The FTMA is a meta-heuristic optimization algorithm used to search for optimum solutions for simple and/or complex objective functions. The fundamental feature of FTMA is the fine-tuning meta-heuristic method used when searching for the optimum.
FTMA performs the fundamental procedures of solution update, namely exploration, exploitation, randomization, and selection, in sequential order. In FTMA, the first procedure, exploration, is performed with respect to an arbitrarily selected solution in the solution space. If the solution is not improved, then, subject to a probability test, the second procedure, exploitation, is performed with respect to the best global solution found so far. Again, if the solution is not improved, then, subject to another probability test, the third procedure, randomization, is performed with respect to a random solution generated in the solution space. The fourth procedure, selection, compares the new solution with the old one and chooses the better according to the objective function. The FTMA procedure steps are:
1) Initialization: FTMA begins with initialization. Its equation is shown below:

$$x_i^0(k) = lb(k) + rand \times \left(ub(k) - lb(k)\right), \quad k = 1, 2, \ldots, d; \; i = 1, 2, \ldots, N \tag{1}$$
At this point in the process, all the solutions $x_i^t$ are initialized randomly at the iteration counter $t = 0$ according to the lower bound $lb$ and the upper bound $ub$ for each solution-space index $k$ inside the solution-space dimension $d$. A random number $rand$, drawn uniformly between 0 and 1, places each solution component somewhere between the lower and upper bounds. The space dimension, along with the number of solutions $N$, must be specified prior to the process. Then, the fitness $f(x_i^0)$ is evaluated for each solution $x_i^0$ using the objective function. The initial values of the best objective fitness $f_b^0$ and its associated best solution $x_b^0$ are obtained from the fitness values and solutions, respectively. Additionally, the probabilities of exploitation and randomization, $p$ and $r$, respectively, are initialized.
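As an illustration, Equation (1) vectorizes directly; the following is a minimal NumPy sketch (function and variable names are ours, not the paper's):

import numpy as np

def initialize(N, d, lb, ub, rng=np.random.default_rng()):
    # Equation (1): one uniform draw per component, scaled into [lb, ub]
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    return lb + rng.random((N, d)) * (ub - lb)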
After incrementing the iteration counter inside the generation loop, the four steps of each iteration are performed in the FTMA core, as follows:
2) Exploration: The general formula of this step is as follows:

$$y(k) = x_i^t(k) + rand \times \left(x_j^t(k) - x_i^t(k)\right) \tag{2}$$
In this step, every solution $x_i^t$ is moved with respect to another existing solution vector $x_j^t$, where $j \neq i$. The value of the objective function for the temporary solution $y$ is then evaluated as a temporary fitness $g$.
3) Exploitation: Its equation is presented as follows:

$$\text{if } g > f(x_i^t) \ \&\&\ p > rand: \quad y(k) = x_i^t(k) + rand \times \left(x_b^t(k) - x_i^t(k)\right) \tag{3}$$
If the fitness $g$ is not improved compared with $f(x_i^t)$ and the probability of exploitation $p$ is greater than a random number $rand$, then the exploitation step is initiated. In this step, the temporary solution vector $y$ is calculated by moving the solution $x_i^t$ with respect to the best global solution $x_b^t$. The value of the objective function for the temporary solution $y$ is re-evaluated and stored in the temporary fitness $g$.
4) Randomization: The formula of this step is as follows:

$$\text{if } g > f(x_i^t) \ \&\&\ r > rand: \quad y(k) = x_i^t(k) + rand \times \left(lb(k) + rand \times (ub(k) - lb(k)) - x_i^t(k)\right) \tag{4}$$
If the fitness $g$ is again not improved in comparison with $f(x_i^t)$ and the probability of randomization $r$ is higher than a random number $rand$, then the randomization step is initiated. In this step, the solution $x_i^t$ moves with respect to a randomly generated solution. The value of the objective function for the temporary solution $y$ is once more evaluated and stored in the temporary fitness $g$.
5) Selection: The final step of the FTMA iteration process is the selection step, which is summarized as:

$$\text{if } g < f(x_i^t): \quad x_i^{t+1} = y; \ f(x_i^{t+1}) = g \tag{5}$$

$$\text{if } g < f_b^t: \quad x_b^{t+1} = y; \ f_b^{t+1} = g \tag{6}$$
6) Stopping Condition: The search ends if the global fitness value $f_b^{t+1}$ reaches zero or falls below a specified tolerance value $\varepsilon$, or if the iteration counter $t$ reaches its previously specified maximum value $R$. The pseudocode of FTMA is summarized in Algorithm 1 below.

4. Methodology

To check the validity of the proposed FTMA, it should be tested against well-known optimization algorithms that have been used widely in the literature. Five algorithms were chosen, although many others exist.

4.1. Well-Known Optimization Algorithms

(1) Genetic algorithm (decimal form) (DGA): This is similar to a conventional GA except that the chromosomes are not converted to binary digits. It has the same steps as a GA: selection, crossover, and mutation. Here, the crossover and mutation procedures are performed on the decimal digits just as they are performed on the bits in a binary GA. The entire procedure of the DGA is taken from [29].
(2) Genetic algorithm (real form) (RGA): In this algorithm, the solution vectors are used in optimization as real values, without converting them to integers or binary numbers. It performs the same procedures as a binary GA. The complete steps of the RGA are taken from [30].
(3) Particle swarm optimization (PSO): The success of this famous algorithm stems from its simplicity. It uses a velocity vector to update every solution, based on that solution's own best position along with the best global solution found so far (a minimal sketch of this update appears after this list). The core formula of PSO is taken from [4].
(4) Differential evolution algorithm (DEA): This algorithm chooses two (possibly three) solutions other than the current one and searches stochastically, using selected constants, to update the current solution. The whole algorithm is shown in [6].
(5) Artificial bee colony (ABC): This algorithm gained popularity for its distributed behavior, which simulates the collaborative system of a honeybee colony. The colony is divided into three parts: the employed bees, which perform exploration; the onlookers, which perform exploitation; and the scouts, which perform randomization. The algorithm is illustrated in [5].
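For reference, the following is a minimal sketch of the PSO velocity/position update in the commonly used inertia-weight form; the constants w, c1, and c2 are typical assumed values, not settings reported in this paper:

import numpy as np

rng = np.random.default_rng()

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # x, v, pbest: (N, d) arrays; gbest: (d,) array holding the global best
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v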
Algorithm 1: Fine-Tuning Meta-Heuristic Algorithm
Input: solution population size N, maximum number of iterations R;
Start timer;
for i = 1 to N
 Initialize x_i^0 using Equation (1);
 Evaluate f(x_i^0) for every x_i^0;
end for
Search for x_b^0 and f_b^0;
Initialize t = 0; set p and r;
while t < R && f_b^t > ε
 t = t + 1;
 for i = 1 to N
  Choose x_j^t such that j ≠ i;
  Compute y using Exploration (Equation (2));
  Evaluate g for y;
  if g > f(x_i^t) && p > rand
   Compute y using Exploitation (Equation (3));
   Evaluate g for y;
   if g > f(x_i^t) && r > rand
    Compute y using Randomization (Equation (4));
    Evaluate g for y;
   end if
  end if
  if g < f(x_i^t)
   Update x_i^{t+1} and f(x_i^{t+1}) using Equation (5);
   if g < f_b^t
    Update x_b^{t+1} and f_b^{t+1} using Equation (6);
   end if
  end if
 end for
end while
Output: x_b^{t+1}, f_b^{t+1}, t, and the computation time.
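For concreteness, the following is a minimal NumPy transcription of Algorithm 1. It is a sketch under our own implementation choices (function and variable names are ours, and a fresh random number is drawn per vector component, which Equations (1)–(4) leave open); the parameter defaults mirror the experimental settings reported in Section 5.

import numpy as np

def ftma(f, lb, ub, N=1000, R=1000, p=0.7, r=0.7, eps=2.22e-16, seed=None):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    # Initialization (Equation (1))
    x = lb + rng.random((N, d)) * (ub - lb)
    fit = np.array([f(xi) for xi in x])
    b = int(fit.argmin())
    xb, fb = x[b].copy(), fit[b]
    t = 0
    while t < R and fb > eps:
        t += 1
        for i in range(N):
            # Exploration (Equation (2)) with respect to a random j != i
            j = int(rng.integers(N - 1))
            if j >= i:
                j += 1
            y = x[i] + rng.random(d) * (x[j] - x[i])
            g = f(y)
            if g > fit[i] and p > rng.random():
                # Exploitation (Equation (3)) toward the global best x_b
                y = x[i] + rng.random(d) * (xb - x[i])
                g = f(y)
                if g > fit[i] and r > rng.random():
                    # Randomization (Equation (4)) toward a random point
                    z = lb + rng.random(d) * (ub - lb)
                    y = x[i] + rng.random(d) * (z - x[i])
                    g = f(y)
            # Selection (Equations (5) and (6)): keep the better solution
            if g < fit[i]:
                x[i], fit[i] = y, g
                if g < fb:
                    xb, fb = y.copy(), g
    return xb, fb, t

For example, ftma(lambda v: float(np.sum(v**2)), lb=[-5, -5], ub=[5, 5]) searches the SPHERE function F1 of Section 4.2.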

4.2. Benchmark Test Functions

The optimization algorithms mentioned above, along with the proposed algorithm, will be tested on ten unimodal and multimodal benchmark functions. These functions have been used widely as alternatives to real-world optimization problems. Table 1 illustrates nine of these functions.
where $x_i$ represents one of the solution parameters, $i = 1, 2, \ldots, d$, and $d$ is the solution-space dimension. The bold 0 represents a solution vector of zeros, whereas the bold 1 represents a solution vector of ones. The tenth benchmark function is proposed by the authors and introduced for the first time in this article; it is a multimodal function with multiple local minima and one global minimum, as shown in Table 2.
This function has $3^d - 1$ local minima, located at the points whose coordinates all equal 0 or ±1 (excluding the origin itself), while the global minimum is located precisely at the origin. The positive real parameter $\varepsilon$ should be only slightly higher than zero in order to trick the optimization algorithm into falling into the local minima.
Figure 1 illustrates the function for $d = 2$ and $\varepsilon = 2.22 \times 10^{-16}$, which is the default constant called eps in the MATLAB® package (MathWorks, Natick, MA, USA). There are eight local minima distributed in a square pattern around the global minimum. The value of the function at these minima is $f(x) = 2\varepsilon \sum_{i=1}^{d} |x_i|$.
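A minimal NumPy sketch of the proposed F10 (ALLAWI) function follows; the two assertions check the global minimum at the origin and the local-minimum value 2ε at a corner point such as (1, 0):

import numpy as np

EPS = 2.22e-16  # MATLAB eps, the value used in the paper

def allawi(x, eps=EPS):
    # F10: each coordinate contributes x^6 - 2(eps+1)x^4 + (4eps+1)x^2
    x = np.asarray(x, float)
    return float(np.sum(x**6 - 2 * (eps + 1) * x**4 + (4 * eps + 1) * x**2))

assert allawi([0.0, 0.0]) == 0.0                 # global minimum at the origin
assert np.isclose(allawi([1.0, 0.0]), 2 * EPS)   # 1 - 2(eps+1) + (4eps+1) = 2eps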

5. Results and Discussion

The two most essential requirements for an optimization algorithm are fast convergence and reaching the global minimum without falling into local minima; the optimization algorithms are therefore judged according to these two criteria. The algorithms were used to find the optimum values of the ten benchmark functions over 30 trials, and the mean error and standard deviation were computed for statistical comparison. The FTMA parameters $p$ and $r$ were both set to 0.7, so that the flow control can probabilistically bypass the exploitation and randomization steps even when the fitness is not improved in the exploration step. For all the algorithms, the number of dimensions was $d = 2$, the number of solution population agents was $N = 1000$, and the maximum number of iterations was $R = 1000$. The results of a sample trial are illustrated in Table 3.
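A minimal sketch of such a trial harness, assuming the ftma and allawi sketches given earlier (the trial count and bounds follow the setup above):

import time
import numpy as np

def run_trials(algorithm, f, lb, ub, trials=30):
    # Record the final error and wall-clock time of each trial
    errors, times = [], []
    for _ in range(trials):
        start = time.perf_counter()
        _, fb, _ = algorithm(f, lb, ub)
        times.append(time.perf_counter() - start)
        errors.append(fb)
    return (np.mean(errors), np.std(errors)), (np.mean(times), np.std(times))

# Example: mean/std of error and time for FTMA on F10 over [-2, 2]^2
(err_mean, err_std), (t_mean, t_std) = run_trials(ftma, allawi, [-2, -2], [2, 2])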
The data represent the final fitness value and the time taken by each optimization algorithm to drive its best global fitness below the minimum tolerance error $\varepsilon = 2.22 \times 10^{-16}$. The data in bold mark the algorithm that simultaneously scored the fastest time and found the global minimum for a specific benchmark function. The underlined data mark the algorithms that failed to pass the tolerance and completed all 1000 generation cycles. The ten panels of Figure 2 show the progress of the optimization on the ten benchmark functions; each chart contains six lines of different patterns, one for each optimization algorithm.
Concerning computation time, Table 3 makes it evident that FTMA outperforms all the other algorithms. Furthermore, DGA failed to reach the optimum in F4 and barely passed in F7. RGA failed in F2, F4, and F8. PSO evaded all local minima in all the benchmark functions. DEA failed in F5, F7, and F10. The ABC algorithm succeeded in avoiding local minima except in F4. In F5, F6, F8, and F9, most of the algorithms captured the exact zero global optimum. FTMA, however, never fell into local minima and scored the best convergence time of all the optimization algorithms; additionally, it reached the exact zero optimum in functions F5 through F9. One can see that some optimization algorithms are suitable for some problems and not for others; for example, DGA, RGA, and ABC failed in F4 while DEA succeeded, and the situation is reversed in F5. This confirms the no-free-lunch theorems on the absence of a universal algorithm for every problem. Both PSO and FTMA succeeded in evading the local minima and converging to the global one; however, PSO took three to four times longer than FTMA to reach the optimum. The ten subgraphs of Figure 2 show the search progression of the algorithms for one trial (whose results are listed in Table 3), plotting the global fitness on a logarithmic scale against the number of generations; the FTMA line (solid black) lies closest to the vertical axis. Although the maximum number of iterations is 1000, the plots display only the first 150 iterations, because most of the algorithms catch the global optimum at or before this generation. In all figures, FTMA is the best-performing algorithm; PSO and ABC are next best in most of the graphs, while DGA, RGA, and DEA failed on many occasions. At the time FTMA reached the error tolerance, the best of the other algorithms had barely improved its fitness. The plots also show that some algorithms became trapped in local minima, especially in F4. This implies that FTMA has the fastest convergence speed among the compared optimization algorithms. The mean and standard deviation over the 30 trials were evaluated for each optimization algorithm and benchmark function: Table 4 gives the distribution of the output error, and Table 5 the distribution of the computation time, with bold and underlined values marking the fastest and the failed sets of trials, respectively. The trial sets are presented in the ten panels of Figure 3, one for each benchmark function.
Over all trials in Table 4, algorithms that succeed on one test may not do well on another: RGA failed in F2; DGA, RGA, and ABC failed in F4; DEA failed in F5; DEA and ABC failed in F7; DGA and RGA failed in F8; and PSO, which succeeded on all the other functions, failed on the proposed benchmark function F10. Although the DGA average error is slightly lower than the FTMA mean error in F1, F2, and F9, its average computation time is about eight times that of the proposed algorithm, which implies that FTMA reached the global minimum before DGA. The computation time of the proposed algorithm is the best on all the benchmark functions. In Figure 3, each plot contains six lines with different patterns, one per optimization algorithm, showing the logarithm of the computation time against the trial index. These plots also show that some benchmark functions suit some algorithms and not others: for instance, DEA is suitable for F4 but not for F5. The proposed algorithm always has the best computation time among all the algorithms; its solid line lies at the bottom near the horizontal axis, accompanied by PSO and DEA in F4 and alone at the bottom in the other plots. For the proposed benchmark function, DEA was the worst; PSO fell into local optima many times and DGA a few times; ABC and RGA performed well, but FTMA was the best.

6. Conclusions

This paper proposed a new global optimization algorithm named the fine-tuning meta-heuristic algorithm (FTMA). The simulation results show that FTMA reaches the optimum value faster than any of the other optimization algorithms used in the comparison; its performance is competitive with state-of-the-art methods, namely RGA, DEA, ABC, PSO, and DGA. It accomplishes this in real time and, unlike the other optimization algorithms, evades the local optima. Moreover, it maintains accuracy and robustness at the lowest runtime. FTMA therefore offers a promising approach that, thanks to its rapid convergence, could be applied in more complicated real-time systems where time is a crucial factor. This does not mean that the algorithm can solve any real problem we may encounter in practice; as stated in the no-free-lunch theorems, there may be problems that this algorithm struggles to solve. So, there are opportunities to enhance FTMA and/or its counterparts. Future studies include using FTMA in combinatorial optimization, or integrating it into control applications as an online or offline tuning algorithm for finding the optimal parameters of feedback controllers. Moreover, owing to the lack of computing resources (supercomputers, etc.), computations with more than two parameters take hours or sometimes days; it is therefore intended to enlarge the problem space once such resources become available. Finally, checking multi-dimensional spaces and using multi-objective problem scenarios are possible aspects of future research.

Author Contributions

Conceptualization, Z.T.A.; validation, I.K.I.; methodology, Z.T.A., I.K.I.; writing—review and editing, I.K.I., and A.J.H.; investigation, Z.T.A., I.K.I., and A.J.H.; formal analysis, A.J.H.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Goldberg, D. Genetic Algorithms in Search, Optimization and Machine Learning, 1st ed.; Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1989.
  2. Kirkpatrick, S.; Gelatt, C.; Vecchi, M. Optimization by simulated annealing. Science 1983, 220, 671–680.
  3. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 1996, 26, 29–41.
  4. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS’95), Nagoya, Japan, 4–6 October 1995; IEEE: Piscataway, NJ, USA, 1995; pp. 39–43.
  5. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471.
  6. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  7. Xing, B.; Gao, W. Innovative Computational Intelligence: A Rough Guide to 134 Clever Algorithms, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2016.
  8. Ibraheem, I.K.; Ajeil, F.H. Path Planning of an Autonomous Mobile Robot using Swarm Based Optimization Techniques. Al-Khwarizmi Eng. J. 2016, 12, 12–25.
  9. Ibraheem, I.K.; Al-hussainy, A.A. Design of a Double-objective QoS Routing in Dynamic Wireless Networks using Evolutionary Adaptive Genetic Algorithm. Int. J. Adv. Res. Comput. Commun. Eng. 2015, 4, 156–165.
  10. Ibraheem, I.K.; Ibraheem, G.A. Motion Control of an Autonomous Mobile Robot using Modified Particle Swarm Optimization Based Fractional Order PID Controller. Eng. Technol. J. 2016, 34, 2406–2419.
  11. Humaidi, A.J.; Ibraheem, I.K.; Ajel, A.R. A Novel Adaptive LMS Algorithm with Genetic Search Capabilities for System Identification of Adaptive FIR and IIR Filters. Information 2019, 10, 176.
  12. Humaidi, A.; Hameed, M. Development of a New Adaptive Backstepping Control Design for a Non-Strict and Under-Actuated System Based on a PSO Tuner. Information 2019, 10, 38.
  13. Allawi, Z.T.; Abdalla, T.Y. A PSO-Optimized Reciprocal Velocity Obstacles Algorithm for Navigation of Multiple Mobile Robots. Int. J. Robot. Autom. 2015, 4, 31–40.
  14. Allawi, Z.T.; Abdalla, T.Y. An ABC-Optimized Type-2 Fuzzy Logic Controller for Navigation of Multiple Mobile Robots. In Proceedings of the Second Engineering Conference of Control, Computers and Mechatronics Engineering, Baghdad, Iraq, February 2014; pp. 239–247.
  15. Tarasewich, P.; McMullen, P. Swarm intelligence: Power in numbers. Commun. ACM 2002, 45, 62–67.
  16. Crepinsek, M.; Liu, S.; Mernik, M. Exploration and exploitation in evolutionary algorithms: A survey. ACM Comput. Surv. 2013, 45, 35.
  17. Chen, J.; Xin, B.; Peng, Z.; Dou, L.; Zhang, J. Optimal contraction theorem for exploration and exploitation tradeoff in search and optimization. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2009, 39, 680–691.
  18. Wolpert, D.; Macready, W. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
  19. Yang, X.S. Firefly algorithms for multimodal optimisation. In Proceedings of the Fifth Symposium on Stochastic Algorithms, Foundations and Applications, Sapporo, Japan, 26–28 October 2009; Watanabe, O., Zeugmann, T., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5792, pp. 169–178.
  20. Yang, X.S.; Deb, S. Cuckoo Search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 210–214.
  21. Yang, X.S. A new meta-heuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NISCO 2010); Studies in Computational Intelligence; Springer: Berlin, Germany, 2010; pp. 65–74.
  22. Yang, X.S. Flower pollination algorithm for global optimization. In Unconventional Computation and Natural Computation; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7445, pp. 240–249.
  23. Yang, X.S. Nature-Inspired Optimization Algorithms, 1st ed.; Elsevier: London, UK, 2014.
  24. Mirjalili, S.A.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  25. Mirjalili, S.A. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249.
  26. Mirjalili, S.A. The Ant Lion Optimizer. Adv. Eng. Softw. 2015, 83, 80–98.
  27. Mirjalili, S.A. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133.
  28. Mirjalili, S.A.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  29. Lee, Y.; Marvin, A.; Porter, S. Genetic algorithm using real parameters for array antenna design optimization. In Proceedings of the 1999 High Frequency Postgraduate Student Colloquium (Cat. No.99TH840); IEEE: Piscataway, NJ, USA, 1999; pp. 8–13.
  30. Bessaou, M.; Siarry, P. A genetic algorithm with real-value coding to optimize multimodal continuous functions. Struct. Multidiscip. Optim. 2001, 23, 63–74.
Figure 1. Graph of the ALLAWI test function for $d = 2$ and $\varepsilon = 2.22 \times 10^{-16}$.
Figure 2. Convergence charts for the ten benchmark functions. (a) F1, (b) F2, (c) F3, (d) F4, (e) F5, (f) F6, (g) F7, (h) F8, (i) F9, and (j) F10.
Figure 3. Computation time distribution for the ten benchmark functions. (a) F1, (b) F2, (c) F3, (d) F4, (e) F5, (f) F6, (g) F7, (h) F8, (i) F9, and (j) F10.
Table 1. List of nine benchmark test functions used in global optimization.

Fn. | Function | Formula | Range | Optimum
F1 | SPHERE | $\sum_{i=1}^{d} x_i^2$ | $|x_i| < 5$ | $f(\mathbf{0}) = 0$
F2 | ELLIPSOID | $\sum_{i=1}^{d} i\,x_i^2$ | $|x_i| < 5$ | $f(\mathbf{0}) = 0$
F3 | EXPONENTIAL | $1 - \exp\left(-0.5 \sum_{i=1}^{d} x_i^2\right)$ | $|x_i| < 5$ | $f(\mathbf{0}) = 0$
F4 | ROSENBROCK | $\sum_{i=1}^{d-1} \left[100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2\right]$ | $|x_i| < 2$ | $f(\mathbf{1}) = 0$
F5 | RASTRIGIN | $10d + \sum_{i=1}^{d} (x_i^2 - 10 \cos 2\pi x_i)$ | $|x_i| < 5$ | $f(\mathbf{0}) = 0$
F6 | SCHWEFEL | $418.983\,d - \sum_{i=1}^{d} (x_i + 420.968) \sin \sqrt{|x_i + 420.968|}$ | $|x_i| < 100$ | $f(\mathbf{0}) = 0$
F7 | GRIEWANK | $\sum_{i=1}^{d} \frac{x_i^2}{4000} - \prod_{i=1}^{d} \cos \frac{x_i}{\sqrt{i}} + 1$ | $|x_i| < 600$ | $f(\mathbf{0}) = 0$
F8 | ACKLEY | $-20 \exp\left(-0.2 \sqrt{\frac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\frac{1}{d}\sum_{i=1}^{d} \cos 2\pi x_i\right) + e + 20$ | $|x_i| < 32$ | $f(\mathbf{0}) = 0$
F9 | SCHAFFER | $\sum_{i=1}^{d-1} \left[0.5 + \frac{\sin^2(x_i^2 - x_{i+1}^2) - 0.5}{\left(1 + 0.001 (x_i^2 + x_{i+1}^2)\right)^2}\right]$ | $|x_i| < 100$ | $f(\mathbf{0}) = 0$
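For reference, a minimal NumPy sketch of four of these functions, following the formulas in Table 1 (function names are ours):

import numpy as np

def sphere(x):      # F1
    x = np.asarray(x, float)
    return float(np.sum(x**2))

def rastrigin(x):   # F5
    x = np.asarray(x, float)
    return float(10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

def griewank(x):    # F7
    x = np.asarray(x, float)
    i = np.arange(1, x.size + 1)
    return float(np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1)

def ackley(x):      # F8
    x = np.asarray(x, float)
    d = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / d))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + np.e + 20)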
Table 2. The introduced benchmark test function.

Fn. | Function | Formula | Range | Optimum
F10 | ALLAWI | $\sum_{i=1}^{d} \left(x_i^6 - 2(\varepsilon + 1) x_i^4 + (4\varepsilon + 1) x_i^2\right), \; 0 < \varepsilon \ll 1$ | $|x_i| < 2$ | $f(\mathbf{0}) = 0$
Table 3. Results of the global fitness and computation time (s) for a sample trial. DGA: genetic algorithm (decimal form); RGA: genetic algorithm (real form); PSO: particle swarm optimization; DEA: differential evolution algorithm; ABC: artificial bee colony; FTMA: fine-tuning meta-heuristic algorithm.

Fn. | DGA (Fitness, Time) | RGA (Fitness, Time) | PSO (Fitness, Time) | DEA (Fitness, Time) | ABC (Fitness, Time) | FTMA (Fitness, Time)
F1 | 1.06 × 10⁻¹⁶, 1.05 | 1.64 × 10⁻¹⁶, 0.34 | 1.66 × 10⁻¹⁶, 0.46 | 3.01 × 10⁻¹⁷, 0.27 | 7.50 × 10⁻¹⁷, 0.30 | 5.80 × 10⁻¹⁷, 0.12
F2 | 1.07 × 10⁻¹⁶, 1.39 | 5.10 × 10⁻¹⁶, 8.88 | 1.16 × 10⁻¹⁷, 0.69 | 9.28 × 10⁻¹⁷, 0.40 | 2.15 × 10⁻¹⁷, 0.45 | 1.51 × 10⁻¹⁷, 0.13
F3 | 2.22 × 10⁻¹⁶, 1.02 | 2.22 × 10⁻¹⁶, 0.28 | 2.22 × 10⁻¹⁶, 0.38 | 1.11 × 10⁻¹⁶, 0.25 | 1.11 × 10⁻¹⁶, 0.35 | 1.11 × 10⁻¹⁶, 0.10
F4 | 1.95 × 10⁻⁶, 6.96 | 1.20 × 10⁻⁵, 4.84 | 1.97 × 10⁻¹⁶, 0.46 | 1.63 × 10⁻¹⁶, 0.46 | 4.37 × 10⁻⁷, 2.52 | 7.05 × 10⁻¹⁷, 0.41
F5 | 0, 1.18 | 0, 0.50 | 0, 0.67 | 5.22 × 10⁻⁶, 4.85 | 0, 0.65 | 0, 0.18
F6 | 0, 1.00 | 0, 0.46 | 0, 0.58 | 0, 0.28 | 0, 0.633 | 0, 0.15
F7 | 2.22 × 10⁻¹⁶, 1.23 | 0, 0.53 | 0, 0.71 | 5.33 × 10⁻⁹, 5.41 | 1.11 × 10⁻¹⁶, 2.15 | 0, 0.21
F8 | 0, 2.67 | 1.34 × 10⁻¹², 7.02 | 0, 0.83 | 0, 0.65 | 0, 0.75 | 0, 0.30
F9 | 0, 0.58 | 2.22 × 10⁻¹⁶, 0.20 | 0, 0.31 | 0, 0.43 | 2.22 × 10⁻¹⁶, 0.66 | 0, 0.09
F10 | 4.44 × 10⁻¹⁶, 8.01 | 1.18 × 10⁻¹⁶, 0.29 | 1.76 × 10⁻¹⁶, 0.51 | 1.70 × 10⁻¹³, 4.19 | 1.54 × 10⁻¹⁶, 0.67 | 2.12 × 10⁻¹⁶, 0.11
Table 4. Statistical results of the output error for 30 trials.

Fn. | DGA (m., Std.) | RGA (m., Std.) | PSO (m., Std.) | DEA (m., Std.) | ABC (m., Std.) | FTMA (m., Std.)
F1 | 5.94 × 10⁻¹⁷, 5.84 × 10⁻¹⁷ | 5.10 × 10⁻¹⁵, 2.43 × 10⁻¹⁴ | 1.03 × 10⁻¹⁶, 6.23 × 10⁻¹⁷ | 9.13 × 10⁻¹⁷, 5.75 × 10⁻¹⁷ | 1.00 × 10⁻¹⁶, 6.80 × 10⁻¹⁷ | 7.84 × 10⁻¹⁷, 5.35 × 10⁻¹⁷
F2 | 6.29 × 10⁻¹⁷, 5.46 × 10⁻¹⁷ | 2.20 × 10⁻¹⁶, 7.04 × 10⁻¹⁶ | 7.74 × 10⁻¹⁷, 5.98 × 10⁻¹⁷ | 1.05 × 10⁻¹⁶, 6.76 × 10⁻¹⁷ | 8.78 × 10⁻¹⁷, 6.24 × 10⁻¹⁷ | 9.95 × 10⁻¹⁷, 6.52 × 10⁻¹⁷
F3 | 1.29 × 10⁻¹⁶, 9.96 × 10⁻¹⁷ | 2.77 × 10⁻¹⁵, 1.31 × 10⁻¹⁴ | 1.36 × 10⁻¹⁶, 7.94 × 10⁻¹⁷ | 1.25 × 10⁻¹⁶, 7.97 × 10⁻¹⁷ | 1.14 × 10⁻¹⁶, 7.29 × 10⁻¹⁷ | 1.03 × 10⁻¹⁶, 8.56 × 10⁻¹⁷
F4 | 2.69 × 10⁻⁶, 8.17 × 10⁻⁶ | 3.51 × 10⁻⁵, 0.0001 | 1.07 × 10⁻¹⁶, 6.38 × 10⁻¹⁷ | 1.09 × 10⁻¹⁶, 6.92 × 10⁻¹⁷ | 1.05 × 10⁻⁶, 1.85 × 10⁻⁶ | 1.25 × 10⁻¹⁶, 6.43 × 10⁻¹⁷
F5 | 0, 0 | 8.52 × 10⁻¹⁵, 4.52 × 10⁻¹⁴ | 0, 0 | 3.99 × 10⁻⁷, 4.84 × 10⁻⁷ | 0, 0 | 0, 0
F6 | 0, 0 | 0, 0 | 0, 0 | 0, 0 | 0, 0 | 0, 0
F7 | 1.18 × 10⁻¹⁶, 1.03 × 10⁻¹⁶ | 1.33 × 10⁻¹⁶, 8.78 × 10⁻¹⁷ | 1.03 × 10⁻¹⁶, 8.56 × 10⁻¹⁷ | 1.50 × 10⁻⁷, 1.96 × 10⁻⁷ | 1.11 × 10⁻¹⁶, 9.50 × 10⁻¹⁷ | 1.03 × 10⁻¹⁶, 8.56 × 10⁻¹⁷
F8 | 2.36 × 10⁻¹⁶, 8.86 × 10⁻¹⁶ | 1.59 × 10⁻⁸, 4.54 × 10⁻⁸ | 0, 0 | 0, 0 | 0, 0 | 0, 0
F9 | 1.03 × 10⁻¹⁶, 1.10 × 10⁻¹⁶ | 1.55 × 10⁻¹⁶, 1.01 × 10⁻¹⁶ | 1.25 × 10⁻¹⁶, 1.10 × 10⁻¹⁶ | 1.55 × 10⁻¹⁶, 1.01 × 10⁻¹⁶ | 1.48 × 10⁻¹⁶, 1.04 × 10⁻¹⁶ | 1.40 × 10⁻¹⁶, 1.07 × 10⁻¹⁶
F10 | 6.56 × 10⁻¹⁶, 9.14 × 10⁻¹⁷ | 1.19 × 10⁻¹⁶, 1.11 × 10⁻¹⁶ | 3.73 × 10⁻¹⁶, 3.01 × 10⁻¹⁶ | 1.41 × 10⁻⁹, 6.34 × 10⁻⁹ | 1.04 × 10⁻¹⁶, 6.07 × 10⁻¹⁷ | 9.80 × 10⁻¹⁷, 5.83 × 10⁻¹⁷
Table 5. Statistical results of the computation time (seconds) for 30 trials.

Fn. | DGA (m., Std.) | RGA (m., Std.) | PSO (m., Std.) | DEA (m., Std.) | ABC (m., Std.) | FTMA (m., Std.)
F1 | 1.085, 0.221 | 0.566, 1.091 | 0.546, 0.072 | 0.315, 0.047 | 0.364, 0.040 | 0.125, 0.021
F2 | 1.423, 0.221 | 2.389, 3.780 | 0.754, 0.103 | 0.413, 0.052 | 0.490, 0.063 | 0.159, 0.026
F3 | 1.021, 0.219 | 1.215, 2.224 | 0.525, 0.075 | 0.291, 0.034 | 0.353, 0.067 | 0.166, 0.015
F4 | 6.299, 2.660 | 5.747, 0.658 | 0.518, 0.087 | 0.500, 0.076 | 2.920, 0.410 | 0.449, 0.126
F5 | 1.275, 0.247 | 1.788, 2.561 | 0.687, 0.091 | 5.200, 0.500 | 0.733, 0.084 | 0.203, 0.033
F6 | 1.122, 0.202 | 0.365, 0.037 | 0.617, 0.030 | 0.312, 0.032 | 0.372, 0.032 | 0.129, 0.018
F7 | 1.361, 0.290 | 0.686, 0.110 | 0.714, 0.030 | 5.783, 0.111 | 2.100, 0.241 | 0.213, 0.028
F8 | 5.055, 2.518 | 7.832, 0.124 | 0.984, 0.147 | 0.691, 0.035 | 0.811, 0.059 | 0.307, 0.026
F9 | 0.773, 0.139 | 0.404, 0.761 | 0.343, 0.054 | 0.478, 0.051 | 0.765, 0.092 | 0.119, 0.022
F10 | 2.288, 3.154 | 0.785, 1.188 | 2.460, 1.585 | 4.583, 0.300 | 0.643, 0.110 | 0.124, 0.055
