Article

An Optimization Algorithm Inspired by the Phase Transition Phenomenon for Global Optimization Problems with Continuous Variables

1 School of Computer Science and Engineering, Xi’an University of Technology, Xi’an 710048, China
2 School of Computer Science and Engineering, Xi’an Technological University, Xi’an 710021, China
* Author to whom correspondence should be addressed.
Algorithms 2017, 10(4), 119; https://doi.org/10.3390/a10040119
Submission received: 8 September 2017 / Revised: 29 September 2017 / Accepted: 9 October 2017 / Published: 20 October 2017

Abstract

In this paper, we propose a novel nature-inspired meta-heuristic algorithm for continuous global optimization, named the phase transition-based optimization algorithm (PTBO). It mimics three completely different kinds of motion characteristics of elements in three different phases: the unstable phase, the meta-stable phase, and the stable phase. Three corresponding operators are designed in the proposed algorithm: the stochastic operator of the unstable phase, the shrinkage operator of the meta-stable phase, and the vibration operator of the stable phase. In PTBO, elements in the three different phases dynamically execute different search tasks according to their phase in each generation. This enables PTBO not only to explore the search space widely, but also to exploit promising regions quickly. Numerical experiments are carried out on the twenty-eight functions of the CEC 2013 benchmark suite. The simulation results demonstrate its better performance compared with that of other state-of-the-art optimization algorithms.

1. Introduction

Nature-inspired optimization algorithms have attracted increasing interest from researchers in optimization fields in recent years [1]. After half a century of development, nature-inspired optimization algorithms have formed a great family. This family not only draws widely on biology, physics, and other basic sciences, but also involves many fields such as artificial intelligence, artificial life, and computer science.
According to the natural phenomena they imitate, these optimization algorithms can be roughly divided into three types: algorithms based on biological evolution, algorithms based on swarm behavior, and algorithms based on physical phenomena. Typical biological evolutionary algorithms are Evolutionary Strategies (ES) [2], Evolutionary Programming (EP) [3], the Genetic Algorithm (GA) [4,5,6], Genetic Programming (GP) [7], Differential Evolution (DE) [8], the Backtracking Search Algorithm (BSA) [9], Biogeography-Based Optimization (BBO) [10,11], and the Differential Search Algorithm (DSA) [12].
In the last decade, swarm intelligence, as a branch of intelligent computation models, has been gradually rising [13]. Swarm intelligence algorithms mainly simulate biological habits or behavior, including foraging behavior, search behavior and migratory behavior, brooding behavior, and mating behavior. Inspired by these phenomena, researchers have designed many intelligent algorithms, such as Ant Colony Optimization (ACO) [14], Particle Swarm Optimization (PSO) [15], Bacterial Foraging (BFA) [16], Artificial Bee Colony (ABC) [17,18,19], Group Search Optimization (GSO) [20], Cuckoo Search (CS) [21,22], Seeker Optimization (SOA) [23], the Bat Algorithm (BA) [24], Bird Mating Optimization (BMO) [25], Brain Storm Optimization [26,27] and the Grey Wolf Optimizer (GWO) [28].
In recent years, in addition to the above two kinds of algorithms, intelligent algorithms simulating physical phenomenon have also attracted a great deal of researchers’ attention, such as the Gravitational Search Algorithm (GSA) [29], the Harmony Search Algorithm (HSA) [30], the Water Cycle Algorithm (WCA) [31], the Intelligent Water Drops Algorithm (IWDA) [32], Water Wave Optimization (WWO) [33], and States of Matter Search (SMS) [34]. The classical optimization algorithm about a physical phenomenon is simulated annealing, which is based on the annealing process of metal [35,36].
Although these meta-heuristic algorithms have many advantages over traditional algorithms, especially on NP-hard problems, and have been proposed to tackle many challenging complex optimization problems in science and industry, no single approach is optimal for all optimization problems [37]. In other words, an approach that is suitable for one class of problems may be unsuitable for another. This is especially true as global optimization problems have become more and more complex, from simple uni-modal functions to hybrid rotated and shifted multimodal functions [38]. Hence, more innovative and effective optimization algorithms are always needed.
Natural phenomena provide a rich source of inspiration for researchers to develop a diverse range of optimization algorithms with different degrees of usefulness and practicability. In this paper, a new meta-heuristic optimization algorithm for continuous global optimization, inspired by the phase transition phenomenon of elements in a natural system, is proposed; it is termed phase transition-based optimization (PTBO). From a statistical mechanics point of view, a phase transition is a non-trivial macroscopic form of collective behavior in a system composed of a number of elements that follow simple microscopic laws [39]. Phase transitions play an important role in the probabilistic analysis of combinatorial optimization problems, and have been successfully applied to Random Graph, Satisfiability, and Traveling Salesman Problems [40]. However, there are few algorithms in the literature that use the mechanism of phase transition for solving continuous global optimization problems.
Phase transition is ubiquitous in nature and in social life. For example, at atmospheric pressure, water boils at a critical temperature of 100 °C: below 100 °C, water is a liquid, while above 100 °C it is a gas. There are many other examples of phase transition. Magnets attract nails made of iron, but this attraction disappears when the temperature of the nail is raised above 770 °C, at which point the nail enters a paramagnetic phase. Moreover, mercury is well known as a good conductor owing to its low resistance; nevertheless, its electrical resistance drops abruptly to zero when the temperature falls below 4.2 K. From a macroscopic view, a phase transition is the transformation of a system from one phase to another, depending on the values of control parameters such as temperature, pressure, and other outside influences.
From the above examples, we can see that each system or form of matter has different phases and related critical points between those phases. We can also observe that all phase transitions obey a common law: a phase transition is the competitive result of two tendencies, stable order and unstable disorder. In complex system theory, phase transition is related to the self-organized process by which a system transforms from disorder to order. From this point of view, phase transition implies a search process of optimization. The thermal motion of an element is a source of disorder [41]; the interaction of elements is the cause of order.
In the proposed PTBO algorithm, the degree of order or disorder is described by stability. We use the value of an objective function to depict the level of stability. For the sake of generality, we extract three kinds of phases in the process of transition from disorder to order, i.e., an unstable phase, a meta-stable phase [42], and a stable phase. From a microscopic viewpoint, the diverse motion characteristics of elements in the different phases provide us with novel inspiration to develop a new meta-heuristic algorithm for solving complex continuous optimization problems.
This paper is organized as follows. In Section 2, we briefly introduce the prerequisite preparation for the phase transition-based optimization algorithm. In Section 3, the phase transition-based optimization algorithm and its operators are described. An analysis of PTBO and a comparative study are presented in Section 4. In Section 5, the experimental results are compared with those of other state-of-the-art algorithms. Finally, Section 6 draws conclusions.

2. Prerequisite Preparation

2.1. Fundamental Concepts

Nature itself is a large complex system. In a natural system, the motion of elements transforming from an unstable phase (disorder) to a stable phase (order) is an eternal natural law. In this paper, we uniformly call a molecule in water, an atom in iron, an electron in mercury, etc., an element. The phase of an element in a system may be divided into an unstable phase, a meta-stable phase, and a stable phase. Figure 1 shows three possible positions of elements in a system.
From Figure 1, we can observe that an element is most unstable in the position of point A, and we intuitively call it an unstable phase. On the contrary, in the position of point C, the element is most stable, and we name it a stable phase. The position of point B, between point A and point C, is the transition phase, and we term it a meta-stable phase. The definitions of the three phases are as follows.
Definition 1.
Unstable Phase (UP). The element is in a phase of complete disorder, and moves freely towards an arbitrary direction. In the case of this phase, the element has a large range of motion, and has the ability of global divergence.
Definition 2.
Meta-stable Phase (MP). The element is in a phase between disorder and order, and moves according to a certain law, such as towards the lowest point. In the case of this phase, the element has a moderate activity, and possesses the ability of local shrinkage.
Definition 3.
Stable Phase (SP). The element is in a very regular phase of orderly motion. In the case of this phase, the element has a very small range of activity and has the ability of fine tuning.
According to the above definitions, we can give a more detailed description and examples of the characteristics of the three phases. These motion characteristics in the three phases provide us with rich potential to develop the proposed PTBO algorithm. Table 1 summarizes the motion characteristics of the unstable phase, the meta-stable phase, and the stable phase.

2.2. The Determination of Critical Interval about Three Phases

As mentioned before, we use stability, which is depicted by the fitness value of an objective function, to describe the degree of order or disorder of an element. In our proposed algorithm, the higher the fitness value of an element is, the worse the stability is. The question of how to divide the critical intervals of the unstable phase, the meta-stable phase, and the stable phase is a primary problem that must be addressed before the proposed algorithm is designed.
For simplicity, we use F_max to denote the maximum fitness value, at which an element is in the most unstable phase. On the contrary, F_min denotes the minimum fitness value. We then set the proportion of the stable phase to alpha and the proportion of the meta-stable phase to beta, so the proportion of the unstable phase is 1 − alpha − beta. The ratio of the three critical intervals is shown in Figure 2.
Although F_max and F_min change dynamically in each generation as the phase transition iterates, the basic relationship between the phase of an element and the critical intervals is shown in Table 2.
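To make the partition concrete, here is a minimal sketch in Python of how an element's phase could be assigned from its fitness value. The thresholds follow one plausible reading of Figure 2 and Table 2 (lowest-fitness fraction alpha is stable, next fraction beta is meta-stable, remainder is unstable); the function name, labels, and default values are our assumptions.

```python
def assign_phase(fitness, f_min, f_max, alpha=0.1, beta=0.8):
    """Label an element's phase from its fitness (minimization).

    Hypothetical sketch: the stable phase (SP) occupies the lowest
    fraction alpha of [f_min, f_max], the meta-stable phase (MP) the
    next fraction beta, and the unstable phase (UP) the remaining
    1 - alpha - beta.
    """
    if f_max == f_min:
        return "SP"  # degenerate population: treat everyone as stable
    r = (fitness - f_min) / (f_max - f_min)  # normalized stability level
    if r <= alpha:
        return "SP"
    elif r <= alpha + beta:
        return "MP"
    return "UP"
```

With alpha = 0.1 and beta = 0.8, an element whose fitness sits in the lowest 10% of the current range is stable, the next 80% is meta-stable, and the worst 10% is unstable.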

3. Phase Transition-Based Optimization Algorithm

3.1. Basic Idea of the PTBO Algorithm

In this work, the motion of elements from an unstable phase to a relatively more stable phase plays the same role in PTBO as natural selection does in GA. Many such transitions from an unstable phase to a more stable phase eventually drive an element to absolute stability. The diverse motion characteristics of elements in the three phases are the core of the PTBO algorithm for simulating this phase transition process. In the PTBO algorithm, three corresponding operators are designed, and an appropriate combination of the three operators yields an effective search for the global minimum in the solution space.

3.2. The Correspondence of PTBO and the Phase Transition Process

Based on the basic law of elements transitioning from an unstable phase (disorder) to a stable phase (order), the correspondence of PTBO and phase transition can be summarized in Table 3.

3.3. The Overall Design of the PTBO Algorithm

The simplified cyclic diagram of the phase transition process in our PTBO algorithm is shown in Figure 3. It is a complete cyclic process of phase transition from an unstable phase to a stable phase. Firstly, in each generation, we calculate the maximum and minimum fitness values of the elements and divide the critical intervals of the unstable phase, the meta-stable phase, and the stable phase according to the rules in Table 2. Secondly, each element performs the relevant search according to its own phase. If the new degree of stability is better than that of the original phase, the motion is kept; otherwise, the operation is abandoned. That is to say, if the original phase of an element is UP, it may move towards UP, MP, or SP; if the original phase is MP, it may move towards MP or SP; and if the original phase is SP, it may move only within SP. Finally, after many iterations, elements will eventually reach absolute stability.
Broadly speaking, we may think of PTBO as an algorithmic framework. We simply define the general operations of the whole algorithm about the motion of elements in the phase transition. In a word, PTBO is flexible for skilled users to customize it according to a specific scene of phase transition.
According to the above complete cyclic process of the phase transition, the whole operating process of PTBO can be summarized as three procedures: population initialization, iterations of three operators, and individual selection. The three operators in the iterations include the stochastic operator, the shrinkage operator, and the vibration operator.

3.3.1. Population Initialization

PTBO is also a population-based meta-heuristic algorithm. Like other evolutionary algorithms (EAs), PTBO starts with the initialization of a population containing N element individuals. The current generation evolves into the next generation through the three operators described below (see Section 3.3.2); that is, the population continually evolves from generation to generation until the termination condition is met. Here, we initialize the j-th dimensional component of the i-th individual as
X_ij = X_j^min + rand * (X_j^max − X_j^min)    (1)
where rand is a uniformly distributed random number between 0 and 1, and X_j^max and X_j^min are the upper and lower boundaries of the j-th dimension of each individual, respectively.
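As an illustration, Formula (1) transcribes directly into code; the function name and the list-based population representation are our own choices.

```python
import random

def init_population(n, d, lower, upper):
    """Initialize n individuals of dimension d via Formula (1).

    lower/upper are per-dimension bound lists; each component is drawn
    uniformly as lower[j] + rand * (upper[j] - lower[j]).
    """
    return [[lower[j] + random.random() * (upper[j] - lower[j])
             for j in range(d)]
            for _ in range(n)]
```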

3.3.2. Iterations of the Three Operators

Now we simply give some certain implementation details about the three operators in the three different phases.
(1) Stochastic operator
Stochastic diffusion is a common behavior in which elements randomly move and pass one another in an unstable phase. Although the movement of elements is chaotic, it obeys a certain law from a statistical point of view. We can use the mean free path [43], the distance an element travels between two successive collisions, to represent the stochastic motion characteristic of elements in an unstable phase. Figure 4 shows the free walking path of elements.
Modeling the two successive collisions as interactions with two other randomly chosen individuals, the stochastic operator of elements may be implemented as follows:
newX_i = X_i + rand1 * (X_j − X_i) + rand2 * (X_k − X_i)    (2)
where newX_i is the new position of X_i after the stochastic motion, rand1 and rand2 are two random vectors whose components are random numbers in the range (0, 1), and the indices j and k are mutually exclusive integers randomly chosen from the range 1 to N that are also different from the index i.
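A possible Python sketch of Formula (2); the helper name is hypothetical, and following the text, each dimension uses a fresh random number and the two peer indices are distinct from each other and from i.

```python
import random

def stochastic_operator(pop, i):
    """Formula (2): move element i by random steps toward two peers.

    j and k are distinct random indices, both different from i;
    a fresh uniform number is drawn per dimension for each term.
    """
    n, d = len(pop), len(pop[0])
    j, k = random.sample([idx for idx in range(n) if idx != i], 2)
    return [pop[i][t]
            + random.random() * (pop[j][t] - pop[i][t])
            + random.random() * (pop[k][t] - pop[i][t])
            for t in range(d)]
```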
(2) Shrinkage operator
In a meta-stable phase, an element will be inclined to move closer to the optimal one. From a statistical standpoint, the geometric center is a very important digital characteristic and represents the shrinkage trend of elements in a certain degree. Figure 5 briefly gives the shrinkage trend of elements towards the optimal point.
Hence, gradual shrinkage towards the central position is the best motion for elements in a meta-stable phase. So, the shrinkage operator of elements may be implemented as follows:
newX_i = newX_i + (X_gb − X_i) * N(0, 1)    (3)
where newX_i is the new position of X_i after the shrinkage operation, X_gb is the best individual in the population, and N(0, 1) is a normally distributed random number with mean 0 and standard deviation 1. The normal distribution is an important family of continuous probability distributions applied in many fields.
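Formula (3) might be sketched as follows, drawing one standard normal number per dimension (a per-dimension draw is our assumption). Note that when X_gb coincides with X_i the element is left unchanged.

```python
import random

def shrinkage_operator(new_x, x_i, x_gb):
    """Formula (3): pull the element toward the global best.

    The step uses (x_gb - x_i), the offset from the element's original
    position, scaled by a standard normal draw per dimension.
    """
    return [new_x[t] + (x_gb[t] - x_i[t]) * random.gauss(0.0, 1.0)
            for t in range(len(new_x))]
```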
(3) Vibration operator
Elements in a stable phase will be apt to only vibrate about their equilibrium positions. Figure 6 briefly shows the vibration of elements.
Hence, the vibration operator of elements may be implemented as follows:
newX_i = newX_i + (2 * rand − 1) * stepSize    (4)
where newX_i is the new position of X_i after the vibration operation, rand is a uniformly distributed random number in the range (0, 1), and stepSize is a control parameter that regulates the amplitude of the jitter as the evolutionary generations proceed. As the phase transition evolves, the amplitude of vibration gradually becomes smaller. stepSize is defined as follows:
stepSize = exp(1 − G / (G − g + 1))    (5)
where G and g denote the maximum number of iterations and the current iteration number, respectively, and exp(·) stands for the exponential function.
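Formulas (4) and (5) together might look like this; the function names are hypothetical, and Formula (5) is taken as reconstructed above, so the amplitude is near 1 early in the run and decays toward exp(1 − G) as g approaches G.

```python
import math
import random

def step_size(G, g):
    """Formula (5): vibration amplitude that decays over the run."""
    return math.exp(1.0 - G / (G - g + 1))

def vibration_operator(new_x, G, g):
    """Formula (4): jitter each dimension within [-stepSize, +stepSize]."""
    s = step_size(G, g)
    return [x + (2.0 * random.random() - 1.0) * s for x in new_x]
```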

3.3.3. Individual Selection

In the PTBO algorithm, as in other EAs, one-to-one greedy selection is employed by comparing each parent individual with its newly generated offspring. In addition, this greedy selection strategy may preserve more diversity than other strategies, such as tournament selection and rank-based selection. The selection operation at the k-th generation is described as follows:
X_i^(k+1) = { X_i^k,    if f(newX_i^k) > f(X_i^k);  newX_i^k,    if f(newX_i^k) <= f(X_i^k) }    (6)
where f(X) is the objective function value of an individual.
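The greedy selection of Formula (6) can be sketched as a one-liner for minimization; ties go to the offspring, as the <= in the formula indicates.

```python
def select(x, new_x, f):
    """Formula (6): one-to-one greedy selection for minimization.

    Keep the offspring iff it is at least as good as the parent.
    """
    return new_x if f(new_x) <= f(x) else x
```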

3.4. Flowchart and Implementation Steps of PTBO

As described above, the main flowchart of the PTBO algorithm is given in Figure 7.
The implementation steps of PTBO are summarized as follows:
Step 1. Initialization: set up algorithm parameters N, D, alpha and beta, randomly generate initial population of elements, and set g = 0 ;
Step 2. Evaluation and partition interval: calculate the fitness values of all individuals and obtain the F max and F min , and divide the critical interval of UP, MP and SP according to Table 2;
Step 3. Stochastic operator: using Formula (2) to create n e w X i ;
Step 4. Shrinkage operator: using Formula (3) to update n e w X i ;
Step 5. Vibration operator: using Formulas (4) and (5) to update n e w X i ;
Step 6. Individual selection: accept n e w X i if f ( n e w X i ) is better than f ( X i ) ;
Step 7. Termination judgment: if termination condition is satisfied, stop the algorithm; otherwise, g = g + 1 , go to Step 3.
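Putting Steps 1–7 together, a minimal self-contained sketch of the whole loop might look like this. It is an illustrative reading, not the authors' reference implementation: here each element applies only the operator of its own phase in a generation, the phase intervals follow the alpha/beta split of Section 2.2, and details such as scalar bounds and boundary clamping are our assumptions.

```python
import math
import random

def ptbo(f, d, lower, upper, n=30, G=200, alpha=0.1, beta=0.8):
    """Hypothetical sketch of the PTBO loop (minimization)."""
    # Step 1: random initial population within scalar bounds
    pop = [[lower + random.random() * (upper - lower) for _ in range(d)]
           for _ in range(n)]
    fit = [f(x) for x in pop]
    for g in range(G):
        # Step 2: evaluate spread and locate the best individual
        f_min, f_max = min(fit), max(fit)
        gb = pop[fit.index(f_min)][:]
        step = math.exp(1.0 - G / (G - g + 1))      # Formula (5)
        for i in range(n):
            spread = f_max - f_min
            r = 0.0 if spread == 0 else (fit[i] - f_min) / spread
            if r > alpha + beta:                    # unstable phase, Eq. (2)
                j, k = random.sample([t for t in range(n) if t != i], 2)
                new = [pop[i][t]
                       + random.random() * (pop[j][t] - pop[i][t])
                       + random.random() * (pop[k][t] - pop[i][t])
                       for t in range(d)]
            elif r > alpha:                         # meta-stable phase, Eq. (3)
                new = [pop[i][t] + (gb[t] - pop[i][t]) * random.gauss(0.0, 1.0)
                       for t in range(d)]
            else:                                   # stable phase, Eq. (4)
                new = [pop[i][t] + (2.0 * random.random() - 1.0) * step
                       for t in range(d)]
            new = [min(max(v, lower), upper) for v in new]  # clamp to bounds
            fn = f(new)
            if fn <= fit[i]:                        # Step 6: greedy selection
                pop[i], fit[i] = new, fn
    b = min(range(n), key=lambda i: fit[i])
    return pop[b], fit[b]
```

On an easy test function such as the sphere, this sketch converges quickly because the greedy selection never accepts a worse position.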

4. The Analysis of PTBO and a Comparative Study

4.1. The Analysis of Time Complexity

For PTBO, the main operations include the operation of population initialization and the stochastic operator, shrinkage operator, and vibration operator. The time complexity of each operation in a single iteration can be computed as follows:
(1) Population initialization operation: O(N*D).
(2) Stochastic operator: O(N*D).
(3) Shrinkage operator: O(N*D).
(4) Vibration operator: O(N*D).
Therefore, the total worst-case time complexity of PTBO in one iteration is 3 * O(N*D) + O(N). According to the operational rules of the symbol O, the worst-case time complexity of one iteration of PTBO can be simplified to O(N*D). It is worth noting that PTBO has a time complexity similar to that of some popular meta-heuristic algorithms such as PSO (O(N*D)).

4.2. The Dynamic Implementation Analysis of PTBO

In this section, the step-wise procedure for the implementation of PTBO is presented. For the demonstration of the process, Rastrigin’s function [44] is considered as an example. Rastrigin’s function is a classic test function in optimization theory in which the global minimum is surrounded by a large number of local minima. Converging to the global minimum without becoming stuck at one of these local minima is extremely difficult, and some numerical solvers take a long time to do so. A three-dimensional contour plot of Rastrigin’s function is shown in Figure 8a. Rastrigin’s function is described as follows:
F(x) = Σ_{i=1}^{D} (x_i^2 − 10 cos(2π x_i) + 10)    (7)
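For reference, Formula (7) is straightforward to implement; the global minimum is F(0, …, 0) = 0.

```python
import math

def rastrigin(x):
    """Formula (7): Rastrigin's function, minimized at the origin."""
    return sum(t * t - 10.0 * math.cos(2.0 * math.pi * t) + 10.0 for t in x)
```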
In this experiment, we use 30 individuals to solve the above minimization problem, with D = 2, alpha = 0.1, and beta = 0.8; the population distribution at various generations of the evolutionary process is shown in Figure 8b–f. In Figure 8b–f, the red diamond markers represent the optimal point.
From Figure 8b–f, we can observe that the population distribution varies significantly across generations during the run. PTBO can effectively adapt to a time-varying search space or landscape.

4.3. The Differences between PTBO and Other Algorithms

4.3.1. The Differences between PTBO and PSO

Like PSO (Particle Swarm Optimization), PTBO is also introduced to deal with unconstrained global optimization problems with continuous variables. In the representational form of implementation, we can certainly think that PTBO is also based on a particle system in the same way as PSO. However, according to the overall design of the PTBO algorithm, there are some differences between PTBO and the classical PSO.
Firstly, in heuristic thought, PSO is inspired by biological behavior or habits, simulating animal swarm behavior such as fish schooling and bird flocking, while PTBO is inspired by the phase transition phenomenon of elements in nature. Secondly, in PSO, the direction of a particle is calculated from two best positions, p b e s t and g b e s t . In PTBO, however, the motion direction of an element is derived from two other randomly chosen elements that are distinct from each other and from the element itself; this may enhance the diversity of the population and help avoid premature convergence. Thirdly, in the design of the operators, each particle in PSO contains a velocity item, whereas in PTBO the concept of velocity does not exist. Besides, PSO uses the position information of p b e s t and g b e s t to update velocity or position, while PTBO uses only the position information of g b e s t ; the p b e s t positions of elements are not considered.

4.3.2. The Differences between PTBO and SMS

SMS (States of Matter Search) is a nature-inspired optimization algorithm based on the simulation of the states of matter phenomenon [34]. Specifically, SMS considers each state of matter at a different exploration–exploitation ratio by dividing the whole search process into three stages, i.e., gas, liquid, and solid.
Although the sources of inspiration for PTBO and SMS are similar, both being taken from physical phenomena involving states of matter, the evolution processes of PTBO and SMS are completely different. The evolution process of SMS is as follows. At first, all individuals in the population perform exploration in the mode of the gas state. Then, after 50% of the iterations, the search mode changes to the liquid state for 40% of the iterations, i.e., a search balancing exploration and exploitation. Finally, the evolutionary process enters the stage of exploitation (the solid state) for the last 10% of the iterations. However, in PTBO, the three phases coexist throughout the entire search process; in other words, elements in the three different phases dynamically execute different search tasks according to their phase in each generation. Hence, the implementations of the exploration–exploitation balance in PTBO and SMS are completely different. Besides, the operators of PTBO and SMS are also completely different. In summary, there are fundamental differences between PTBO and SMS.

5. Experimental Results

5.1. Benchmark Functions

In order to verify the effectiveness and robustness of the proposed PTBO algorithm, PTBO is applied in experimental simulation studies to find the global minimum of all 28 test functions of the CEC 2013 special session [44]. The CEC 2013 test suite, which improves on that of CEC 2005 [38], covers various types of function optimization and is summarized in Table 4.
The search range of all functions is between −100 and 100 in every dimension. These problems are shifted or rotated to increase their complexity, and are treated as black-box problems. The explicit equations of the problems are not allowed to be used. The test suite of Table 4 consists of five uni-modal functions (F01 to F05), 15 multimodal functions (F06 to F20) and eight composition functions (F21 to F28).

5.2. Parameters Determination of the Interval Ratio of PTBO

In our PTBO algorithm, two parameters, alpha and beta, must be set to determine the critical intervals of the three phases. In a natural system, we can observe that elements in the middle meta-stable phase account for the majority, while elements in the unstable and stable phases occupy only a small proportion. This phenomenon is consistent with the two-eight law (or the 1/5th rule) [2]. For simplicity, we give a value of 0.8 to beta, the proportion of elements in the meta-stable phase, so the elements in the unstable and stable phases account for 0.2 in total. The specific interval ratio settings of the three phases are shown in Table 5.
In order to determine which interval ratio is most suitable for the PTBO algorithm, we conducted comparative experiments with 50 independent runs according to Table 5. The compared results for the different interval ratios are listed in Table 6, where the mean values are listed in the first line and the standard deviations in the second line. We can intuitively observe that the ratio of prop4 yields the best accuracy compared with the other three ratios. Hence, in the subsequent experiments, we set beta to 0.8, with the unstable and stable phases sharing the remaining proportion of 0.2 at a random ratio.

5.3. Experimental Platform and Algorithms’ Parameter Settings

For a fair comparison, all of the experiments are conducted on the same machine with an Intel 3.4 GHz central processing unit (CPU) and 4 GB of memory. The operating system is Windows 7 with MATLAB 8.0.0.783 (R2012b).
On the test functions, we compare PTBO with classic PSO, DE, and six other recent popular meta-heuristic algorithms: BA, CS, BSO, WWO, WCA, and SMS. Of these six, BA, CS, and BSO belong to the second category, i.e., swarm intelligence, while WWO, WCA, and SMS belong to the third category, i.e., algorithms simulating physical phenomena. The parameters adopted for the PTBO algorithm and the compared algorithms are given in Table 7.

5.4. The Compared Experimental Results

Each of the experiments was repeated for 50 independent runs with different random seeds, and the average function values of the best solutions were recorded.

5.4.1. Comparisons on Solution Accuracy

The accuracy results of the uni-modal, multimodal, and composition functions are given in Table 8, Table 9 and Table 10, respectively. The accuracy results are reported in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluations (FEs) or 10,000 maximum generations. In all experiments, the dimensions of all problems are 30. The best results among the algorithms are shown in bold. In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line. A two-tailed t-test [45] is performed at the 0.05 significance level to evaluate whether the fitness values of two sets of obtained results are statistically different from each other. In the three tables below, if PTBO significantly outperforms another algorithm, a ‘ǂ’ is placed after the corresponding result obtained by that algorithm; ‘ξ’ denotes that PTBO is worse than the compared algorithm, and ‘~’ denotes that there is no significant difference between PTBO and the compared algorithm. In the last row of each table, the total numbers of ‘ǂ’, ‘ξ’, and ‘~’ are summarized.
(1) The accuracy results of uni-modal functions
The results of the uni-modal functions are shown in Table 8 in terms of the mean optimum solution and the standard deviation of the solutions.
In Table 8, among the five functions, PTBO has yielded the best results on two of them (F03 and F05). Although PTBO is worse than WCA, which obtains the best results on F01, F02, and F04, we observe from the statistical results that the performance of PTBO in uni-modal functions is significantly better than PSO, DE, BA, CS, BSO, WWO, and SMS.
(2) The accuracy results of multimodal functions
From Table 9, it can be observed that the mean value and standard deviation of PSO display the best results for function F15. DE obtains the best results on F07, F09, and F12, and BA has the best result for function F06. BSO has the best result for function F16, and WCA obtains the best results on F08 and F10. CS, WWO, and SMS do not obtain any best result except on function F08, on which there is no large difference among the algorithms. The PTBO algorithm performs well on functions F11, F13, F14, F17, F18, and F20, and according to the data in the last row of Table 9, it can be concluded that the PTBO algorithm achieves good solution accuracy on the multimodal benchmark functions.
(3) The accuracy results of composition functions
It can be seen from Table 10 that DE acquires the best results on F25, and WWO obtains the best results on F26. However, PTBO obtains the best results on F21, F22, F23, F24, F27, and F28. In general, PTBO shows the best overall performance from the statistical results according to the data of the last row in Table 10.
(4) The total results of solution accuracy of the 28 functions
A summary of the total number of ‘ǂ’, ‘ξ’, and ‘~’ about the solution accuracy of 28 functions is given in Table 11. From Table 11, it can be observed that PTBO has the best performance of the nine algorithms.

5.4.2. The Comparison Results of Convergence Speed

Due to page limitations, Figure 9 presents the convergence graphs of a subset of the 28 test functions in terms of the mean fitness values achieved by each of the nine algorithms over 50 runs. From Figure 9, it can be observed that PTBO converges towards the optimal values faster than the other algorithms in most cases, e.g., F1, F6, F12, F18, F21, F27, and F28.

5.4.3. The Comparison Results of Wilcoxon Signed-Rank Test

To further compare PTBO with the other eight algorithms statistically, a Wilcoxon signed-rank test [45] was carried out to examine the differences between PTBO and each competitor. The p-values of a two-tailed test with a significance level of 0.05 between PTBO and each of the other algorithms over the 28 functions are given in Table 12.
From the results of the signed-rank test in Table 12, we can observe that PTBO has a statistically significant advantage (p &lt; 0.05) over seven of the algorithms: PSO, BA, CS, BSO, WWO, WCA, and SMS. Although the difference between PTBO and DE is not significant at this level, PTBO attains a clearly larger R+ value. R+ is the sum of ranks for the functions on which the first algorithm outperformed the second [46], the differences being ranked by their absolute values. According to these statistical results, it can be concluded that PTBO generally performs better than the other algorithms over all 28 functions.
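The R+/R− statistics of Table 12 can be reproduced from paired per-function results. The following is a minimal pure-Python sketch of the paired signed-rank computation; it uses the normal approximation for the p-value, so values on small samples will differ slightly from an exact-table test.

```python
import math

def signed_rank_test(a, b):
    """Paired Wilcoxon signed-rank test between two algorithms'
    per-function mean errors (lower is better). Returns (R+, R-, p):
    R+ is the sum of ranks of the functions on which the first
    algorithm outperformed the second, as in Table 12."""
    diffs = [x - y for x, y in zip(a, b) if x != y]  # drop zero differences
    order = sorted((abs(d), i) for i, d in enumerate(diffs))
    ranks = [0.0] * len(diffs)
    j = 0
    while j < len(order):
        k = j
        while k < len(order) and order[k][0] == order[j][0]:
            k += 1
        avg = (j + 1 + k) / 2.0              # average rank over ties
        for _, i in order[j:k]:
            ranks[i] = avg
        j = k
    r_plus = sum(r for r, d in zip(ranks, diffs) if d < 0)   # first algo better
    r_minus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    n = len(diffs)
    w = min(r_plus, r_minus)
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w - mu) / sigma                     # z <= 0 by construction
    p = 2.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # two-tailed p-value
    return r_plus, r_minus, p
```

Note that R+ and R− always sum to n(n + 1)/2 over the n nonzero differences; with 27 nonzero differences this gives 378, which matches every row of Table 12.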
The above comparisons with other nature-inspired meta-heuristic algorithms suggest why PTBO can obtain better results on some optimization problems, and indicate that PTBO may also handle more complex problems well.

5.4.4. The Comparison Results of Time Complexity

Figure 10 compares the mean CPU time of the nine algorithms over the 28 functions. From Figure 10, we can observe that PTBO ranks sixth, ahead of only BSO and WWO. In particular, the mean CPU time of PTBO is slightly worse than that of PSO and DE, which confirms the analysis in Section 4.1.
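Mean CPU times such as those in Figure 10 are typically gathered with a simple repeated-timing harness. The sketch below shows one way to do this; the optimizer and objective names are illustrative placeholders, not the paper's code.

```python
import time

def mean_run_time(optimizer, problem, runs=50):
    """Average wall-clock time of optimizer(problem) over independent
    runs (the paper reports statistics over 50 runs per function)."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        optimizer(problem)                   # one full optimization run
        total += time.perf_counter() - start
    return total / runs

# Usage with stand-in objects: a sphere objective and a trivial "optimizer".
sphere = lambda x: sum(v * v for v in x)
random_search = lambda f: min(f([i * 0.01, -i * 0.01]) for i in range(1000))
avg = mean_run_time(random_search, sphere, runs=5)
```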

6. Conclusions

In this work, a new meta-heuristic optimization algorithm named PTBO, which simulates the phase transition of elements in a natural system, has been described. Although PTBO shares some similarities with the SMS and PSO algorithms, its main concepts differ. It is simple and flexible compared with existing nature-inspired algorithms, and it proved robust, at least on the test problems considered in this work. From the numerical simulation results and comparisons, it is concluded that PTBO can be used to solve uni-modal and multimodal numerical optimization problems, and that it is competitive in effectiveness and efficiency with other optimization algorithms. It is worth noting that PTBO performs slightly worse than PSO and DE in time complexity. Future research includes (1) developing a more effective division method for the three phases, (2) combining PTBO with other evolutionary algorithms, and (3) applying PTBO to real-world optimization problems, such as the reliability–redundancy allocation problem and structural engineering design optimization problems.

Acknowledgments

This research was partially supported by the Industrial Science and Technology Project of the Shaanxi Science and Technology Department (Grant No. 2016GY-088), the Project of Department of Education Science Research of Shaanxi Province (Grant No. 17JK0371), and the fund of the National Laboratory of Network and Detection Control (Grant No. GSYSJ2016007). The authors declare that this funding does not lead to any conflict of interest regarding the publication of this manuscript.

Author Contributions

Lei Wang conceived and designed the framework; Zijian Cao performed the experiments and wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest regarding the publication of this paper.

References

  1. Yang, X.S. Nature-Inspired Optimization Algorithms; Elsevier: Amsterdam, The Netherlands, 2014. [Google Scholar]
  2. Beyer, H.G.; Schwefel, H.P. Evolution strategies—A comprehensive introduction. Nat. Comput. 2002, 1, 3–52. [Google Scholar] [CrossRef]
  3. Fogel, L.J.; Owens, A.J.; Walsh, M.J. Artificial Intelligence through Simulated Evolution; Wiley: New York, NY, USA, 1966. [Google Scholar]
  4. Holland, J.H. Adaptation in Natural and Artificial Systems; University of Michigan Press, The MIT Press: London, UK, 1975. [Google Scholar]
  5. Garg, H. A hybrid PSO-GA algorithm for constrained optimization problems. Appl. Math. Comput. 2016, 274, 292–305. [Google Scholar] [CrossRef]
  6. Garg, H. A hybrid GA-GSA algorithm for optimizing the performance of an industrial system by utilizing uncertain data. In Handbook of Research on Artificial Intelligence Techniques and Algorithms; IGI Global: Hershey, PA, USA, 2015; pp. 620–654. [Google Scholar]
  7. Koza, J.R. Genetic Programming: On the Programming of Computers by Means of Natural Selection; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  8. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic strategy for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  9. Civicioglu, P. Backtracking search optimization algorithm for numerical optimization problems. Appl. Math. Comput. 2013, 219, 8121–8144. [Google Scholar] [CrossRef]
  10. Simon, D. Biogeography-based optimization. IEEE Trans. Evolut. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  11. Garg, H. An efficient biogeography based optimization algorithm for solving reliability optimization problems. Swarm Evolut. Comput. 2015, 24, 1–10. [Google Scholar] [CrossRef]
  12. Civicioglu, P. Transforming geocentric cartesian coordinates to geodetic coordinates by using differential search algorithm. Comput. Geosci. 2012, 46, 229–247. [Google Scholar] [CrossRef]
  13. Rozenberg, G.; Bäck, T.; Kok, J.N. Handbook of Natural Computing; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  14. Dorigo, M. Optimization, Learning and Natural Algorithms. Ph.D. Thesis, Politecnico di Milano, Milano, Italy, 1992. (In Italian). [Google Scholar]
  15. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks (ICNN), Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  16. Passino, K.M. Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst. 2002, 22, 52–67. [Google Scholar] [CrossRef]
  17. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  18. Garg, H.; Rani, M.; Sharma, S.P. An efficient two phase approach for solving reliability–redundancy allocation problem using artificial bee colony technique. Comput. Oper. Res. 2013, 40, 2961–2969. [Google Scholar] [CrossRef]
  19. Garg, H. Solving structural engineering design optimization problems using an artificial bee colony algorithm. J. Ind. Manag. Optim. 2014, 10, 777–794. [Google Scholar] [CrossRef]
  20. He, S.; Wu, Q.H.; Saunders, J.R. Group search optimizer: An optimization algorithm inspired by animal searching behavior. IEEE Trans. Evolut. Comput. 2009, 13, 973–990. [Google Scholar] [CrossRef]
  21. Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the IEEE World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
  22. Garg, H. An approach for solving constrained reliability-redundancy allocation problems using cuckoo search algorithm. Beni-Suef Univ. J. Basic Appl. Sci. 2015, 4, 14–25. [Google Scholar] [CrossRef]
  23. Dai, C.H.; Chen, W.R.; Song, Y.H.; Zhu, Y.H. Seeker optimization algorithm: A novel stochastic search algorithm for global numerical optimization. J. Syst. Eng. Electron. 2011, 21, 300–311. [Google Scholar] [CrossRef]
  24. Yang, X.S.; Gandomi, A.H. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
  25. Gandom, A. Bird mating optimizer: An optimization algorithm inspired by bird mating strategies. Commun. Nonlinear Sci. Numer. Simul. 2014, 19, 1213–1228. [Google Scholar]
  26. Shi, Y.H. Brain storm optimization algorithm. In Proceedings of the Second International Conference of Swarm Intelligence, Chongqing, China, 12–15 June 2011; pp. 303–309. [Google Scholar]
  27. Shi, Y.H. An optimization algorithm based on brainstorming process. Int. J. Swarm Intell. Res. 2011, 2, 35–62. [Google Scholar] [CrossRef]
  28. Mirjalili, S. How effective is the Grey Wolf optimizer in training multi-layer perceptrons. Appl. Intell. 2015, 43, 150–161. [Google Scholar] [CrossRef]
  29. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  30. Lee, K.S.; Geem, Z.W. A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Comput. Methods Appl. Mech. Eng. 2005, 194, 3902–3933. [Google Scholar] [CrossRef]
  31. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm—A novel meta-heuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166. [Google Scholar] [CrossRef]
  32. Shah-Hosseini, H. The intelligent water drops algorithm: A nature-inspired swarm-based optimization algorithm. Int. J. Bio-Inspired Comput. 2009, 1, 71–79. [Google Scholar] [CrossRef]
  33. Zheng, Y.J. Water wave optimization: A new nature-inspired meta-heuristic. Comput. Oper. Res. 2015, 55, 1–11. [Google Scholar] [CrossRef]
  34. Cuevas, E.; Echavarría, A.; Ramírez-Ortegón, M.A. An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation. Appl. Intell. 2014, 40, 256–272. [Google Scholar] [CrossRef]
  35. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  36. Granville, V.; Krivanek, M.; Rasson, J.P. Simulated annealing: A proof of convergence. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 652–656. [Google Scholar] [CrossRef]
  37. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evolut. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  38. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.P.; Auger, A.; Tiwari, S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Rep. 2005, 2005, 2005005. [Google Scholar]
  39. Martin, O.C.; Monasson, R.; Zecchina, R. Statistical mechanics methods and phase transitions in optimization problems. Theor. Comput. Sci. 2001, 265, 3–67. [Google Scholar] [CrossRef]
  40. Barbosa, V.C.; Ferreira, R.G. On the phase transitions of graph coloring and independent sets. Phys. A Stat. Mech. Appl. 2004, 343, 401–423. [Google Scholar] [CrossRef]
  41. Mitchell, M.; Newman, M. Complex systems theory and evolution. In Encyclopedia of Evolution; Oxford University Press: New York, NY, USA, 2002; pp. 1–5. [Google Scholar]
  42. Metastability. Available online: https://en.wikipedia.org/wiki/Metastability (accessed on 10 January 2017).
  43. Sondheimer, E.H. The mean free path of electrons in metals. Adv. Phys. 2001, 50, 499–537. [Google Scholar] [CrossRef]
  44. Liang, J.J.; Qu, B.; Suganthan, P.N.; Hernández-Díaz, A.G. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session on Real-Parameter Optimization; Technical Report, 201212; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2012. [Google Scholar]
  45. Gibbons, J.D.; Chakraborti, S. Nonparametric Statistical Inference; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  46. García, S.; Molina, D.; Lozano, M.; Herrera, F. A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: A case study on the CEC’2005 special session on real parameter optimization. J. Heuristics 2009, 15, 617–644. [Google Scholar] [CrossRef]
Figure 1. Three possible positions of elements in a system.
Figure 2. The critical intervals of the three phases.
Figure 3. A phase transition from an unstable phase to a stable phase.
Figure 4. Two-dimensional example showing the process of the free walking path of elements.
Figure 5. The shrinkage trend of elements towards the optimal point.
Figure 6. The vibration of elements in an equilibrium position.
Figure 7. The main flowchart of PTBO.
Figure 8. Population distribution at various generations in an evolutionary process of PTBO.
Figure 9. Convergence performance of the compared algorithms on parts of functions.
Figure 10. The mean central processing unit (CPU) time of the nine compared algorithms. 30D: 30 dimensions.
Table 1. The motional characteristics of elements in the three phases.
Phase | Main Property | Motion Tendency | Example
UP | disorder, moving towards an arbitrary direction | stochastic motion | gaseous molecule such as water, atom, or electron at normal temperature
MP | between disorder and order, moving according to a certain law | shrinkage motion | liquid molecule such as water or rapid freezing alloy crystal
SP | order, moving in a very regular mode | vibration motion | solid molecule such as water or an atom of paramagnetic phase in a nail
Table 2. The relationship between the intervals and the three phases.
Phase | Interval
UP | [Fmax, F2]
MP | (F2, F1)
SP | [F1, Fmin]
Table 3. The correspondence of phase transition-based optimization (PTBO) algorithm and phase transition.
PTBO Algorithm | Phase Transition Process
Individual | Element
Population size | The number of elements
Fitness function | Stability degree of an element
Global optimal solution | The lowest stability degree of an element
Stochastic operator | The stochastic motion of an element in UP
Shrinkage operator | The shrinkage motion of an element in MP
Vibration operator | The vibration and fine tuning of an element in SP
Table 4. Benchmark functions tested by PTBO (CEC2013).
Type | Function ID | Function Name
Uni-modal Functions | F01 | Sphere Function
 | F02 | Rotated High Conditioned Elliptic Function
 | F03 | Rotated Bent Cigar Function
 | F04 | Rotated Discus Function
 | F05 | Different Powers Function
Multimodal Functions | F06 | Rotated Rosenbrock’s Function
 | F07 | Rotated Schaffers F7 Function
 | F08 | Rotated Ackley’s Function
 | F09 | Rotated Weierstrass Function
 | F10 | Rotated Griewank’s Function
 | F11 | Rastrigin’s Function
 | F12 | Rotated Rastrigin’s Function
 | F13 | Non-Continuous Rotated Rastrigin’s Function
 | F14 | Schwefel’s Function
 | F15 | Rotated Schwefel’s Function
 | F16 | Rotated Katsuura Function
 | F17 | Lunacek Bi_Rastrigin Function
 | F18 | Rotated Lunacek Bi_Rastrigin Function
 | F19 | Expanded Griewank’s plus Rosenbrock’s Function
 | F20 | Expanded Scaffer’s F6 Function
Composition Functions | F21 | Composition Function 1 (n = 5, Rotated)
 | F22 | Composition Function 2 (n = 3, Unrotated)
 | F23 | Composition Function 3 (n = 3, Rotated)
 | F24 | Composition Function 4 (n = 3, Rotated)
 | F25 | Composition Function 5 (n = 3, Rotated)
 | F26 | Composition Function 6 (n = 5, Rotated)
 | F27 | Composition Function 7 (n = 5, Rotated)
 | F28 | Composition Function 8 (n = 5, Rotated)
Table 5. Different interval ratio settings of the three phases.
Ratio Method | No. | Stable Phase | Meta-Stable Phase | Unstable Phase
Fixed ratio | Prop1 | 0.05 | 0.8 | 0.15
 | Prop2 | 0.1 | 0.8 | 0.1
 | Prop3 | 0.15 | 0.8 | 0.05
Random ratio | Prop4 | 0.2 * rand | 0.8 | 0.2 * (1 − rand)
Table 6. Compared results of different interval ratios.
Func. | Prop1 | Prop2 | Prop3 | Prop4 | Func. | Prop1 | Prop2 | Prop3 | Prop4
F016.16 × 10−276.13 × 10−273.82 × 10−284.31 × 10−28F154.45 × 1034.57 × 1034.27 × 1034.23 × 103
2.08 × 10−263.70 × 10−266.04 × 10−285.96 × 10−28 5.99 × 1026.55 × 1027.24 × 1027.64 × 102
F027.90 × 1058.73 × 1051.02 × 1068.63 × 105F164.10 × 10−14.71 × 10−14.11 × 10−13.94 × 10−1
2.89 × 1053.25 × 1053.66 × 1056.60 × 105 2.01 × 10−12.59 × 10−12.07 × 10−11.96 × 10−1
F031.95 × 1081.12 × 1088.01 × 1073.84 × 107F177.58 × 1017.13 × 1016.68 × 1016.72 × 101
2.91 × 1081.49 × 1081.22 × 1085.22 × 107 1.45 × 1011.25 × 1019.941.63 × 101
F048.08 × 1039.03 × 1031.30 × 1048.26 × 103F187.96 × 1017.57 × 1016.99 × 1016.66 × 101
2.59 × 1032.35 × 1033.43 × 1033.71 × 103 1.47 × 1011.15 × 1011.14 × 1011.23 × 101
F051.36 × 10−143.00 × 10−156.38 × 10−222.64 × 10−15F195.05 × 1025.05 × 1025.04 × 1025.04 × 102
2.19 × 10−144.32 × 10−151.18 × 10−214.44 × 10−15 1.611.451.068.36 × 10−1
F062.32 × 1013.09 × 1012.74 × 1012.53 × 101F201.21 × 1011.19 × 1011.26 × 1011.15 × 101
2.57 × 1012.88 × 1012.58 × 1012.67 × 101 1.682.012.072.12
F075.30 × 1015.01 × 1014.13 × 1013.04 × 101F213.01 × 1023.30 × 1023.22 × 1022.97 × 102
2.31 × 1011.90 × 1011.93 × 1011.73 × 101 7.76 × 1018.86 × 1017.19 × 1018.01 × 101
F082.09 × 1012.09 × 1012.09 × 1012.09 × 101F221.01 × 1038.03 × 1029.74 × 1028.61 × 102
6.18 × 10−26.57 × 10−26.50 × 10−26.50 × 10−2 4.92 × 1022.79 × 1024.68 × 1023.11 × 102
F092.49 × 1012.39 × 1012.43 × 1011.90 × 101F235.25 × 1035.16 × 1035.07 × 1034.15 × 103
5.335.245.545.26 8.49 × 1026.32 × 1027.01 × 1021.40 × 103
F103.50 × 10−13.39 × 10−13.43 × 10−13.27 × 10−1F242.42 × 1022.35 × 1022.33 × 1022.33 × 102
2.03 × 10−11.85 × 10−11.73 × 10−11.77 × 10−1 8.218.668.461.04 × 101
F114.85 × 1014.05 × 1013.89 × 1013.92 × 101F252.81 × 1022.79 × 1022.78 × 1022.74 × 102
1.60 × 1011.38 × 1011.12 × 1011.21 × 101 1.34 × 1011.46 × 1011.32 × 1011.02 × 101
F121.40 × 1021.28 × 1021.21 × 1026.32 × 101F262.53 × 1022.77 × 1022.48 × 1022.51 × 102
2.82 × 1013.41 × 1013.72 × 1012.51 × 101 6.85 × 1016.93 × 1016.71 × 1016.57 × 101
F131.61 × 1021.53 × 1021.47 × 1021.18 × 102F277.50 × 1027.41 × 1027.23 × 1026.38 × 102
3.13 × 1012.76 × 1012.67 × 1013.07 × 101 9.33 × 1011.21 × 1021.34 × 1021.16 × 102
F149.52 × 1028.63 × 1029.35 × 1029.47 × 102F283.93 × 1023.46 × 1023.51 × 1022.96 × 102
3.28 × 1023.31 × 1023.49 × 1022.81 × 102 4.61 × 1022.74 × 1023.92 × 1022.83 × 101
Bold text is the best result obtained by different interval ratios.
Table 7. The parameter settings of the compared algorithms.
No. | Algorithm | Parameter Setting
1 | PSO | ω ∈ [0.9, 0.4], c1 = c2 = 2, Vmax = 0.2 × range
2 | DE | F = 0.5, CR = 0.9
3 | BA | A = 0.25, r = 0.5
4 | CS | pa = 0.25, stepsize = 0.05
5 | BSO | m = 5, p_replace = 0.2, p_one = 0.8, p_one_center = 0.4, p_two_center = 0.5, k = 20
6 | WWO | hMax = 12, alpha = 1.0026, betaMax = 0.25, betaMin = 0.001
7 | WCA | Nsr = 4, dmax = 1e−16
8 | SMS | Gas state: ρ ∈ [0.8, 1], β = 0.8, α = 0.8, H = 0.9; Liquid state: ρ ∈ [0.3, 0.6], β = 0.4, α = 0.2, H = 0.2; Solid state: ρ ∈ [0, 0.1], β = 0.1, α = 0, H = 0
9 | PTBO | alpha is a random ratio, beta = 0.8
PSO: particle swarm optimization; DE: Differential Evolution; BA: Bat Algorithm; CS: Cuckoo Search; BSO: Brain Storm Optimization; WWO: Water Wave Optimization; WCA: Water Cycle Algorithm; SMS: States of Matter Search.
Table 8. The results of solution accuracy for the uni-modal functions.
Func. | PTBO | PSO | DE | BA | CS | BSO | WWO | WCA | SMS
F01 | 4.31 × 10^−28 | 3.56 × 10^3 ǂ | 2.65 × 10^3 ǂ | 1.05 × 10^−3 ǂ | 4.46 × 10^3 ǂ | 3.71 × 10^−3 ǂ | 2.62 × 10^−28 ξ | 1.58 × 10^−28 ξ | 4.63 × 10^4 ǂ
 | 5.96 × 10^−28 | 2.65 × 10^3 | 8.98 × 10^−1 | 1.29 × 10^−4 | 3.10 × 10^2 | 5.71 × 10^−3 | 1.30 × 10^−28 | 1.45 × 10^−28 | 4.24 × 10^3
F02 | 8.63 × 10^5 | 1.12 × 10^7 ǂ | 1.56 × 10^7 ξ | 4.82 × 10^4 ξ | 4.21 × 10^7 ǂ | 1.17 × 10^6 ǂ | 1.20 × 10^6 ǂ | 7.53 × 10^5 ξ | 4.91 × 10^8 ǂ
 | 6.60 × 10^5 | 1.56 × 10^7 | 2.05 × 10^5 | 1.81 × 10^4 | 4.04 × 10^6 | 3.35 × 10^5 | 7.19 × 10^5 | 7.54 × 10^5 | 8.49 × 10^7
F03 | 3.84 × 10^7 | 4.98 × 10^10 ǂ | 4.72 × 10^10 ǂ | 8.96 × 10^8 ǂ | 1.72 × 10^10 ǂ | 1.66 × 10^8 ǂ | 3.98 × 10^8 ǂ | 2.94 × 10^9 ǂ | 1.00 × 10^10 ǂ
 | 5.22 × 10^7 | 4.72 × 10^10 | 3.57 × 10^7 | 3.48 × 10^8 | 2.11 × 10^9 | 2.22 × 10^8 | 6.71 × 10^8 | 4.30 × 10^9 | 0.00
F04 | 8.26 × 10^3 | 5.64 × 10^3 ξ | 8.41 × 10^3 ǂ | 2.42 × 10^4 ǂ | 6.05 × 10^4 ǂ | 5.44 × 10^3 ξ | 5.57 × 10^4 ǂ | 6.55 × 10^1 ξ | 8.14 × 10^4 ǂ
 | 3.71 × 10^3 | 8.41 × 10^3 | 1.27 × 10^3 | 3.91 × 10^2 | 6.61 × 10^2 | 2.22 × 10^3 | 1.05 × 10^4 | 2.07 × 10^2 | 9.79 × 10^3
F05 | 2.64 × 10^−15 | 1.58 × 10^3 ǂ | 1.25 × 10^3 ǂ | 1.37 × 10^−2 ǂ | 1.09 × 10^3 ǂ | 4.12 × 10^−2 ǂ | 5.53 × 10^−2 ǂ | 2.95 × 10^−12 ǂ | 1.38 × 10^4 ǂ
 | 4.44 × 10^−15 | 1.25 × 10^3 | 1.84 × 10^1 | 2.08 × 10^−3 | 6.70 × 10^1 | 1.74 × 10^−2 | 2.90 × 10^−1 | 7.89 × 10^−12 | 2.66 × 10^3
ǂ/ξ/~ |  | 4/1/0 | 4/1/0 | 4/1/0 | 5/0/0 | 4/1/0 | 4/1/0 | 2/3/0 | 5/0/0
“ǂ”, “ξ”, and “~” denote that the performance of PTBO is better than, worse than, and similar to that of the corresponding algorithm, respectively. Bold text is the best result obtained by the compared algorithms.
Table 9. The results of solution accuracy for the multimodal functions.
Function | PTBO | PSO | DE | BA | CS | BSO | WWO | WCA | SMS
F062.53 × 1012.51 × 102 ǂ3.20 × 101 ǂ1.36 ξ4.99 × 102 ǂ4.30 × 101 ǂ5.51 × 101 ǂ3.27 × 101 ǂ6.18 × 103 ǂ
2.67 × 1012.11 × 1022.51 × 1016.615.70 × 1012.84 × 1012.76 × 1012.67 × 1019.16 × 102
F073.04 × 1011.19 × 102 ǂ2.44 × 101 ξ2.59 × 108 ǂ2.51 × 108 ǂ1.59 × 102 ǂ8.91 × 101 ǂ1.90 × 102 ǂ5.50 × 103 ǂ
1.73 × 1014.67 × 1011.43 × 1014.83 × 1072.44 × 1071.11 × 1023.02 × 1014.00 × 1013.85 × 103
F082.09 × 1012.09 × 101 ~2.09 × 101 ~2.10 × 101 ~2.11 × 101 ~2.09 × 101 ~2.09 × 101 ~2.09 × 101 ~2.09 × 101 ~
6.50 × 10−25.71 × 10−24.33 × 10−26.52 × 10−26.15 × 10−27.40 × 10−25.56 × 10−25.56 × 10−25.31 × 10−2
F091.90 × 1012.18 × 101 ~1.86 × 101 ξ5.77 × 101 ǂ5.73 × 101 ǂ3.42 × 101 ǂ2.76 × 101 ǂ3.40 × 101 ǂ4.00 × 101 ǂ
5.262.734.403.291.163.724.293.011.48
F103.27 × 10−17.01 × 102 ǂ6.65 × 10−1 ~1.14 ǂ9.05 × 102 ǂ9.62 × 10−1 ~1.41 × 10−1 ξ3.17 × 10−1 ~5.60 × 103 ǂ
1.77 × 10−14.13 × 1021.216.95 × 10−14.90 × 1012.16 × 10−17.49 × 10−22.00 × 10−16.76 × 102
F113.92 × 1011.05 × 102 ǂ2.27 × 101 ξ9.90 × 102 ǂ9.90 × 102 ǂ6.03 × 102 ǂ1.22 × 102 ǂ1.05 × 102 ǂ7.65 × 102 ǂ
1.21 × 1015.68 × 1017.872.71 × 1012.24 × 1011.03 × 1023.80 × 1014.84 × 1016.72 × 101
F126.32 × 1011.07 × 102 ǂ4.14 × 101 ξ9.50 × 102 ǂ9.52 × 102 ǂ5.97 × 102 ǂ1.36 × 102 ǂ3.38 × 102 ǂ7.63 × 102 ǂ
2.51 × 1013.62 × 1011.21 × 1011.28 × 1011.34 × 1011.04 × 1023.51 × 1019.25 × 1015.78 × 101
F131.18 × 1022.80 × 102 ǂ2.09 × 102 ǂ1.12 × 103 ǂ1.12 × 103 ǂ6.51 × 102 ǂ2.07 × 102 ǂ3.46 × 102 ǂ7.68 × 102 ǂ
3.07 × 1013.13 × 1013.63 × 1014.65 × 1013.75 × 1017.86 × 1015.72 × 1016.77 × 1015.35 × 101
F149.47 × 1022.00 × 103 ǂ1.46 × 103 ǂ5.72 × 103 ǂ5.90 × 103 ǂ4.42 × 103 ǂ3.23 × 103 ǂ2.37 × 103 ǂ7.72 × 103 ǂ
2.81 × 1026.45 × 1024.71 × 1024.38 × 1024.38 × 1027.77 × 1026.47 × 1027.83 × 1023.88 × 102
F154.23 × 1034.02 × 103 ξ7.21 × 103 ǂ5.02 × 103 ǂ4.99 × 103 ǂ4.33 × 103 ~5.16 × 103 ǂ4.72 × 103 ǂ7.59 × 103 ǂ
7.64 × 1026.52 × 1023.53 × 1022.58 × 1022.57 × 1026.67 × 1021.72 × 1036.74 × 1022.60 × 102
F163.94 × 10−11.94 ǂ~2.49 ǂ1.98 ǂ1.89 ǂ3.13 × 10−1 ξ2.00 ǂ1.63 ǂ2.59 ǂ
1.96 × 10−14.89 × 10−13.29 × 10−19.87 × 10−18.45 × 10−11.16 × 10−18.11 × 10−14.95 × 10−13.08 × 10−1
F176.72 × 1018.45 × 101 ǂ6.39 × 101 ξ8.29 × 102 ǂ8.42 × 102 ǂ6.10 × 102 ǂ1.43 × 102 ǂ2.39 × 102 ǂ1.51 × 103 ǂ
1.63 × 1013.92 × 1019.901.12 × 1011.41 × 1011.03 × 1023.67 × 1011.09 × 1021.02 × 102
F186.66 × 1011.26 × 102 ǂ2.92 × 102 ǂ8.26 × 102 ǂ8.33 × 102 ǂ5.13 × 102 ǂ1.42 × 102 ǂ4.66 × 102 ǂ1.53 × 103 ǂ
1.23 × 1014.99 × 1011.86 × 1011.17 × 1011.02 × 1019.43 × 1013.31 × 1011.43 × 1028.20 × 101
F195.04 × 1022.16 × 103 ǂ5.65 × 102 ǂ3.37 × 103 ǂ1.55 × 103 ǂ5.12 × 102 ~5.16 × 102 ~5.14 × 102 ~5.90 × 105 ǂ
8.36 × 10−13.38 × 1032.841.85 × 1027.16 × 1012.411.715.602.78 × 105
F201.15 × 1011.24 × 101 ǂ1.28 × 101 ǂ1.50 × 101 ǂ1.50 × 101 ǂ1.45 × 101 ǂ1.29 × 101 ǂ1.50 × 101 ǂ1.50 × 101 ǂ
2.128.98 × 10−15.20 × 10−10.000.009.18 × 10−21.000.005.71 × 10−2
ǂ/ξ/~12/1/28/5/213/1/114/0/110/1/412/1/210/3/214/0/1
“ǂ”, “ξ”, and “~” denote that the performance of PTBO is better than, worse than, and similar to that of the corresponding algorithm, respectively. Bold text is the best result obtained by the compared algorithms.
Table 10. The results of solution accuracy for the composition functions.
Function | PTBO | PSO | DE | BA | CS | BSO | WWO | WCA | SMS
F212.97 × 1024.91 × 102 ǂ4.20 × 102 ǂ3.01 × 102 ~1.50 × 103 ǂ3.48 × 102 ǂ3.25 × 102 ~3.51 × 102 ~3.64 × 103 ǂ
8.01 × 1012.21 × 1028.52 × 1012.68 × 10−28.81 × 1018.87 × 1018.01 × 1018.51 × 1012.17 × 102
F228.61 × 1022.17 × 103 ǂ1.33 × 103 ǂ8.52 × 103 ǂ8.52 × 103 ǂ5.62 × 103 ǂ4.09 × 103 ǂ4.01 × 103 ǂ8.39 × 103 ǂ
3.11 × 1026.53 × 1023.94 × 1021.76 × 1021.93 × 1028.61 × 1027.54 × 1021.23 × 1033.46 × 102
F234.15 × 1034.16 × 103 ~6.61 × 103 ǂ7.55 × 103 ǂ7.74 × 103 ǂ5.67 × 103 ǂ5.40 × 103 ~6.64 × 103 ǂ8.25 × 103 ǂ
1.40 × 1038.98 × 1029.13 × 1025.35 × 1023.99 × 1026.67 × 1021.33 × 1036.71 × 1022.72 × 102
F242.33 × 1022.84 × 102 ǂ2.95 × 102 ǂ7.03 × 102 ǂ7.28 × 102 ǂ3.34 × 102 ǂ2.63 × 102 ǂ3.08 × 102 ǂ3.58 × 102 ǂ
1.04 × 1011.23 × 1011.05 × 1019.88 × 1011.43 × 1011.89 × 1011.44 × 1013.54 × 1011.05 × 101
F252.74 × 1023.03 × 102 ~2.66 × 102 ξ4.48 × 102 ǂ4.45 × 102 ǂ3.56 × 102 ǂ3.04 × 102 ~3.17 × 102 ~3.73 × 102 ǂ
1.02 × 1019.327.552.23 × 1014.241.76 × 1011.47 × 1011.11 × 1018.47
F262.51 × 1023.23 × 102 ǂ2.70 × 102 ǂ5.52 × 102 ǂ5.15 × 102 ǂ2.38 × 102 ξ2.00 × 102 ξ3.51 × 102 ~2.62 × 102 ǂ
6.57 × 1016.95 × 1017.11 × 1013.00 × 1013.47 × 1017.46 × 1013.74 × 10−27.68 × 1011.61 × 101
F276.38 × 1029.62 × 102 ǂ7.14 × 102 ~3.30 × 103 ǂ2.83 × 103 ǂ1.36 × 103 ǂ9.90 × 102 ǂ1.22 × 103 ǂ1.50 × 103 ǂ
1.16 × 1027.94 × 1017.45 × 1012.89 × 1028.94 × 1019.72 × 1011.20 × 1028.93 × 1014.49 × 101
F282.96 × 1022.11 × 103 ǂ3.00 × 102 ~7.53 × 103 ǂ8.82 × 103 ǂ4.95 × 103 ǂ5.36 × 102 ǂ2.03 × 103 ǂ5.34 × 103 ǂ
2.83 × 1015.50 × 1021.09 × 10−14.46 × 1022.61 × 1027.15 × 1025.90 × 1021.69 × 1034.17 × 102
ǂ/ξ/~6/0/25/1/27/0/18/0/07/0/14/1/35/0/38/0/0
“ǂ”, “ξ”, and “~” denote that the performance of PTBO is better than, worse than, and similar to that of the corresponding algorithm, respectively. Bold text is the best result obtained by the compared algorithms.
Table 11. The total results of solution accuracy for the 28 functions.
PTBO vs. | PSO | DE | BA | CS | BSO | WWO | WCA | SMS
F01–05 | 4/1/0 | 4/1/0 | 4/1/0 | 5/0/0 | 4/1/0 | 4/1/0 | 2/3/0 | 5/0/0
F06–20 | 12/1/2 | 8/5/2 | 13/1/1 | 14/0/1 | 10/1/4 | 12/1/2 | 10/3/2 | 14/0/1
F21–28 | 6/0/2 | 5/1/2 | 7/0/1 | 8/0/0 | 7/0/1 | 4/1/3 | 5/0/3 | 8/0/0
ǂ/ξ/~ | 22/2/4 | 17/7/4 | 24/4/2 | 27/0/1 | 21/2/5 | 20/3/5 | 17/6/5 | 27/0/1
Table 12. Wilcoxon signed-rank test of 28 functions.
PTBO vs. | R+ | R− | p-Value
PSO | 305 | 73 | 2.69 × 10^−4
DE | 215 | 163 | 3.16 × 10^−1
BA | 325 | 53 | 1.19 × 10^−4
CS | 368 | 10 | 3.79 × 10^−6
BSO | 340 | 38 | 1.57 × 10^−4
WWO | 359 | 19 | 2.52 × 10^−5
WCA | 320 | 58 | 1.20 × 10^−3
SMS | 360 | 18 | 2.96 × 10^−5
Bold text is the best result obtained by the compared algorithms.

Cao, Z.; Wang, L. An Optimization Algorithm Inspired by the Phase Transition Phenomenon for Global Optimization Problems with Continuous Variables. Algorithms 2017, 10, 119. https://doi.org/10.3390/a10040119