An Optimization Algorithm Inspired by the Phase Transition Phenomenon for Global Optimization Problems with Continuous Variables

Abstract: In this paper, we propose a novel nature-inspired meta-heuristic algorithm for continuous global optimization, named the phase transition-based optimization algorithm (PTBO). It mimics the distinct motion characteristics of elements in three different phases: the unstable phase, the meta-stable phase, and the stable phase. Three corresponding operators are designed in the proposed algorithm: the stochastic operator of the unstable phase, the shrinkage operator of the meta-stable phase, and the vibration operator of the stable phase. In PTBO, elements in the three different phases dynamically execute different search tasks according to their phase in each generation. As a result, PTBO not only has wide-ranging exploration capability, but also the ability to exploit promising regions quickly. Numerical experiments are carried out on the twenty-eight functions of the CEC 2013 benchmark suite. The simulation results demonstrate its better performance compared with that of other state-of-the-art optimization algorithms.


Introduction
Nature-inspired optimization algorithms have attracted increasing interest from researchers in the optimization field in recent years [1]. After half a century of development, nature-inspired optimization algorithms have formed a large family. They not only draw broadly on biology, physics, and other basic sciences, but also involve many fields such as artificial intelligence, artificial life, and computer science.
In the last decade, swarm intelligence, as a branch of intelligent computation models, has gradually risen to prominence [13]. Swarm intelligence algorithms mainly simulate biological habits or behaviors, including foraging, searching, migration, brooding, and mating. Inspired by these phenomena, researchers have designed many intelligent algorithms, such as Ant Colony Optimization (ACO) [14], Particle Swarm Optimization (PSO) [15], Bacterial Foraging (BFA) [16], Artificial Bee Colony (ABC) [17-19], and Group Search Optimization (GSO) [20].
From Figure 1, we can observe that an element is most unstable in the position of point A, and we intuitively call it an unstable phase. On the contrary, in the position of point C, the element is most stable, and we name it a stable phase. The position of point B, between point A and point C, is the transition phase, and we term it a meta-stable phase. The definitions of the three phases are as follows.

Definition 1. Unstable Phase (UP). The element is in a phase of complete disorder and moves freely in arbitrary directions. In this phase, the element has a large range of motion and the ability of global divergence.
Definition 2. Meta-stable Phase (MP). The element is in a phase between disorder and order and moves according to a certain law, such as towards the lowest point. In this phase, the element has moderate activity and possesses the ability of local shrinkage.

Definition 3. Stable Phase (SP). The element is in a very regular phase of orderly motion. In this phase, the element has a very small range of activity and the ability of fine tuning.
According to the above definitions, we can give a more detailed description of, and examples about, the characteristics of the three phases. These motion characteristics provide rich potential to develop the proposed PTBO algorithm. Table 1 summarizes the motion characteristics of the unstable phase, the meta-stable phase, and the stable phase.

The Determination of Critical Interval about Three Phases
As mentioned before, we use stability, which is depicted by the fitness value of an objective function, to describe the degree of order or disorder of an element. In our proposed algorithm, the higher the fitness value of an element is, the worse its stability is. How to divide the critical intervals of the unstable phase, the meta-stable phase, and the stable phase is a primary problem that must be addressed before the proposed algorithm is designed.
For simplicity, we use F_max to denote the maximum fitness value, and we say such an element is in the most unstable phase. On the contrary, F_min denotes the minimum fitness value. We then let the stable phase account for a proportion alpha and the meta-stable phase for a proportion beta, so the proportion of the unstable phase is 1 - alpha - beta. The ratio of the three critical intervals is shown in Figure 2.

Although F_max and F_min change dynamically in each generation, reflecting the iterative phase transition process, the basic relationship between the phase of an element and the critical intervals is shown in Table 2.
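Assuming one concrete reading of the partition above (the normalization to [0, 1] and the function name are illustrative, not taken from the paper), the assignment of an element to a phase from its fitness can be sketched as:

```python
def classify_phase(fitness, f_min, f_max, alpha=0.1, beta=0.8):
    """Assign a phase label from an element's fitness (minimization).

    Assumed partition of the normalized fitness range [0, 1]:
      [0, alpha)           -> stable phase (SP)
      [alpha, alpha+beta)  -> meta-stable phase (MP)
      [alpha+beta, 1]      -> unstable phase (UP)
    """
    if f_max == f_min:  # degenerate population: every element is equally stable
        return "SP"
    s = (fitness - f_min) / (f_max - f_min)  # normalized stability degree
    if s < alpha:
        return "SP"
    elif s < alpha + beta:
        return "MP"
    return "UP"
```

With alpha = 0.1 and beta = 0.8, an element whose fitness sits in the best 10% of the current range is stable, the middle 80% is meta-stable, and the worst 10% is unstable; the boundaries move as F_max and F_min change each generation.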

Basic Idea of the PTBO Algorithm
In this work, the motion of elements from an unstable phase to a more stable phase plays the role in PTBO that natural selection plays in GA. Many such iterations from an unstable phase to a relatively more stable phase can eventually bring an element to absolute stability. The diverse motion characteristics of elements in the three phases are the core by which the PTBO algorithm simulates this phase transition process. In the PTBO algorithm, three corresponding operators are designed; an appropriate combination of the three operators yields an effective search for the global minimum in the solution space.

The Correspondence of PTBO and the Phase Transition Process
Based on the basic law of elements transitioning from an unstable phase (disorder) to a stable phase (order), the correspondence of PTBO and phase transition can be summarized in Table 3.
Table 3. The correspondence of the phase transition-based optimization (PTBO) algorithm and the phase transition process.

PTBO: Phase transition
Individual: Element
Population size: The number of elements
Fitness function: Stability degree of an element
Global optimal solution: The lowest stability degree of an element
Stochastic operator: The stochastic motion of an element in the UP
Shrinkage operator: The shrinkage motion of an element in the MP
Vibration operator: The vibration and fine tuning of an element in the SP

The Overall Design of the PTBO Algorithm
The simplified cyclic diagram of the phase transition process in our PTBO algorithm is shown in Figure 3. It is a complete cyclic process of phase transition from an unstable phase to a stable phase. Firstly, in each generation, we calculate the maximum and minimum fitness values of the population and divide the critical intervals of the unstable phase, the meta-stable phase, and the stable phase according to the rules in Table 2. Secondly, each element performs the search relevant to its own phase. If the new degree of stability is better than that of the original phase, the motion is retained; otherwise, we abandon this operation. That is to say, if the original phase of an element is UP, the movement direction may be towards UP, MP, or SP. However, if the original phase of an element is MP, the movement direction may be towards MP or SP. Of course, if the original phase of an element is SP, the movement direction is only towards SP. Finally, after many iterations, the elements eventually reach absolute stability.
Broadly speaking, we may think of PTBO as an algorithmic framework.We simply define the general operations of the whole algorithm about the motion of elements in the phase transition.In a word, PTBO is flexible for skilled users to customize it according to a specific scene of phase transition.
According to the above complete cyclic process of the phase transition, the whole operating process of PTBO can be summarized as three procedures: population initialization, iterations of three operators, and individual selection.The three operators in the iterations include the stochastic operator, the shrinkage operator, and the vibration operator.
Figure 3. A phase transition from an unstable phase to a stable phase.

Population Initialization
PTBO is also a population-based meta-heuristic algorithm. Like other evolutionary algorithms (EAs), PTBO starts with the initialization of a population, which contains N element individuals. The current generation evolves into the next generation through the three operators described below (see Section 3.3.2); that is, the population continually evolves from generation to generation until the termination condition is met. Here, we initialize the j-th dimensional component of the i-th individual as

X_ij = X_jmin + rand * (X_jmax - X_jmin), (1)

where rand is a uniformly distributed random number between 0 and 1, and X_jmax and X_jmin are the upper and lower boundaries of the j-th dimension of each individual, respectively.
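The uniform initialization of Formula (1) can be sketched directly; `init_population` is a hypothetical helper name, not from the paper:

```python
import random

def init_population(n, d, x_min, x_max):
    """Uniform random initialization per dimension:
    X[i][j] = x_min[j] + rand * (x_max[j] - x_min[j]), rand ~ U(0, 1)."""
    return [[x_min[j] + random.random() * (x_max[j] - x_min[j]) for j in range(d)]
            for _ in range(n)]
```

Each of the N individuals is sampled independently and uniformly inside the per-dimension box defined by the lower and upper boundaries.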

Iterations of the Three Operators
We now give some implementation details of the three operators in the three different phases.

(1) Stochastic operator
Stochastic diffusion is a common behavior in which elements randomly move and pass one another in an unstable phase. Although the movement of elements is chaotic, it actually obeys a certain law from a statistical point of view. We can use the mean free path [43], the distance an element travels between two successive collisions with other elements, to represent the stochastic motion characteristic of elements in an unstable phase. Figure 4 shows the free walking path of elements, i.e., the distance traveled by an element between two collisions with other individuals. Accordingly, the stochastic operator of elements may be implemented as Formula (2), where newX_i is the new position of X_i after the stochastic motion, rand_1 and rand_2 are two random vectors whose components are random numbers in the range (0, 1), and the indices j and k are mutually exclusive integers randomly chosen from the range 1 to N that are also different from the index i.
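The exact form of Formula (2) is not reproduced in this text; the sketch below is an assumption consistent with the description (two random vectors and two distinct random peers j, k, both different from i), not the paper's verified formula:

```python
import random

def stochastic_operator(pop, i):
    """Free-walk move for an element in the unstable phase (UP).

    Assumed form (Formula (2) is not reproduced in the text):
    newX_i = X_i + rand_1 * (X_j - X_i) + rand_2 * (X_k - X_i),
    with j, k distinct random indices different from i.
    """
    n, d = len(pop), len(pop[0])
    j, k = random.sample([idx for idx in range(n) if idx != i], 2)
    return [pop[i][t]
            + random.random() * (pop[j][t] - pop[i][t])
            + random.random() * (pop[k][t] - pop[i][t])
            for t in range(d)]
```

Because the step direction is derived from two randomly chosen peers rather than a memorized best position, the move is effectively a random walk over the current population's spread.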

(2) Shrinkage operator
In a meta-stable phase, an element is inclined to move closer to the optimal one. From a statistical standpoint, the geometric center is a very important numerical characteristic and represents, to a certain degree, the shrinkage trend of the elements. Figure 5 briefly shows the shrinkage trend of elements towards the optimal point. Hence, gradual shrinkage towards the central position is the best motion for elements in a meta-stable phase, and the shrinkage operator may be implemented as Formula (3), where newX_i is the new position of X_i after the shrinkage operation, X_gb is the best individual in the population, and N(0, 1) is a normal random number with mean 0 and standard deviation 1. The normal distribution is an important family of continuous probability distributions applied in many fields.
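Formula (3) itself is not reproduced in this text; the sketch below assumes a normally scaled step from X_i towards the population best X_gb, which matches the description (shrinkage towards the best, using N(0, 1)) but is not the paper's verified formula:

```python
import random

def shrinkage_operator(x_i, x_gb):
    """Shrinkage move for an element in the meta-stable phase (MP).

    Assumed form: newX_i = X_i + N(0, 1) * (X_gb - X_i),
    i.e., a normally scaled pull towards the best individual X_gb.
    """
    return [xi + random.gauss(0.0, 1.0) * (g - xi) for xi, g in zip(x_i, x_gb)]
```

Note that the best individual itself is a fixed point of this move, since the pull direction X_gb - X_i is zero there.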

(3) Vibration operator
Elements in a stable phase tend only to vibrate about their equilibrium positions. Figure 6 briefly shows the vibration of elements. Hence, the vibration operator of elements may be implemented as Formula (4), where newX_i is the new position of X_i after the vibration operation, rand is a uniformly distributed random number in the range (0, 1), and stepSize is the control parameter which regulates the amplitude of the jitter over the evolutionary generations. With the evolution of the phase transition, the amplitude of the vibration gradually becomes smaller. stepSize is described by Formula (5), where G and g denote the maximum number of iterations and the current iteration number, respectively, and exp() stands for the exponential function.
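Formulas (4) and (5) are not reproduced in this text; the exponential decay schedule exp(-g/G) and the symmetric jitter below are assumptions consistent with the description (the amplitude shrinks as g approaches G), not the paper's verified formulas:

```python
import math
import random

def vibration_operator(x_i, g, G):
    """Small jitter around the current position for the stable phase (SP).

    Assumed forms: stepSize = exp(-g / G) (a decaying schedule in the spirit
    of Formula (5)) and newX_i = X_i + (2 * rand - 1) * stepSize (Formula (4)).
    """
    step_size = math.exp(-g / G)  # assumed decay: amplitude shrinks over generations
    return [x + (2.0 * random.random() - 1.0) * step_size for x in x_i]
```

Early in the run the jitter radius is close to 1; by the last generation it has decayed to about exp(-1), so late-stage moves only fine-tune the position.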

Individual Selection
In the PTBO algorithm, like other EAs, one-to-one greedy selection is employed by comparing a parent individual with its newly generated offspring. This greedy selection strategy may also raise diversity compared with other strategies, such as tournament selection and rank-based selection. The selection operation at the k-th generation keeps the offspring only if it improves the objective:

X_i(k+1) = newX_i(k), if f(newX_i(k)) < f(X_i(k)); otherwise X_i(k+1) = X_i(k),

where f(X) is the objective function value of each individual.
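The one-to-one greedy selection above is fully determined by the description and can be sketched as:

```python
def greedy_select(x_old, x_new, f):
    """One-to-one greedy selection for a minimization problem:
    keep the offspring x_new only if it improves the objective f."""
    return x_new if f(x_new) < f(x_old) else x_old
```

Because each offspring competes only with its own parent, the best fitness in the population is monotonically non-increasing over generations.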

Flowchart and Implementation Steps of PTBO
As described above, the main flowchart of the PTBO algorithm is given in Figure 7.

The implementation steps of PTBO are summarized as follows:
Step 1. Initialization: set the algorithm parameters N, D, alpha, and beta; randomly generate the initial population of elements; set g = 0.
Step 2. Evaluation and interval partition: calculate the fitness values of all individuals, obtain F_max and F_min, and divide the critical intervals of UP, MP, and SP according to Table 2.
Step 3. Stochastic operator: use Formula (2) to create newX_i.
Step 4. Shrinkage operator: use Formula (3) to update newX_i.
Step 5. Vibration operator: use Formulas (4) and (5) to update newX_i.
Step 6. Individual selection: accept newX_i if f(newX_i) is better than f(X_i).
Step 7. Termination judgment: if the termination condition is satisfied, stop the algorithm; otherwise, set g = g + 1 and go to Step 3.
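The steps above can be sketched as a single loop. The operator bodies below are simple stand-ins, since the exact Formulas (2)-(5) are not reproduced in this text: a random peer walk for UP, a normal pull towards the best for MP, and a decaying jitter for SP, with the phase partition and greedy selection as described:

```python
import math
import random

def ptbo_sketch(f, n, d, lo, hi, G, alpha=0.1, beta=0.8):
    """Skeleton of the PTBO loop (Steps 1-7), minimizing f on [lo, hi]^d.
    Operator formulas are assumed stand-ins, not the paper's exact ones."""
    pop = [[lo + random.random() * (hi - lo) for _ in range(d)] for _ in range(n)]
    fit = [f(x) for x in pop]                       # Step 2: evaluate
    for g in range(G):
        f_min, f_max = min(fit), max(fit)
        gb = pop[fit.index(f_min)]                  # current best individual
        for i, x in enumerate(pop):
            # normalized stability degree decides the phase (Table 2 reading)
            s = 0.0 if f_max == f_min else (fit[i] - f_min) / (f_max - f_min)
            if s >= alpha + beta:                   # UP: random walk (Step 3)
                j, k = random.sample([t for t in range(n) if t != i], 2)
                new = [xi + random.random() * (pop[j][t] - pop[k][t])
                       for t, xi in enumerate(x)]
            elif s >= alpha:                        # MP: shrink to best (Step 4)
                new = [xi + random.gauss(0, 1) * (b - xi) for xi, b in zip(x, gb)]
            else:                                   # SP: decaying jitter (Step 5)
                step = math.exp(-g / G)
                new = [xi + (2 * random.random() - 1) * step for xi in x]
            new = [min(hi, max(lo, v)) for v in new]  # keep inside the bounds
            fn = f(new)
            if fn < fit[i]:                         # Step 6: greedy selection
                pop[i], fit[i] = new, fn
    return min(fit)
```

Greedy selection makes the best fitness non-increasing, so the sketch reliably improves on the initial random population even with these stand-in operators.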

The Analysis of Time Complexity
For PTBO, the main operations include population initialization and the stochastic, shrinkage, and vibration operators. Each of these operations updates at most N individuals over D dimensions, so the time complexity of each operation in a single iteration is O(N x D), and the overall time complexity of PTBO over G generations is O(G x N x D).

The Dynamic Implementation Analysis of PTBO
In this section, the step-wise procedure of PTBO for optimization is presented. For the demonstration, Rastrigin's function [44] is considered as an example. Rastrigin's function is a classic test function in optimization theory in which the global minimum is surrounded by a large number of local minima; converging to the global minimum without being stuck in one of these local minima is extremely difficult, and some numerical solvers need a long time to converge to it. A three-dimensional contour plot of Rastrigin's function is shown in Figure 8a. Rastrigin's function is described as follows:

f(x) = sum_{i=1}^{D} [x_i^2 - 10 cos(2 pi x_i) + 10].

In this experiment, we use 30 individuals to solve the above minimization problem, and the population distribution at various generations of an evolutionary run is shown in Figure 8b-f, with D = 2, alpha = 0.1, and beta = 0.8. In Figure 8b-f, the red diamond labels represent the optimal point.
From Figure 8b-f, we can observe that the population distribution information can significantly vary at various generations during the run time.PTBO can effectively adapt to a time-varying search space or landscapes.
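For reference, Rastrigin's function used in this demonstration is a standard benchmark and can be written as a short routine:

```python
import math

def rastrigin(x):
    """Rastrigin's function: f(x) = sum(x_i^2 - 10*cos(2*pi*x_i) + 10).
    Global minimum f = 0 at x = (0, ..., 0), surrounded by many local minima."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)
```

The cosine term creates a regular grid of local minima (roughly one per unit cell), which is what makes the landscape in Figure 8a so hard for local solvers.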

The Differences between PTBO and PSO
Like PSO (Particle Swarm Optimization), PTBO is introduced to deal with unconstrained global optimization problems with continuous variables. In its implementation form, PTBO can certainly be regarded as being based on a particle system in the same way as PSO. However, according to the overall design of the PTBO algorithm, there are several differences between PTBO and classical PSO.
Firstly, in heuristic thought, PSO is inspired by biological behavior or habits, simulating animal swarm behavior such as fish schooling and bird flocking, while PTBO is inspired by the phase transition phenomenon of elements in nature. Secondly, in PSO, the direction of a particle is calculated from two best positions, pbest and gbest, whereas the motion direction of an element in PTBO is derived from two other, mutually different elements chosen arbitrarily; this may enhance the diversity of the population and help avoid premature convergence. Thirdly, in the design of the operators, each particle in PSO contains a velocity item, whereas in PTBO the concept of velocity does not exist. Besides, PSO uses the position information of both pbest and gbest to update the velocity and position, while PTBO uses only the position information of gbest; the pbest positions of elements are not considered.

The Differences between PTBO and SMS
SMS (States of Matter Search) is a nature-inspired optimization algorithm based on the simulation of the states of matter phenomenon [35]. Specifically, SMS assigns each state of matter a different exploration-exploitation ratio by dividing the whole search process into three stages, i.e., gas, liquid, and solid.
Although the sources of inspiration for PTBO and SMS are similar, both taken from a physical phenomenon concerning the states of matter, the evolution processes of PTBO and SMS are completely different. The evolution process of SMS is as follows. At first, all individuals in the population perform exploration in the gas-state mode for 50% of the iterations. Then, the search mode changes to the liquid state for 40% of the iterations, i.e., a search between exploration and exploitation. Finally, the evolutionary process enters the exploitation stage (solid state) for the last 10% of the iterations. In PTBO, however, the three phases coexist during the entire search process; in other words, the three different phases of individuals dynamically execute different search tasks according to their phase in each generation. Hence, the implementations of the exploration-exploitation balance in PTBO and SMS are completely different. Besides, the operators of PTBO and SMS are also completely different. In summary, it can be said that there are fundamental differences between PTBO and SMS.

Benchmark Functions
In order to verify the effectiveness and robustness of the proposed PTBO algorithm, PTBO is applied in experimental simulation studies to find the global minimum of all 28 test functions of the CEC 2013 special session [44]. The CEC 2013 test suite, which improves on the CEC 2005 suite [38], covers various types of function optimization and is summarized in Table 4.
The search range of all functions is between -100 and 100 in every dimension. These problems are shifted or rotated to increase their complexity and are treated as black-box problems; the explicit equations of the problems are not allowed to be used. The test suite of Table 4 consists of five uni-modal functions (F01 to F05), fifteen multimodal functions (F06 to F20), and eight composition functions (F21 to F28).

Parameters Determination of the Interval Ratio of PTBO
In our PTBO algorithm, two parameters, alpha and beta, need to be allocated to determine the critical intervals of the three phases. In a natural system, we can observe that the elements in the middle meta-stable phase account for the majority, while the elements in the unstable and stable phases occupy only a small proportion. This phenomenon is consistent with the two-eight law (or the 1/5th rule) [2]. For simplicity, we give beta, the proportion of elements in the meta-stable phase, a value of 0.8, so the elements in the unstable and stable phases account for 0.2 in total. The specific interval ratio settings of the three phases are shown in Table 5. In order to determine which interval ratio is the most suitable for the PTBO algorithm, we conducted comparative experiments with 50 independent runs according to Table 5. The compared results of the different interval ratios are listed in Table 6, where the mean values are listed in the first line and the standard deviations in the second line. We can intuitively observe that the ratio of prop4 achieves the best accuracy compared with the other three ratios. Hence, in the subsequent experiments, we choose beta = 0.8, with the unstable and stable phases sharing the remaining ratio of 0.2. Bold text marks the best result obtained among the different interval ratios.

Experimental Platform and Algorithms' Parameter Settings
For a fair comparison, all of the experiments are conducted on the same machine with an Intel 3.4 GHz central processing unit (CPU) and 4 GB memory. The operating system is Windows 7 with MATLAB 8.0.0.783 (R2012b).
On the test functions, we compare PTBO with classic PSO, DE, and six other recent popular meta-heuristic algorithms, which are BA, CS, BSO, WWO, WCA, and SMS.The first three algorithms belong to the second category, i.e., swarm intelligence, and the remaining three algorithms belong to the third category, i.e., intelligent algorithms simulating physical phenomena.The parameters adopted for the PTBO algorithm and the compared algorithms are given in Table 7.

The Compared Experimental Results
Each of the experiments was repeated for 50 independent runs with different random seeds, and the average function values of the best solutions were recorded.

Comparisons on Solution Accuracy
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively. The accuracy results are reported in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluations (FES) or 10,000 maximum generations. In all experiments, the dimension of every problem is 30. The best results among the algorithms are shown in bold. In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line. A two-tailed t-test [45] is performed at the 0.05 significance level to evaluate whether the fitness values of two sets of obtained results are statistically different from each other. In the three tables below, if PTBO significantly outperforms another algorithm, a 'ǂ' is appended to the corresponding result obtained by that algorithm; 'ξ' denotes that PTBO is worse than the compared algorithm; and '~' denotes that there is no significant difference between PTBO and the compared algorithm. In the last row of each table, the total numbers of 'ǂ', 'ξ', and '~' are summarized.
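The pairwise comparison procedure described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes Welch's (unequal-variance) form of the two-sample t-test, approximates the two-tailed p-value with the normal distribution (reasonable for the 30+ independent runs typical of such benchmarks), and uses a hypothetical `mark` helper to assign the 'ǂ'/'ξ'/'~' symbols used in Tables 8-10.

```python
import math
from statistics import mean, variance

def welch_t_test(a, b):
    """Two-tailed Welch's t-test between two samples of final fitness values.

    Returns (t, p); p is approximated via the standard normal survival
    function, which is adequate for large per-algorithm run counts.
    """
    na, nb = len(a), len(b)
    se = math.sqrt(variance(a) / na + variance(b) / nb)
    t = (mean(a) - mean(b)) / se
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))
    return t, p

def mark(p, ptbo_better, alpha=0.05):
    """Assign the table symbol for one PTBO-vs-competitor comparison."""
    if p >= alpha:
        return '~'                      # no significant difference
    return 'ǂ' if ptbo_better else 'ξ'  # PTBO significantly better / worse

# Hypothetical samples: PTBO clearly reaches lower (better) fitness values.
ptbo = [1.0, 1.1, 0.9] * 10
other = [5.0, 5.1, 4.9] * 10
t, p = welch_t_test(ptbo, other)
print(mark(p, ptbo_better=mean(ptbo) < mean(other)))  # prints 'ǂ'
```

The last-row summary in each table is then simply the count of each symbol across the benchmark functions for a given competitor.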

(1) The accuracy results of uni-modal functions
The results of the uni-modal functions are shown in Table 8 in terms of the mean optimum solution and the standard deviation of the solutions.
In Table 8, among the five uni-modal functions, PTBO yields the best results on two of them (F03 and F05). Although PTBO is worse than WCA, which obtains the best results on F01, F02, and F04, the statistical results show that the performance of PTBO on the uni-modal functions is significantly better than that of PSO, DE, BA, CS, BSO, WWO, and SMS.
4.12 × 10 −2 The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 −2
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 −12
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.
The accuracy results of the uni-modal, multimodal, and composition functions are Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solutio standard deviation of the solutions obtained by each algorithm over 300,000 function e times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all pro 30.The best results among the algorithms are shown in bold.In each row of the three tables, values are listed in the first line, and the standard deviations are displayed in the second lin tailed t-test [45] is performed with a 0.05 significance level to evaluate whether the medi values of two sets of obtained results are statistically different from each other.In the be tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the ba corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO than other algorithms, and '~' denotes that there is no significance between PTBO and the c algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', calculated.In all experiments, the dimensions of all problems are shown in bold.In each row of the three tables, the mean dard deviations are displayed in the second line.A twonificance level to evaluate whether the median fitness tistically different from each other.In the below three nother algorithm, a ' ǂ' is labeled in the back of the m.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse ere is no significance between PTBO and the compared summary of the total number of '01C2', 'ξ', and '~' is ", "ξ", and "~" denote that the performance of PTBO is better than, worse than, and similar to that of the corresponding algorithm, respectively.Bold text is the best result obtained by the compared algorithms.The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation 
of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 1
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 1
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 1
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 1
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 3
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO 
significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 8
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 2
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 1
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 2
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 3
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO 
significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 1
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 1
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 1
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 1
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.

× 10 1
The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO significantly outperforms another algorithm, a ' ǂ' is labeled in the back of the corresponding result obtained by this algorithm.Corresponding to 'ξ', 'ξ' denotes that PTBO is worse than other algorithms, and '~' denotes that there is no significance between PTBO and the compared algorithm.At the last row of each table, a summary of the total number of '01C2', 'ξ', and '~' is calculated.The accuracy results of the uni-modal, multimodal, and composition functions are given in Tables 8-10, respectively.The accuracy results are in terms of the mean optimum solution and the standard deviation of the solutions obtained by each algorithm over 300,000 function evaluation times (FES) or 10,000 maximum generations.In all experiments, the dimensions of all problems are 30.The best results among the algorithms are shown in bold.In each row of the three tables, the mean values are listed in the first line, and the standard deviations are displayed in the second line.A twotailed t-test [45] is performed with a 0.05 significance level to evaluate whether the median fitness values of two sets of obtained results are statistically different from each other.In the below three tables, if PTBO 
'‡', 'ξ', and '~' denote that the performance of PTBO is better than, worse than, and similar to that of the corresponding algorithm, respectively. Bold text is the best result obtained by the compared algorithms.

2) The accuracy results of multimodal functions
From Table 9, it can be observed that the mean value and the standard deviation of PSO display the best results for function F15. DE obtains the best results on F07, F09, and F12, and BA has the best result for function F06. BSO has the best result for function F16. WCA obtains the best results on F08 and F10. CS, WWO, and SMS do not obtain any best result except for function F08. With regard to function F08, there is no large difference among the eight algorithms. The PTBO algorithm performs well for functions F11, F13, F14, F17, F18, and F20, and, according to the data in the last row of Table 9, it can be concluded that the PTBO algorithm achieves good solution accuracy on the multimodal benchmark functions.

(3) The accuracy results of composition functions
It can be seen from Table 10 that DE obtains the best results on F25, and WWO obtains the best result on F26.However, PTBO obtains the best results on F21, F22, F23, F24, F27, and F28.In general, according to the data in the last row of Table 10, PTBO shows the best overall statistical performance on the composition functions.

(4) The total results of solution accuracy of the 28 functions
A summary of the total numbers of 'ǂ', 'ξ', and '~' for the solution accuracy of the 28 functions is given in Table 11.From Table 11, it can be observed that PTBO has the best performance among the nine algorithms.
Due to page limitations, Figure 9 presents the convergence graphs of part of the 28 test functions in terms of the mean fitness values achieved by each of the nine algorithms over 50 runs.From Figure 9, it can be observed that PTBO converges towards the optimal values faster than the other algorithms in most cases, i.e., F1, F6, F12, F18, F21, F27, and F28.

The Comparison Results of Wilcoxon Signed-Rank Test
To further statistically compare PTBO with the other eight algorithms, a Wilcoxon signed-rank test [45] was carried out.The p-values of a two-tailed test with a significance level of 0.05 between PTBO and each of the other algorithms on every function are given in Table 12.From the results of the signed-rank test in Table 12, we can observe that PTBO has a significant advantage in p-value over seven of the algorithms: PSO, BA, CS, BSO, WWO, WCA, and SMS.Although PTBO has no significant advantage over DE, PTBO outperforms DE in the R+ value.R+ is the sum of the ranks for the functions on which the first algorithm outperformed the second [46], where the differences are ranked according to their absolute values.According to these statistical results, it can be concluded that PTBO generally offers better performance than the other algorithms over all 28 functions.
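The R+ bookkeeping mentioned above can be sketched as follows. This is a hypothetical helper for the rank sums only, not the full signed-rank test statistic; minimization is assumed, so a smaller per-function error means the first algorithm wins on that function.

```python
def wilcoxon_ranks(results_a, results_b):
    """Per-function errors of two algorithms -> (R+, R-).
    Rank the nonzero absolute differences (average ranks for ties);
    R+ sums the ranks of functions where A beats B (smaller error),
    R- sums the ranks of functions where B beats A."""
    diffs = [a - b for a, b in zip(results_a, results_b) if a != b]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        # extend j over the group of ties on |diff|
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank of the tie group
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    r_plus = sum(r for d, r in zip(diffs, ranks) if d < 0)   # A better
    r_minus = sum(r for d, r in zip(diffs, ranks) if d > 0)  # B better
    return r_plus, r_minus
```

The larger of R+ and R- indicates which algorithm wins more heavily weighted functions, which is the sense in which PTBO outperforms DE above even without a significant p-value.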
The above comparisons between PTBO and the other nature-inspired meta-heuristic algorithms suggest why PTBO can obtain better results on some optimization problems, and indicate that PTBO may also handle more complex problems well.

The Comparison Results of Time Complexity
The comparisons of the mean time complexity of the nine algorithms over the 28 functions are given in Figure 10.From Figure 10, we can observe that PTBO is ranked sixth, and is only better than BSO and WWO.The mean CPU time of PTBO is slightly worse than that of PSO and DE, which confirms the analysis in Section 4.1.

Conclusions
In this work, a new meta-heuristic optimization algorithm named PTBO, which simulates the phase transition of elements in a natural system, has been described.Although PTBO has some similarities to the SMS and PSO algorithms, its main concepts are different.It is simple and flexible when compared with existing nature-inspired algorithms, and it is robust, at least on the test problems considered in this work.From the numerical simulation results and comparisons, it is concluded that PTBO can be used for solving uni-modal and multimodal numerical optimization problems, and is similarly effective and efficient compared with other optimization algorithms.It is worth noting that PTBO performs slightly worse than PSO and DE in time complexity.Future research includes (1) developing a more effective division method for the three phases, (2) combining PTBO with other evolutionary algorithms, and (3) applying PTBO to real-world optimization problems, such as the reliability-redundancy allocation problem and structural engineering design optimization problems.

Figure 1 .
Figure 1.Three possible positions of elements in a system.

Figure 2 .
Figure 2. The critical intervals of the three phases.

Figure 4 .
Figure 4. Two-dimensional example showing the process of the free walking path of elements.

Figure 5 .
Figure 5.The shrinkage trend of elements towards the optimal point.

Figure 6 .
Figure 6.The vibration of elements in an equilibrium position.

The worst-case time complexity of the operations of PTBO is as follows:
1. Population initialization operation: O(N * D).
2. Stochastic operator: O(N * D).
3. Shrinkage operator: O(N * D).
4. Vibration operator: O(N * D).
Therefore, the total worst-case time complexity of PTBO in one iteration is 3 * O(N * D) + O(N).According to the operational rules of the symbol O, the worst-case time complexity of one iteration of PTBO can be simplified to O(N * D).It is worth noting that PTBO has a time complexity similar to that of popular meta-heuristic algorithms such as PSO (O(N * D)).
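The per-iteration cost can be made concrete with a small sketch. The operator body below is a placeholder, not PTBO's actual update rules: every element is updated once per dimension by whichever phase operator owns it, giving N * D elementary updates per generation.

```python
import random

def ptbo_generation_updates(N, D):
    """Count elementary updates in one hypothetical PTBO generation.
    Each element belongs to exactly one phase, and each phase operator
    performs one O(1) update per dimension, so the total is N * D."""
    population = [[random.random() for _ in range(D)] for _ in range(N)]
    updates = 0
    for element in population:
        # stand-in for the stochastic / shrinkage / vibration update
        for d in range(D):
            element[d] += 0.0
            updates += 1
    return updates
```

With N = 40 elements and D = 30 dimensions this yields 1200 updates per generation, consistent with the O(N * D) bound.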

Figure 8 .
Figure 8. Population distribution at various generations in an evolutionary process of PTBO.

Figure 9 .
Figure 9. Convergence performance of the compared algorithms on parts of functions.

Figure 10 .
Figure 10.The mean central processing unit (CPU) time of the nine compared algorithms.30D: 30 dimensions.

Table 1 .
The motional characteristics of elements in the three phases.

Table 2 .
The relationship between the intervals and the three phases.

Table 5 .
Different interval ratio settings of the three phases.

Table 6 .
Compared results of different interval ratios.

Table 7 .
The parameter settings of the compared algorithms.

Table 8 .
The results of solution accuracy for the uni-modal functions.

Table 9 .
The results of solution accuracy for the multimodal functions.

Table 10 .
The results of solution accuracy for the composition functions.

Table 11 .
The total results of solution accuracy for the 28 functions.
Bold text is the best result obtained by the compared algorithms.

Table 12 .
Wilcoxon signed-rank test of 28 functions.Bold text is the best result obtained by the compared algorithms.