Article

A New Two-Stage Algorithm for Solving Optimization Problems

1 Department of Mathematics and Computer Science, Sirjan University of Technology, Sirjan, Iran
2 Department of Electrical Engineering, Shahreza Campus, University of Isfahan, Iran
3 Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz, Iran
4 School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso 2362807, Chile
5 CROM Center for Research on Microgrids, Department of Energy Technology, Aalborg University, 9220 Aalborg, Denmark
* Author to whom correspondence should be addressed.
Entropy 2021, 23(4), 491; https://doi.org/10.3390/e23040491
Submission received: 29 March 2021 / Revised: 16 April 2021 / Accepted: 17 April 2021 / Published: 20 April 2021

Abstract

Optimization seeks inputs for an objective function that yield its maximum or minimum. Optimization methods are divided into exact methods and approximate methods (algorithms). Several optimization algorithms imitate natural phenomena, laws of physics, and the behavior of living organisms. Algorithm-based optimization is the challenge that underlies machine learning, from logistic regression to training neural networks for artificial intelligence. In this paper, a new algorithm called two-stage optimization (TSO) is proposed. The TSO algorithm updates population members in two stages at each iteration. For this purpose, a group of good population members is selected, and then two randomly chosen members of this group are used to update the position of each population member: the update is based on the first selected good member in the first stage and on the second selected good member in the second stage. We describe the stages of the TSO algorithm and model them mathematically. The performance of the TSO algorithm is evaluated on twenty-three standard objective functions. To compare the optimization results of the TSO algorithm, eight competing algorithms are considered, including genetic, gravitational search, grey wolf, marine predators, particle swarm, teaching-learning-based, tunicate swarm, and whale approaches. The numerical results show that the new algorithm is superior and more competitive in solving optimization problems when compared with the other algorithms.

1. Introduction

Optimization is the science of finding the best available solution for a problem by maximizing or minimizing the corresponding objective function. Every optimization problem has three essential elements: (i) decision variables; (ii) an objective function; and (iii) constraints. An optimization problem can have more than one solution, which is why its global optimum is called the main solution [1].
Methods to solve optimization problems may be divided into two categories: (i) exact and (ii) approximate [2]. Exact methods can find the optimum accurately, but they are not efficient enough for complex problems, since their execution times increase exponentially with the problem dimension. Approximate methods (or algorithms) can find good (near-optimal) solutions in a short time for complex problems.
There are numerous optimization problems in engineering and the sciences that can be solved with different algorithms, where population-based approaches are often considered among the most effective methods for solving such problems [3]. Note that optimization is the challenging problem that underlies many machine and statistical learning algorithms, from the logistic regression model to the training of artificial neural networks, tools which are fundamental for the development of artificial intelligence [4].
In order to optimize the objective function, population-based algorithms are able to find appropriate values for the decision variables, based on the constraints to which this function is subject, through random scanning of the problem search space [5].
Although optimization algorithms provide good solutions, they do not necessarily attain the global optimum. However, these solutions are often close to the optimum and are then accepted as quasi-optimal solutions. When evaluating the performance of approximate methods in solving optimization problems, an algorithm is superior to another if the former provides a better quasi-optimal solution than the latter.
Some researchers have focused on designing algorithms to provide quasi-optimal solutions closer to the global optimum. In this regard, diverse algorithms have been applied by engineers and scientists in various fields such as engineering [6] and energy [7] to achieve quasi-optimal solutions.
Therefore, mainly for computationally complex and challenging optimization problems, practitioners are interested in improving the computational efficiency of the algorithms used to solve them. Population-based algorithms can be enhanced for this purpose by considering two stages of updating the population members. To the best of our knowledge, this two-stage approach has not yet been considered for improving population-based algorithms.
The main objective of this paper is to propose a new algorithm called two-stage optimization (TSO). The TSO algorithm updates each population member in two stages based on a selected group of good members. Accordingly, the position of a population member is updated using two randomly selected members of this good group.
The rest of the article is organized as follows. Section 2 provides an overview of optimization algorithms published in the literature, mentioning several related works. Then, in Section 3, the proposed TSO algorithm is introduced. The performance of the new algorithm in solving optimization problems is evaluated in Section 4. We present further analysis of the results and discussion on the performance of the TSO algorithm in Section 5. Finally, conclusions and suggestions for future works are given in Section 6.

2. Literature Review

In this section, we provide an overview of optimization algorithms published in the literature.
The main purpose of these algorithms is to search the solution space of the optimization problem effectively and efficiently, applying rules and strategies to guide the search process. In population-based optimization algorithms [3], a population of random solutions is created first [5]. Then, in an iterative process, this population is improved using the rules of the algorithm. The principal idea of population-based algorithms is to update the population in successive iterations, providing better quasi-optimal solutions. An optimization algorithm may provide a reasonable solution to some problems but an inadequate one for others. Therefore, the main indicator for comparing the performance of optimization algorithms is the value of the objective function.
Optimization algorithms have been inspired by various natural phenomena, behavior of living organisms, plant growth, physical laws, and rules of the games, among others. In general, this type of algorithms can be classified into four groups including: (i) evolutionary-based, (ii) game-based, (iii) physics-based, and (iv) swarm-based approaches, which are detailed below.
Evolutionary optimization algorithms [8] were derived by taking into account genetic processes, especially reproduction. Genetic algorithms [9] are the most famous and widely used of this group; they are based on simulating the birth process and Darwin's theory of evolution. In these algorithms, population members are updated based on: (i) selection, (ii) crossover, and (iii) mutation. Differential evolution [8] was proposed to overcome a drawback of the genetic algorithm [9], namely its lack of local search. The main difference between the genetic algorithm and differential evolution lies in the selection operators: in the genetic algorithm, the chance of selecting a solution as a parent depends on the value of its objective function, whereas in differential evolution all solutions have an equal chance of being selected, independently of their objective function values. The artificial immune system evolutionary algorithm is inspired by the mechanisms of the human body and was designed by simulating the defense mechanism against diseases, microbes, and viruses [10].
Game-based algorithms [11] are developed by simulating the rules of various individual and group games with the aim of solving optimization problems. The orientation search is one of the algorithms in this group, designed by considering the rules of the orientation game, in which players move on the playground (corresponding to the search space) according to the direction indicated by the referee. Football game-based optimization is another algorithm of this type, formulated by simulating the behaviors and policies of clubs in a football league. In this algorithm, the population is updated in four phases: (i) holding the league, (ii) training the clubs, (iii) transferring the players, and (iv) relegation and promotion of the clubs [12].
Swarm-based optimization algorithms [13] are widely considered and are designed by mimicking the behaviors of animals, plants, and other living organisms, as well as other population-based phenomena [14]. One of the most famous algorithms is particle swarm optimization (PSO), which imitates the movement of birds. The population updating process in the PSO algorithm [15] is based on individual knowledge (local best) and the knowledge of the whole population (global best). Teaching-learning-based optimization (TLBO) is another algorithm in this swarm-based group, introduced following the teaching-learning process between students and a teacher [16]. Grey wolf optimization also belongs to the group of swarm intelligence algorithms and is inspired by nature. This algorithm simulates the hierarchical structure of the social behavior of grey wolves during hunting [17]. When implementing the algorithm, four types of grey wolf (alpha, beta, delta, and omega) are used to model their hierarchical leadership, with three hunting steps being executed: (i) searching for prey, (ii) besieging the prey, and (iii) attacking the prey. The marine predators (MP) algorithm is inspired by the movement strategies that marine predators use when trapping their prey [18]. In its first phase, the MP algorithm generates a random population of predators in the search space. Then, given that stronger hunters get more chances and a larger share of food, the best solution is applied. Tunicate swarm (TS) is an optimization algorithm that imitates the jet propulsion and swarm behaviors of tunicates during navigation and foraging [19]. Whale optimization (WO) is an algorithm inspired by the bubble-net hunting method of whales [20]. The WO algorithm is performed in three phases: (i) encircling the prey, (ii) bubble-net attacking, and (iii) searching for prey.
Physics-based algorithms are designed using physical laws to achieve quasi-optimal solutions [21]. One of these optimizers is the gravitational search (GS) algorithm, which was formulated by simulating the law of gravitational force between objects [22]. Simulation of Hooke's law of spring displacement was applied to design the spring search algorithm [23]. In this algorithm, population members correspond to weights connected to each other by different springs, and they are updated by moving in the search space under the forces exerted on the weights by the springs. The Henry gas solubility algorithm imitates the behavior governed by Henry's law to solve challenging optimization problems. This essential law states that, at a fixed temperature, the amount of gas dissolved in a liquid is proportional to its partial pressure above the liquid. The algorithm imitates the huddling behavior of gas to balance exploration and exploitation in the search space and avoid local optima [24].

3. The New Two-Stage Optimization Algorithm

In this section, the stages of the proposed TSO algorithm are described and then mathematically modeled to be implemented on various optimization problems.

3.1. Theory of the TSO Algorithm

In most population-based optimization algorithms, the member that provides the best value of the objective function (the best member) has a strong impact on the population update and algorithm progress. However, the position of the best member in the problem search space may not be appropriate along all axes (decision variables). This means that the best member might not be suitable for leading the population along some axes.
The main idea of the TSO algorithm for solving such an issue is to employ a selected group of good members of the population called the good group. The use of this group in population updating utilizes more information in population development to achieve a quasi-optimal solution. Each member in the TSO algorithm is updated in two stages. At each stage of this algorithm, a member of the good group is randomly selected to update the position of each population member on each axis of the search space. This population update continues iterating until the algorithm stops. Then, when the algorithm reaches the stopping condition, the best quasi-optimal solution for the problem is reported. In the next subsection, mathematical modeling of the TSO algorithm is presented.

3.2. Mathematical Modeling of the TSO Algorithm

As mentioned, the TSO algorithm is a population-based optimization technique. Each row of the population matrix belongs to a population member, which proposes values for the decision variables. Each column of this matrix also specifies values of a variable proposed by different members. Therefore, for the population matrix, the number of rows is equal to the number of members, whereas the number of columns is equal to the number of decision variables. The population matrix ( X ) of the TSO algorithm is defined as
$$
X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}
= \begin{bmatrix}
x_1^1 & \cdots & x_1^d & \cdots & x_1^m \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
x_i^1 & \cdots & x_i^d & \cdots & x_i^m \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
x_N^1 & \cdots & x_N^d & \cdots & x_N^m
\end{bmatrix}_{N \times m},
$$
where $X_i$ is the $i$-th population member, $x_i^d$ is the value suggested for the $d$-th variable by the $i$-th population member, $m$ is the number of variables, and $N$ is the number of members. After defining this matrix, the objective function is evaluated for each member according to the values proposed for the variables. By comparing the obtained values, a certain number of population members (for example, ten percent) for which quasi-optimal values of the objective function have been achieved are selected as members of the good group. This group is described using the matrix representation stated as
$$
G = \begin{bmatrix} G_1 \\ \vdots \\ G_j \\ \vdots \\ G_{N_G} \end{bmatrix}
= \begin{bmatrix}
g_1^1 & \cdots & g_1^d & \cdots & g_1^m \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
g_j^1 & \cdots & g_j^d & \cdots & g_j^m \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
g_{N_G}^1 & \cdots & g_{N_G}^d & \cdots & g_{N_G}^m
\end{bmatrix}_{N_G \times m},
$$
where $G_j$ is the $j$-th good member, $g_j^d$ is the $d$-th dimension of the $j$-th good member, and $N_G$ is the number of selected good members. The main idea of the TSO algorithm is to update the value of each variable (proposed by each member of the population) using two different members of the good group.
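As an illustration, the selection of the good group can be sketched in Python. The function name, the NumPy-based population representation, and the ten percent fraction are assumptions for this sketch (the fraction is taken from the example in the text and is a tunable choice):

```python
import numpy as np

def select_good_group(X, F, fraction=0.1):
    """Return the top `fraction` of population members by objective value.

    X: (N, m) population matrix; F: (N,) objective values (minimization).
    """
    N = X.shape[0]
    NG = max(1, int(np.ceil(fraction * N)))  # at least one good member
    order = np.argsort(F)                    # ascending: best members first
    return X[order[:NG]], F[order[:NG]]
```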
In the first stage, the position of each population member on each axis of the search space is updated with a selected good member. Thus, a good member may be selected to lead a population member on one or more axes. In addition, a good member may not be selected to lead other members on any of the axes. The first stage of the TSO algorithm for updating population members is expressed as
$$
x_i^{d,\mathrm{new}} =
\begin{cases}
x_i^d + \mathrm{rand} \times \left( g_j^d - x_i^d \right), & F_j < F_i, \\
x_i^d + \mathrm{rand} \times \left( x_i^d - g_j^d \right), & \text{otherwise};
\end{cases}
\quad j \in \{1, \ldots, N_G\}, \tag{1}
$$
$$
X_i =
\begin{cases}
X_i^{\mathrm{new}}, & F_i^{\mathrm{new}} < F_i, \\
X_i, & \text{otherwise},
\end{cases} \tag{2}
$$
where $x_i^{d,\mathrm{new}}$ is the new position of the $i$-th member in the $d$-th dimension, $\mathrm{rand}$ is a random number in the interval $[0, 1]$, $F_i$ is the value of the objective function for the $i$-th population member, $F_j$ is the objective function value of the $j$-th good member, $X_i^{\mathrm{new}}$ is the new position of the $i$-th member, and $F_i^{\mathrm{new}}$ is its corresponding objective function value. Equation (2) indicates that a member is updated only if the value of the objective function improves in the new position.
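A minimal Python sketch of this first-stage update for a single member follows (the second stage is analogous, with a good member different from the one used in the first stage). All names here are illustrative assumptions, not the authors' reference code:

```python
import numpy as np

def stage_update(x_i, F_i, good_X, good_F, objective, rng):
    """One update stage for member x_i, following Equations (1)-(2).

    For each dimension d, a good member is drawn at random; x_i moves
    toward it when it has a better objective value and away otherwise.
    The move is kept only if the objective improves.
    """
    m = x_i.size
    x_new = x_i.copy()
    for d in range(m):
        j = rng.integers(len(good_X))  # random good member for this axis
        g = good_X[j, d]
        if good_F[j] < F_i:
            x_new[d] = x_i[d] + rng.random() * (g - x_i[d])
        else:
            x_new[d] = x_i[d] + rng.random() * (x_i[d] - g)
    F_new = objective(x_new)
    return (x_new, F_new) if F_new < F_i else (x_i, F_i)
```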
In the second stage, the position of each member, on each axis of the search space, is updated again based on a non-repetitive good member. This means that the position of each member, on each axis, is affected by two different members of the good group. This stage of the TSO algorithm in updating population members is defined as
$$
x_i^{d,\mathrm{new}} =
\begin{cases}
x_i^d + \mathrm{rand} \times \left( g_k^d - x_i^d \right), & F_k < F_i, \\
x_i^d + \mathrm{rand} \times \left( x_i^d - g_k^d \right), & \text{otherwise};
\end{cases}
\quad k \in \{1, \ldots, N_G\}, \; k \neq j, \tag{3}
$$
$$
X_i =
\begin{cases}
X_i^{\mathrm{new}}, & F_i^{\mathrm{new}} < F_i, \\
X_i, & \text{otherwise}.
\end{cases} \tag{4}
$$
After updating the population based on the mentioned two stages, new members of the good group are selected. This process is repeated until the algorithm reaches the condition of stopping. The implementation process of the TSO algorithm is presented as a pseudo-code in Algorithm 1. Furthermore, the steps of the TSO algorithm are shown as a flowchart in Figure 1.
Algorithm 1 Pseudo-code of the TSO approach.
Start the TSO algorithm.
1. Determine the range of decision variables, the constraints, and the objective function of the problem.
2. Create the initial population at random.
3. Evaluate the objective function based on the initial population.
4.  For t = 1:T, with t being iteration number and T the maximum iteration:
5.  Update the good group.
6.      For i = 1:N, with N being the number of population members:
7.          For d = 1:m, with d being the dimension counter and m the number of variables:
8.              Select the j’-th good member.
9.              Stage 1: Update x i d based on (1).
10.              End for d = 1:m.
11.          Update X i based on (2).
12.          For d = 1:m:
13.              Select the k’-th good member, with kj.
14.              Stage 2: Update x i d based on (3).
15.          End for d = 1:m.
16.          Update X i based on (4).
17.      End for i = 1:N.
18.  Save the best quasi-optimal solution.
19.  End for t = 1:T.
20.  Print the best quasi-optimal solution obtained by the TSO algorithm.
End the TSO algorithm.
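The pseudo-code above can be sketched in Python as follows. This is a minimal illustrative implementation, not the authors' reference code: the default population size, good-group fraction, and bound handling via clipping are assumptions of this sketch.

```python
import numpy as np

def tso(objective, m, lb, ub, N=30, T=200, good_frac=0.1, seed=0):
    """Minimal sketch of the two-stage optimization (TSO) loop (Algorithm 1)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(N, m))          # step 2: random population
    F = np.array([objective(x) for x in X])       # step 3: evaluate
    NG = max(2, int(np.ceil(good_frac * N)))      # >= 2 so stage 2 can pick k != j
    best_x, best_F = X[F.argmin()].copy(), F.min()
    for _ in range(T):                            # step 4: main loop
        order = np.argsort(F)                     # step 5: update the good group
        G, FG = X[order[:NG]].copy(), F[order[:NG]].copy()
        for i in range(N):                        # step 6: each member
            prev = np.empty(m, dtype=int)         # stage-1 choice per dimension
            for stage in (1, 2):
                x_new = X[i].copy()
                for d in range(m):                # steps 7-9 / 12-14
                    idx = rng.integers(NG)
                    if stage == 2:                # enforce k != j, Eq. (3)
                        while idx == prev[d]:
                            idx = rng.integers(NG)
                    else:
                        prev[d] = idx
                    if FG[idx] < F[i]:
                        x_new[d] = X[i][d] + rng.random() * (G[idx, d] - X[i][d])
                    else:
                        x_new[d] = X[i][d] + rng.random() * (X[i][d] - G[idx, d])
                x_new = np.clip(x_new, lb, ub)    # bound handling: a design choice
                F_new = objective(x_new)
                if F_new < F[i]:                  # Eqs. (2)/(4): keep improvements
                    X[i], F[i] = x_new, F_new
        if F.min() < best_F:                      # step 18: save the best so far
            best_F, best_x = F.min(), X[F.argmin()].copy()
    return best_x, best_F
```

For example, `tso(lambda x: float((x**2).sum()), m=2, lb=-100.0, ub=100.0)` minimizes the sphere function over $[-100, 100]^2$ and should converge toward the origin.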

4. Simulation Study and Results

In this section, the performance of the TSO algorithm in solving optimization problems is evaluated. For this purpose, the algorithm has been implemented on twenty-three different objective functions to achieve suitable quasi-optimal solutions. These objective functions can be categorized into three different types: (i) unimodal, (ii) high-dimension multimodal, and (iii) fixed-dimension multimodal functions. Detailed information on these objective functions is given in Appendix A (Table A1, Table A2, and Table A3).

4.1. Experimental Setup

In order to analyze the performance of our proposal, the results obtained by the TSO algorithm are compared, as mentioned, with three classes of existing optimization algorithms: (i) the genetic algorithm (GA) and particle swarm optimization (PSO), as the most well-studied algorithms (famous methods); (ii) the gravitational search algorithm (GSA), grey wolf optimization (GWO), and teaching-learning-based optimization (TLBO), as algorithms cited by many scientists (popular methods); and (iii) the marine predators algorithm (MPA), tunicate swarm algorithm (TSA), and whale optimization algorithm (WOA), as recently developed algorithms (new methods). The experimentation has been done in MATLAB (R2017b version, MathWorks, Natick, MA, USA) using a 64-bit Core i7 processor of 3.20 GHz and 16 GB main memory. For all objective functions, the TSO algorithm and its competing algorithms have been simulated in 20 independent runs, each with 1000 iterations. The optimal solutions of the objective functions are evaluated using the two most important indexes for comparing the performance of algorithms in solving optimization problems: the average (AV) and the standard deviation (SD) of the best obtained solutions, where the SD reports the dispersion of these solutions. When analyzing the performance of the optimization algorithms with the results presented in Table 1, Table 2 and Table 3, the AV index is considered first; if two algorithms have a similar AV, then the algorithm with less dispersion is superior.
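The AV and SD indexes, and the tie-breaking rule just described, can be sketched as follows. The function names are illustrative, and the sample standard deviation is one reasonable convention for summarizing 20 runs:

```python
import numpy as np

def summarize_runs(best_values):
    """Average (AV) and standard deviation (SD) of best solutions over runs."""
    best_values = np.asarray(best_values, dtype=float)
    return best_values.mean(), best_values.std(ddof=1)  # sample SD

def better(stats_a, stats_b, tol=1e-12):
    """Compare two (AV, SD) pairs: lower AV wins; for near-equal AV, lower SD wins."""
    (av_a, sd_a), (av_b, sd_b) = stats_a, stats_b
    if abs(av_a - av_b) > tol:
        return av_a < av_b
    return sd_a < sd_b
```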

4.2. Evaluation for Unimodal Objective Functions

The objective functions F1 to F7 are unimodal. The optimization results of the TSO algorithm and other mentioned algorithms for these objective functions are presented in Table 1. For all of these functions, the TSO algorithm performs better than the other eight algorithms. Note that the proposed algorithm provides exactly the global optimal solution for F6. In addition, for other functions, the TSO algorithm provides a solution very close to the global optimum, especially for F1 and F2. These results show that the new proposed algorithm has a good efficiency in achieving a suitable quasi-optimal solution for this type of objective functions.

4.3. Evaluation for High-Dimensional Multimodal Objective Functions

Six objective functions, F8 to F13, are selected from the high-dimension multimodal functions. Table 2 reports the results of optimizing these functions using the TSO algorithm and the other algorithms. Note that the new algorithm performs better on all of F8 to F13. In particular, for F9 and F11, the TSO algorithm achieves the global optimal solution. An overview of the results in Table 2 shows that the proposed algorithm is able to solve this type of optimization problem more effectively than the other algorithms.

4.4. Evaluation for Fixed-Dimensional Multimodal Objective Functions

The functions F14 to F23 are used to evaluate the performance of the TSO algorithm and other algorithms for multimodal functions. The results are reported in Table 3. Notice that the new algorithm provides suitable quasi-optimal solutions for this type of functions. Although the MP algorithm also performs well, it is not competitive with the TSO algorithm for F15, F17, and F20. Thus, the new algorithm is more efficient than the other eight algorithms in optimizing this type of objective functions.
The AV and SD of the optimal solutions of the objective functions using the proposed TSO algorithm and the eight other optimization algorithms are presented in Table 1, Table 2 and Table 3. However, since these objective functions are associated with many local minima, in order to better understand the results, logarithmic-scale plots of the optimal solutions for each algorithm and function are shown in Figure 2.
As mentioned, in order to evaluate the performance of optimization algorithms, objective functions of three different types have been selected. The objective functions F1 to F7 of the unimodal type have no local optimum, and the global optimum solution for these functions is zero. Based on the plots of F1 to F7, the TSO algorithm provides the best performance among the optimization algorithms. The GA algorithm is the worst optimizer for F1, F2, F3, and F5. The PSO algorithm is not a good optimizer for F4, F6, and F7. Note that the objective functions F8 to F13 are high-dimension multimodal type with local optimal solutions. Considering the plots drawn for these objective functions in Figure 2, it is clear that the TSO algorithm has good performance in solving these types of optimization problems. The distributions of quasi-optimal solutions in the TSO algorithm are very close to each other and therefore have very low SD. The objective functions F14 to F23 are fixed-dimension multimodal type with local optimal solutions. The superiority of the TSO algorithm in providing quasi-optimal solutions with low SD is evident in Figure 2 for F14, F15, F20, F21, F22, and F23. As reported in Table 3, the TSO algorithm and other eight algorithms provide similar performance in optimizing the objective functions F16, F17, F18, and F19. Thus, it is expected that the plots of these functions are similar and practically with no difference to each other.
Based on the analysis of numerical results in Table 1, Table 2 and Table 3 and the plots presented in Figure 2, it is evident that the TSO algorithm is able to provide suitable quasi-optimal solutions with low SD in various problems.

4.5. Statistical Testing

Comparison of the performance of the optimization algorithms in providing quasi-optimal solutions based on AV and SD gives us relevant information. However, considering only these results is not enough to guarantee the superiority of an algorithm, because even after twenty independent runs for each algorithm, the superiority of one over another may still occur by chance, albeit with very low probability.
Therefore, in order to prove the non-random superiority of the TSO algorithm, a statistical test on the performance of the algorithms must be considered. In this paper, the Friedman rank test [25] (pp. 262–274) is applied for the statistical analysis of the optimization results and the performance of the algorithms. The results of this test for the TSO algorithm and the eight other algorithms are reported in Table 4. According to this table, the TSO algorithm ranks first in optimizing unimodal objective functions, with the TSA algorithm ranking second for this type of function. The proposed algorithm also ranks first among the eight other algorithms in optimizing high-dimension multimodal objective functions, where the GWO algorithm ranks second. The proposed algorithm likewise achieves the best performance when optimizing fixed-dimension multimodal objective functions, with the MP algorithm in second position. In addition, based on a general analysis of the results reported in Table 4 for all twenty-three objective functions, the TSO algorithm achieves the best performance among the mentioned optimization algorithms and holds the first position. These results confirm the superiority of the TSO algorithm over the other eight algorithms and indicate that this superiority is not a product of randomness.
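The mean Friedman ranks underlying such a table can be sketched as follows. This is an illustrative NumPy implementation of the ranking step only, with average ranks assigned to ties; the test statistic and p-value computation are omitted:

```python
import numpy as np

def friedman_ranks(results):
    """Mean Friedman ranks of algorithms across benchmark functions.

    results: (n_functions, n_algorithms) array of objective values
    (lower is better). Ties receive the average of the tied ranks.
    Returns the mean rank per algorithm; rank 1 is best.
    """
    results = np.asarray(results, dtype=float)
    n_func, n_alg = results.shape
    ranks = np.empty_like(results)
    for f in range(n_func):
        row = results[f]
        r = np.empty(n_alg)
        r[np.argsort(row)] = np.arange(1, n_alg + 1)  # provisional ranks
        for v in np.unique(row):                      # average ranks over ties
            mask = row == v
            r[mask] = r[mask].mean()
        ranks[f] = r
    return ranks.mean(axis=0)
```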

5. Discussion

Exploitation and exploration capabilities are two important indicators for evaluating the performance of algorithms in providing quasi-optimal solutions [26]. Exploitation power means the ability of an algorithm to achieve a suitable quasi-optimal solution. In fact, by the end of its iterations, an algorithm must provide the best quasi-optimal solution found so far; the closer this quasi-optimal solution is to the global solution, the higher the exploitation power of the algorithm. Exploration power indicates the ability of an optimization algorithm to accurately scan different areas of the search space. Thus, an algorithm that scans the search space more accurately over all iterations can provide a quasi-optimal solution close to the global solution without getting stuck in local solutions. An important point is to maintain a balance between these two indicators: in the first iterations, priority is given to the exploration index so that the search space is examined well; then, as the number of iterations increases, priority shifts to the exploitation index in order to achieve the best quasi-optimal solution.
The new TSO algorithm, with a suitable number of members, has the potential to accurately scan the search space. Guiding the population members in this space under the influence of several good members causes the population to move to different areas of the space [27]. This increases the ability of the TSO algorithm to accurately scan the search space, which indicates the reasonable exploration power of this algorithm. In addition, as the number of iterations increases, the population members move towards the good members, and as the algorithm reaches its final iterations, the population converges and concentrates near the optimal solution. This demonstrates the suitable exploitation power of our TSO algorithm in providing an appropriate quasi-optimal solution.
The analyzed unimodal objective functions have one global optimal solution and no local optimal solutions, so these functions are suitable for evaluating the exploitation index. The optimization results for these objective functions presented in Table 1 indicate that the TSO algorithm has an acceptable ability to provide a quasi-optimal solution close to the global solution and has a much higher exploitation power than the other algorithms.
The studied high-dimension and fixed-dimension multimodal functions have several local optimal solutions in addition to the global optimal solution. Therefore, these types of objective functions are suitable for evaluating the exploration index. Based on the results reported in Table 2 and Table 3, the TSO algorithm, with the desired exploration power, was able to provide appropriate quasi-optimal solutions. This shows that the TSO algorithm has a reasonable ability to accurately scan the search space and therefore a higher exploration power than the other eight optimization algorithms.
The statistical results of the Friedman rank test presented in Table 4 confirmed that the superiority of the TSO algorithm over the other eight algorithms analyzed in the exploitation and exploration indexes is not random.

6. Conclusions and Future Works

Certain algorithms are able to provide a solution for optimization problems that is not necessarily the global solution but may be close to it. In this paper, a two-stage algorithm was introduced to solve optimization problems. The main idea of this algorithm, abbreviated as TSO, is to update the population based on a selected group of its good members. For this purpose, several good members are utilized to lead each population member along all axes of the search space, instead of using only the best member. Therefore, the position of each member along each axis of the search space is updated in two stages and under the influence of two different good members. The main features of the TSO algorithm are the simplicity of its relationships and implementation, as well as its lack of control parameters, which removes the need for parameter tuning.
The stages of the TSO algorithm were described and then mathematically modeled for solving optimization problems. The performance of the proposed algorithm was evaluated on a set of twenty-three objective functions of three different types: unimodal, high-dimension multimodal, and fixed-dimension multimodal functions. The results of this evaluation were compared with the performance of the genetic, gravitational search, grey wolf, marine predators, particle swarm, teaching-learning-based, tunicate swarm, and whale algorithms in optimizing these objective functions [28]. A comparison of the simulation results for the unimodal case, which is suitable for evaluating the exploitation index due to having a single optimal solution, demonstrated the obvious superiority of the TSO algorithm over the other eight algorithms. Considering the performance of the proposed and other algorithms on both groups of multimodal objective functions, it was shown that the TSO algorithm has higher exploration power and is superior to the other algorithms in optimizing this type of objective function. Furthermore, the Friedman rank test was applied to further analyze the performance of the TSO algorithm and the other algorithms. Based on the results of this statistical analysis, it was found that the proposed algorithm ranks first among the studied algorithms and that its superiority in optimizing the objective functions is not random. Therefore, a general analysis of the optimization and statistical results confirmed the superiority of the TSO algorithm, making it more competitive than the other eight analyzed algorithms.
Some ideas and perspectives for future research that arise from the present investigation are the following: (i) the design of binary and multi-objective versions of the TSO algorithm has interesting potential; (ii) applying the TSO algorithm to various theoretical and real-world optimization problems could yield significant contributions [29]; and (iii) there is a promising area of application in machine, deep, and statistical learning, for instance, in image compression [5]. These and other aspects for further research are being studied by the authors, and we hope to publish the findings in future works.

Author Contributions

Methodology, S.A.D.; software, Z.M., M.D.; validation, V.L., H.G.; formal analysis, H.G., V.L., J.M.G.; investigation, V.L.; data curation, V.L., J.M.G.; writing—original draft preparation, S.A.D., M.D., Z.M., H.G.; writing—review and editing, V.L., J.M.G.; supervision, M.D. All authors have read and agreed to the published version of the manuscript.

Funding

The research of V. Leiva was partially supported by grant FONDECYT 1200525 from the National Agency for Research and Development (ANID) of the Chilean government under the Ministry of Science, Technology, Knowledge and Innovation.

Data Availability Statement

The authors declare that they honor the Principles of Transparency and Best Practice in Scholarly Publishing regarding data.

Acknowledgments

The authors would also like to thank the Editor and Reviewers for their constructive comments, which helped improve the presentation of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Mathematical details of the twenty-three objective functions used for obtaining the results in Table A1, Table A2 and Table A3.
Table A1. Unimodal objective functions and their variables' interval.

$F_1(x) = \sum_{i=1}^{m} x_i^2$, with $x \in [-100, 100]^m$

$F_2(x) = \sum_{i=1}^{m} |x_i| + \prod_{i=1}^{m} |x_i|$, with $x \in [-10, 10]^m$

$F_3(x) = \sum_{i=1}^{m} \left( \sum_{j=1}^{i} x_j \right)^2$, with $x \in [-100, 100]^m$

$F_4(x) = \max \{ |x_i|, \; 1 \le i \le m \}$, with $x \in [-100, 100]^m$

$F_5(x) = \sum_{i=1}^{m-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$, with $x \in [-30, 30]^m$

$F_6(x) = \sum_{i=1}^{m} \lfloor x_i + 0.5 \rfloor^2$, with $x \in [-100, 100]^m$

$F_7(x) = \sum_{i=1}^{m} i x_i^4 + \mathrm{rand}(0, 1)$, with $x \in [-1.28, 1.28]^m$
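The unimodal benchmarks of Table A1 translate directly into NumPy; the following sketch is a straightforward transcription (note that $F_7$ adds uniform noise, so repeated calls on the same point differ):

```python
import numpy as np

def f1(x):  # sphere
    return np.sum(x**2)

def f2(x):  # sum plus product of absolute values
    a = np.abs(x)
    return np.sum(a) + np.prod(a)

def f3(x):  # sum of squared prefix sums
    return np.sum(np.cumsum(x)**2)

def f4(x):  # largest absolute coordinate
    return np.max(np.abs(x))

def f5(x):  # Rosenbrock valley
    return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1)**2)

def f6(x):  # step function: squared rounded coordinates
    return np.sum(np.floor(x + 0.5)**2)

def f7(x, rng=np.random):  # weighted quartic plus uniform noise
    i = np.arange(1, x.size + 1)
    return np.sum(i * x**4) + rng.uniform(0, 1)
```

All seven attain their minimum value 0 at the origin except $F_5$, whose minimum is at $(1, \ldots, 1)$, and $F_7$, whose noise term keeps the observed value in $[0, 1)$ at the optimum.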
Table A2. High-dimension multimodal objective functions and their variables' interval.

$F_8(x) = -\sum_{i=1}^{m} x_i \sin\left( \sqrt{|x_i|} \right)$, with $x \in [-500, 500]^m$

$F_9(x) = \sum_{i=1}^{m} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$, with $x \in [-5.12, 5.12]^m$

$F_{10}(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{m} \sum_{i=1}^{m} x_i^2} \right) - \exp\left( \frac{1}{m} \sum_{i=1}^{m} \cos(2\pi x_i) \right) + 20 + e$, with $x \in [-32, 32]^m$

$F_{11}(x) = \frac{1}{4000} \sum_{i=1}^{m} x_i^2 - \prod_{i=1}^{m} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$, with $x \in [-600, 600]^m$

$F_{12}(x) = \frac{\pi}{m} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{m-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_m - 1)^2 \right\} + \sum_{i=1}^{m} u(x_i, 10, 100, 4)$, with $x \in [-50, 50]^m$,
where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, n) = k (x_i - a)^n$ if $x_i > a$; $0$ if $-a \le x_i \le a$; and $k (-x_i - a)^n$ if $x_i < -a$

$F_{13}(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{m-1} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_{i+1}) \right] + (x_m - 1)^2 \left[ 1 + \sin^2(2\pi x_m) \right] \right\} + \sum_{i=1}^{m} u(x_i, 5, 100, 4)$, with $x \in [-50, 50]^m$
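Several of the high-dimension multimodal functions in Table A2 can likewise be transcribed into NumPy; the sketch below covers $F_8$ through $F_{11}$ (the penalized functions $F_{12}$ and $F_{13}$ are omitted here for brevity):

```python
import numpy as np

def f8(x):  # Schwefel: minimum near x_i = 420.97 in each coordinate
    return -np.sum(x * np.sin(np.sqrt(np.abs(x))))

def f9(x):  # Rastrigin: many regularly spaced local minima
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

def f10(x):  # Ackley: nearly flat outer region, deep central funnel
    m = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / m))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / m) + 20 + np.e)

def f11(x):  # Griewank: quadratic bowl modulated by a cosine product
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1
```

$F_9$, $F_{10}$, and $F_{11}$ all evaluate to 0 at the origin, which is their global minimum.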
Table A3. Fixed-dimension multimodal test functions and their variables' interval.

$F_{14}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$, with $x \in [-65.53, 65.53]^2$

$F_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$, with $x \in [-5, 5]^4$

$F_{16}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$, with $x \in [-5, 5]^2$

$F_{17}(x) = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8\pi} \right) \cos x_1 + 10$, with $x_1 \in [-5, 10]$, $x_2 \in [0, 15]$

$F_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \right]$, with $x \in [-5, 5]^2$

$F_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - P_{ij})^2 \right)$, with $x \in [0, 1]^3$

$F_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - P_{ij})^2 \right)$, with $x \in [0, 1]^6$

$F_{21}(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$, with $x \in [0, 10]^4$

$F_{22}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$, with $x \in [0, 10]^4$

$F_{23}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$, with $x \in [0, 10]^4$
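Most fixed-dimension functions in Table A3 depend on coefficient matrices ($a_{ij}$, $b_i$, $c_i$, $P_{ij}$) that are not listed here, but $F_{16}$ and $F_{17}$ are fully specified by their formulas and can be coded directly:

```python
import numpy as np

def f16(x1, x2):  # six-hump camel back, global minimum about -1.0316
    return (4 * x1**2 - 2.1 * x1**4 + x1**6 / 3
            + x1 * x2 - 4 * x2**2 + 4 * x2**4)

def f17(x1, x2):  # Branin, global minimum about 0.3979
    return ((x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5 / np.pi * x1 - 6)**2
            + 10 * (1 - 1 / (8 * np.pi)) * np.cos(x1) + 10)
```

Evaluating near the known optimizers, $(0.0898, -0.7126)$ for $F_{16}$ and $(\pi, 2.275)$ for $F_{17}$, recovers the optimal values reported in Table 3.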

References

1. Zelinka, I.; Snasael, V.; Abraham, A. Handbook of Optimization: From Classical to Modern Approach; Springer: New York, NY, USA, 2012.
2. Beck, A.; Stoica, P.; Li, J. Exact and approximate solutions of source localization problems. IEEE Trans. Signal Process. 2008, 56, 1770–1778.
3. Beheshti, Z.; Shamsuddin, S.M.H. A review of population-based meta-heuristic algorithms. Int. J. Adv. Soft Comput. Appl. 2013, 5, 1–35.
4. Palacios, C.A.; Reyes-Suarez, J.A.; Bearzotti, L.A.; Leiva, V.; Marchant, C. Knowledge discovery for higher education student retention based on data mining: Machine learning algorithms and case study in Chile. Entropy 2021, 23, 485.
5. Kawamoto, K.; Hirota, K. Random scanning algorithm for tracking curves in binary image sequences. Int. J. Intell. Comput. Med. Sci. Image Process. 2008, 2, 101–110.
6. Dhiman, G. SSC: A hybrid nature-inspired meta-heuristic optimization algorithm for engineering applications. Knowl.-Based Syst. 2021, 222, 106926.
7. Dehghani, M.; Mardaneh, M.; Malik, O.P.; Guerrero, J.M.; Sotelo, C.; Sotelo, D.; Nazari-Heris, M.; Al-Haddad, K.; Ramirez-Mendoza, R.A. Genetic algorithm for energy commitment in a power system supplied by multiple energy carriers. Sustainability 2020, 12, 10053.
8. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2011, 15, 4–31.
9. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73.
10. Hofmeyr, S.A.; Forrest, S. Architecture for an artificial immune system. Evol. Comput. 2000, 8, 443–473.
11. Singh, P.R.; Abd Elaziz, M.; Xiong, S. Ludo game-based metaheuristics for global and engineering optimization. Appl. Soft Comput. 2019, 84, 105723.
12. Ji, Z.; Xiao, W. Improving decision-making efficiency of image game based on deep Q-learning. Soft Comput. 2020, 24, 8313–8322.
13. Derrac, J.; Garcia, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
14. Ramirez-Figueroa, J.A.; Martin-Barreiro, C.; Nieto, A.B.; Leiva, V.; Galindo, M.P. A new principal component analysis by particle swarm optimization with an environmental application for data science. Stoch. Environ. Res. Risk Assess. 2021.
15. Cheng, Z.; Song, H.; Wang, J.; Zhang, H.; Chang, T.; Zhang, M. Hybrid firefly algorithm with grouping attraction for constrained optimization problem. Knowl. Based Syst. 2021, 220, 106937.
16. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315.
17. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
18. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine predators algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
19. Kaur, S.; Awasthi, L.K.; Sangal, A.; Dhiman, G. Tunicate swarm algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541.
20. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
21. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248.
22. Montazeri, Z.; Niknam, T. Optimal utilization of electrical energy from power plants based on final energy consumption using gravitational search algorithm. Electr. Eng. Electromech. 2018, 4, 70–73.
23. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Malik, O.P.; Morales-Menendez, R.; Dhiman, G.; Nouri, N.; Ehsanifar, A.; Guerrero, J.M.; Ramirez-Mendoza, R.A. Binary spring search algorithm for solving various optimization problems. Appl. Sci. 2021, 11, 1286.
24. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667.
25. Daniel, W.W. Applied Nonparametric Statistics; PWS-Kent Publisher: Boston, MA, USA, 1990.
26. Korosec, P.; Eftimov, T. Insights into exploration and exploitation power of optimization algorithm using DSCTool. Mathematics 2020, 8, 1474.
27. Givi, H.; Dehghani, M.; Montazeri, Z.; Morales-Menendez, R.; Ramirez-Mendoza, R.A.; Nouri, N. GBUO: “The good, the bad, and the ugly” optimizer. Appl. Sci. 2021, 11, 2042.
28. Kang, H.; Bei, F.; Shen, Y.; Sun, X.; Chen, Q. A diversity model based on dimension entropy and its application to swarm intelligence algorithm. Entropy 2021, 23, 397.
29. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
Figure 1. Flowchart of the TSO algorithm.
Figure 2. Plots of the objective function averages, with the y-axis on a logarithmic scale, for the indicated algorithms and functions.
Table 1. Results of applying the indicated algorithm on the listed unimodal objective function.
Genetic | PSO | GS | TLBO | GWO | WO | TS | MP | TSO
F1AV13.24051.7740 × 10−52.0255 × 10−178.3373 × 10−601.09 × 10−582.1741 × 10−97.71 × 10−383.2715 × 10−211.2 × 10−163
SD4.7664 × 10−156.4396 × 10−211.1369 × 10−324.9436 × 10−765.1413 × 10−747.3985 × 10−257.00 × 10−214.6153 × 10−212.65 × 10−180
F2AV2.47940.34112.3702 × 10−87.1704 × 10−351.2952 × 10−340.54628.48 × 10−391.57 × 10−122.29 × 10−86
SD2.2342 × 10−157.4476 × 10−175.1789 × 10−246.6936 × 10−501.9127 × 10−501.7377 × 10−165.92 × 10−411.42 × 10−121.05 × 10−99
F3AV1536.896589.492279.34392.7531 × 10−157.4091 × 10−151.7634 × 10−81.15 × 10−210.08645.83 × 10−70
SD6.6095 × 10−137.1179 × 10−131.2075 × 10−132.6459 × 10−315.6446 × 10−301.0357 × 10−236.70 × 10−210.14444.06 × 10−77
F4AV2.09423.96343.2547 × 10−99.4199 × 10−151.2599 × 10−142.9009 × 10−51.33 × 10−232.6 × 10−81.91 × 10−70
SD2.2342 × 10−151.9860 × 10−162.0346 × 10−242.1167 × 10−301.0583 × 10−291.2121 × 10−201.15 × 10−229.25 × 10−94.56 × 10−83
F5AV310.427350.2624536.10695146.456436.860741.776728.861546.04928.4397
SD2.0972 × 10−131.5888 × 10−143.0982 × 10−141.9065 × 10−142.6514 × 10−142.5421 × 10−244.76 × 10−30.42191.83 × 10−15
F6AV14.5520.2500.44350.64231.6085 × 10−97.10 × 10−210.3980
SD3.1776 × 10−151.256404.2203 × 10−166.2063 × 10−174.6240 × 10−251.12 × 10−250.19140
F7AV5.6799 × 10−30.11340.02060.00170.00080.02053.72 × 10−40.00182.75 × 10−5
SD7.7579 × 10−194.3444 × 10−172.7152 × 10−183.87896 × 10−197.2730 × 10−201.5515 × 10−185.09 × 10−50.0018.49 × 10−20
AV denotes average and SD denotes standard deviation.
Table 2. Results of applying the indicated algorithm on the listed high-dimension multimodal objective function.
Genetic | PSO | GS | TLBO | GWO | WO | TS | MP | TSO
F8AV−8184.4142−6908.6558−2849.0724−7408.6107−5885.1172−1663.9782−5740.3388−3594.16321−12536.9
SD833.2165625.6248264.3516513.5784467.5138716.349241.5811.32651.30 × 10−11
F9AV62.411457.061316.267510.24858.5265 × 10−154.20115.70 × 10−3140.12380
SD2.5421 × 10−146.3552 × 10−153.1776 × 10−155.5608 × 10−155.6446 × 10−304.3692 × 10−151.46 × 10−326.31240
F10AV3.22182.15463.5673 × 10−90.27571.7053 × 10−140.32939.80 × 10−149.6987 × 10−124.44 × 10−15
SD5.1636 × 10−157.9441 × 10−163.6992 × 10−252.5641 × 10−152.7517 × 10−291.9860 × 10−164.51 × 10−126.1325 × 10−127.06 × 10−31
F11AV1.23020.04623.73750.60820.00370.11891.00 × 10−700
SD8.4406 × 10−163.1031 × 10−182.7804 × 10−151.9860 × 10−161.2606 × 10−188.9991 × 10−177.46 × 10−700
F12AV0.0470.48060.03620.02030.03721.74140.03680.08517.42 × 10−4
SD4.6547 × 10−181.8619 × 10−166.2063 × 10−187.7579 × 10−194.3444 × 10−178.1347 × 10−121.5461 × 10−20.00521.75 × 10−18
F13AV1.20850.50840.0020.32930.57630.34562.95750.49011.08 × 10−4
SD3.2272 × 10−164.9650 × 10−174.2617 × 10−142.1101 × 10−162.4825 × 10−163.25391 × 10−121.5682 × 10−120.19323.41 × 10−17
AV denotes average and SD denotes standard deviation.
Table 3. Results of applying the indicated algorithm on the listed fixed-dimension multimodal objective function.
Genetic | PSO | GS | TLBO | GWO | WO | TS | MP | TSO
F14AV0.99862.17353.59132.27213.74080.9981.99230.9980.998
SD1.5640 × 10−157.9441 × 10−167.9441 × 10−161.9860 × 10−166.4545 × 10−159.4336 × 10−162.6548 × 10−74.2735 × 10−168.69 × 10−16
F15AV5.3952 × 10−20.05350.00240.00330.00630.00490.00040.0030.0003
SD7.0791 × 10−183.8789 × 10−192.9092 × 10−191.2218 × 10−171.1636 × 10−183.4910 × 10−189.0125 × 10−44.0951 × 10−151.82 × 10−19
F16AV−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
SD7.9441 × 10−163.4755 × 10−165.9580 × 10−161.4398 × 10−153.9720 × 10−169.9301 × 10−162.6514 × 10−164.4652 × 10−168.65 × 10−17
F17AV0.43690.78540.39780.39780.39780.40470.39910.39790.3978
SD4.9650 × 10−174.9650 × 10−179.9301 × 10−177.4476 × 10−178.6888 × 10−172.4825 × 10−172.1596 × 10−169.1235 × 10−159.93 × 10−17
F18AV4.3592333.000933333
SD5.9580 × 10−163.6741 × 10−156.9511 × 10−161.5888 × 10−152.0853 × 10−155.6984 × 10−152.6528 × 10−151.9584 × 10−154.97 × 10−16
F19AV−3.85434−3.8627−3.8627−3.8609−3.8621−3.8627−3.8066−3.8627−3.8627
SD9.9301 × 10−178.9371 × 10−158.3413 × 10−157.3483 × 10−152.4825 × 10−153.1916 × 10−152.6357 × 10−154.2428 × 10−156.95 × 10−16
F20AV−2.8239−3.2619−3.0396−3.2014−3.2523−3.2424−3.3206−3.3211−3.3219
SD3.97205 × 10−162.9790 × 10−162.1846 × 10−141.7874 × 10−152.1846 × 10−157.9441 × 10−165.6918 × 10−151.1421 × 10−111.89 × 10−15
F21AV−4.3040−5.3891−5.1486−9.1746−9.6452−7.4016−5.5021−10.1532−10.1532
SD1.5888 × 10−151.4895 × 10−152.9790 × 10−168.5399 × 10−156.5538 × 10−152.3819 × 10−115.4615 × 10−132.5361 × 10−115.96 × 10−16
F22AV−5.1174−7.6323−9.0239−10.0389−10.4025−8.8165−5.0625−10.4029−10.4029
SD1.2909 × 10−151.5888 × 10−151.6484 × 10−121.5292 × 10−141.9860 × 10−156.7524 × 10−158.4637 × 10−142.8154 × 10−111.79 × 10−15
F23AV−6.5621−6.1648−8.9045−9.2905−10.1302−10.0003−10.3613−10.5364−10.5364
SD3.8727 × 10−152.7804 × 10−157.1497 × 10−141.1916 × 10−154.5678 × 10−159.1357 × 10−157.6492 × 10−123.9861 × 10−119.33 × 10−16
Table 4. Results of the Friedman rank test for evaluating the indicated algorithm and type of objective function.

Algorithm:                TSO   MP   TS   WO  GWO  TLBO   GS  PSO  Genetic
Unimodal (F1–F7)
  Friedman value:           7   37   16   42   27    28   37   56       57
  Friedman rank:            1    5    2    6    3     4    5    7        8
High-dimension multimodal (F8–F13)
  Friedman value:           6   33   27   38   24    25   32   37       40
  Friedman rank:            1    6    4    8    2     3    5    7        9
Fixed-dimension multimodal (F14–F23)
  Friedman value:          10   15   33   33   31    35   38   45       55
  Friedman rank:            1    2    4    4    3     5    6    7        8
All 23 functions
  Friedman value:          23   85   76  113   82    88  107  138      152
  Friedman rank:            1    4    2    7    3     5    6    8        9
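The Friedman statistics in Table 4 are built from per-function ranks of the algorithms. The mechanics can be sketched in NumPy as follows; the score matrix here is invented purely for demonstration and is not the paper's data:

```python
import numpy as np

# Illustrative per-function scores (rows: benchmark functions,
# columns: algorithms); smaller is better. Invented toy values.
scores = np.array([
    [1e-9, 1e-3, 0.5],
    [2e-8, 5e-4, 0.9],
    [3e-7, 2e-3, 0.4],
    [1e-6, 8e-4, 0.7],
])
n, k = scores.shape  # n functions, k algorithms

# Rank the algorithms on each function: 1 = best, assuming no ties
ranks = scores.argsort(axis=1).argsort(axis=1) + 1

# Per-algorithm rank sum, analogous to the "Friedman value" column
rank_sums = ranks.sum(axis=0)

# Friedman chi-square statistic over the mean ranks
mean_ranks = rank_sums / n
chi2 = 12 * n / (k * (k + 1)) * np.sum((mean_ranks - (k + 1) / 2) ** 2)
```

A larger chi-square indicates stronger evidence that the algorithms' rank distributions differ, i.e., that the observed ordering is not random.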
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
