A New “Good and Bad Groups-Based Optimizer” for Solving Various Optimization Problems

Abstract: Optimization is the science of selecting a solution from among the available solutions while respecting an optimization problem's limitations. Optimization algorithms have been introduced as efficient tools for solving optimization problems. These algorithms are designed based on various natural phenomena, the behavior and lifestyles of living beings, physical laws, rules of games, etc. In this paper, a new optimization algorithm called the good and bad groups-based optimizer (GBGBO) is introduced to solve various optimization problems. In the GBGBO, population members are updated under the influence of two groups, named the good group and the bad group. The good group consists of a certain number of population members with better fitness function values than the other members, and the bad group consists of a number of population members with worse fitness function values than the other members of the population. The GBGBO is mathematically modeled, and its performance in solving optimization problems was tested on a set of twenty-three different objective functions. In addition, for further analysis, the results obtained from the proposed algorithm were compared with those of eight optimization algorithms: the genetic algorithm (GA), particle swarm optimization (PSO), the gravitational search algorithm (GSA), teaching-learning-based optimization (TLBO), the gray wolf optimizer (GWO), the whale optimization algorithm (WOA), the tunicate swarm algorithm (TSA), and the marine predators algorithm (MPA). The results show that the proposed GBGBO algorithm has a good ability to solve various optimization problems and is more competitive than similar algorithms.


Introduction
Optimization is the process in which the best solution to a particular problem (subject to a set of constraints) is selected from a set of possible solutions. When an optimization problem is posed, it must be modeled mathematically, and in this modeling both the objectives of the problem and its limitations must be considered. In fact, an optimization problem has three main parts: the problem variables, the constraints of the problem, and the objective functions of the problem [1]. After designing the optimization problem, the next step is to solve it using a suitable method. Optimization algorithms have a special place among the methods for solving optimization problems: they attempt to provide a solution by randomly scanning the search space.
An optimization problem has a definite optimal solution called the global optimum. Because optimization algorithms move randomly through the problem search space, they provide a solution that is not necessarily the global optimum, but close to it. For this reason, the solution obtained using an optimization algorithm is called a quasi-optimal solution [2]. An optimization algorithm that presents a quasi-optimal solution closer to the global optimal solution is the more appropriate algorithm. This issue has led researchers to introduce many optimization algorithms.
Various ideas have been applied in the design of optimization algorithms. These ideas are based on various natural phenomena, the behavior of living things and plants, the laws of physics, the rules of games, etc. Optimization algorithms can be divided into four general groups based on the main design idea: physics-based, swarm-based, game-based, and evolutionary-based optimization algorithms.
Physics-based optimization algorithms are designed based on the simulation of physical processes and laws. Simulated annealing (SA) is a physics-based optimization algorithm modeled on the process of annealing metals [3]. In metallurgy, annealing is a heat treatment during which the physical, and sometimes chemical, properties of a material change. During this process, the metal is first heated, then kept at a certain temperature, and finally cooled gradually. The momentum search algorithm (MSA) is another physics-based optimization algorithm, based on the simulation of the law of momentum and Newton's laws of motion [4]. In the MSA, the momentum imparted to a bullet drives the bullet toward quasi-optimal points in the search space. The gravitational search algorithm (GSA) is inspired by the physical law of gravity between objects at different distances from each other. According to this law, particles (or objects) in the universe always exert a force on each other, called gravity, which is directly proportional to the product of the masses of the two objects and inversely proportional to the square of the distance between them. In the GSA, a simulation of this concept is used to design an optimizer for optimization problems [5].
Swarm-based optimization algorithms are modeled on various natural phenomena and the behavior of animals, plants, and other living organisms. Particle swarm optimization (PSO) is one of the oldest and most famous swarm-based optimization algorithms and is designed based on the simulation of the group motion of birds [6]. The seagull optimization algorithm (SOA) is another swarm-based optimization algorithm, designed based on simulating the migration and aggressive behavior of seagulls in nature [7]. Teaching-learning-based optimization (TLBO) was designed based on simulating the educational relationship between students and a teacher that leads to student learning and progress. The TLBO has a mathematical model for teaching and learning, which is implemented in two stages: teaching and learning [8]. The whale optimization algorithm (WOA) was developed based on simulating the social behavior of humpback whales in their bubble-net hunting strategy [9]. The gray wolf optimizer (GWO) was inspired by the leadership hierarchy and hunting mechanism of gray wolves in nature. Four types of gray wolves, namely alpha, beta, delta, and omega, are employed to simulate the leadership hierarchy, and the three main steps of hunting (searching for prey, encircling prey, and attacking prey) are simulated in the GWO [10]. The tunicate swarm algorithm (TSA) was designed based on the simulation of the jet propulsion and swarm behaviors of tunicates during navigation and foraging [11]. The marine predators algorithm (MPA) was inspired by the movement strategies that marine predators use when trapping their prey in the oceans. The main inspiration of the MPA is the widespread foraging strategies, namely the Lévy and Brownian movements of ocean predators, along with the optimal encounter rate policy in the biological interaction between predator and prey [12].
Game-based optimization algorithms are designed using the potential of various individual and group games. The orientation search algorithm (OSA) was designed by modeling the behavior of players and referees in the orientation game, in which players move in the game space according to the direction specified by the referee [13]. The darts game optimizer (DGO) is another game-based optimization algorithm, designed based on simulating the rules of the game and the behavior of the players in darts [14].
Evolutionary-based optimization algorithms, a family of stochastic search methods, are inspired by the natural process of the evolution of species. The genetic algorithm (GA) is an evolutionary-based optimization algorithm inspired by genetics and Darwin's theory of evolution, and is based on the survival of the fittest, or natural selection. The GA simulates the reproductive process using three operators: selection, crossover, and mutation [15].
Although many optimization algorithms have been developed, no optimization algorithm can definitively provide the global optimal solution to every optimization problem. An optimization algorithm that provides the best solution for one optimization problem may fail to optimize another. The contribution of the authors and the main purpose of this paper was to design an optimization algorithm that could be used to solve optimization problems in various sciences. In designing the proposed algorithm, it was assumed that increasing the exploration and exploitation power of the algorithm would yield suitable quasi-optimal solutions that are closer to the global optimum.
In this study, a new population-based optimization algorithm called the good and bad groups-based optimizer (GBGBO) was developed. The main idea of the proposed algorithm was to use more information from different population members when updating the whole population: instead of a single good member, a good group, and instead of a single bad member, a bad group, would lead the population members. Thus, in each iteration, the status of the population members was updated based on two groups: the good group, with the best values of the objective function, and the bad group, with the worst values of the objective function.
The rest of the article is organized in such a way that in Section 2, the proposed algorithm is introduced and modeled mathematically. In Section 3, the implementation of the proposed algorithm in the optimization is simulated. The Friedman rank test analysis is presented in Section 4. The analysis of the results and performance of the optimization algorithm is presented in Section 5. Section 6 provides conclusions and suggestions for future studies.

Good and Bad Groups-Based Optimizer
In this section, the GBGBO is first described, and then the mathematical modeling of the proposed algorithm is presented so that it can be implemented to solve optimization problems.
The GBGBO is a population-based optimization algorithm based on random scanning of the problem search space. The member of the population with the best objective function value can, as the best member, direct the population of the algorithm toward the optimal regions. However, the best member of the population may not provide suitable values for some problem variables. Thus, instead of just the best member of the population, the GBGBO proposes a group of the best members to guide the population in the search space. The same argument applies to the worst member of the population: instead of moving away from just the worst member, the GBGBO suggests moving away from a group of the worst members. Thus, in the GBGBO, population members are updated under the influence of two groups, named the good group and the bad group. The good group consists of a certain number of population members with better fitness function values than the other members, and the bad group consists of a number of population members with worse fitness function values than the other members of the population.
The population of the GBGBO was defined using a matrix in which each row represented a member that proposed a solution to the optimization problem. The population matrix was first generated randomly and then updated according to the algorithm steps. This population matrix was specified in Equation (1).
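The body of Equation (1) did not survive extraction. Based on the symbol definitions given in the following paragraph, a reconstruction (a sketch consistent with the text, not the original typesetting) is:

```latex
X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}
  = \begin{bmatrix}
      x_{1,1} & \cdots & x_{1,d} & \cdots & x_{1,m} \\
      \vdots  &        & \vdots  &        & \vdots  \\
      x_{i,1} & \cdots & x_{i,d} & \cdots & x_{i,m} \\
      \vdots  &        & \vdots  &        & \vdots  \\
      x_{N,1} & \cdots & x_{N,d} & \cdots & x_{N,m}
    \end{bmatrix}_{N \times m}
\tag{1}
```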
Here, X is the population matrix, X_i is the i-th population member, x_{i,d} is the d-th dimension of the i-th population member, N is the number of population members, and m is the number of variables of the optimization problem. After determining the population matrix, the objective function of the optimization problem was evaluated based on each member of the population, each of which represents a solution. The vector of fitness function values is specified in Equation (2).
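The body of Equation (2) is likewise missing from the extracted text. From the definitions around it, the fitness vector can be reconstructed (as a sketch, not the original typesetting) as:

```latex
F = \begin{bmatrix} F_1 & \cdots & F_i & \cdots & F_N \end{bmatrix}^{T}_{N \times 1}
\tag{2}
```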
Here, F is the fitness function value vector and F_i is the fitness function value of the i-th population member.
Each optimization problem has a definite number of variables, m, that must be specified to optimize the objective function. In fact, the search space consists of m axes, each of which determines the value of one problem variable. In most optimization algorithms, a single population member leads the population in the search space; that is, one good member leads the population along all axes. However, one or more other members may be more appropriate for guiding the population along some axes. In addition, in some optimization algorithms, moving away from the worst population member is effective in updating and improving the population. The main idea of the GBGBO is to use the information of the population members more effectively. Accordingly, instead of one good member leading the entire population along all axes, a group of good members was selected to lead the population. Likewise, instead of just one bad member moving the algorithm away from the bad areas, a bad group was selected. In the GBGBO, the population members were updated based on these good and bad groups.
Good group updating: The criterion for selecting good members was the value of the objective function. The N_G members of the population that provided the best values of the objective function were selected as the good matrix.
Bad group updating: As mentioned, the criterion for selecting bad members was also the value of the objective function. The N_B members of the population that provided the worst values of the objective function were selected as the bad matrix.
In fact, if the members of the population were arranged from the smallest to the largest value of the objective function, the first N_G members were selected as the good group and the last N_B members as the bad group. Thus, based on the values of the fitness functions, the good group and the bad group could be constructed according to Equations (3) and (4).
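Equations (3) and (4) also did not survive extraction. Following the symbol definitions in the next paragraph, the two group matrices can be reconstructed (as a sketch, not the original typesetting) as:

```latex
GG = \begin{bmatrix}
       gg_{1,1}   & \cdots & gg_{1,d}   & \cdots & gg_{1,m} \\
       \vdots     &        & \vdots     &        & \vdots   \\
       gg_{N_G,1} & \cdots & gg_{N_G,d} & \cdots & gg_{N_G,m}
     \end{bmatrix}_{N_G \times m}
\tag{3}

BG = \begin{bmatrix}
       bg_{1,1}   & \cdots & bg_{1,d}   & \cdots & bg_{1,m} \\
       \vdots     &        & \vdots     &        & \vdots   \\
       bg_{N_B,1} & \cdots & bg_{N_B,d} & \cdots & bg_{N_B,m}
     \end{bmatrix}_{N_B \times m}
\tag{4}
```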
Here, GG is the good group, GG_i is the i-th good member, gg_{i,d} is the d-th dimension of the i-th good member, N_G is the number of selected good members, BG is the bad group, BG_i is the i-th bad member, bg_{i,d} is the d-th dimension of the i-th bad member, and N_B is the number of selected bad members.
Each row of the population matrix, as a population member, was a proposed solution and indeed determined the variables of the problem. The best member of the population was the member that provided the best value for the objective function. Although the best member suggested appropriate values for the problem variables, it may not necessarily have been appropriate to guide the population in some variables. If the members of the algorithm population moved only under the guidance of the best member in the problem search space, all variables of each member would move toward the variables determined by the best member. In the GBGBO, a member of the good group was therefore randomly selected to guide each variable of each member of the population. In fact, a good group member may have led only a few variables of a population member in the search space, while some good group members may not have been selected to lead any variable at all. These concepts were also developed for the worst members of the population and the bad group. This step of the GBGBO was simulated using Equations (5) and (6).
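The bodies of Equations (5) and (6) were lost in extraction and their exact form could not be recovered. One plausible reconstruction, consistent with the symbol definitions in the next paragraph (movement toward a randomly selected good member and away from a randomly selected bad member), is sketched below; the sign factors involving F_{GG_k} and F_{BG_k} are an assumption made so that the fitness values mentioned in the definitions actually enter the update:

```latex
x_{i,d}^{\mathrm{new}} = x_{i,d}
  + \mathrm{rand} \cdot \left( gg_{k,d} - x_{i,d} \right)
    \cdot \operatorname{sign}\!\left( F_i - F_{GG_k} \right)
\tag{5}

x_{i,d}^{\mathrm{new}} = x_{i,d}
  + \mathrm{rand} \cdot \left( x_{i,d} - bg_{k,d} \right)
    \cdot \operatorname{sign}\!\left( F_{BG_k} - F_i \right)
\tag{6}
```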
Here, gg_{k,d} is the d-th dimension of the good member selected to guide the d-th dimension of the i-th population member, F_{GG_k} is the objective function value of the k-th selected good member, bg_{k,d} is the d-th dimension of the bad member selected to guide the d-th dimension of the i-th population member, F_{BG_k} is the objective function value of the k-th selected bad member, and rand is a random number in the interval [0, 1].
After all variables of all population members were updated based on Equations (5) and (6), the algorithm process was repeated until the stop condition was reached. Then, after the end of the algorithm iterations, the best solution obtained using the GBGBO was presented. Figure 1 shows the implementation of the GBGBO as a flowchart.
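The steps above can be sketched in code. Note that this is a minimal illustration, not the authors' implementation: since the paper's Equations (5) and (6) were not recoverable from the extracted text, the per-dimension update rule below (pull toward a random good-group member, push away from a random bad-group member, with greedy acceptance) is an assumption; the function name `gbgbo` and all parameter defaults are likewise illustrative.

```python
import numpy as np

def gbgbo(objective, bounds, n_pop=30, n_good=5, n_bad=5,
          n_iter=200, seed=0):
    """Minimal sketch of the good and bad groups-based optimizer.

    The dimension-wise update rule is an assumption (the original
    Equations (5) and (6) were lost): each dimension of each member is
    pulled toward a randomly chosen good-group member and pushed away
    from a randomly chosen bad-group member, and the candidate is
    accepted only if it improves the objective (minimization).
    """
    rng = np.random.default_rng(seed)
    low, high = bounds
    m = low.shape[0]
    # Equation (1): random initial population matrix, N x m
    X = rng.uniform(low, high, size=(n_pop, m))
    # Equation (2): fitness vector, one value per member
    F = np.apply_along_axis(objective, 1, X)
    for _ in range(n_iter):
        order = np.argsort(F)            # ascending: best members first
        GG = X[order[:n_good]]           # good group (Equation (3))
        BG = X[order[-n_bad:]]           # bad group (Equation (4))
        for i in range(n_pop):
            X_new = X[i].copy()
            for d in range(m):
                gg = GG[rng.integers(n_good), d]  # random good guide per axis
                bg = BG[rng.integers(n_bad), d]   # random bad guide per axis
                X_new[d] += rng.random() * (gg - X_new[d])  # toward good
                X_new[d] += rng.random() * (X_new[d] - bg)  # away from bad
            X_new = np.clip(X_new, low, high)
            F_new = objective(X_new)
            if F_new < F[i]:             # greedy acceptance (assumption)
                X[i], F[i] = X_new, F_new
    best = np.argmin(F)
    return X[best], F[best]

# Smoke test on the sphere function (F1 in the benchmark set), 5 variables
low = -100.0 * np.ones(5)
high = 100.0 * np.ones(5)
x_best, f_best = gbgbo(lambda x: float(np.sum(x * x)), (low, high))
```

The dimension-wise choice of a guide mirrors the paper's key idea: a single good member may lead only some variables of a member, and some good members may lead none.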

Simulation and Results
In this section, the performance of the proposed GBGBO algorithm in solving various optimization problems is evaluated. For this purpose, a set of twenty-three standard objective functions, including unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions [16], were optimized using the GBGBO. Complete information on these objective functions is given in Tables A1-A3 in Appendix A. Eight optimization algorithms, including the genetic algorithm (GA) [15], particle swarm optimization (PSO) [6], the gravitational search algorithm (GSA) [5], teaching-learning-based optimization (TLBO) [8], the gray wolf optimizer (GWO) [10], the whale optimization algorithm (WOA) [9], the tunicate swarm algorithm (TSA) [11], and the marine predators algorithm (MPA) [12], were investigated in order to compare the optimization results. The experiments were performed in MATLAB (version R2020a) on a 64-bit Core i7 processor at 3.20 GHz with 16 GB of main memory. Each of the optimization algorithms was independently run twenty times, and at the end the optimization results were presented as the mean and standard deviation of the best solutions, reported as "ave" and "std".
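The ave/std reporting protocol described above can be sketched as follows. The helper names (`benchmark`, `run_once`) and the random-search stand-in are illustrative assumptions; any of the compared algorithms would take the stand-in's place.

```python
import numpy as np

def benchmark(run_once, n_runs=20, seed0=0):
    """Run an optimizer n_runs times independently and report the mean
    ('ave') and standard deviation ('std') of the best objective values
    found, as in Tables 2-4 of the paper."""
    best_values = [run_once(seed) for seed in range(seed0, seed0 + n_runs)]
    return {"ave": float(np.mean(best_values)),
            "std": float(np.std(best_values))}

def random_search(seed, n_samples=1000, dim=30):
    """Stand-in optimizer: pure random search on the sphere function
    (F1 in the benchmark set), used only to exercise the protocol."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-100, 100, size=(n_samples, dim))
    return float(np.min(np.sum(X * X, axis=1)))

stats = benchmark(random_search)
```

Reporting both "ave" and "std" over independent runs is what allows the later boxplot and Friedman-rank comparisons: the mean captures solution quality while the standard deviation captures the stability of each algorithm.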
The values used for the main controlling parameters of the compared algorithms are specified in Table 1.

Simulation Results on Unimodal Test Functions F1 to F7
Seven objective functions, F1 to F7, were selected as unimodal objective functions to evaluate the performance of the GBGBO and the other algorithms. Complete information on these objective functions is given in Table A1 in Appendix A. These objective functions have only one optimal solution and are therefore suitable for evaluating the exploitation power of optimization algorithms. The results of optimizing these objective functions over twenty independent runs are presented in Table 2. The optimization results show that the proposed GBGBO offered better solutions than the other algorithms, which indicates that the GBGBO has high exploitation power in achieving the optimal solution.

Simulation Results on High-Dimensional Multimodal Test Functions F8 to F13
The second group of objective functions used to evaluate the optimization algorithms is the high-dimensional multimodal test functions. The six objective functions F8 to F13 are of this type. Complete information on these objective functions is given in Table A2 in Appendix A. These objective functions have several local optimal solutions and are therefore suitable for evaluating the exploration power of optimization algorithms. The results of optimizing these objective functions are presented in Table 3. These results indicate the ability of the GBGBO to solve these types of objective functions and the superiority of the proposed algorithm over the other algorithms. Therefore, the GBGBO had good exploration capability and scanned the search space of the problem well.

Simulation Results on Fixed-Dimensional Multimodal Test Functions F14 to F23
The third group of objective functions, including F14 to F23, was selected from the fixed-dimensional multimodal type. Complete information on these objective functions is given in Table A3 in Appendix A. This type of objective function was also suitable for evaluating the exploration power of optimization algorithms. The results of optimizing these objective functions using the GBGBO and the eight other algorithms are presented in Table 4. What is clear from these results is that the proposed GBGBO algorithm performed very well on such objective functions and in most cases provided the global optimal solution. This demonstrates the acceptable exploration ability of the GBGBO in accurately searching the problem search space.

Optimization results of the F1 to F23 objective functions using the proposed algorithm and the eight other optimization algorithms are presented in Tables 2-4. The boxplots of the results for each algorithm and objective function are drawn in Figure 2 for further analysis and visual comparison of the performance of the optimization algorithms. Based on the boxplots shown in Figure 2, for the unimodal objective functions F1, F2, F3, F4, F6, and F7, the superiority of the GBGBO over the other eight algorithms is obvious. On function F5, the GWO offers better performance; however, the GBGBO offers acceptable performance with little difference from the GWO. On functions F9, F10, F11, F12, and F13 of the high-dimensional multimodal type, the GBGBO is the best optimizer among the reviewed algorithms. On the objective function F8, the GA presented better performance. On functions F14, F15, F16, F18, F19, F20, F21, F22, and F23 of the fixed-dimensional multimodal type, the proposed algorithm provided a more efficient quasi-optimal solution with smaller values of standard deviation. On function F17, the GWO offers better performance; however, the GBGBO offers acceptable performance with little difference from the GWO.

Statistical Analysis
The optimization results for all three types of objective functions were presented as the mean and standard deviation of the solutions. Although these indicators provide important information, they alone are not enough to ensure that one algorithm is superior to the others, because even after twenty independent executions, the superiority of one optimization algorithm over the others may occur by chance, albeit with low probability. Thus, a statistical analysis of the optimization results provides more reliable information about the capability of an optimization algorithm. In this article, the Friedman rank test [17] was used. The results of this test are given in Table 5. Analysis and comparison of these results show that the GBGBO performs better than the other algorithms on all three different types of objective functions: unimodal, high-dimensional multimodal, and fixed-dimensional multimodal. In addition, the result of the analysis on all twenty-three objective functions shows that the GBGBO ranks first among the compared algorithms.
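The Friedman rank test used above can be illustrated as follows. The result values in this snippet are made up for illustration only and are not the paper's data; the ranking convention (rank 1 = best, i.e., lowest objective value under minimization) matches the convention of Table 5.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Illustrative (made-up) best-objective values for three algorithms on
# six benchmark functions; rows are functions, columns are algorithms.
results = np.array([
    [0.01, 0.12, 0.30],
    [0.02, 0.25, 0.18],
    [0.00, 0.40, 0.35],
    [0.05, 0.10, 0.50],
    [0.03, 0.22, 0.28],
    [0.01, 0.30, 0.26],
])

# Friedman test: do the algorithms' results differ significantly when
# compared across the same set of functions?
stat, p_value = friedmanchisquare(*results.T)

# Mean rank per algorithm (rank 1 = best value on a function)
ranks = np.argsort(np.argsort(results, axis=1), axis=1) + 1
mean_ranks = ranks.mean(axis=0)
```

A small p-value indicates that at least one algorithm's ranking differs systematically from the others, which is exactly the kind of evidence the paper needs to claim the GBGBO's first-place rank is not due to chance.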

Discussion
Exploitation and exploration are two important criteria in analyzing and evaluating the performance of optimization algorithms. Exploitation power means the ability of an optimization algorithm to provide a suitable quasi-optimal solution at the end of the algorithm iterations. Thus, when comparing the performance of several optimization algorithms on an optimization problem, the algorithm that can ultimately provide a quasi-optimal solution closer to the global optimum has higher exploitation power. The unimodal objective functions F1 to F7 have only one main solution and are therefore very suitable for evaluating the exploitation power of optimization algorithms. Comparison of the optimization results of these functions using the GBGBO and the eight other algorithms in Table 2 shows the acceptable exploitation power of the GBGBO in solving optimization problems.

Exploration power means the ability of an algorithm to accurately scan the search space of the optimization problem. In fact, among several optimization algorithms, the algorithm that can scan the search space well, is not limited to certain areas, and can escape locally optimal solutions has higher exploration power. This indicator is especially important for optimization problems that have multiple local optimal solutions. The F8 to F23 objective functions had several local optimal solutions in addition to the main solution and were therefore suitable for evaluating the exploration power of optimization algorithms. Analysis and comparison of the optimization results of the GBGBO and the eight other algorithms on these objective functions, presented in Tables 3 and 4, indicate the acceptable exploration power of the proposed GBGBO algorithm in solving this type of objective function. Moreover, the results of the Friedman rank test show that the acceptable power of the GBGBO in the exploration and exploitation indices is not random.

Conclusions and Future Work
Optimization algorithms are among the efficient tools for solving optimization problems. By randomly scanning the search space, optimization algorithms are able to provide quasi-optimal solutions to optimization problems. In this paper, a new optimization algorithm called the good and bad groups-based optimizer (GBGBO) was presented to solve optimization problems. The GBGBO was designed based on simulating the process of guiding the population members by two groups, named the good and bad groups, instead of only the best and worst members. The proposed GBGBO was mathematically modeled. The performance of the GBGBO was implemented and evaluated on a set of twenty-three standard objective functions. These objective functions were selected from three different types: unimodal, to evaluate exploitation power, and high-dimensional and fixed-dimensional multimodal, to evaluate exploration power. Eight optimization algorithms, namely the genetic algorithm (GA), particle swarm optimization (PSO), the gravitational search algorithm (GSA), teaching-learning-based optimization (TLBO), the gray wolf optimizer (GWO), the whale optimization algorithm (WOA), the tunicate swarm algorithm (TSA), and the marine predators algorithm (MPA), were selected for comparison with the optimization results obtained from the GBGBO. The optimization results showed that the GBGBO is more capable of solving optimization problems than the other eight optimization algorithms and is more competitive. In addition, the Friedman rank test was used for statistical analysis of the optimization results provided by the optimization algorithms. Based on the results of this test, the GBGBO performed well in solving optimization problems and was ranked first among the compared algorithms.
The authors suggest some ideas and perspectives for future studies. The design of a binary version, as well as a multi-objective version, of the GBGBO are two promising extensions of this study. Apart from this, implementing the GBGBO on various real-world optimization problems would be another significant contribution.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A
Information of the twenty-three objective functions is provided in Tables A1-A3.