GBUO: “The Good, the Bad, and the Ugly” Optimizer

Optimization problems in various fields of science and engineering should be solved using appropriate methods. Stochastic search-based optimization algorithms are a widely used approach for solving such problems. In this paper, a new optimization algorithm called “the good, the bad, and the ugly” optimizer (GBUO) is introduced, based on the effect of three members of the population on the population updates. In the proposed GBUO, the population moves towards the good member and away from the bad member. A new member called the ugly member is also introduced, which plays an essential role in updating the population: in a challenging move, the ugly member leads the population to situations contrary to the population’s movement. GBUO is mathematically modeled, and its equations are presented. GBUO is implemented on a set of twenty-three standard objective functions to evaluate its performance in solving optimization problems. These standard objective functions can be classified into three groups: unimodal, high-dimensional multimodal, and fixed-dimension multimodal functions. A further comparative analysis was carried out against eight well-known optimization algorithms. The simulation results show that the proposed algorithm performs well in solving different optimization problems and is superior to the compared optimization algorithms.


Introduction
Optimization is a vital issue of great importance in a wide range of applications. Generally, it can be defined as the search for the best possible solution in the feasible region of a specific problem. The main goal is to maximize the efficiency, profit, and performance of the problem. In this regard, different optimization algorithms have been applied in various fields such as energy [1,2], protection [3], energy commitment [4], electrical engineering [5][6][7][8][9], and energy carriers [10,11] to achieve the optimal solution.
Recently, meta-heuristic algorithms (MHAs) such as the genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE) have been applied as powerful methods for solving various modern optimization problems. These methods have attracted researchers' attention because of advantages such as high performance, simplicity, few parameters, avoidance of local optima, and a derivative-free mechanism. Many MHAs have been inspired by simple principles in nature, e.g., physical and biological systems. Among these algorithms, simulated annealing [12], the spring search algorithm [13,14], ant colony optimization [15,16], particle swarm optimization [17], and cuckoo search [18] can be mentioned. For instance, PSO was derived from the swarming behavior of birds and fish [17,19], whereas simulated annealing (SA) was proposed by considering the metal annealing process [20]. Furthermore, their mathematical models are constructed based on evolutionary concepts, intelligent biological behaviors, and physical phenomena. MHAs do not depend on the nature of the problem because they utilize a stochastic approach; hence, they do not require derivative information about the problem, unlike mathematical methods, which generally need precise information about the problem [21]. This independence from the nature of the problem is one of the main advantages of MHAs and makes them a suitable tool for finding optimal solutions without concern about the nonlinearity and constraints of the problem's search space.
Flexibility is another advantage, enabling MHAs to apply any optimization problem without changing the algorithm's main structure. These methods act as a black box with input and output modes, in which the problem and its constraints act as inputs for these methods. Hence, this characteristic makes them a potential candidate for a user-friendly optimizer.
On the other hand, contrary to the deterministic nature of mathematical methods, MHAs frequently profit from random operators. As a result, compared to traditional deterministic methods, the probability of being trapped in local optima decreases, making them independent of the initial guess.
These methods have become more prevalent over the last three decades due to their ability to quickly explore the global search space and their independence from the problem's nature. Although a unique benchmark does not exist to classify MHAs in the literature, the source of inspiration is one of the most popular classification criteria. Based on the inspiration source, one can classify optimization algorithms into four main categories: (i) swarm-based (SB), (ii) evolutionary-based (EB), (iii) physics-based (PB), and (iv) game-based (GB) algorithms. For convenience, some well-known optimization algorithms in the literature are summarized in Table 1. SB algorithms are based on simulating the behavior of living organisms, plants, and natural processes; EB algorithms are based on the simulation of genetic sciences; PB algorithms are designed based on the simulation of various physical laws; and GB algorithms are based on the simulation of different game rules [22,23]. Among the GB algorithms in Table 1 are the orientation search algorithm [54] (game of orientation, in which players move in the direction of a referee; 2019), hide objects game optimization [55] (behavior and movements of players to find a hidden object; 2020), football game based optimization [56] (simulation of the behavior of clubs in a football league; 2020), the darts game optimizer [57] (rules of the darts game; 2020), and shell game optimization [58] (rules of the shell game; 2020). Each of the above-mentioned algorithms has its specific advantages and disadvantages. For instance, in thermal processes that are sufficiently slow to allow time for simulation, simulated annealing guarantees that the obtained solution is optimal. Nevertheless, the fine-tuning of parameters affects the convergence of the optimization problem.
In the development of MHAs, their mathematical analysis includes some open issues that require close attention. These issues mainly stem from the components of MHAs, which are stochastic, complex, and extremely nonlinear.
Various swarm intelligence (SI) algorithms have recently been reported. The particle swarm optimization (PSO) algorithm [17] is inspired by the social behavior of fish and birds. The artificial bee colony algorithm (ABC) [59] and the ant colony optimization (ACO) algorithm [15] are inspired by the foraging behavior of honeybees and by the ants' behavior when finding the optimal path during colony foraging, respectively. The ant colony's pheromone matrix continuously evolves over the candidate solutions' iterations, leading to an optimal solution. This can be useful in solving path-planning problems [60]. The cuckoo search algorithm (CS) [24] is a simulation of the obligate brood-parasitic behavior of a certain kind of cuckoo [61]. These types of algorithms are not popular due to their high complexity. In 2011, a simulation of the cooperative foraging behavior of fruit flies was presented, resulting in the fruit fly optimization algorithm (FOA) [62]. Other examples of recently introduced SI algorithms include the firefly algorithm (FF) [63], the grey wolf optimizer (GWO) [64], "doctor and patient" optimization (DPO) [65], donkey theorem optimization (DTO) [66], group optimization (GO) [67], the squirrel search algorithm (SSA) [68,69], and the dragonfly algorithm (DA) [70], among others. It is worth noting that several newly introduced MHAs, such as the quasi-affine transformation evolutionary algorithm (QUATRE) [71], the slime mold algorithm (SMA) [72], the equilibrium optimizer (EO) [73], and Henry gas solubility optimization (HGS) [74], show superior performance in comparison with the techniques mentioned above.
QUATRE is a concurrent development framework based on quasi-affine evolution. It has been shown that this algorithm can achieve superior optimization performance for large-scale optimization problems [71,75,76]. The QUATRE algorithm can be successfully employed to extract the text feature and obtain acceptable results [32].
In recent years, the swarm intelligence algorithm as a new bionic optimization technique has been developing rapidly. However, due to the no free lunch (NFL) theorem, it is impossible to use a specific algorithm as a general method to solve all types of optimization problems [77]. The NFL theorem prompted researchers to improve classical optimization algorithms as much as possible and even introduce new algorithms to attain better performance in dealing with optimization problems.
Consequently, a novel swarm intelligence algorithm named the Harris hawks optimization (HHO) algorithm was introduced in 2019, inspired by the collaborative behavior of Harris hawks when hunting prey [78]. The simulation results and the experiments performed on 29 benchmarks and different engineering optimization problems validate its high efficiency. The HHO algorithm has many advantages, such as requiring few parameter adjustments, easy execution, and simple implementation. Therefore, HHO is suitable and efficient for solving practical optimization problems in many fields. For instance, it can be utilized for structure optimization [79], image segmentation [80], parameter identification [81], image denoising [82], power load distribution [83], and layout optimization [84]. It is noteworthy that, despite the attractive benefits of HHO in dealing with various optimization issues, this algorithm still has some drawbacks, namely high complexity and high computational time. In response to these problems, some scholars have proposed improvement strategies from various perspectives. For instance, Hussain et al. (2019) [85] proposed introducing long-term memory into the HHO algorithm, which allows search agents to act based on experience and increases the diversity of the population.
However, the disadvantages of this method include ignoring the algorithm's execution time and poor performance on high-dimensional problems. Jian et al. [80] reduced the probability of the HHO algorithm falling into a local optimum by employing dynamic control parameters and improved its global search capability by using mutation operators.
Fan et al. [86] added interference terms to the escape energy to control the position of the disturbance peaks, which enhances the global search ability in the next stage. Additionally, some researchers mixed in the exploration abilities of other algorithms in order to improve HHO, such as the simulated annealing algorithm [87], the dragonfly algorithm [88], and a combination of the sine and cosine algorithms [89].
The main focus of the previous literature has been on enhancing exploratory capabilities. Meanwhile, the lack of a balanced approach between the search abilities leads to weak search results and poor robustness in complicated modern optimization problems.
In this paper, a new optimization algorithm named "the good, the bad, and the ugly" optimizer (GBUO) is proposed to solve various optimization problems. The main idea in designing GBUO is the effectiveness of three population members in updating the population. GBUO is mathematically modeled and then implemented on a set of twenty-three standard objective functions.
The rest of the article is as follows: In Section 2, the proposed algorithm's steps are mathematically modeled. Simulation studies are carried out in Section 3. Then, in Section 4, the results are analyzed. Finally, in Section 5, conclusions and perspectives for future studies are presented.

"The Good, the Bad, and the Ugly" Optimizer (GBUO)
In this section, the design steps of "the good, the bad, and the ugly" optimizer (GBUO) are explained and modeled. In GBUO, search agents scan the problem search space under the influence of three specific members of the population. Each population member is a proposed solution to the optimization problem that provides specific values for the problem variables. Thus, the population members of an algorithm can be modeled as a matrix. The population matrix of the algorithm is specified in Equation (1).
Here, X is the population matrix, X_i is the i-th population member, x_i^d is the value of the d-th variable specified by the i-th member, N is the number of population members, and m is the number of variables.
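As an illustration, the population matrix of Equation (1) can be sketched in a few lines of Python (the function and variable names here are illustrative assumptions, not the paper's notation):

```python
import numpy as np

def create_population(n_members, n_vars, lower, upper, seed=None):
    """Build the N x m population matrix X of Equation (1): row i is
    member X_i, and X[i, d] holds x_i^d, the value of the d-th variable
    proposed by the i-th member, drawn uniformly from [lower, upper]."""
    rng = np.random.default_rng(seed)
    return lower + rng.random((n_members, n_vars)) * (upper - lower)

X = create_population(n_members=5, n_vars=3, lower=-10.0, upper=10.0, seed=0)
print(X.shape)  # (5, 3)
```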
Given that each population member represents proposed values for the optimization problem's variables, a specific value of the objective function is calculated for each member. The values of the objective function are specified as a matrix in Equation (2).
Here, OF is the objective function matrix and OF_i(X_i) is the value of the objective function for the i-th population member.
The objective function's value indicates whether a solution is good or bad. Based on these values, it can be determined which member provides the best quasi-optimal solution and which provides the worst quasi-optimal solution. In GBUO, the algorithm's population is updated according to three members entitled good, bad, and ugly. The good member is the member of the population that provides the best quasi-optimal solution, and the bad member is the one that has presented the worst quasi-optimal solution according to the value of the objective function. The ugly member is a population member that leads the algorithm's population to situations in the opposite direction. In this challenging phase, those positions of the search space that offer suitable quasi-optimal solutions are discovered. These three main members are defined in the proposed optimizer using Equations (3)-(5).
Here, Good is the good member, Bad is the bad member, and Ugly is the ugly member, which is selected randomly.
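A minimal Python sketch of this selection step (assuming minimization, as is standard for these benchmark functions; the names are illustrative):

```python
import numpy as np

def select_good_bad_ugly(X, objective, seed=None):
    """Evaluate the objective for every member (Equation (2)) and pick
    the good (best), bad (worst), and randomly chosen ugly members."""
    rng = np.random.default_rng(seed)
    OF = np.array([objective(x) for x in X])  # objective function vector
    good = X[np.argmin(OF)]                   # best quasi-optimal solution
    bad = X[np.argmax(OF)]                    # worst quasi-optimal solution
    ugly = X[rng.integers(len(X))]            # randomly selected member
    return good, bad, ugly, OF

sphere = lambda x: float(np.sum(x ** 2))      # a simple unimodal objective
X = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 1.0]])
good, bad, ugly, OF = select_good_bad_ugly(X, sphere, seed=0)
print(good, bad)  # [0. 0.] [3. 4.]
```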
In each algorithm iteration, the position of the population members is updated in the following three phases. In the first phase, the population moves towards the good member. In the second phase, the population distances itself from the bad member. Finally, in the third phase, the ugly member leads the population to positions contrary to the population's movement. The concepts expressed in these three phases are mathematically simulated using Equations (6)-(11).
The algorithm population update is modeled based on the good member in Equations (6) and (7).
Here, x_{i,nbg}^d is the new value for the d-th variable of the i-th member updated based on the good member, X_i^{nbg} is the new status of the i-th member updated based on the good member, and OF_i^{nbg} is the corresponding value of the objective function. The algorithm population update based on the bad member is carried out using Equations (8) and (9).
Here, x_{i,nbb}^d is the new value for the d-th variable of the i-th member updated based on the bad member, X_i^{nbb} is the new status of the i-th member updated based on the bad member, and OF_i^{nbb} is the corresponding value of the objective function. The algorithm population update based on the ugly member is modeled in Equations (10) and (11).
Here, x_{i,nbu}^d is the new value for the d-th variable of the i-th member updated based on the ugly member, sign denotes the sign function, X_i^{nbu} represents the new status of the i-th member updated based on the ugly member, and OF_i^{nbu} is the corresponding value of the objective function. After updating all population members based on the three phases mentioned and storing the best quasi-optimal solution, the algorithm starts the next iteration, and the population members are updated using Equations (3)-(11) according to the new values of the objective function. This process is repeated until the algorithm's stopping criterion is met. The pseudo-code of the proposed optimizer is presented in Algorithm 1, and the various steps of the proposed GBUO are shown as a flowchart in Figure 1.
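Since Equations (6)-(11) are not reproduced in this excerpt, the three update phases can only be sketched. The Python below is a plausible reading of the text, with greedy acceptance (a candidate replaces the current position only if it improves the objective, matching the comparisons of OF_i^{nbg}, OF_i^{nbb}, and OF_i^{nbu}) and illustrative movement coefficients; it is not the paper's exact formulation:

```python
import numpy as np

def update_member(x, good, bad, ugly, objective, rng):
    """One GBUO-style update of a single member in three phases.
    The coefficients are illustrative assumptions, not Equations (6)-(11)."""
    # Phase 1: move towards the good member (cf. Equations (6) and (7)).
    cand = x + rng.random(x.shape) * (good - x)
    if objective(cand) < objective(x):
        x = cand
    # Phase 2: move away from the bad member (cf. Equations (8) and (9)).
    cand = x + rng.random(x.shape) * (x - bad)
    if objective(cand) < objective(x):
        x = cand
    # Phase 3: the ugly member pushes against the population's movement,
    # using the sign function mentioned in the text (cf. Equations (10), (11)).
    cand = x - rng.random(x.shape) * np.sign(objective(ugly) - objective(x)) * (ugly - x)
    if objective(cand) < objective(x):
        x = cand
    return x

rng = np.random.default_rng(0)
sphere = lambda v: float(np.sum(v ** 2))
x = np.array([4.0, -3.0])
x_new = update_member(x, np.zeros(2), np.array([9.0, 9.0]), np.array([1.0, 1.0]),
                      sphere, rng)
print(sphere(x_new) <= sphere(x))  # True
```

With greedy acceptance, the objective value of a member can never get worse within an iteration, which is consistent with the algorithm storing the best quasi-optimal solution found so far.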

Algorithm 1. The Pseudo-Code of GBUO
Start.
Input the information of the optimization problem.
Set the parameters of the algorithm.
Create an initial population.
Calculate the objective function for each member.
While the stopping criterion is not satisfied:
Update the good, bad, and ugly members (Equations (3)-(5)).
For i = 1:N (N: number of population members)
Update the i-th member based on the good member (Equations (6) and (7)).
Update the i-th member based on the bad member (Equations (8) and (9)).
Update the i-th member based on the ugly member (Equations (10) and (11)).
End for i.
Save the best quasi-optimal solution in this iteration.
End while.
Output the best quasi-optimal solution of the objective function found by GBUO.
End.
Figure 1. Flowchart of the proposed GBUO: input the problem information, set the algorithm parameters, create the initial population, calculate the objective function, update the population members, and output the best quasi-optimal solution found by GBUO.

Simulation Study and Results
This section evaluates the performance of GBUO in solving optimization problems. For this purpose, the proposed optimizer is implemented on a set of twenty-three standard objective functions.

Algorithms Used for Comparison and Objective Functions
The results of eight other well-known optimization algorithms are compared with those obtained by GBUO in order to further evaluate its capability for solving optimization problems. These algorithms are the genetic algorithm (GA), particle swarm optimization (PSO), the gravitational search algorithm (GSA), teaching-learning-based optimization (TLBO), the grey wolf optimizer (GWO), the grasshopper optimization algorithm (GOA), the spotted hyena optimizer (SHO), and the marine predators algorithm (MPA). The values used for the main controlling parameters of the comparative algorithms are specified in Table 2. The performance of the proposed optimizer and the eight optimization algorithms is evaluated on twenty-three different objective functions. These objective functions are classified into three types: unimodal, multimodal, and fixed-dimension multimodal functions. Information on these objective functions is provided in Appendix A, Tables A1-A3.
The simulations and the algorithms have been implemented in MATLAB R2020a, run on Microsoft Windows 10 (64-bit) with a Core i7 processor at 2.40 GHz and 6 GB of memory. The average (Ave) and standard deviation (std) of the best optimal solutions obtained until the last iteration are computed as performance evaluation metrics. Each optimization algorithm performs 20 independent runs for each objective function, where each run employs 1000 iterations to generate and report the results.
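The reporting scheme described above can be sketched as follows (in Python rather than the MATLAB used in the paper; `run_optimizer` and `random_search` are hypothetical stand-ins, not any of the compared algorithms):

```python
import numpy as np

def report(run_optimizer, objective, n_runs=20, seed=0):
    """Run an optimizer n_runs independent times on one objective and
    return the average (Ave) and standard deviation (std) of the best
    objective values found, as reported in the results tables."""
    rng = np.random.default_rng(seed)
    best_values = np.array([run_optimizer(objective, rng) for _ in range(n_runs)])
    return best_values.mean(), best_values.std()

def random_search(objective, rng, n_samples=100, dim=2):
    """Dummy optimizer for demonstration: best of a few random points."""
    points = rng.uniform(-10.0, 10.0, size=(n_samples, dim))
    return min(objective(p) for p in points)

sphere = lambda v: float(np.sum(v ** 2))
ave, std = report(random_search, sphere, n_runs=20)
print(ave >= 0.0)  # True
```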

Results
In this section, the simulation and implementation of the optimization algorithms on the standard objective functions are presented. A set of seven objective functions, F1 to F7, is introduced as unimodal objective functions. Six objective functions, F8 to F13, are considered multimodal objective functions. Finally, a set of ten objective functions, F14 to F23, is introduced as fixed-dimension multimodal objective functions.
The results of optimizing the unimodal objective functions using GBUO and the eight mentioned optimization algorithms are presented in Table 3. According to the results in this table, GBUO and SHO are the best optimizers for functions F1 to F4. After these two algorithms, TLBO is the third-best optimizer for F1 to F4. GBUO is also the best optimizer for functions F5 to F7. Moreover, Table 4 presents the results of the proposed optimizer compared with the eight optimization algorithms considered in this study for the multimodal objective functions. According to this table, GBUO, SHO, and MPA are the best optimizers for the F9 and F11 objective functions. GBUO has the best performance among the algorithms for the F10 function; after the proposed algorithm, GWO is the second-best and SHO the third-best optimizer for F10. GA for F8, TLBO for F12, and GSA for F13 are the best optimizers, with GBUO the second-best optimizer on F8, F12, and F13. The results of applying the proposed optimizer and the eight other optimization algorithms to the third type of objective functions are presented in Table 5. Based on the results in this table, GBUO provides the best performance on all of the F14 to F23 objective functions.

Statistical Testing
The optimization results on the standard test functions were presented as the average and standard deviation of the best solutions. However, these results alone are not enough to guarantee the superiority of the proposed algorithm: even after twenty independent runs, this superiority may occur by chance, despite its low probability. Therefore, the Friedman rank test is used to statistically evaluate the algorithms and further analyze the optimization results. The Friedman rank test is a non-parametric statistical test developed by Milton Friedman; non-parametric means the test does not assume that the data come from a particular distribution. The procedure involves ranking each row (or block) together, then considering the values of the ranks by columns [90]. The steps for implementing the Friedman rank test are as follows:
Start.
Step 1: Determine the results of the different groups.
Step 2: Rank each row of results based on the best result (here from 1 to 9).
Step 3: Calculate the sum of the ranks of each column for the different algorithms.
Step 4: Order the algorithms from strongest to weakest based on the sum of the ranks of each column.
End.
The Friedman rank test results for all three types of objective functions (unimodal, multimodal, and fixed-dimension multimodal) are presented in Table 6. Based on the results presented, for all three types of objective functions, the proposed GBUO has the first rank compared to the other optimization algorithms. The overall results on all the objective functions (F1-F23) show that GBUO is significantly superior to the other algorithms.
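The ranking steps above can be sketched in Python (NumPy only; ties are broken by position rather than rank-averaged in this simplified version, unlike a full Friedman test):

```python
import numpy as np

def rank_row(row):
    """Rank one row of results: rank 1 = best (smallest) value."""
    order = np.argsort(row)
    ranks = np.empty(len(row))
    ranks[order] = np.arange(1, len(row) + 1)
    return ranks

def friedman_ranking(results):
    """Rows = objective functions, columns = algorithms (lower is better).
    Returns the per-algorithm rank sums and the column order from
    strongest to weakest."""
    ranks = np.apply_along_axis(rank_row, 1, np.asarray(results, dtype=float))
    rank_sums = ranks.sum(axis=0)   # Step 3: sum the ranks per column
    order = np.argsort(rank_sums)   # Step 4: strongest to weakest
    return rank_sums, order

results = [[1.0, 2.0, 3.0],
           [0.5, 0.9, 0.7],
           [0.1, 0.3, 0.2]]
rank_sums, order = friedman_ranking(results)
print(rank_sums)  # [3. 8. 7.]
```

Here the first column wins every row, so it receives the smallest rank sum and comes first in the ordering.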

Discussion
Optimization algorithms based on random scanning of the search space have been widely used by researchers to solve optimization problems. Exploitation and exploration capabilities are two important indicators in the analysis of optimization algorithms. The exploitation capacity of an optimization algorithm refers to its ability to achieve and provide a quasi-optimal solution. Therefore, when comparing the performance of several optimization algorithms, an algorithm that provides a more appropriate quasi-optimal solution (closer to the global optimum) has a higher exploitation capacity than the others. An optimization algorithm's exploration capacity refers to its ability to accurately scan the search space when solving optimization problems with several locally optimal solutions; the exploration capacity has a considerable effect on providing a quasi-optimal solution. In such problems, if the algorithm does not have an appropriate exploration capability, it provides non-optimal solutions by getting stuck in local optima.
The unimodal objective functions F1 to F7 have only one globally optimal solution and lack local optima. Therefore, this set of objective functions is suitable for analyzing the exploitation capacity of the optimization algorithms. Table 3 presents the results obtained from implementing the proposed GBUO and the eight other optimization algorithms on the unimodal objective functions in order to properly evaluate this capacity. Evaluation of the results shows that the proposed optimizer provides more suitable quasi-optimal solutions than the other eight algorithms for all of the F1 to F7 objective functions. Accordingly, GBUO has a high exploitation capacity and is much more competitive than the other mentioned algorithms.
The second (F8 to F13) and third (F14 to F23) categories of objective functions have several locally optimal solutions in addition to the global optimum. Therefore, these types of objective functions are suitable for analyzing the exploration capability of the optimization algorithms. Tables 4 and 5 present the results of implementing the proposed GBUO and the eight other optimization algorithms on the multimodal objective functions in order to evaluate this capability. The results presented in these tables show that the proposed GBUO has a good exploration capability. Moreover, by accurately scanning the search space, the proposed GBUO avoids getting stuck in local optima, in contrast to the other eight algorithms. The performance of the proposed GBUO is more appropriate and competitive for solving this type of optimization problem. It is confirmed that GBUO is a useful optimizer for solving different types of optimization problems.

Conclusions
In this paper, a new optimization method called "the good, the bad, and the ugly" optimizer (GBUO) has been introduced based on the effect of three members of the population on the population update. These three influential members are the good member with the best value of the objective function, the bad member with the worst value of the objective function, and the ugly member, which is selected randomly. In GBUO, the population is updated in three phases: in the first phase, the population moves towards the good member; in the second phase, the population moves away from the bad member; and in the third phase, the population is updated based on the ugly member. In a challenging move, the ugly member leads the population to situations contrary to the population's movement.
GBUO has been mathematically modeled and then implemented on a set of twenty-three different objective functions. In order to analyze the performance of the proposed optimizer in solving optimization problems, eight well-known optimization algorithms, including the genetic algorithm (GA), particle swarm optimization (PSO), the gravitational search algorithm (GSA), teaching-learning-based optimization (TLBO), the grey wolf optimizer (GWO), the whale optimization algorithm (WOA), the spotted hyena optimizer (SHO), and the marine predators algorithm (MPA), were considered for comparison.
The results demonstrated that the proposed optimizer has desirable and adequate performance for solving different optimization problems and is much more competitive than other mentioned algorithms.
The authors suggest some ideas and perspectives for future studies. For example, a multi-objective version of GBUO is an exciting potential extension of this study. Applying GBUO to some real-world optimization problems could also be a significant contribution.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A
Information on the twenty-three objective functions is provided in Tables A1-A3.