Article

GMBO: Group Mean-Based Optimizer for Solving Various Optimization Problems

by Mohammad Dehghani 1, Zeinab Montazeri 1 and Štěpán Hubálovský 2,*
1 Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz 71557-13876, Iran
2 Department of Applied Cybernetics, Faculty of Science, University of Hradec Králové, 500 03 Hradec Králové, Czech Republic
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(11), 1190; https://doi.org/10.3390/math9111190
Submission received: 22 April 2021 / Revised: 19 May 2021 / Accepted: 21 May 2021 / Published: 24 May 2021
(This article belongs to the Special Issue Decision Making and Its Applications)

Abstract
There are many optimization problems across the different disciplines of science that must be solved using an appropriate method. Population-based optimization algorithms are among the most efficient ways to solve such problems: they provide appropriate solutions based on a random search of the problem-solving space, without needing gradient or derivative information. In this paper, a new optimization algorithm called the Group Mean-Based Optimizer (GMBO) is presented; it can be applied to solve optimization problems in various fields of science. The main idea in designing the GMBO is to make more effective use of the information carried by different members of the algorithm population, based on two selected groups termed the good group and the bad group. Two new composite members are obtained by averaging each of these groups, and these are used to update the population members. The various stages of the GMBO are described and mathematically modeled so that it can be applied to optimization problems. The performance of the GMBO in providing suitable quasi-optimal solutions is evaluated on a set of 23 standard objective functions of the unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types. In addition, the optimization results obtained from the proposed GMBO were compared with eight other widely used optimization algorithms: the Marine Predators Algorithm (MPA), the Tunicate Swarm Algorithm (TSA), the Whale Optimization Algorithm (WOA), the Grey Wolf Optimizer (GWO), Teaching–Learning-Based Optimization (TLBO), the Gravitational Search Algorithm (GSA), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA). The optimization results indicate the acceptable performance of the proposed GMBO; based on the analysis and comparison of the results, the GMBO is superior to and much more competitive than the other eight algorithms.

1. Introduction

There are many optimization problems in the different sciences and in real life that should be solved using appropriate methods. Optimization is the science of finding the best feasible solution from among the many solutions that exist for a problem. Each optimization problem has three main parts: decision variables, constraints (including limitations on the variables), and one or more objective functions. The decision variables of an optimization problem must therefore be determined in such a way that the objective function is optimized subject to the constraints of the problem. After an optimization problem is defined and its various parts are identified, it must be modeled mathematically. In the next step, the designed optimization problem must be solved using an appropriate method. A minimal sketch of these three parts in code follows.
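To make the three parts concrete, the following minimal Python sketch defines decision variables, bound constraints, and an objective function for a toy minimization problem; the sphere function and the bounds are illustrative choices, not part of the paper.

```python
import numpy as np

m = 5                                   # number of decision variables
lower, upper = -100.0, 100.0            # constraints: each variable in [lower, upper]

def objective(x: np.ndarray) -> float:
    """Sphere function, a standard test objective (global optimum 0 at x = 0)."""
    return float(np.sum(x ** 2))

# Any vector satisfying the constraints is a candidate solution.
x = np.random.uniform(lower, upper, size=m)
print(objective(x))
```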
Optimization problem-solving techniques are generally classified into two groups: deterministic and stochastic methods. Deterministic methods comprise gradient-based and non-gradient-based methods. Gradient-based methods, which use derivative and gradient information to find a globally optimal solution, are mathematical programming techniques that include linear and nonlinear programming, while non-gradient-based methods use conditions other than gradient information [1]. The main disadvantage of gradient-based techniques is a high probability of getting stuck in locally optimal areas when the search space is nonlinear. The main disadvantage of non-gradient-based techniques, in turn, is that they are not easy to implement and applying them requires considerable mathematical proficiency.
Another way to solve optimization problems is to use population-based optimization algorithms, which are stochastic methods. Population-based optimization algorithms are among the most powerful and effective tools for solving various optimization problems. They use random variables and stochastic operators to scan the problem-solving space globally, which helps them avoid getting trapped in areas with locally optimal solutions, and they are easier to implement and understand than other optimization methods. The important point is that population-based optimization algorithms do not necessarily provide globally optimal solutions. Nevertheless, these methods have advantages that have made them widely used and popular [2]: they deliver acceptable quasi-optimal solutions, are gradient-free, are flexible, and are largely independent of the structure of the specific problem.
Optimization algorithms have been developed based on simulations of various processes and phenomena of nature, the behaviors of animals and plants, the laws of physics, and the rules of games in which there are signs of optimization and progress [3]. For example, the behavior of ants is simulated in the design of the Ant Colony Optimization algorithm (ACO) [4], the simulation of Hooke's physical law is used in the design of the Spring Search Algorithm (SSA) [5], and the simulation of the rules of the game of darts is used in the design of the Darts Game Optimizer (DGO) [6]. Optimization algorithms are able to provide a suitable solution to optimization problems based on a random scan of the problem search space.
The important point is that each optimization problem has a specific best solution, called the global optimum. The solutions obtained by optimization algorithms, however, are not necessarily identical to the global optimum; they are acceptable because they are close to it. Hence, the solutions provided by optimization algorithms are called quasi-optimal solutions. Accordingly, when an optimization problem is solved with several optimization algorithms, the algorithm that provides a quasi-optimal solution closer to the global optimum is the better algorithm. For this reason, researchers have developed various optimization algorithms to provide better quasi-optimal solutions, and scientists have applied them in fields such as video game playing [7] and wireless sensor networks [8]. Optimization techniques can also automate the generation of models that would otherwise be tedious to build one by one, and they are well suited to structural and mechanical engineering problems [9].
Optimization algorithms can be divided into four main groups—swarm-based, physics-based, game-based, and evolutionary-based optimization algorithms—based on the design idea.
Swarm-based optimization algorithms have been developed based on simulations of natural phenomena, the group behaviors of animals and other living organisms, and similar swarm-based processes. One of the most famous algorithms in this group is Particle Swarm Optimization (PSO), which was developed based on a simulation of the behavior of a flock of birds [10]. PSO has a high convergence speed and can easily be applied to various optimization problems. However, one of its most important drawbacks is premature convergence, which results from an inadequate balance between global and local search. The Grey Wolf Optimizer (GWO) models the social behavior of gray wolves during hunting in three main steps: searching for prey, encircling prey, and attacking prey [11]. The GWO provides a quasi-optimal solution quickly, especially for optimization problems without local optima; in complex problems with local optima, however, it gets stuck in locally optimal solutions. The Whale Optimization Algorithm (WOA) is inspired by the bubble-net hunting method of whales and proceeds in three phases: (i) encircling prey, (ii) bubble-net attacking, and (iii) searching for prey [12]. In the WOA, population members rely too heavily on the best population member to find the optimal position; if that member is close to a local optimum, the population is misled, and the algorithm converges to the local optimum instead of the global one. The Tunicate Swarm Algorithm (TSA) is designed based on an imitation of the jet propulsion and swarming behaviors of tunicates during navigation and foraging [13]. The main disadvantages of the TSA are its heavy computation and its poor performance on complex, high-dimensional problems. The Marine Predators Algorithm (MPA) was inspired by the widespread foraging strategy of ocean predators, namely Lévy and Brownian movements, and by the optimal encounter-rate policy in biological interactions between predator and prey [14]. The MPA has several regulatory and control parameters, and determining these parameters is an important challenge.
Physics-based optimization algorithms have been developed based on simulations of various physical laws. Simulated Annealing (SA) is one of the most famous algorithms in this group; it is based on a simulation of the gradual heating and cooling of metals [15]. SA offers appropriate solutions for simple optimization problems due to its strong focus on local search. However, the main problem of SA is its heavy dependence on the initial values of its parameters: if an inappropriate initial temperature is selected, the algorithm is likely to get stuck in a local optimum. The Gravitational Search Algorithm (GSA) is a physics-based optimization algorithm developed by modeling the law of gravity between objects [16]. The high computational complexity of its equations and the difficulty of its implementation are the most important disadvantages of the GSA; the long run time caused by its heavy computation is another. A simulation of the laws of motion and momentum in a system of bullets of different weights was used in designing the Momentum Search Algorithm (MSA) [17]. Although the MSA performs acceptably on simple optimization problems without local optima, it is incapable of optimizing complex problems that contain them.
Game-based optimization algorithms have been developed based on simulations of the rules of various individual and group games. Football-Game-Based Optimization (FGBO) was developed based on mathematical modeling of football league rules and the performance and behavior of clubs [18]. The FGBO has a large number of population-update phases, which lengthens its run time when solving optimization problems. A simulation of the game rules and player behavior in the hidden-object finding game was used in the design of Hide Objects Game Optimization (HOGO) [19].
Evolutionary-based optimization algorithms are developed based on simulations of the laws of evolution and concepts from genetics. The Genetic Algorithm (GA) is the most famous and widely used evolutionary-based optimization algorithm; it was developed based on a simulation of reproduction and Darwin's theory of evolution [20]. The concepts of the GA are very simple and can therefore be easily implemented for solving optimization problems. The low speed of the GA is one of its biggest disadvantages; its inability to solve complex optimization problems and its tendency to get caught in locally optimal solutions are others.
Numerous optimization algorithms have been developed by researchers and used to solve various optimization problems. However, an optimization algorithm may perform well on one optimization problem yet fail to provide a suitable quasi-optimal solution for another. Therefore, no single optimization algorithm can be considered the best optimizer for all optimization problems. The contribution of this paper is the design of a new optimizer that offers quasi-optimal solutions that are much more suitable than those of existing methods and very close to the optimal solutions.
In this paper, a new optimization algorithm called the Group Mean-Based Optimizer (GMBO) is presented to provide quasi-optimal solutions close to the global optimum for various optimization problems. The main idea of the GMBO is to create two composite members based on the means of two selected groups (the good and bad groups) of the algorithm's population and then use these two composite members to update the population matrix. The various steps of the proposed GMBO are described and then mathematically modeled for implementation on optimization problems. A notable feature of the proposed GMBO is that it has no control parameters and, therefore, requires no parameter tuning. The performance of the GMBO in solving optimization problems is tested on a set of 23 standard objective functions, and its results are then compared with those of 8 other optimization algorithms.
The rest of the paper is organized in such a way that the proposed GMBO is introduced in Section 2. Then, simulation studies are presented in Section 3. Section 4 analyzes the simulation results. The application of the proposed GMBO in solving real-life problems is tested in Section 5. Finally, conclusions and suggestions for future studies of this research are presented in Section 6.

2. Group Mean-Based Optimizer

In this section, the proposed Group Mean-Based Optimizer (GMBO) algorithm is presented. The GMBO is a population-based optimization algorithm designed around a more effective use of the population members' information when updating the algorithm population. In each iteration of the GMBO, two groups with a certain number of members, named the good group and the bad group, are selected. The main idea in designing the proposed GMBO is to use two combined members, obtained by averaging the two selected groups, in updating the population.
The population members in the proposed GMBO algorithm are represented by a matrix called the population matrix. The number of rows in the population matrix is the number of algorithm members, and the number of columns is the number of problem variables. Each row of the population matrix is a population member and thus a proposed solution to the optimization problem. The population matrix is defined in the GMBO using Equation (1).
$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,d} & \cdots & x_{1,m} \\ \vdots & & \vdots & & \vdots \\ x_{i,1} & \cdots & x_{i,d} & \cdots & x_{i,m} \\ \vdots & & \vdots & & \vdots \\ x_{N,1} & \cdots & x_{N,d} & \cdots & x_{N,m} \end{bmatrix}_{N \times m}, \qquad (1)$$
Here, $X$ is the population matrix, $X_i$ is the i'th population member, $x_{i,d}$ is the value of the d'th variable proposed by the i'th population member, $N$ is the number of population members, and $m$ is the number of problem variables.
Based on the values proposed by each population member for the problem variables, the objective function is evaluated. The objective function values are collected in a vector, as in Equation (2).
$$F = \begin{bmatrix} F_1 \\ \vdots \\ F_i \\ \vdots \\ F_N \end{bmatrix} = \begin{bmatrix} F(X_1) \\ \vdots \\ F(X_i) \\ \vdots \\ F(X_N) \end{bmatrix}_{N \times 1}, \qquad (2)$$
Here, $F$ is the objective function vector, and $F_i$ is the objective function value of the i'th population member.
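As a minimal sketch of Equations (1) and (2), assuming NumPy and an illustrative sphere objective, the population matrix can be initialized uniformly at random within the variable bounds and evaluated row by row:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 30, 5                            # population members, problem variables
lower, upper = -100.0, 100.0            # variable bounds (illustrative)

# Equation (1): population matrix X, one candidate solution per row.
X = rng.uniform(lower, upper, size=(N, m))

def objective(x):
    return float(np.sum(x ** 2))        # placeholder objective

# Equation (2): objective function vector F, one value per population member.
F = np.array([objective(row) for row in X])
```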
In this step of modeling, the good and bad groups are selected based on the values of the objective function. The good group consists of a certain number of members with the best values of the objective function, and the bad group consists of a certain number of members with the worst values of the objective function. The selection of these two groups is simulated using Equations (3)–(6).
$$F^s = \begin{bmatrix} F_1^s \\ \vdots \\ F_i^s \\ \vdots \\ F_N^s \end{bmatrix} = \begin{bmatrix} \min(F) \\ \vdots \\ \vdots \\ \max(F) \end{bmatrix}_{N \times 1}, \qquad (3)$$
$$X^s = \begin{bmatrix} X_1^s \\ \vdots \\ X_i^s \\ \vdots \\ X_N^s \end{bmatrix} = \begin{bmatrix} x_{1,1}^s & \cdots & x_{1,d}^s & \cdots & x_{1,m}^s \\ \vdots & & \vdots & & \vdots \\ x_{i,1}^s & \cdots & x_{i,d}^s & \cdots & x_{i,m}^s \\ \vdots & & \vdots & & \vdots \\ x_{N,1}^s & \cdots & x_{N,d}^s & \cdots & x_{N,m}^s \end{bmatrix}_{N \times m}, \qquad (4)$$
$$G_{N_G \times m} = X_i^s, \quad i = 1, \ldots, N_G, \qquad (5)$$
$$B_{N_B \times m} = X_i^s, \quad i = N - N_B + 1, \ldots, N, \qquad (6)$$
Here, $F^s$ is the objective function vector sorted from the best (minimum) value to the worst (maximum) value, $X^s$ is the population matrix sorted accordingly, $G$ is the selected good group, $B$ is the selected bad group, $N_G$ is the number of good-group members, and $N_B$ is the number of bad-group members.
After the good and bad groups have been selected, two composite members are obtained by averaging these groups. This stage is simulated using Equations (7) and (8).
$$MG = \operatorname{mean}\left( G_{N_G \times m} \right), \qquad (7)$$
$$MB = \operatorname{mean}\left( B_{N_B \times m} \right), \qquad (8)$$
Here, $MG$ is the composite member based on the mean of the good group, and $MB$ is the composite member based on the mean of the bad group.
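A sketch of Equations (3)–(8) follows. The paper does not fix $N_G$ and $N_B$, so equal group sizes of one-fifth of the population are assumed here:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 30, 5
X = rng.uniform(-100.0, 100.0, size=(N, m))   # population matrix
F = np.sum(X ** 2, axis=1)                    # objective vector (sphere)

# Equations (3) and (4): sort members from best (smallest) to worst objective value.
order = np.argsort(F)
Xs, Fs = X[order], F[order]

# Equations (5) and (6): good group = N_G best rows, bad group = N_B worst rows.
NG = NB = N // 5                              # assumed group sizes
G, B = Xs[:NG], Xs[-NB:]

# Equations (7) and (8): composite members as column-wise group means.
MG, MB = G.mean(axis=0), B.mean(axis=0)
```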
In this step of the proposed GMBO, the population matrix is updated in three stages based on the two composite members as well as the best member of the population. The first stage, based on the composite member of the good group, is simulated using Equations (9) and (10).
$$x_{i,d}^{G} = x_{i,d} + r \times \left( MG_d - x_{i,d} \right) \times \operatorname{sign}\left( F_i - F_{MG} \right), \qquad (9)$$
$$X_i = \begin{cases} X_i^{G}, & F_i^{G} < F_i \\ X_i, & \text{else} \end{cases} \qquad (10)$$
Here, $x_{i,d}^{G}$ is the new value of the d'th problem variable, $r$ is a random number in the interval $[0, 1]$, $F_{MG}$ is the objective function value of the composite member of the good group, $X_i^{G}$ is the new position of the i'th population member, and $F_i^{G}$ is its objective function value.
In the second stage, the population matrix is updated based on the composite member of the bad group, which is simulated using Equations (11) and (12).
$$x_{i,d}^{B} = x_{i,d} + r \times \left( MB_d - x_{i,d} \right) \times \operatorname{sign}\left( F_i - F_{MB} \right), \qquad (11)$$
$$X_i = \begin{cases} X_i^{B}, & F_i^{B} < F_i \\ X_i, & \text{else} \end{cases} \qquad (12)$$
Here, $x_{i,d}^{B}$ is the new value of the d'th problem variable, $r$ is a random number in the interval $[0, 1]$, $F_{MB}$ is the objective function value of the composite member of the bad group, $X_i^{B}$ is the new position of the i'th population member, and $F_i^{B}$ is its objective function value.
In the third stage, the population matrix is also updated based on the best member of the population using Equations (13) and (14).
$$x_{i,d}^{new} = x_{i,d} + r \times \left( x_{i,d}^{best} - x_{i,d} \right), \qquad (13)$$
$$X_i = \begin{cases} X_i^{new}, & F_i^{new} < F_i \\ X_i, & \text{else} \end{cases} \qquad (14)$$
Here, $x_{i,d}^{new}$ is the new value of the d'th problem variable, $x_{i,d}^{best}$ is the d'th variable of the best population member, $r$ is a random number in the interval $[0, 1]$, $X_i^{new}$ is the new position of the i'th population member, and $F_i^{new}$ is its objective function value.
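The three update stages share one pattern: a guided step followed by greedy acceptance. A minimal sketch follows, under two assumptions where the equations leave room for interpretation: the random number $r$ is drawn per dimension (a single scalar per member is an equally plausible reading), and the third stage omits the sign factor, as in Equation (13).

```python
import numpy as np

def objective(x):
    return float(np.sum(x ** 2))              # placeholder objective

def staged_update(x_i, f_i, guide, f_guide, rng, use_sign=True):
    """One GMBO update stage with greedy acceptance.

    use_sign=True  -> Equations (9)-(12): step toward the guide when it is
                      better than the member (sign(F_i - F_guide) = +1) and
                      away from it when it is worse.
    use_sign=False -> Equations (13)-(14): step toward the best member.
    """
    r = rng.random(x_i.shape)                 # r in [0, 1], drawn per dimension
    direction = np.sign(f_i - f_guide) if use_sign else 1.0
    candidate = x_i + r * (guide - x_i) * direction
    f_candidate = objective(candidate)
    # Equations (10)/(12)/(14): keep the move only if it improves the member.
    return (candidate, f_candidate) if f_candidate < f_i else (x_i, f_i)
```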
The process of updating the population matrix is repeated until the end of the algorithm. After the last iteration of the algorithm, the best quasi-optimal solution obtained with the GMBO is available as the output of the algorithm. The various steps of implementing the proposed GMBO algorithm are presented as pseudocode in Algorithm 1 and also as a flowchart in Figure 1.
Algorithm 1. Pseudocode of the GMBO algorithm.
start GMBO.
1: Input problem information: variables, objective function, and constraints.
2: Set number of population members (N) and iterations (T).
3: Generate an initial population matrix at random.
4: Evaluate the objective function.
5: for t = 1:T
6:    Sort population matrix based on Equations (3) and (4).
7:    Update good group based on Equation (5).
8:    Update bad group based on Equation (6).
9:    Update composite members (MG and MB) based on Equations (7) and (8).
10:   for i = 1:N
11:      Update Xi based on first stage using Equations (9) and (10).
12:      Update Xi based on second stage using Equations (11) and (12).
13:      Update Xi based on third stage using Equations (13) and (14).
14:   end for
15:   Save best quasi-optimal solution obtained with the GMBO so far.
16: end for
17: Output best quasi-optimal solution obtained with the GMBO.
end GMBO.
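Putting the pieces together, a compact Python sketch of Algorithm 1 might look as follows. The group size, the clipping of candidates to the variable bounds, and taking the best member at the start of each iteration are assumptions made where the pseudocode leaves details open:

```python
import numpy as np

def gmbo(objective, bounds, n_pop=50, n_iter=1000, group_frac=0.2, seed=None):
    """Sketch of the GMBO following Algorithm 1 (group sizes assumed equal)."""
    rng = np.random.default_rng(seed)
    lo, hi = (np.asarray(b, dtype=float) for b in bounds)
    m = lo.size
    X = rng.uniform(lo, hi, size=(n_pop, m))          # Equation (1)
    F = np.array([objective(x) for x in X])           # Equation (2)
    n_group = max(1, int(group_frac * n_pop))         # assumed N_G = N_B

    def try_move(i, guide, f_guide, use_sign):
        r = rng.random(m)
        sign = np.sign(F[i] - f_guide) if use_sign else 1.0
        cand = np.clip(X[i] + r * (guide - X[i]) * sign, lo, hi)
        f_cand = objective(cand)
        if f_cand < F[i]:                             # greedy acceptance
            X[i], F[i] = cand, f_cand

    for _ in range(n_iter):
        order = np.argsort(F)                         # Equations (3)-(4)
        MG = X[order[:n_group]].mean(axis=0)          # Equations (5), (7)
        MB = X[order[-n_group:]].mean(axis=0)         # Equations (6), (8)
        f_mg, f_mb = objective(MG), objective(MB)
        best = X[order[0]].copy()                     # best member this iteration
        for i in range(n_pop):
            try_move(i, MG, f_mg, use_sign=True)      # stage 1, Eqs. (9)-(10)
            try_move(i, MB, f_mb, use_sign=True)      # stage 2, Eqs. (11)-(12)
            try_move(i, best, 0.0, use_sign=False)    # stage 3, Eqs. (13)-(14)
    i_best = int(np.argmin(F))
    return X[i_best], F[i_best]

# Example: sphere function (F1) in 30 dimensions.
sol, val = gmbo(lambda x: float(np.sum(x ** 2)),
                bounds=(np.full(30, -100.0), np.full(30, 100.0)),
                n_pop=30, n_iter=200, seed=0)
print(val)
```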

3. Simulation Study and Results

In this section, simulation studies of the proposed GMBO, in terms of providing suitable quasi-optimal solutions and solving various optimization problems, are presented. The performance of the GMBO in providing suitable quasi-optimal solutions was evaluated on a set of 23 standard objective functions of the unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types. Detailed information on these 23 standard objective functions is provided in Appendix A (Table A1, Table A2, and Table A3).
In order to analyze the optimization results obtained from the GMBO, these results were compared with the performance of eight other optimization algorithms: (i) famous methods: the Genetic Algorithm (GA) [20] and Particle Swarm Optimization (PSO) [10]; (ii) popular methods: the Gravitational Search Algorithm (GSA) [16], Teaching–Learning-Based Optimization (TLBO) [21], the Grey Wolf Optimizer (GWO) [11], and the Whale Optimization Algorithm (WOA) [12]; and (iii) recently proposed methods: the Tunicate Swarm Algorithm (TSA) [13] and the Marine Predators Algorithm (MPA) [14]. The experiments were run in MATLAB (R2020a) on a 64-bit Core i7 processor at 3.20 GHz with 16 GB of main memory. Each optimization algorithm was run in twenty independent trials; the optimization results are reported as the average ("ave") and standard deviation ("std") of the best solutions.
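A sketch of this reporting protocol, reusing the hypothetical `gmbo` sketch from Section 2 and assuming independence comes from varying the random seed, is:

```python
import numpy as np

def evaluate(algorithm, objective, bounds, runs=20):
    """Run `algorithm` in independent trials; report ave and std of best values."""
    best_vals = [algorithm(objective, bounds, seed=s)[1] for s in range(runs)]
    return float(np.mean(best_vals)), float(np.std(best_vals))

# Example with the gmbo sketch from Section 2 on the sphere function F1.
ave, std = evaluate(gmbo, lambda x: float(np.sum(x ** 2)),
                    (np.full(30, -100.0), np.full(30, 100.0)))
print(f"ave = {ave:.3e}, std = {std:.3e}")
```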
The values used for the main controlling parameters of the comparative algorithms are specified in Table 1.

3.1. Evaluation Results on Unimodal Objective Functions

Seven unimodal objective functions, F1 to F7, were selected to evaluate the performance of the GMBO in providing quasi-optimal solutions. The GMBO and the eight other algorithms were implemented on these functions, and the optimization results are presented in Table 2. The GMBO attains the global optimum for the F1, F2, F3, and F6 objective functions and performs acceptably in providing quasi-optimal solutions for F4, F5, and F7. The results in Table 2 show that the GMBO handles unimodal objective functions well and is more suitable than the eight competing optimization algorithms.

3.2. Evaluation Results on High-Dimensional Multimodal Objective Functions

Six high-dimensional multimodal objective functions, F8 to F13, were selected to evaluate the performance of the GMBO and the eight other optimization algorithms. The optimization results for these objective functions are presented in Table 3. Based on this table, the GMBO attains the global optimum for the F9 and F11 objective functions and generates the best quasi-optimal solutions for F10 and F12. For the F13 objective function, the GSA is the best optimizer, and the GMBO is the second best. The optimization results show that the GMBO obtains very competitive results on the majority of high-dimensional multimodal objective functions.

3.3. Evaluation Results on Fixed-Dimensional Multimodal Objective Functions

Ten fixed-dimensional multimodal objective functions, F14 to F23, were selected to evaluate the performance of the GMBO in providing quasi-optimal solutions. The results of implementing the GMBO and the eight other optimization algorithms on this type of objective function are presented in Table 4. Based on this table, the GMBO attains the global optimum for the F14, F17, F18, F19, F20, F21, F22, and F23 objective functions. For the F15 and F16 objective functions, the GMBO also provides quasi-optimal solutions that are very close to the global optimum. The simulation results show that the GMBO delivers acceptable and competitive performance in solving this type of optimization problem.
Table 2, Table 3 and Table 4 present the optimization results of the F1 to F23 objective functions using the GMBO and eight other compared optimization algorithms. In order to further analyze and visually compare the performance of the optimization algorithms, a boxplot of results for each algorithm and objective function is shown in Figure 2.

3.4. Statistical Analysis

This section provides a statistical analysis of the performance of the GMBO and other optimization algorithms in optimizing the mentioned objective functions. Although presenting the optimization results as the average and standard deviation of the best solutions provides valuable information on the performance of the optimization algorithms, statistical analysis is important to ensure the superiority of the proposed GMBO algorithm and the nonrandomness of this superiority. Therefore, in this study, the Wilcoxon rank-sum test, which is a nonparametric statistical test, was applied to determine the significance of the results. The Wilcoxon rank-sum test was applied to specify whether the results obtained by the proposed GMBO were different from the other eight optimization algorithms in a statistically significant way.
A p-value indicates whether the difference between two algorithms is statistically significant: if the p-value is less than 0.05, the difference is considered significant. The p-values obtained from the Wilcoxon rank-sum test for the three groups of objective functions are shown in Table 5. The table shows that the superiority of the GMBO is statistically significant in all unimodal comparisons and in all fixed-dimensional multimodal comparisons except against the MPA; on the high-dimensional multimodal functions, only the comparison with the WOA reaches the 0.05 threshold.
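As an illustration of how such p-values can be computed: the paper does not state its exact pairing, but pairing the per-function average results of two algorithms across the seven unimodal functions makes the signed-rank variant in `scipy.stats.wilcoxon` the natural call, while `scipy.stats.ranksums` would be the rank-sum analogue for two independent samples of raw run results.

```python
import numpy as np
from scipy import stats

# Per-function "ave" values for GMBO and GA on F1-F7, taken from Table 2.
gmbo_ave = np.array([0.0, 0.0, 0.0, 4.6e-297, 26.1650, 0.0, 3.95e-6])
ga_ave   = np.array([13.2405, 2.4794, 1536.8963, 2.0942, 310.4273, 14.55, 5.6799e-3])

stat, p = stats.wilcoxon(gmbo_ave, ga_ave)   # paired, two-sided by default
print(f"p = {p:.6f}; significant at 0.05: {p < 0.05}")
# With GMBO better on all seven functions, the exact two-sided p-value is
# 2 / 2**7 = 0.015625, matching the unimodal entries in Table 5.
```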

3.5. Sensitivity Analysis

In this section, the sensitivity of the proposed GMBO algorithm to the two parameters of the number of population members and the maximum number of iterations is analyzed.
In order to analyze the sensitivity of the proposed algorithm to the number of population members, the GMBO was run independently on all 23 objective functions with populations of 20, 30, 50, and 80 members. The results of this simulation are presented in Table 6. Table 6 shows that, for the proposed GMBO, the objective function value decreases as the number of search agents increases. Convergence curves of the GMBO for the sensitivity analysis of the number of population members are shown in Figure 3.
In order to analyze the sensitivity of the proposed GMBO algorithm to the maximum number of iterations, the GMBO was run independently on all 23 objective functions with maximum iteration counts of 100, 500, 800, and 1000. The performance of the proposed GMBO under the different maximum numbers of iterations is presented in Table 7. The results show that the GMBO converges towards the optimal solution as the maximum number of iterations increases. Convergence curves of the GMBO for the sensitivity analysis of the maximum number of iterations are shown in Figure 4.
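A sketch of this sweep, reusing the hypothetical `gmbo` sketch from Section 2 with the sphere function standing in for the full 23-function suite:

```python
import numpy as np

sphere = lambda x: float(np.sum(x ** 2))
bounds = (np.full(30, -100.0), np.full(30, 100.0))

for n_pop in (20, 30, 50, 80):            # population-size sweep (cf. Table 6)
    _, best = gmbo(sphere, bounds, n_pop=n_pop, n_iter=1000, seed=0)
    print(f"N = {n_pop:3d}: best objective = {best:.3e}")

for n_iter in (100, 500, 800, 1000):      # iteration sweep (cf. Table 7)
    _, best = gmbo(sphere, bounds, n_pop=50, n_iter=n_iter, seed=0)
    print(f"T = {n_iter:4d}: best objective = {best:.3e}")
```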

4. Discussion

Population-based optimization algorithms have been developed with the aim of providing a suitable quasi-optimal solution that is close to the global optimal solution. Exploitation power and exploration power are two important indicators in evaluating the performance of optimization algorithms.
Exploitation power denotes the ability of an optimization algorithm to provide a suitable quasi-optimal solution to an optimization problem. Based on this definition, an algorithm must be able to provide a quasi-optimal solution after complete implementation, at the end of its iterations. This indicator is especially relevant for optimization problems that have only one basic optimal solution. The unimodal objective functions F1 to F7 have only one basic optimal solution and are, therefore, suitable for analyzing the exploitation power of optimization algorithms. The performance results of the GMBO and the eight other optimization algorithms on these objective functions are presented in Table 2. These results show that the GMBO has high exploitation power: it attains the global optimum for objective functions F1, F2, F3, and F6, is effectively at the optimum for F4 (an average best value of 4.6 × 10−297), and provides acceptable quasi-optimal solutions for F5 and F7.
Exploration power denotes the ability of an optimization algorithm to accurately scan the search space of an optimization problem. According to this definition, an optimization algorithm should be able to explore different areas of the problem search space during its successive iterations. The exploration index is especially important for optimization problems that have several locally optimal solutions in addition to the basic optimal solution. The high-dimensional multimodal objective functions F8 to F13 and the fixed-dimensional multimodal objective functions F14 to F23 have several locally optimal solutions in addition to the basic optimal solution and are, therefore, suitable for analyzing the exploration power of optimization algorithms. The optimization results of the GMBO and the eight other optimization algorithms on these objective functions are presented in Table 3 and Table 4. The results show the high exploration power of the proposed GMBO in solving optimization problems with several locally optimal solutions. Therefore, the GMBO is able to scan different areas of the search space well and, owing to its high exploration power, to pass through locally optimal areas.
The results of the statistical analysis using the Wilcoxon rank-sum test, presented in Table 5, indicate that the superiority of the GMBO over the other algorithms, together with its acceptable exploitation and exploration power, is largely not a product of chance.

5. Real-Life Application

In this section, the performance of the GMBO in solving problems in real-life applications is evaluated. For this purpose, the GMBO is implemented in an optimization problem, namely, pressure vessel design.
The mathematical model of this problem was adapted from [22]. Figure 5 shows a schematic view of the pressure vessel problem. Table 8 and Table 9 report the performance of the GMBO and the other algorithms. The GMBO provides an optimal design at (Ts, Th, R, L) = (0.778201, 0.384739, 40.320910, 200.00000), with a corresponding objective function value (cost) of 5886.0424. Figure 6 presents the convergence analysis of the GMBO for the pressure vessel design optimization problem.
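For reference, the standard cost and constraint model of this problem [22] can be written down directly; a static penalty (with an assumed weight) is one common way to hand the constraints to an unconstrained optimizer such as the GMBO:

```python
import numpy as np

def pressure_vessel_cost(x):
    """Fabrication cost over x = (Ts, Th, R, L): shell/head thickness, radius, length."""
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)

def constraints(x):
    """Inequality constraints g(x) <= 0 of the standard model."""
    Ts, Th, R, L = x
    return np.array([
        -Ts + 0.0193 * R,
        -Th + 0.00954 * R,
        -np.pi * R ** 2 * L - (4.0 / 3.0) * np.pi * R ** 3 + 1296000.0,
        L - 240.0,
    ])

def penalized_cost(x, weight=1e6):        # penalty weight is an assumption
    violation = np.maximum(constraints(x), 0.0)
    return pressure_vessel_cost(x) + weight * float(np.sum(violation ** 2))

# The reported GMBO solution is feasible and costs about 5886.04:
x_star = np.array([0.778201, 0.384739, 40.320910, 200.0])
print(pressure_vessel_cost(x_star), bool(np.all(constraints(x_star) <= 0.0)))
```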

6. Conclusions and Future Works

Optimization algorithms have a special role in solving optimization problems. In this study, a new optimization algorithm called the Group Mean-Based Optimizer (GMBO) was presented for solving optimization problems in various sciences. The proposed GMBO makes more effective use of the population members' information by creating two new composite members, obtained by averaging two selected groups of good and bad members, and using them to update the algorithm population. The various steps of the GMBO were described and then modeled mathematically for application to optimization problems. The performance of the GMBO was evaluated on a set of 23 standard objective functions of the unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types. The results on the unimodal objective functions F1 to F7 showed the high exploitation power of the proposed GMBO. Additionally, the multimodal objective functions, comprising the two groups F8 to F13 and F14 to F23, showed that the GMBO has good exploration power for problems that also have several locally optimal solutions. The optimization results obtained from the GMBO were compared with those of eight other optimization algorithms: the Marine Predators Algorithm (MPA), the Tunicate Swarm Algorithm (TSA), the Whale Optimization Algorithm (WOA), the Grey Wolf Optimizer (GWO), Teaching–Learning-Based Optimization (TLBO), the Gravitational Search Algorithm (GSA), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA). The results show that the proposed GMBO has an acceptable ability to solve various optimization problems and is much more competitive than the other algorithms. In addition, the efficiency and effectiveness of the GMBO were demonstrated by applying it to an engineering design problem, namely, pressure vessel design. From the experimental results, it can be concluded that the GMBO is applicable to real-life problems with unknown search spaces.
We propose several perspectives and ideas for future works and studies. The design of a binary version and a multiobjective version of the GMBO is an interesting potential topic for future investigations. Apart from this, applying the GMBO to real-life optimization problems and various other optimization problems can be considered significant contributions as well.
An important caveat for all population-based optimization algorithms is that, because they provide quasi-optimal solutions based on a random search of the solution space, those solutions may differ from the global optimum. Therefore, although the GMBO proposed in this paper performs well on the tested optimization problems, it remains possible to introduce new optimization algorithms that provide quasi-optimal solutions even closer to the global optimum.

Author Contributions

Conceptualization, Š.H. and M.D.; methodology, Z.M.; software, Š.H.; validation, M.D., Z.M., and Š.H.; formal analysis, Z.M.; investigation, M.D.; resources, Š.H.; data curation, Z.M.; writing—original draft preparation, M.D.; writing—review and editing, Š.H.; visualization, Z.M.; supervision, M.D.; project administration, M.D.; funding acquisition, Š.H. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the Project of Specific Research, PrF UHK No. 2101/2021.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The authors declare to honor the Principles of Transparency and Best Practice in Scholarly Publishing about Data.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Information on the 23 objective functions is provided in Table A1, Table A2 and Table A3.
Table A1. Unimodal test functions.

| Objective Function | Range | Dimensions | Fmin |
|---|---|---|---|
| $F_1(x) = \sum_{i=1}^{m} x_i^2$ | $[-100, 100]$ | 30 | 0 |
| $F_2(x) = \sum_{i=1}^{m} \lvert x_i \rvert + \prod_{i=1}^{m} \lvert x_i \rvert$ | $[-10, 10]$ | 30 | 0 |
| $F_3(x) = \sum_{i=1}^{m} \left( \sum_{j=1}^{i} x_j \right)^2$ | $[-100, 100]$ | 30 | 0 |
| $F_4(x) = \max_i \{ \lvert x_i \rvert, \ 1 \le i \le m \}$ | $[-100, 100]$ | 30 | 0 |
| $F_5(x) = \sum_{i=1}^{m-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( x_i - 1 \right)^2 \right]$ | $[-30, 30]$ | 30 | 0 |
| $F_6(x) = \sum_{i=1}^{m} \left( x_i + 0.5 \right)^2$ | $[-100, 100]$ | 30 | 0 |
| $F_7(x) = \sum_{i=1}^{m} i x_i^4 + \mathrm{random}(0, 1)$ | $[-1.28, 1.28]$ | 30 | 0 |
Table A2. High-dimensional multimodal test functions.

| Objective Function | Range | Dimensions | Fmin |
|---|---|---|---|
| $F_8(x) = \sum_{i=1}^{m} -x_i \sin\left( \sqrt{\lvert x_i \rvert} \right)$ | $[-500, 500]$ | 30 | −12569 |
| $F_9(x) = \sum_{i=1}^{m} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$ | $[-5.12, 5.12]$ | 30 | 0 |
| $F_{10}(x) = -20 \exp\left( -0.2 \sqrt{\tfrac{1}{m} \sum_{i=1}^{m} x_i^2} \right) - \exp\left( \tfrac{1}{m} \sum_{i=1}^{m} \cos(2\pi x_i) \right) + 20 + e$ | $[-32, 32]$ | 30 | 0 |
| $F_{11}(x) = \tfrac{1}{4000} \sum_{i=1}^{m} x_i^2 - \prod_{i=1}^{m} \cos\left( \tfrac{x_i}{\sqrt{i}} \right) + 1$ | $[-600, 600]$ | 30 | 0 |
| $F_{12}(x) = \tfrac{\pi}{m} \left\{ 10 \sin(\pi y_1) + \sum_{i=1}^{m-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_m - 1)^2 \right\} + \sum_{i=1}^{m} u(x_i, 10, 100, 4)$, where $y_i = 1 + \tfrac{x_i + 1}{4}$ and $u(x_i, a, k, n) = \begin{cases} k (x_i - a)^n, & x_i > a \\ 0, & -a < x_i < a \\ k (-x_i - a)^n, & x_i < -a \end{cases}$ | $[-50, 50]$ | 30 | 0 |
| $F_{13}(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{m-1} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_{i+1}) \right] + (x_m - 1)^2 \left[ 1 + \sin^2(2\pi x_m) \right] \right\} + \sum_{i=1}^{m} u(x_i, 5, 100, 4)$ | $[-50, 50]$ | 30 | 0 |
Table A3. Fixed-dimensional multimodal test functions.

| Objective Function | Range | Dimensions | Fmin |
|---|---|---|---|
| $F_{14}(x) = \left( \tfrac{1}{500} + \sum_{j=1}^{25} \tfrac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | $[-65.53, 65.53]$ | 2 | 0.998 |
| $F_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \tfrac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | $[-5, 5]$ | 4 | 0.00030 |
| $F_{16}(x) = 4 x_1^2 - 2.1 x_1^4 + \tfrac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$ | $[-5, 5]$ | 2 | −1.0316 |
| $F_{17}(x) = \left( x_2 - \tfrac{5.1}{4\pi^2} x_1^2 + \tfrac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \tfrac{1}{8\pi} \right) \cos x_1 + 10$ | $[-5, 10] \times [0, 15]$ | 2 | 0.398 |
| $F_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 \left( 19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2 \right) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 \left( 18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2 \right) \right]$ | $[-5, 5]$ | 2 | 3 |
| $F_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$ | $[0, 1]$ | 3 | −3.86 |
| $F_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$ | $[0, 1]$ | 6 | −3.22 |
| $F_{21}(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | $[0, 10]$ | 4 | −10.1532 |
| $F_{22}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | $[0, 10]$ | 4 | −10.4029 |
| $F_{23}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | $[0, 10]$ | 4 | −10.5364 |

References

  1. Dhiman, G. SSC: A hybrid nature-inspired meta-heuristic optimization algorithm for engineering applications. Knowl. Based Syst. 2021, 222, 106926. [Google Scholar] [CrossRef]
  2. Dhiman, G.; Kaur, A. STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Eng. Appl. Artif. Intell. 2019, 82, 148–174. [Google Scholar] [CrossRef]
  3. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  4. Dorigo, M.; Stützle, T. Ant colony optimization: Overview and recent advances. In Handbook of Metaheuristics; Springer International Publishing: Cham, Switzerland, 2019; pp. 311–351. [Google Scholar]
  5. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Seifi, A. Spring Search Algorithm: A new meta-heuristic optimization algorithm inspired by Hooke’s law. In 2017 IEEE 4th International Conference on Knowledge-Based Engineering and Innovation (KBEI); IEEE: Tehran, Iran, 2017; pp. 210–214. [Google Scholar]
  6. Dehghani, M.; Montazeri, Z.; Givi, H.; Guerrero, J.M.; Dhiman, G. Darts game optimizer: A new optimization technique based on darts game. Int. J. Intell. Eng. Syst. 2020, 13, 286–294. [Google Scholar] [CrossRef]
  7. Gaina, R.D.; Devlin, S.; Lucas, S.M.; Perez, D. Rolling horizon evolutionary algorithms for general video game playing. IEEE Trans. Games 2021. [Google Scholar] [CrossRef]
  8. Niccolai, A.; Grimaccia, F.; Mussetta, M.; Gandelli, A.; Zich, R. Social network optimization for WSN routing: Analysis on problem codification techniques. Mathematics 2020, 8, 583. [Google Scholar] [CrossRef] [Green Version]
  9. Roy, K.; Ting, T.C.H.; Lau, H.H.; Lim, J.B. Nonlinear behaviour of back-to-back gapped built-up cold-formed steel channel sections under compression. J. Constr. Steel Res. 2018, 147, 257–276. [Google Scholar] [CrossRef]
  10. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; IEEE Service Center: Piscataway, NJ, USA, 1995; pp. 1942–1948. [Google Scholar]
  11. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  12. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  13. Kaur, S.; Awasthi, L.K.; Sangal, A.; Dhiman, G. Tunicate swarm algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  14. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  15. Van Laarhoven, P.J.; Aarts, E.H. Simulated annealing. In Simulated Annealing: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 1987; pp. 7–15. [Google Scholar]
  16. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  17. Dehghani, M.; Samet, H. Momentum search algorithm: A new meta-heuristic optimization algorithm inspired by momentum conservation law. SN Appl. Sci. 2020, 2, 1–15. [Google Scholar] [CrossRef]
  18. Dehghani, M.; Mardaneh, M.; Guerrero, J.M.; Malik, O.; Kumar, V. Football game based optimization: An application to solve energy commitment problem. Int. J. Intell. Eng. Syst. 2020, 13, 514–523. [Google Scholar] [CrossRef]
  19. Dehghani, M.; Montazeri, Z.; Saremi, S.; Dehghani, A.; Malik, O.P.; Al-Haddad, K.; Guerrero, J.M. HOGO: Hide Objects Game Optimization. Int. J. Intell. Eng. Syst. 2020, 13, 216–225. [Google Scholar] [CrossRef]
  20. Bose, A.; Biswas, T.; Kuila, P. A novel genetic algorithm based scheduling for multi-core systems. In Smart Innovations in Communication and Computational Sciences; Springer: Berlin/Heidelberg, Germany, 2019; pp. 45–54. [Google Scholar]
  21. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  22. Kannan, B.; Kramer, S.N. An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J. Mech. Des. 1994, 116, 405–411. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the GMBO algorithm.
Figure 2. Boxplot of composition objective function results for different optimization algorithms.
Figure 3. Sensitivity analysis of the GMBO for the number of population members.
Figure 4. Sensitivity analysis of the GMBO for the maximum number of iterations.
Figure 5. Schematic view of the pressure vessel problem.
Figure 6. Convergence analysis of the GMBO for the pressure vessel design optimization problem.
Table 1. Parameter values for the comparative algorithms.

| Algorithm | Parameter | Value |
|---|---|---|
| MPA | Constant number | P = 0.5 |
| | Random vector | R is a vector of uniform random numbers in [0, 1] |
| | Fish Aggregating Devices (FADs) | FADs = 0.2 |
| | Binary vector | U = 0 or 1 |
| TSA | Pmin, Pmax | 1, 4 |
| | c1, c2, c3 | random numbers in the range [0, 1] |
| WOA | Convergence parameter (a) | linear reduction from 2 to 0 |
| | r | random vector in [0, 1] |
| | l | random number in [−1, 1] |
| GWO | Convergence parameter (a) | linear reduction from 2 to 0 |
| TLBO | TF (teaching factor) | TF = round(1 + rand) |
| | rand | random number in [0, 1] |
| GSA | Alpha, G0, Rnorm, Rpower | 20, 100, 2, 1 |
| PSO | Topology | fully connected |
| | Cognitive and social constants | C1 = 2, C2 = 2 |
| | Inertia weight | linear reduction from 0.9 to 0.1 |
| | Velocity limit | 10% of dimension range |
| GA | Type | real coded |
| | Selection | roulette wheel (proportionate) |
| | Crossover | whole arithmetic (probability = 0.8, α ∈ [−0.5, 1.5]) |
| | Mutation | Gaussian (probability = 0.05) |
Table 2. Optimization results of the GMBO and other optimization algorithms on unimodal objective functions.

| | | GA | PSO | GSA | TLBO | GWO | WOA | TSA | MPA | GMBO |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Ave | 13.2405 | 1.7740 × 10−5 | 2.0255 × 10−17 | 8.3373 × 10−60 | 1.09 × 10−58 | 2.1741 × 10−9 | 7.71 × 10−38 | 3.2715 × 10−21 | 0 |
| | std | 4.7664 × 10−15 | 6.4396 × 10−21 | 1.1369 × 10−32 | 4.9436 × 10−76 | 5.1413 × 10−74 | 7.3985 × 10−25 | 7.00 × 10−21 | 4.6153 × 10−21 | 0 |
| F2 | Ave | 2.4794 | 0.3411 | 2.3702 × 10−8 | 7.1704 × 10−35 | 1.2952 × 10−34 | 0.5462 | 8.48 × 10−39 | 1.57 × 10−12 | 0 |
| | std | 2.2342 × 10−15 | 7.4476 × 10−17 | 5.1789 × 10−24 | 6.6936 × 10−50 | 1.9127 × 10−50 | 1.7377 × 10−16 | 5.92 × 10−41 | 1.42 × 10−12 | 0 |
| F3 | Ave | 1536.8963 | 589.4922 | 79.3439 | 2.7531 × 10−15 | 7.4091 × 10−15 | 1.7634 × 10−8 | 1.15 × 10−21 | 0.0864 | 0 |
| | std | 6.6095 × 10−13 | 7.1179 × 10−13 | 1.2075 × 10−13 | 2.6459 × 10−31 | 5.6446 × 10−30 | 1.0357 × 10−23 | 6.70 × 10−21 | 0.1444 | 0 |
| F4 | Ave | 2.0942 | 3.9634 | 3.2547 × 10−9 | 9.4199 × 10−15 | 1.2599 × 10−14 | 2.9009 × 10−5 | 1.33 × 10−23 | 2.6 × 10−8 | 4.6 × 10−297 |
| | std | 2.2342 × 10−15 | 1.9860 × 10−16 | 2.0346 × 10−24 | 2.1167 × 10−30 | 1.0583 × 10−29 | 1.2121 × 10−20 | 1.15 × 10−22 | 9.25 × 10−9 | 0 |
| F5 | Ave | 310.4273 | 50.26245 | 36.10695 | 146.4564 | 26.8607 | 41.7767 | 28.8615 | 46.049 | 26.1650 |
| | std | 2.0972 × 10−13 | 1.5888 × 10−14 | 3.0982 × 10−14 | 1.9065 × 10−14 | 0 | 2.5421 × 10−14 | 4.76 × 10−3 | 0.4219 | 3.89 × 10−14 |
| F6 | Ave | 14.55 | 20.25 | 0 | 0.4435 | 0.6423 | 1.6085 × 10−9 | 7.10 × 10−21 | 0.398 | 0 |
| | std | 3.1776 × 10−15 | 1.2564 | 0 | 4.2203 × 10−16 | 6.2063 × 10−17 | 4.6240 × 10−25 | 1.12 × 10−25 | 0.1914 | 0 |
| F7 | Ave | 5.6799 × 10−3 | 0.1134 | 0.0206 | 0.0017 | 0.0008 | 0.0205 | 3.72 × 10−4 | 0.0018 | 3.95 × 10−6 |
| | std | 7.7579 × 10−19 | 4.3444 × 10−17 | 2.7152 × 10−18 | 3.87896 × 10−19 | 7.2730 × 10−20 | 1.5515 × 10−18 | 5.09 × 10−5 | 0.001 | 7.58 × 10−21 |
Table 3. Optimization results of the GMBO and other optimization algorithms on high-dimensional multimodal objective functions.

| | | GA | PSO | GSA | TLBO | GWO | WOA | TSA | MPA | GMBO |
|---|---|---|---|---|---|---|---|---|---|---|
| F8 | Ave | −8184.4142 | −6908.6558 | −2849.0724 | −7408.6107 | −5885.1172 | −1663.9782 | −5740.3388 | −3594.16321 | −5431.66 |
| | std | 833.2165 | 625.6248 | 264.3516 | 513.5784 | 467.5138 | 716.349 | 241.58 | 11.3265 | 14.07 × 10−12 |
| F9 | Ave | 62.4114 | 57.0613 | 16.2675 | 10.2485 | 8.5265 × 10−15 | 4.2011 | 5.70 × 10−31 | 40.1238 | 0 |
| | std | 2.5421 × 10−14 | 6.3552 × 10−15 | 3.1776 × 10−15 | 5.5608 × 10−15 | 5.6446 × 10−30 | 4.3692 × 10−15 | 1.46 × 10−32 | 6.3124 | 0 |
| F10 | Ave | 3.2218 | 2.1546 | 3.5673 × 10−9 | 0.2757 | 1.7053 × 10−14 | 0.3293 | 9.80 × 10−14 | 9.6987 × 10−12 | 8.88 × 10−16 |
| | std | 5.1636 × 10−15 | 7.9441 × 10−16 | 3.6992 × 10−25 | 2.5641 × 10−15 | 2.7517 × 10−29 | 1.9860 × 10−16 | 4.51 × 10−12 | 6.1325 × 10−12 | 5.29 × 10−31 |
| F11 | Ave | 1.2302 | 0.0462 | 3.7375 | 0.6082 | 0.0037 | 0.1189 | 1.00 × 10−7 | 0 | 0 |
| | std | 8.4406 × 10−16 | 3.1031 × 10−18 | 2.7804 × 10−15 | 1.9860 × 10−16 | 1.2606 × 10−18 | 8.9991 × 10−17 | 7.46 × 10−7 | 0 | 0 |
| F12 | Ave | 0.047 | 0.4806 | 0.0362 | 0.0203 | 0.0372 | 1.7414 | 0.0368 | 0.0851 | 0.0140 |
| | std | 4.6547 × 10−18 | 1.8619 × 10−16 | 6.2063 × 10−18 | 7.7579 × 10−19 | 4.3444 × 10−17 | 8.1347 × 10−12 | 1.5461 × 10−2 | 0.0052 | 3.1 × 10−17 |
| F13 | Ave | 1.2085 | 0.5084 | 0.002 | 0.3293 | 0.5763 | 0.3456 | 2.9575 | 0.4901 | 0.2659 |
| | std | 3.2272 × 10−16 | 4.9650 × 10−17 | 4.2617 × 10−14 | 2.1101 × 10−16 | 2.4825 × 10−16 | 3.25391 × 10−12 | 1.5682 × 10−12 | 0.1932 | 6.95 × 10−16 |
Table 4. Optimization results of the GMBO and other optimization algorithms on fixed-dimensional multimodal objective functions.

| | | GA | PSO | GSA | TLBO | GWO | WOA | TSA | MPA | GMBO |
|---|---|---|---|---|---|---|---|---|---|---|
| F14 | Ave | 0.9986 | 2.1735 | 3.5913 | 2.2721 | 3.7408 | 0.998 | 1.9923 | 0.998 | 0.998 |
| | std | 1.5640 × 10−15 | 7.9441 × 10−16 | 7.9441 × 10−16 | 1.9860 × 10−16 | 6.4545 × 10−15 | 9.4336 × 10−16 | 2.6548 × 10−7 | 4.2735 × 10−16 | 4.97 × 10−17 |
| F15 | Ave | 5.3952 × 10−2 | 0.0535 | 0.0024 | 0.0033 | 0.0063 | 0.0049 | 0.0004 | 0.003 | 0.0003 |
| | std | 7.0791 × 10−18 | 3.8789 × 10−19 | 2.9092 × 10−19 | 1.2218 × 10−17 | 1.1636 × 10−18 | 3.4910 × 10−18 | 9.0125 × 10−4 | 4.0951 × 10−15 | 7.76 × 10−19 |
| F16 | Ave | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 |
| | std | 7.9441 × 10−16 | 3.4755 × 10−16 | 5.9580 × 10−16 | 1.4398 × 10−15 | 3.9720 × 10−16 | 9.9301 × 10−16 | 2.6514 × 10−16 | 4.4652 × 10−16 | 2.98 × 10−16 |
| F17 | Ave | 0.4369 | 0.7854 | 0.3978 | 0.3978 | 0.3978 | 0.4047 | 0.3991 | 0.3979 | 0.3978 |
| | std | 4.9650 × 10−17 | 4.9650 × 10−17 | 9.9301 × 10−17 | 7.4476 × 10−17 | 8.6888 × 10−17 | 2.4825 × 10−17 | 2.1596 × 10−16 | 9.1235 × 10−15 | 4.97 × 10−17 |
| F18 | Ave | 4.3592 | 3 | 3 | 3.0009 | 3 | 3 | 3 | 3 | 3 |
| | std | 5.9580 × 10−16 | 3.6741 × 10−15 | 6.9511 × 10−16 | 1.5888 × 10−15 | 2.0853 × 10−15 | 5.6984 × 10−15 | 2.6528 × 10−15 | 1.9584 × 10−15 | 2.98 × 10−16 |
| F19 | Ave | −3.85434 | −3.8627 | −3.8627 | −3.8609 | −3.8621 | −3.8627 | −3.8066 | −3.8627 | −3.8628 |
| | std | 9.9301 × 10−17 | 8.9371 × 10−15 | 8.3413 × 10−15 | 7.3483 × 10−15 | 2.4825 × 10−15 | 3.1916 × 10−15 | 2.6357 × 10−15 | 4.2428 × 10−15 | 5.96 × 10−16 |
| F20 | Ave | −2.8239 | −3.2619 | −3.0396 | −3.2014 | −3.2523 | −3.2424 | −3.3206 | −3.3211 | −3.322 |
| | std | 3.97205 × 10−16 | 2.9790 × 10−16 | 2.1846 × 10−14 | 1.7874 × 10−15 | 2.1846 × 10−15 | 7.9441 × 10−16 | 5.6918 × 10−15 | 1.1421 × 10−11 | 1.99 × 10−15 |
| F21 | Ave | −4.3040 | −5.3891 | −5.1486 | −9.1746 | −9.6452 | −7.4016 | −5.5021 | −10.1532 | −10.1532 |
| | std | 1.5888 × 10−15 | 1.4895 × 10−15 | 2.9790 × 10−16 | 8.5399 × 10−15 | 6.5538 × 10−15 | 2.3819 × 10−11 | 5.4615 × 10−13 | 2.5361 × 10−11 | 1.99 × 10−15 |
| F22 | Ave | −5.1174 | −7.6323 | −9.0239 | −10.0389 | −10.4025 | −8.8165 | −5.0625 | −10.4029 | −10.4029 |
| | std | 1.2909 × 10−15 | 1.5888 × 10−15 | 1.6484 × 10−12 | 1.5292 × 10−14 | 1.9860 × 10−15 | 6.7524 × 10−15 | 8.4637 × 10−14 | 2.8154 × 10−11 | 5.16 × 10−15 |
| F23 | Ave | −6.5621 | −6.1648 | −8.9045 | −9.2905 | −10.1302 | −10.0003 | −10.3613 | −10.5364 | −10.5364 |
| | std | 3.8727 × 10−15 | 2.7804 × 10−15 | 7.1497 × 10−14 | 1.1916 × 10−15 | 4.5678 × 10−15 | 9.1357 × 10−15 | 7.6492 × 10−12 | 3.9861 × 10−11 | 1.96 × 10−15 |
Table 5. p-values obtained from the Wilcoxon rank-sum test (values below 0.05 indicate a statistically significant difference).

| Compared Algorithms | Unimodal | High-Dimensional Multimodal | Fixed-Dimensional Multimodal |
|---|---|---|---|
| GMBO vs. MPA | 0.015625 | 0.0625 | 0.125 |
| GMBO vs. TSA | 0.015625 | 0.4375 | 0.007813 |
| GMBO vs. WOA | 0.015625 | 0.03125 | 0.015625 |
| GMBO vs. GWO | 0.015625 | 0.4375 | 0.015625 |
| GMBO vs. TLBO | 0.015625 | 0.4375 | 0.007813 |
| GMBO vs. GSA | 0.03125 | 0.15625 | 0.015625 |
| GMBO vs. PSO | 0.015625 | 0.4375 | 0.007813 |
| GMBO vs. GA | 0.015625 | 0.4375 | 0.003906 |
Table 6. Results of the sensitivity analysis of the GMBO to the number of population members.

| Objective Function | N = 20 | N = 30 | N = 50 | N = 80 |
|---|---|---|---|---|
| F1 | 0 | 0 | 0 | 0 |
| F2 | 0 | 0 | 0 | 0 |
| F3 | 0 | 0 | 0 | 0 |
| F4 | 1.8 × 10−287 | 8.8 × 10−289 | 4.6 × 10−297 | 1.2 × 10−299 |
| F5 | 28.60939 | 28.74985 | 26.16505 | 25.07082 |
| F6 | 0 | 0 | 0 | 0 |
| F7 | 7.74 × 10−5 | 6.96 × 10−5 | 3.95 × 10−6 | 4.23 × 10−7 |
| F8 | −3940.63 | −4338.67 | −5431.66 | −6589.74 |
| F9 | 0 | 0 | 0 | 0 |
| F10 | 8.88 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16 |
| F11 | 0 | 0 | 0 | 0 |
| F12 | 0.194864 | 0.042659 | 0.0140 | 0.01155 |
| F13 | 2.229227 | 1.646969 | 0.2659 | 0.1071 |
| F14 | 2.982105 | 0.998004 | 0.998 | 0.998 |
| F15 | 0.000329 | 0.000308 | 0.0003 | 0.0003 |
| F16 | −1.03163 | −1.03163 | −1.0316 | −1.0316 |
| F17 | 0.397904 | 0.397887 | 0.3978 | 0.3978 |
| F18 | 3.000004 | 3 | 3 | 3 |
| F19 | −3.86264 | −3.86278 | −3.8628 | −3.8628 |
| F20 | −3.32194 | −3.32113 | −3.322 | −3.322 |
| F21 | −9.70646 | −10.1528 | −10.1532 | −10.1532 |
| F22 | −10.3934 | −10.4019 | −10.4029 | −10.4029 |
| F23 | −7.51071 | −10.5348 | −10.5364 | −10.5364 |
Table 7. Results of the sensitivity analysis of the GMBO to the maximum number of iterations.

| Objective Function | T = 100 | T = 500 | T = 800 | T = 1000 |
|---|---|---|---|---|
| F1 | 1.3 × 10−58 | 0 | 0 | 0 |
| F2 | 1.08 × 10−30 | 4 × 10−163 | 9.1 × 10−262 | 0 |
| F3 | 1.58 × 10−39 | 5 × 10−227 | 0 | 0 |
| F4 | 7.14 × 10−28 | 4.5 × 10−145 | 7.8 × 10−232 | 4.6 × 10−297 |
| F5 | 27.13738 | 27.07678 | 26.88103 | 26.16505 |
| F6 | 0 | 0 | 0 | 0 |
| F7 | 0.000167 | 4.31 × 10−5 | 2.5 × 10−5 | 3.95 × 10−6 |
| F8 | −3468.14 | −4367.85 | −5258.08 | −5431.66 |
| F9 | 0 | 0 | 0 | 0 |
| F10 | 4.44 × 10−15 | 4.44 × 10−15 | 4.44 × 10−15 | 8.88 × 10−16 |
| F11 | 0 | 0 | 0 | 0 |
| F12 | 0.032198 | 0.02536 | 0.0166 | 0.0140 |
| F13 | 1.1671 | 1.1658 | 1.0463 | 0.2659 |
| F14 | 0.998 | 0.998 | 0.998 | 0.998 |
| F15 | 0.000351 | 0.000308 | 0.000308 | 0.0003 |
| F16 | −1.03163 | −1.03163 | −1.03163 | −1.0316 |
| F17 | 0.397915 | 0.397888 | 0.397888 | 0.3978 |
| F18 | 3.000018 | 3 | 3 | 3 |
| F19 | −3.86268 | −3.86278 | −3.86278 | −3.8628 |
| F20 | −3.31354 | −3.32177 | −3.32197 | −3.322 |
| F21 | −6.38282 | −10.1494 | −10.1531 | −10.1532 |
| F22 | −10.3783 | −10.4004 | −10.402 | −10.4029 |
| F23 | −10.5037 | −10.5337 | −10.5356 | −10.5364 |
Table 8. Comparison results for the pressure vessel design problem.

| Algorithm | Ts | Th | R | L | Optimum Cost |
|---|---|---|---|---|---|
| GMBO | 0.778201 | 0.384739 | 40.320910 | 200.00000 | 5886.0424 |
| MPA | 0.779035 | 0.384660 | 40.327793 | 199.65029 | 5889.3689 |
| TSA | 0.8303737 | 0.4162057 | 42.75127 | 169.3454 | 6048.7844 |
| WOA | 0.778961 | 0.384683 | 40.320913 | 200.00000 | 5891.3879 |
| GWO | 0.845719 | 0.418564 | 43.816270 | 156.38164 | 6011.5148 |
| TLBO | 0.817577 | 0.417932 | 41.74939 | 183.57270 | 6137.3724 |
| GSA | 1.085800 | 0.949614 | 49.345231 | 169.48741 | 11,550.2976 |
| PSO | 0.752362 | 0.399540 | 40.452514 | 198.00268 | 5890.3279 |
| GA | 1.099523 | 0.906579 | 44.456397 | 179.65887 | 6550.0230 |
Table 9. Statistical results for the pressure vessel design problem.

| Algorithm | Best | Mean | Worst | SD | Median |
|---|---|---|---|---|---|
| GMBO | 5886.0424 | 5884.1401 | 5891.3099 | 24.341 | 5888.0424 |
| MPA | 5889.3689 | 5891.5247 | 5894.6238 | 13.910 | 5890.6497 |
| TSA | 6048.7844 | 6052.6241 | 6071.2496 | 2.893 | 6050.2282 |
| WOA | 5891.3879 | 6531.5032 | 7394.5879 | 534.119 | 6416.1138 |
| GWO | 6011.5148 | 6477.3050 | 7250.9170 | 327.007 | 6397.4805 |
| TLBO | 6137.3724 | 6326.7606 | 6512.3541 | 126.609 | 6318.3179 |
| GSA | 11,550.2976 | 23,342.2909 | 33,226.2526 | 5790.625 | 24,010.0415 |
| PSO | 5890.3279 | 6264.0053 | 7005.7500 | 496.128 | 6112.6899 |
| GA | 6550.0230 | 6643.9870 | 8005.4397 | 657.523 | 7586.0085 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
