Article

A Modified Group Teaching Optimization Algorithm for Solving Constrained Engineering Optimization Problems

1 School of Information Engineering, Sanming University, Sanming 365004, China
2 School of Education and Music, Sanming University, Sanming 365004, China
3 School of Computer Science and Technology, Hainan University, Haikou 570228, China
4 Hourani Center for Applied Scientific Research, Al-Ahliyya Amman University, Amman 19328, Jordan
5 Faculty of Information Technology, Middle East University, Amman 11831, Jordan
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(20), 3765; https://doi.org/10.3390/math10203765
Submission received: 1 September 2022 / Revised: 7 October 2022 / Accepted: 10 October 2022 / Published: 12 October 2022
(This article belongs to the Special Issue Computational Intelligence: Theory and Applications)

Abstract:

The group teaching optimization algorithm (GTOA) is a metaheuristic optimization algorithm inspired by the group teaching mechanism. In GTOA, each student learns the knowledge imparted in the teacher phase, but each student's autonomy is weak. This paper considers that students have different learning motivations: elite students have strong self-learning ability, while ordinary students have only average self-learning motivation. To model this, this paper proposes a learning motivation strategy and adds random opposition-based learning and a restart strategy to enhance the global performance of the resulting modified algorithm (MGTOA). To verify the optimization effect of MGTOA, 23 standard benchmark functions and the 30 test functions of the IEEE Congress on Evolutionary Computation 2014 (CEC2014) suite are adopted. In addition, MGTOA is applied to six engineering problems for practical testing and achieves good results.

1. Introduction

Meta-heuristic algorithms (MAs) are commonly used to solve global optimization problems. They search for the optimal solution by simulating processes found in nature and human intelligence and can, to a certain extent, search globally and find an approximation of the optimal solution. The core of MAs is exploration and exploitation. Exploration means examining the entire search space as widely as possible, because the optimal solution may lie anywhere in it. Exploitation means using available information as effectively as possible: in most cases there are specific correlations around good solutions, and these correlations allow the search to adjust gradually and move from the initial solution toward the optimal one. In general, MAs aim to balance exploration and exploitation as well as possible.
In recent years, many scholars have studied MAs because of advantages such as simple, intuitive operation and fast running speed. So far, hundreds of meta-heuristic algorithms have been proposed. According to their design inspiration, MAs can be divided into four categories: swarm-based, evolutionary, physics-based, and human-based algorithms. Swarm-based algorithms include Particle Swarm Optimization (PSO) [1], Ant Lion Optimizer (ALO) [2], Bat Algorithm (BA) [3], Salp Swarm Algorithm (SSA) [4], Ant Colony Optimization (ACO) [5], Artificial Bee Colony (ABC) [6], Grey Wolf Optimizer (GWO) [7], Krill Herd (KH) [8], Whale Optimization Algorithm (WOA) [9], and Remora Optimization Algorithm (ROA) [10]. Evolutionary algorithms are represented by Genetic Algorithm (GA) [11], Evolution Strategy (ES) [12], Genetic Programming (GP) [13], Biogeography-Based Optimizer (BBO) [14], Evolutionary Programming (EP) [15], Differential Evolution (DE) [16], and Virulence Optimization Algorithm (VOA) [17]. Physics-based algorithms are represented by Simulated Annealing (SA) [18], Gravitational Search Algorithm (GSA) [19], Black Hole Algorithm (BH) [20], Multi-Verse Optimization (MVO) [21], Ray Optimization (RO) [22], and Thermal Exchange Optimization (TEO) [23]. Human-based algorithms are represented by Harmony Search (HS) [24], Teaching Learning-Based Optimization (TLBO) [25], Social Group Optimization (SGO) [26], and Exchanged Market Algorithm (EMA) [27]. All of these algorithms have demonstrated good optimization performance.
The Group Teaching Optimization Algorithm (GTOA) is an MA proposed in 2020 [28] and inspired by the group teaching mechanism. GTOA divides students into elite students and ordinary students for group teaching, and teachers apply different teaching methods to the two groups. Because each student essentially moves toward the teacher, the algorithm has good exploitation ability, and each student consolidates the teacher's material during breaks. However, this does not fully reflect the idea of group teaching. Therefore, Zhang et al. proposed a group teaching optimization algorithm with information sharing (ISGTOA) to distinguish the learning styles of different groups. ISGTOA establishes two teaching methods, one for elite and one for ordinary students, through which the two groups can communicate fully; this increases the communication between groups and makes the algorithm converge faster [29]. However, ISGTOA still does not distinguish the learning states and learning methods of different students in their spare time: every student learns in the same way in the student phase. In reality, the learning motivation of elite students differs from that of ordinary students. Elite students are strongly motivated, have strong self-learning ability, and are good at self-summary, while ordinary students lack motivation and often need the help of other students or teachers. Driven by learning motivation, elite students acquire knowledge through self-study, while ordinary students learn through discussion with classmates. Therefore, this paper proposes a modified group teaching optimization algorithm (MGTOA). With learning motivation added, elite students study independently according to their own motivation, and ordinary students communicate with each other to improve their overall ability.
At the same time, random opposition-based learning and a restart strategy are added to enhance the optimization performance of the proposed algorithm.
The main contribution of this paper can be summarized as follows:
  • A modified GTOA is proposed based on three strategies: learning motivation (LM), random opposition-based learning (ROBL), and restart strategy (RS).
  • The optimization performance of MGTOA is evaluated in different dimensions (dim = 30/500) on 23 standard benchmark functions, and its search distribution on some of these functions is shown.
  • The optimization performance of MGTOA is tested on CEC2014.
  • MGTOA is compared with seven different optimization algorithms.
  • Six engineering problems verify the engineering practicability of MGTOA.
The organizational structure of this paper is as follows: Section 2 gives a brief description of GTOA, while Section 3 describes the use of operators: learning motivation (LM), random opposition-based learning (ROBL), restart strategy (RS), and gives the framework of the proposed algorithm. Section 4 and Section 5 show the algorithm’s experimental results in solving benchmark and constraint engineering problems, while Section 6 summarizes the paper.

2. Group Teaching Optimization Algorithm (GTOA)

The idea of GTOA is to improve the knowledge level of the whole class by simulating the group teaching mechanism. Since every student has a different level of knowledge, group teaching is complicated in practice. To integrate the idea of group teaching into an optimization algorithm, the population, the decision variables, and the fitness values are taken to correspond to the students, the subjects offered to the students, and the students' knowledge, respectively. The algorithm is divided into the following four phases.

2.1. Ability Grouping Phase

To take full advantage of group teaching, all students are divided into two groups according to their knowledge level, and the teacher teaches the two groups differently. The relatively strong group is called the elite students and the other group the ordinary students; this grouping lets teachers make more targeted teaching plans. After grouping, the gap between elite and ordinary students would grow as teaching proceeds, so for better teaching the grouping in GTOA is a dynamic process, performed again after each cycle of learning.
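For illustration, this dynamic grouping can be sketched in a few lines of Python (a minimal sketch assuming minimization and an even population size; the function and variable names are illustrative, not part of the original GTOA code):

```python
import numpy as np

def ability_grouping(x, f):
    """Split the class by fitness (minimization): the better half become
    elite students, the rest ordinary students. GTOA regroups every cycle,
    so the split is dynamic."""
    order = np.argsort([f(xi) for xi in x])   # ascending fitness
    half = len(x) // 2
    return x[order[:half]], x[order[half:]]   # (elite, ordinary)
```

Calling this once per cycle, after the fitness values are updated, reproduces the regrouping described above.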

2.2. Teacher Phase

The teacher phase refers to students acquiring knowledge through their teachers. In GTOA, teachers make two different teaching plans for two groups of students.
Teacher phase Ⅰ: Elite students have a strong ability to accept knowledge, so teachers pay more attention to improving students’ overall average knowledge level. In addition, the differences in students’ acceptance of knowledge are also considered. Therefore, elite students will learn according to the teachers’ teaching and students’ overall knowledge level. This can effectively improve the overall knowledge level.
x_{teacher,i} = x_i + a × (x_T − F × (b × M + c × x_i))    (1)
M = (1/N) × Σ_{i=1}^{N} x_i    (2)
b + c = 1    (3)
where N is the number of students and x_i is the ith student. x_T is the teacher, and M is the average knowledge of the students. x_{teacher,i} is the solution obtained through the teacher phase. a, b, and c are random numbers in [0, 1]. F is a teaching factor with value 1 or 2, as in Rao et al. [25].
Teacher phase Ⅱ: Ordinary students have only an average ability to accept knowledge, so teachers pay more attention to them and tend to improve their knowledge level from an individual perspective. Ordinary students are therefore more inclined to acquire the teacher's knowledge directly.
x_{teacher,i} = x_i + 2 × d × (x_T − x_i)    (4)
where d is a random number in [0, 1].
In addition, a student cannot be guaranteed to acquire knowledge in the teacher phase, so the solution with the smaller fitness value is kept to represent the student's knowledge level after the teacher phase.
x_{teacher,i} = x_{teacher,i} if f(x_{teacher,i}) < f(x_i), otherwise x_i    (5)
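The two teacher-phase updates and the greedy selection of Equation (5) can be sketched in Python as follows (a minimal sketch assuming minimization; all names are illustrative and not part of any original GTOA code):

```python
import numpy as np

def teacher_phase(x, x_T, f, elite):
    """One GTOA teacher-phase update for a group of students.

    x     : (N, dim) array of current students
    x_T   : (dim,) teacher position
    f     : fitness function (minimization)
    elite : True -> teacher phase I (Eqs. 1-3), False -> phase II (Eq. 4)
    """
    N, dim = x.shape
    M = x.mean(axis=0)                      # mean knowledge, Eq. (2)
    x_new = np.empty_like(x)
    for i in range(N):
        if elite:
            a, b = np.random.rand(), np.random.rand()
            c = 1.0 - b                     # Eq. (3): b + c = 1
            F = np.random.choice([1, 2])    # teaching factor
            x_new[i] = x[i] + a * (x_T - F * (b * M + c * x[i]))  # Eq. (1)
        else:
            d = np.random.rand()
            x_new[i] = x[i] + 2 * d * (x_T - x[i])                # Eq. (4)
    # Eq. (5): greedy selection -- keep the better of old and new
    keep = np.array([f(x_new[i]) < f(x[i]) for i in range(N)])
    x_new[~keep] = x[~keep]
    return x_new
```

The greedy selection guarantees that no student's fitness worsens during the teacher phase.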

2.3. Student Phase

Students can acquire knowledge in their spare time through self-study or interaction with other students in order to improve their knowledge level. In this spare time, students summarize the knowledge learned in the teacher phase for self-study.
x_{student,i} = x_{teacher,i} + e × (x_{teacher,i} − x_{teacher,j}) + g × (x_{teacher,i} − x_i), if f(x_{teacher,i}) < f(x_{teacher,j})
x_{student,i} = x_{teacher,i} − e × (x_{teacher,i} − x_{teacher,j}) + g × (x_{teacher,i} − x_i), if f(x_{teacher,i}) ≥ f(x_{teacher,j})    (6)
where e and g are two random numbers in the range [0, 1], x_{student,i} is the knowledge of student i obtained from the student phase, and x_{teacher,j} is the knowledge of student j obtained from the teacher phase. The index j is chosen randomly from {1, 2, …, i − 1, i + 1, …, N}. In Equation (6), the second and third terms on the right represent learning from another student and self-learning, respectively.
In addition, a student cannot be guaranteed to acquire knowledge through the student phase, so the solution with the smaller fitness value is kept to represent the student's knowledge level after the student phase.
x_i = x_{teacher,i} if f(x_{teacher,i}) < f(x_{student,i}), otherwise x_{student,i}    (7)
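Equations (6) and (7) can be sketched together as follows (a minimal Python sketch assuming minimization; names are illustrative). Note that Equation (6) needs both the pre-teacher-phase positions (x_i) and the post-teacher-phase positions (x_{teacher,i}):

```python
import numpy as np

def student_phase(x_old, x_t, f, rng=np.random.default_rng()):
    """GTOA student phase, Eqs. (6)-(7), for a whole group (minimization).

    x_old : (N, dim) students before the teacher phase (x_i in Eq. (6))
    x_t   : (N, dim) students after the teacher phase (x_teacher in Eq. (6))
    """
    N, dim = x_t.shape
    x_new = x_t.copy()
    for i in range(N):
        e, g = rng.random(), rng.random()
        j = int(rng.choice([k for k in range(N) if k != i]))  # another student
        sign = 1.0 if f(x_t[i]) < f(x_t[j]) else -1.0         # Eq. (6) branches
        cand = x_t[i] + sign * e * (x_t[i] - x_t[j]) + g * (x_t[i] - x_old[i])
        if f(cand) < f(x_t[i]):                               # Eq. (7) selection
            x_new[i] = cand
    return x_new
```

As in the teacher phase, the selection in Equation (7) means a student's fitness can only improve or stay the same.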

2.4. Teacher Allocation Phase

Selecting excellent teachers can improve students' learning ability, so establishing a good teacher allocation mechanism is important for improving students' knowledge level. In the GWO algorithm, the average of the three best wolves is used to guide all wolves toward the prey. Inspired by this hunting behavior, the allocation of teachers is expressed by the following formula:
x_T = x_{first}, if f(x_{first}) ≤ f((x_{first} + x_{second} + x_{third})/3)
x_T = (x_{first} + x_{second} + x_{third})/3, otherwise    (8)
where x_{first}, x_{second}, and x_{third} are the students with the best, second-best, and third-best fitness values, respectively. To accelerate the convergence of the algorithm, elite students and ordinary students generally share the same teacher.
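Equation (8) amounts to choosing between the best student and the mean of the top three, whichever has the better fitness. A minimal Python sketch (minimization assumed; names are illustrative):

```python
import numpy as np

def allocate_teacher(x, f):
    """Teacher allocation, Eq. (8): take the best student, unless the mean
    of the best three students has even better fitness (minimization)."""
    order = np.argsort([f(xi) for xi in x])
    first, second, third = x[order[0]], x[order[1]], x[order[2]]
    mean3 = (first + second + third) / 3.0
    return first if f(first) <= f(mean3) else mean3
```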

2.5. The Proposed Approach

Step 1: Initialization.
(1.1) Initialization parameters.
These parameters include the maximum number of iterations Tmax, the current iteration number t (t = 0), the population size N, the upper and lower bounds ub and lb of the decision variables, the dimension dim, and the fitness function f(·).
(1.2) Initialize population.
Initialize solution x according to the test function. Each group of variables represents one student. It can be described as:
x = [x_1, x_2, …, x_N]^T =
[ x_{1,1} x_{1,2} … x_{1,dim}
  x_{2,1} x_{2,2} … x_{2,dim}
  ⋮
  x_{N,1} x_{N,2} … x_{N,dim} ]    (9)
x_i = lb + (ub − lb) × k    (10)
where k is a random number in the range [0, 1].
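Equations (9) and (10) describe uniform random initialization within the bounds; a minimal Python sketch (names are illustrative, scalar bounds assumed):

```python
import numpy as np

def init_population(N, dim, lb, ub, seed=None):
    """Random initialization within [lb, ub], Eqs. (9)-(10):
    x_i = lb + (ub - lb) * k, with one k ~ U[0, 1] per variable."""
    rng = np.random.default_rng(seed)
    k = rng.random((N, dim))
    return lb + (ub - lb) * k
```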
Step 2: Calculate fitness value.
The fitness value of the individual is calculated, and the optimal solution G is selected.
Step 3: Termination conditions
If the current iteration number t is greater than the maximum iteration number Tmax, the algorithm is terminated, and the optimal solution G is output.
Step 4: Teacher allocation phase
The first three best individuals are selected. Then, the teacher x T can be calculated by Equation (8).
Step 5: Ability grouping phase.
The students are divided into two groups based on their current fitness values. The best half of the students form the elite group x_good and the remaining half form the ordinary group x_bad. The two groups share a teacher.
Step 6: Teacher phase and student phase
(6.1) For the group x_good, the teacher phase is implemented based on Equations (1)–(3) and (5). Then, the student phase is conducted according to Equations (6) and (7). Finally, the new elite students x_good are obtained.
(6.2) For the group x_bad, the teacher phase is implemented based on Equations (4) and (5). Then, the student phase is conducted according to Equations (6) and (7). Finally, the new ordinary students x_bad are obtained.
Step 7: Population evaluation.
Calculate the fitness value of students, select the optimal solution G, and update the current iteration number t.
t = t + 1    (11)
Then, Step 3 is executed.

3. Proposed Algorithm

3.1. Learning Motivation

In the student phase of the original algorithm, every student acquires knowledge through self-study or communication with classmates in the same way. However, this does not fit elite and ordinary students equally. Elite students have stronger learning ability and motivation than ordinary students and are more inclined to self-study, while ordinary students are weaker learners who tend to study together with their classmates.
Therefore, elite students obtain learning motivation D according to Equation (12) and find a new solution through Equation (13). Ordinary students can find a new solution according to Equation (14).
D = ((1 − i)/N) × sin(2 × π × r)    (12)
x_{student,i} = x_{teacher,i} + D × x_{teacher,i}    (13)
where r is a random number in [0, 1] and i is the index of the student.
x_{student,i} = x_{teacher,i} + e × (x_{teacher,i} − x_{teacher,j}) + g × (x_{teacher,i} − M), if f(x_{teacher,i}) < f(x_{teacher,j})
x_{student,i} = x_{teacher,i} − e × (x_{teacher,i} − x_{teacher,j}) + g × (x_{teacher,i} − M), if f(x_{teacher,i}) ≥ f(x_{teacher,j})    (14)
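The elite-student self-study of Equations (12) and (13) is applied per dimension in Algorithm 1. A minimal Python sketch, reproducing Equation (12) as printed (names are illustrative; i is the student's 1-based index):

```python
import numpy as np

def motivated_self_study(x_t_i, i, N, rng=np.random.default_rng()):
    """Elite-student self-study, Eqs. (12)-(13), applied per dimension
    as in Algorithm 1. Eq. (12) is reproduced as printed."""
    xt = np.asarray(x_t_i, dtype=float)
    x_new = xt.copy()
    for j in range(xt.size):
        r = rng.random()
        D = (1 - i) / N * np.sin(2 * np.pi * r)   # learning motivation, Eq. (12)
        x_new[j] = xt[j] + D * xt[j]              # Eq. (13)
    return x_new
```

Note that for the top-ranked student (i = 1), Equation (12) gives D = 0, so that student keeps its teacher-phase position.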

3.2. Random Opposition-Based Learning

Opposition-based learning (OBL) is a computational intelligence scheme [30]. In the past few years, OBL has been successfully applied to various population-based evolutionary algorithms [31,32,33,34,35]. Random opposition-based learning adds a random factor in [0, 1] to opposition-based learning, so that a random solution is obtained within the range of the opposite solution. This not only expands the search range but also strengthens the diversity of the population, giving the algorithm stronger exploration and convergence ability. The specific formula is as follows:
x_{new,i} = (ub + lb) − (x_T − t)/T_max × rand × x_i    (15)
where x_{new,i} represents the solution obtained after random opposition-based learning and rand is a random number in [0, 1].
It is determined whether the solution obtained by the random opposition-based learning is better than the original solution:
x_i = x_i if f(x_i) ≤ f(x_{new,i}), otherwise x_{new,i}    (16)
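A minimal Python sketch of this operator, reproducing Equations (15) and (16) as printed, with greedy selection under minimization (names are illustrative, scalar bounds assumed):

```python
import numpy as np

def robl_step(x, x_T, t, T_max, lb, ub, f, rng=np.random.default_rng()):
    """Random opposition-based learning, Eqs. (15)-(16) as printed, with
    greedy selection (minimization, scalar bounds)."""
    x_out = x.copy()
    for i in range(x.shape[0]):
        rand = rng.random(x.shape[1])
        x_new = (ub + lb) - (x_T - t) / T_max * rand * x[i]   # Eq. (15)
        if f(x_new) < f(x[i]):                                # Eq. (16)
            x_out[i] = x_new
    return x_out
```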

3.3. Restart Strategy (RS)

The restart strategy [36] reassigns the value of a solution that has been trapped in a local optimum for a long time, allowing a poor solution to jump out of the local optimum. Adding the restart strategy strengthens the exploration ability of the algorithm and helps it converge better.
First, the restart threshold Limit is set according to Equation (17). Then, a counter trial is maintained for each solution, recording whether a better solution is obtained in each iteration. Its initial value is 0; if a better solution is obtained, trial is reset to 0, and otherwise trial is increased by 1. If trial exceeds Limit, the restart strategy is executed. The function Limit is plotted in Figure 1.
Limit = ln(t)    (17)
The restart strategy generates two new solutions, T1 and T2, by Equations (18) and (19), respectively, performs boundary processing on T2 by Equation (20), and then replaces the original solution with the better of the two. After that, the corresponding trial is reset to 0. The formulas of the restart strategy are as follows:
T_1 = lb + rand() × (ub − lb)    (18)
T_2 = rand() × (ub + lb) − x_i    (19)
T_2 = lb + rand() × (ub − lb), if T_2 ≥ ub or T_2 ≤ lb    (20)
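The three formulas can be sketched as one restart step in Python (a minimal sketch assuming minimization and scalar bounds; names are illustrative):

```python
import numpy as np

def restart_solution(x_i, lb, ub, f, rng=np.random.default_rng()):
    """Restart strategy, Eqs. (18)-(20): build two candidates and return
    the better one to replace a stagnant solution."""
    dim = x_i.size
    T1 = lb + rng.random(dim) * (ub - lb)      # Eq. (18)
    T2 = rng.random(dim) * (ub + lb) - x_i     # Eq. (19)
    # Eq. (20): re-sample any component of T2 that leaves the bounds
    out = (T2 >= ub) | (T2 <= lb)
    T2[out] = lb + rng.random(int(out.sum())) * (ub - lb)
    return T1 if f(T1) < f(T2) else T2
```

In the full algorithm, the returned solution replaces x_i and the corresponding trial counter is reset to 0.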

3.4. MGTOA Complexity Analysis

The time complexity depends on the number of students (N), the dimension of the given problem (dim), the number of iterations of the algorithm (T), and the evaluation cost (C). Therefore, the time complexity of MGTOA can be presented as:
O(MGTOA) = O(define parameters) + O(population initialization) + O(function evaluation cost) + O(location update)    (21)
where the time complexities of the components of Equation (21) can be defined as follows:
  • Initialization of problem definition demands O(1) time.
  • Initialization of population creation demands O(N × dim) time.
  • Updating the population position includes the teacher and student phases and demands O(2 × T × N × dim) time.
  • Random opposition-based learning demands O(T × N × dim) time.
  • The restart strategy demands O(2 × T × N × dim/Limit) time.
  • The cost time of the calculation function includes the calculation time cost of the algorithm itself, the calculation time cost of the random opposition-based learning strategy, and the calculation time cost of the restart strategy. The calculation time cost of the algorithm itself is O(T × N × C). The calculation time cost of the random opposition-based learning strategy is O(T × N × C). The calculation time cost of the restart strategy takes into account the change of the Limit value, so the time cost is O(T × N × C/Limit). The total time cost is O(2 × T × N × C + T × N × C/Limit).
Therefore, the time complexity of MGTOA is expressed as:
O(MGTOA) = O(1 + N × dim + T × N × C × (2 + 1/Limit) + T × N × dim × (3 + 2/Limit))    (22)
Because 1 ≪ T × N × C, 1 ≪ T × N × dim, N × dim ≪ T × N × C, and N × dim ≪ T × N × dim, Equation (22) can be simplified to Equation (23).
O(MGTOA) ≈ O(T × N × C × (2 + 1/Limit) + T × N × dim × (3 + 2/Limit))    (23)
It can be seen from the above analysis that the time complexity of MGTOA is higher than that of GTOA, but with these strategies added, the optimization effect of MGTOA is significantly improved. The following experiments also demonstrate the feasibility of MGTOA.

3.5. MGTOA Implementation

Combining GTOA with the three strategies described above yields the modified group teaching optimization algorithm (MGTOA). These three strategies strengthen the exploration ability of MGTOA and make the search more global. The pseudocode is shown in Algorithm 1.
Algorithm 1 Pseudo-code of MGTOA
1. Initialization parameters t, Tmax, ub, lb, N, dim.
2. Initialize population x according to Equations (9) and (10).
3. The fitness values of all individuals are calculated, and the optimal solution G is selected.
4. While t < Tmax
5.   Define the teacher according to Equation (8).
6.   Students are divided into elite students (Xgood) and ordinary students (Xbad). The number of elite students is Ngood.
7.   for i = 1:N
8.    if i < Ngood
9.     The teacher phase is achieved according to Equations (1)–(3), and (5).
10.   else
11.    The teacher phase is realized according to Equations (4) and (5).
12.   end
13.   Carry out boundary processing for the updated students.
14.   Calculate the average knowledge level of elite students (M)
15.   if i < Ngood
16.    for j = 1:dim
17.      The elite students get the learning motivation D according to Equation (12) and carry out the student phase through Equations (13) and (7).
18.    end
19.   else
20.     Ordinary students carry out the student phase according to Equations (14) and (7).
21.   end
22. end
23. Carry out boundary processing for the updated students.
24. An inverse solution is generated using a random opposition-based learning strategy by Equation (15), and the student position is updated according to Equation (16).
25.  Calculate the new fitness value of the students and judge whether it is better. If so, replace the fitness value and set the corresponding trial = 0; otherwise, increase trial by 1.
26. Define Limit according to Equation (17).
27.  for i = 1:N
28.   if trial(i) > Limit
29.    T1 and T2 are generated by Equations (18) and (19), and T2 is subjected to boundary processing using Equation (20). Replace xi with the better of the two.
30.    trial(i) = 0
31.   end
32. end
33. t = t + 1
34. end
The MGTOA flow chart is shown in Figure 2.

4. Experimental Results and Discussion

All the experiments in this paper were run on a computer with an 11th Gen Intel(R) Core(TM) i7-11700 processor (base frequency 2.50 GHz), 16 GB of memory, and 64-bit Windows 11, using MATLAB R2021a.
In this section, MGTOA is applied to two different types of benchmark functions to evaluate the performance of the modified algorithm. First, its performance on simple optimization problems is evaluated on 23 standard benchmark functions. It is then verified on the CEC2014 test suite, which contains 30 test functions. Finally, to show that the proposed MGTOA performs better, it is compared with GTOA [28], Genetic Algorithms (GA) [11], Sine Cosine Algorithm (SCA) [37], Bald Eagle Search (BES) [38], Remora Optimization Algorithm (ROA) [10], Arithmetic Optimization Algorithm (AOA) [39], Whale Optimization Algorithm (WOA) [9], Teaching Learning-Based Optimization Algorithm (TLBO) [25], and Balanced Teaching Learning-Based Optimization Algorithm (BTLBO) [40]. The parameter settings of these algorithms are shown in Table 1.

4.1. Experiments on Standard Benchmark Functions

In this section, the performance of MGTOA is tested on 23 mathematical benchmark functions and compared with the nine other algorithms. This benchmark set contains seven unimodal, six multimodal, and ten fixed-dimension multimodal functions, as shown in Table 2, where F is the mathematical function, dim is the dimension, Range is the interval of the search space, and Fmin is the optimal value the corresponding function can achieve. On the 23 standard benchmark functions, the population size is N = 30, the maximum number of iterations is T = 500, and the dimension is dim = 30/500. All the algorithms run independently 30 times to obtain the optimal fitness value, the average fitness value, and the standard deviation.
Figure 3 shows, for some of the 23 benchmark functions, the historical search distribution of MGTOA, the trajectory of the first individual, and the convergence curve of MGTOA. The figure clearly shows the results and position distribution obtained by MGTOA during optimization.
Table 3, Table 4 and Table 5 report the statistics of MGTOA and the nine other comparison algorithms on the 23 benchmark functions, with the best values shown in bold. It can be seen that MGTOA improves significantly on the other algorithms and achieves good results on the 23 benchmark functions. On F1–F4, the statistics of MGTOA improve markedly over GTOA, and the fitness value reaches the theoretical optimum; BES achieves good results on F1 and AOA on F2, while the other comparison algorithms fall short on F1–F4. In 30 dimensions, MGTOA does not obtain the minimum fitness value on F6 and F12, which indicates a weakness of MGTOA there; in 500 dimensions, however, MGTOA does achieve the minimum fitness value, showing that its optimization performance is stable and effective across dimensions, whereas TLBO and BTLBO cannot guarantee good results in high dimensions. On the remaining functions, MGTOA achieves good results. BES and ROA perform well on F9–F11 but are less effective than MGTOA on the other functions. In Table 5, because the functions are relatively simple, many algorithms obtain good fitness values; MGTOA obtains the best fitness value on most functions with good stability, and ROA, WOA, BTLBO, and TLBO also achieve good results, while the remaining algorithms are relatively weak and perform well only on some functions. The above analysis verifies that MGTOA performs well on the 23 benchmark functions.
To visualize the optimization ability of each algorithm more intuitively, the convergence curves on the 23 mathematical benchmark functions are shown in Figure 4, Figure 5 and Figure 6. From F1–F4 it can be seen that the optimization effect of MGTOA is clearly superior to the other algorithms: MGTOA converges well in the early stage and quickly finds the optimal value. In Figure 4, MGTOA is inferior to TLBO and BTLBO on F6 and F12; however, in Figure 5 the value obtained by MGTOA is significantly smaller and the other algorithms fail to converge, showing that MGTOA performs better on high-dimensional problems. On the other functions, MGTOA converges quickly to a good value. This further demonstrates the superior optimization ability of MGTOA on the 23 benchmark functions.
In addition, the Wilcoxon rank sum test is a nonparametric statistical test that can reveal differences that summary statistics miss. Comparing only the mean and standard deviation does not account for the distribution of results over multiple runs, so the Wilcoxon rank sum test is used for further verification. Table 6 shows the results for MGTOA and the nine other algorithms over 30 runs on the 23 mathematical benchmark functions, with a significance level of 5%: a p-value below 0.05 indicates a significant difference between two algorithms, while a larger value indicates that they perform similarly. The table shows that p is below 0.05 for most functions, but above it for some. MGTOA has four functions with p = 1 compared with BES, three compared with ROA, and three compared with AOA when dim = 30; this is because both these algorithms and MGTOA find the theoretical optimal fitness value on those functions. Compared with MGTOA, WOA, BTLBO, and TLBO have p values above 0.05 on several of F9–F11, showing that their optimization effect on those functions is similar to that of MGTOA. In most cases, however, the p-value between MGTOA and the other algorithms is below 0.05. In general, MGTOA achieves good results in the Wilcoxon rank sum test.
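The rank-sum comparison used here can be sketched in a few lines of numpy (a minimal sketch using the normal approximation without tie correction, standing in for a statistics-library routine; the 30-run samples below are synthetic stand-ins, not results from the paper):

```python
import numpy as np
from math import erf, sqrt

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation
    (no tie correction); a minimal stand-in for a library call."""
    n1, n2 = len(a), len(b)
    ranks = np.argsort(np.argsort(np.concatenate([a, b]))) + 1.0
    W = ranks[:n1].sum()                            # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

# Hypothetical 30-run fitness samples for two algorithms on one function
rng = np.random.default_rng(42)
runs_a = rng.normal(1e-3, 1e-4, 30)   # stand-in for MGTOA
runs_b = rng.normal(5e-3, 1e-3, 30)   # stand-in for a competitor
p = ranksum_p(runs_a, runs_b)         # p < 0.05 -> significant difference
```

With two clearly separated samples like these, the test returns a p-value far below the 5% significance level.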
Table 3. Results of benchmark functions (F1–F13) under 30 dimensions: the minimum, mean, and standard deviation over 30 runs for MGTOA, GTOA [28], GA [11], SCA [37], BES [38], ROA [10], AOA [39], WOA [9], BTLBO [40], and TLBO [25].
Table 4. Results of benchmark functions (F1–F13) under 500 dimensions.
F | Metric | MGTOA | GTOA [28] | GA [11] | SCA [37] | BES [38] | ROA [10] | AOA [39] | WOA [9] | BTLBO [40] | TLBO [25]
F1min03.10 × 10−1367.29.04 × 104005.60 × 10−12.86 × 10−813.95 × 10−851.12 × 10−68
mean01.37 × 10−470.62.03 × 10502.17 × 10−3186.43 × 10−11.49 × 10−679.97 × 10−841.34 × 10−67
std06.23 × 10−42.638.08 × 104004.45 × 10−28.06 × 10−671.03 × 10−831.78 × 10−67
F2min02.76 × 10−71.35 × 10231.86.28 × 10−2259.78 × 10−1773.81 × 10−129.26 × 10−554.58 × 10−437.22 × 10−35
mean05.98 × 10−31.40 × 1021.07 × 1022.93 × 10−1533.10 × 10−1511.82 × 10−31.3 × 10−472.06 × 10−422.16 × 10−34
std01.28 × 10−23.0357.51.6 × 10−1521.70 × 10−1501.7 × 10−36.57 × 10−471.53 × 10−421.35 × 10−34
F3min09.38 × 10−94.86 × 1055.09 × 10607.16 × 10−29913.71.88 × 1072.77 × 10−134.58 × 10−3
mean01.23 × 10−17.17 × 1056.75 × 1068.68 × 1055.08 × 10−2613.42 × 1033.32 × 1075.59 × 10−63.8 × 10−1
std05.99 × 10−11.39 × 1051.42 × 1064.43 × 10601.85 × 1041.23 × 1072.41 × 10−51.27
F4min05.2 × 10−89.52 × 10−198.68.66 × 10−2172.23 × 10−1731.63 × 10−155.12.13 × 10−355.69 × 10−28
mean02.8 × 10−49.71 × 10−199.01.3 × 10−1287.74 × 10−1521.81 × 10−181.31.22 × 10−341.42 × 10−27
std04.51 × 10−41.04 × 10−23.42 × 10−17.13 × 10−1284.13 × 10−1511.65 × 10−222.18.42 × 10−359.58 × 10−28
F5min4.12 × 10−84.99 × 1024.90 × 1031.12 × 1091.13 × 1024.94 × 1024.99 × 1024.96 × 1024.96 × 1024.96 × 102
mean1.16 × 1024.99 × 1025.14 × 1031.95 × 1094.32 × 1024.95 × 1024.99 × 1024.96 × 1024.97 × 1024.97 × 102
std2.14 × 1023.20 × 10−21.76 × 1025.05 × 1081.67 × 1022.94 × 10−11.03 × 10−14.20 × 10−16.30 × 10−14 × 10−1
F6min9.08 × 10−51.22 × 1023.35 × 1027.45 × 1041.14 × 10−27.291.14 × 10220.37071.6
mean16.81.23 × 1023.45 × 1022.42 × 10530.615.31.16 × 10232.675.475.5
std35.37.36 × 10−15.849.01 × 104536.511.389.532.392.11
F7min2.6 × 10−87.82 × 10−54.30 × 1039.38 × 1038.2 × 10−45.87 × 10−61.21 × 10−58.52 × 10−57.16 × 10−48.04 × 10−4
mean3.43 × 10−56.1 × 10−44.56 × 1031.44 × 1045.75 × 10−32.08 × 10−41.06 × 10−44.37 × 10−31.3 × 10−31.66 × 10−3
std3.22 × 10−56.31 × 10−42.74 × 1023.4 × 1034.01 × 10−31.66 × 10−49.94 × 10−55.6 × 10−34.27 × 10−45.37 × 10−4
F8min−2.09 × 105−2.85 × 104−3.63 × 104−1.73 × 104−2.08 × 105−2.09 × 105−2.58 × 104−2.09 × 105−7.28 × 104−6.17 × 104
mean−2.09 × 105−2.15 × 104−3.29 × 104−1.53 × 104−1.61 × 105−2.05 × 105−2.3 × 104−1.7 × 105−3.39 × 104−4.39 × 104
std1.183.12 × 1031.91 × 1031.23 × 1032.49 × 1041.02 × 1041.58 × 1033.1 × 1041.10 × 1041.21 × 104
F9min002.26 × 1034.19 × 102000000
mean09.27 × 10−52.41 × 1031.14 × 103008.08 × 10−69.09 × 10−1400
std03.52 × 10−472.55.78 × 102007.61 × 10−63.66 × 10−1300
F10min8.88 × 10−162.26 × 10−82.8510.78.88 × 10−168.88 × 10−167.09 × 10−38.88 × 10−167.99 × 10−157.99 × 10−15
mean8.88 × 10−162.84 × 10−42.9118.48.88 × 10−168.88 × 10−168.02 × 10−34.91 × 10−157.99 × 10−152.22
std05.35 × 10−42.92 × 10−24.11004.3 × 10−42.23 × 10−1504.17
F11min02.10 × 10−122.28 × 10−11.13 × 103006.52 × 103000
mean04.72 × 10−53.05 × 10−12.08 × 103009.99 × 1033.7 × 10−1803.7 × 10−18
std02.56 × 10−42.66 × 10−17.11 × 102003.09 × 1032.03 × 10−1702.03 × 10−17
F12min1.69 × 10−81.092.734.03 × 1091.24 × 10−59.45 × 10−31.073.97 × 10−23.84 × 10−13.67 × 10−1
mean1.08 × 10−51.142.806.28 × 1092.42 × 10−14.54 × 10−21.081.01 × 10−14.3 × 10−14.27 × 10−1
std2.10 × 10−53.37 × 10−24.75 × 10−21.36 × 1094.89 × 10−12.79 × 10−21.2 × 10−25.11 × 10−22.75 × 10−22.96 × 10−2
F13min5.42 × 10−115010.26.62 × 1093.46 × 10−33.0850.110.749.849.8
mean1.87 × 10−35010.81.06 × 101012.28.6950.219.349.849.8
std 5.68 × 10−34.52 × 10−34.73 × 10−12.11 × 10921.13.884.54 × 10−26.021.01 × 10−28.82 × 10−3
Table 5. Results of benchmark functions (F14–F23).
F | Metric | MGTOA | GTOA [28] | GA [11] | SCA [37] | BES [38] | ROA [10] | AOA [39] | WOA [9] | BTLBO [40] | TLBO [25]
F14min9.98 × 10−19.98 × 10−12.989.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−1
mean9.98 × 10−11.169.431.833.065.7510.23.689.98 × 10−19.98 × 10−1
std3.38 × 10−115.27 × 10−13.581.891.395.054.053.4700
F15min3.07 × 10−43.07 × 10−44.35 × 10−43.94 × 10−43.14 × 10−43.08 × 10−43.54 × 10−43.19 × 10−43.07 × 10−43.07 × 10−4
mean3.08 × 10−41.89 × 10−31.29 × 10−29.83 × 10−47.02 × 10−34.32 × 10−41.22 × 10−27.49 × 10−43.25 × 10−43.50 × 10−4
std2.54 × 10−75.34 × 10−32.75 × 10−23.98 × 10−48.72 × 10−32.27 × 10−42.66 × 10−25.11 × 10−47.35 × 10−51.19 × 10−4
F16min−1.03−1.03−1.03−1.03−1.03−1.03−1.03−1.03−1.03−1.03
mean−1.03−1.03−1.00−1.03−9.54 × 10−1−1.03−1.03−1.03−1.03−1.03
std5.25 × 10−166 × 10−162.07 × 10−28.02 × 10−52.25 × 10−11.25 × 10−71.21 × 10−72.04 × 10−96.78 × 10−166.65 × 10−16
F17min3.98 × 10−13.98 × 10−14.50 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−1
mean3.98 × 10−13.98 × 10−11.503.99 × 10−15.81 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−1
std6.21 × 10−701.481.6 × 10−36.48 × 10−16.34 × 10−67.93 × 10−81.48 × 10−500
F18min333.23333333
mean3329.534.3939.3333
std2.06 × 10−58.4 × 10−1522.21.33 × 10−41.511.61 × 10−411.61.71 × 10−41.31 × 10−159.79 × 10−16
F19min−3.86−3.86−3.86−3.86−3.85−3.86−3.86−3.86−3.86−3.86
mean−3.86−3.86−3.71−3.85−3.65−3.86−3.85−3.86−3.86−3.86
std9.49 × 10−62.63 × 10−155.48 × 10−11.09 × 10−21.89 × 10−12.8 × 10−35.9 × 10−37.31 × 10−32.71 × 10−152.71 × 10−15
F20min−3.32−3.32−3.32−3.11−3.19−3.32−3.17−3.32−3.32−3.32
mean−3.29−3.26−3.28−2.70−2.91−3.22−3.04−3.22−3.3−3.31
std5.27 × 10−27.66 × 10−25.55 × 10−25.27 × 10−12.4 × 10−11.24 × 10−18.89 × 10−21.71 × 10−13.67 × 10−23.71 × 10−2
F21min−10.2−10.2−5.05−7.36−10.2−10.2−6.85−10.2−10.2−10.2
mean−10.2−8−1.47−2.27−6.55−10.1−3.93−7.94−10.1−9.56
std2.90 × 10−42.751.492.062.761.6 × 10−21.722.797.74 × 10−21.81
F22min−10.4−10.4−5.07−5.62−10.4−10.4−6.05−10.4−10.4−10.4
mean−10.4−8.04−1.67−3.29−5.43−10.4−3.38−7.76−10.4−10
std1.29 × 10−42.971.251.852.612.41 × 10−21.492.917.38 × 10−161.35
F23min−10.5−10.5−5.13−5.93−10.5−10.5−6.69−10.5−10.5−10.5
mean−10.5−7.98−1.76−3.73−5.55−10.5−3.52−6.27−10.3−10.1
std1.28 × 10−43.241.401.542.732.1 × 10−21.263.261.211.74
Table 6. Experimental results of Wilcoxon rank sum test on 23 standard benchmark functions.
F | dim | MGTOA vs. GTOA [28] | MGTOA vs. GA [11] | MGTOA vs. SCA [37] | MGTOA vs. BES [38] | MGTOA vs. ROA [10] | MGTOA vs. AOA [39] | MGTOA vs. WOA [9] | MGTOA vs. BTLBO [40] | MGTOA vs. TLBO [25]
F1301.73 × 10−61.73 × 10−61.73 × 10−611.25 × 10−11.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
5001.73 × 10−61.73 × 10−61.73 × 10−613.13 × 10−21.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F2301.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−611.73 × 10−61.73 × 10−61.73 × 10−6
5001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F3301.73 × 10−61.73 × 10−61.73 × 10−62.5 × 10−13.79 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
5001.73 × 10−61.73 × 10−61.73 × 10−63.1 × 10−21.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F4301.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
5001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F5301.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−62.13 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
5001.73 × 10−61.73 × 10−61.73 × 10−61.38 × 10−36.16 × 10−41.73 × 10−66.16 × 10−41.73 × 10−61.73 × 10−6
F6301.73 × 10−61.73 × 10−61.73 × 10−62.6 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
5001.73 × 10−61.73 × 10−61.73 × 10−62.89 × 10−14.72 × 10−21.73 × 10−68.97 × 10−21.73 × 10−61.73 × 10−6
F7303.18 × 10−61.73 × 10−61.73 × 10−61.73 × 10−66.64 × 10−42.7 × 10−22.13 × 10−61.73 × 10−61.73 × 10−6
5001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.36 × 10−52.22 × 10−41.73 × 10−61.73 × 10−61.73 × 10−6
F8301.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−63.59 × 10−41.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
5001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−63.18 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F9301.73 × 10−61.73 × 10−61.73 × 10−61115 × 10−11.73 × 10−63.79 × 10−6
5002.56 × 10−61.73 × 10−61.73 × 10−6111.32 × 10−4111
F10301.73 × 10−61.73 × 10−61.73 × 10−61112.29 × 10−52.57 × 10−77.86 × 10−7
5001.73 × 10−61.73 × 10−61.73 × 10−6111.73 × 10−62.04 × 10−54.32 × 10−81.11 × 10−6
F11301.73 × 10−61.73 × 10−61.73 × 10−6111.73 × 10−66.25 × 10−211
5001.73 × 10−61.73 × 10−61.73 × 10−6111.73 × 10−6111
F12301.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.92 × 10−6
5001.73 × 10−61.73 × 10−61.73 × 10−61.48 × 10−32.77 × 10−31.73 × 10−62.77 × 10−31.73 × 10−61.73 × 10−6
F13301.73 × 10−61.73 × 10−61.73 × 10−62.6 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.11 × 10−32.41 × 10−4
5001.73 × 10−62.8 × 10−31.73 × 10−62.26 × 10−32.77 × 10−31.73 × 10−61.71 × 10−31.73 × 10−61.73 × 10−6
F1424.07 × 10−21.73 × 10−61.73 × 10−61.73 × 10−64.07 × 10−51.73 × 10−62.13 × 10−61.73 × 10−61.73 × 10−6
F1542.41 × 10−41.73 × 10−61.73 × 10−61.73 × 10−65.75 × 10−61.73 × 10−61.73 × 10−67.71 × 10−41.41 × 10−1
F16211.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−611
F1721.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−65.79 × 10−54.53 × 10−46.64 × 10−41.73 × 10−61.73 × 10−6
F1821.73 × 10−61.73 × 10−65.79 × 10−51.73 × 10−61.04 × 10−39.75 × 10−13.61 × 10−31.73 × 10−61.73 × 10−6
F1931.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−62.84 × 10−51.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F2063.38 × 10−33.68 × 10−21.73 × 10−61.73 × 10−61.48 × 10−22.13 × 10−63.85 × 10−33.59 × 10−41.48 × 10−2
F2141.48 × 10−21.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−62.77 × 10−3
F2246.16 × 10−41.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.48 × 10−2
F2349.27 × 10−31.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.92 × 10−61.73 × 10−63.59 × 10−4

4.2. Experiments on CEC2014 Test Suite

In this section, the CEC2014 test suite is used to verify the performance of MGTOA. The population size is set to N = 30, the maximum number of iterations to T = 500, and the dimension to dim = 10. The specific contents of the CEC2014 functions are shown in Table 7.
As can be seen in Table 8, MGTOA achieves good results compared with GTOA, although some results are inferior to BTLBO and TLBO. On CEC1–CEC3, MGTOA ranks third behind BTLBO and TLBO. On CEC4–CEC5, MGTOA ranks first. On CEC6–CEC9, MGTOA outperforms GTOA. On CEC10–CEC16, MGTOA finds better values and is more stable. Among the remaining functions, only CEC20–CEC22 fail to reach the best result, while on the others MGTOA obtains better solutions.
Figure 7 shows the convergence curves of MGTOA and the other nine algorithms. MGTOA achieves good results and is clearly superior to GTOA, although it has shortcomings on some functions. On the unimodal functions, MGTOA ranks third on CEC1 and CEC2; the figure shows that its late-stage convergence is insufficient there, and TLBO and BTLBO converge better. On CEC3, however, MGTOA achieves good results. On the simple multimodal functions, MGTOA has some shortcomings on CEC6 and CEC8, while on the remaining functions it converges quickly to good solutions. The hybrid and composition functions test the overall ability of an algorithm, and MGTOA also performs well there: on CEC25 and CEC27 it can jump out of local optima to find better solutions. Compared with GTOA, the overall performance of MGTOA is greatly improved, so it can be concluded that MGTOA is effective on CEC2014.
Table 9 shows the Wilcoxon rank sum test results obtained by running MGTOA and the other nine algorithms 30 times each on CEC2014. Most p-values are less than 0.05, but many of the composition functions CEC22–CEC30 yield p-values greater than 0.05, because these functions are relatively simple and many algorithms can find good values on them. On CEC11 and CEC14, three p-values exceed 0.05; on CEC16 and CEC18, only one p-value exceeds 0.05 and the rest fall below it, indicating a clear gap between MGTOA and those algorithms. Moreover, only two p-values between MGTOA and GTOA exceed 0.05, which indicates that the behavior of MGTOA differs significantly from that of GTOA. Overall, MGTOA achieves good results under the Wilcoxon rank sum test.
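The p-values above come from the Wilcoxon rank-sum test applied to 30 independent runs per algorithm. As a self-contained sketch (the sample data below is synthetic and purely illustrative; in practice a library routine such as SciPy's `ranksums` would be used), the test can be implemented with its normal approximation:

```python
import math
import random

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation."""
    pooled = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        # group tied values and assign them their average rank
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                      # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2              # mean of W under the null hypothesis
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

random.seed(1)
runs_a = [random.gauss(0.0, 1.0) for _ in range(30)]  # stand-in for one algorithm's 30 run errors
runs_b = [random.gauss(3.0, 1.0) for _ in range(30)]  # stand-in for a clearly worse rival
print(rank_sum_p(runs_a, runs_b) < 0.05)              # the difference is significant
```

With 30 runs per algorithm the normal approximation is standard practice; p < 0.05 is the significance threshold used in the tables.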

5. Constrained Engineering Design Problems

The previous experiments have shown the optimization effect of MGTOA. To assess its practical effect on engineering problems, six problems are selected for our experiments: welded beam design, pressure vessel design, tension/compression spring design, three-bar truss design, car crashworthiness design, and gear train design. The detailed experimental results are as follows.

5.1. Welded Beam Design Problem

The welded beam design problem is to minimize the cost of the welded beam under four decision variables and seven constraints. Four variables need to be optimized: the weld width h, the connecting beam length l, the beam height t, and the connecting beam thickness b. The specific model of this problem is taken from the literature [7]. A schematic of this problem is shown in Figure 8.
The mathematical formulation of this problem is shown below:
Consider:
$$x = [x_1 \; x_2 \; x_3 \; x_4] = [h \; l \; t \; b]$$
Minimize:
$$f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14.0 + x_2)$$
Subject to:
$$g_1(x) = \tau(x) - 13{,}600 \le 0,$$
$$g_2(x) = \sigma(x) - 30{,}000 \le 0,$$
$$g_3(x) = x_1 - x_4 \le 0,$$
$$g_4(x) = 0.10471 x_1^2 + 0.04811 x_3 x_4 (14 + x_2) - 5 \le 0,$$
$$g_5(x) = 0.125 - x_1 \le 0,$$
$$g_6(x) = \delta(x) - 0.25 \le 0,$$
$$g_7(x) = 6000 - P_c(x) \le 0$$
where:
$$\tau(x) = \sqrt{(\tau')^2 + 2 \tau' \tau'' \frac{x_2}{2R} + (\tau'')^2},$$
$$\tau' = \frac{6000}{\sqrt{2} x_1 x_2}, \quad \tau'' = \frac{M R}{J}, \quad M = 6000 \left(14 + \frac{x_2}{2}\right),$$
$$R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2},$$
$$J = 2 \left\{ \sqrt{2} x_1 x_2 \left[ \frac{x_2^2}{12} + \left(\frac{x_1 + x_3}{2}\right)^2 \right] \right\},$$
$$\sigma(x) = \frac{504{,}000}{x_3^2 x_4}, \quad \delta(x) = \frac{65{,}856{,}000}{(30 \times 10^6) x_3^3 x_4},$$
$$P_c(x) = \frac{4.013 (30 \times 10^6) \sqrt{\frac{x_3^2 x_4^6}{36}}}{196} \left(1 - \frac{x_3}{28} \sqrt{\frac{30 \times 10^6}{4 (12 \times 10^6)}}\right)$$
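As a sanity check of the formulation, the sketch below evaluates the cost and the seven constraints at a near-optimal design widely reported in the welded beam literature (the numeric design vector is illustrative, not a result of this paper):

```python
import math

def welded_beam(x):
    """Cost and constraints of the welded beam problem; feasible when all g[i] <= 0."""
    h, l, t, b = x
    P, L, E, G = 6000.0, 14.0, 30e6, 12e6
    tau_p = P / (math.sqrt(2) * h * l)                     # primary shear stress
    M = P * (L + l / 2)                                    # moment at the weld
    R = math.sqrt(l ** 2 / 4 + ((h + t) / 2) ** 2)
    J = 2 * math.sqrt(2) * h * l * (l ** 2 / 12 + ((h + t) / 2) ** 2)
    tau_pp = M * R / J                                     # torsional shear stress
    tau = math.sqrt(tau_p ** 2 + 2 * tau_p * tau_pp * l / (2 * R) + tau_pp ** 2)
    sigma = 504000.0 / (t ** 2 * b)                        # bending stress
    delta = 65856000.0 / (E * t ** 3 * b)                  # end deflection
    Pc = (4.013 * E * math.sqrt(t ** 2 * b ** 6 / 36) / 196.0
          * (1 - t / 28.0 * math.sqrt(E / (4 * G))))       # buckling load
    cost = 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)
    g = [tau - 13600, sigma - 30000, h - b,
         0.10471 * h ** 2 + 0.04811 * t * b * (14 + l) - 5,
         0.125 - h, delta - 0.25, 6000 - Pc]
    return cost, g

# Near-optimal design widely reported in the literature (illustrative):
cost, g = welded_beam([0.20573, 3.470489, 9.036624, 0.20573])
print(round(cost, 4))  # close to the best-known cost of about 1.7249
```

Several constraints (shear stress, bending stress, buckling) are active at the optimum, so a rounded design vector leaves them sitting within numerical slack of zero.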
The results obtained for the welded beam design are shown in Table 10. The optimal solution obtained by MGTOA is much smaller than that obtained by GTOA and is the minimum cost among all compared algorithms, which proves that MGTOA performs better on the welded beam design problem.

5.2. Pressure Vessel Design Problem

The purpose of the pressure vessel design problem is to meet production needs while minimizing the total cost of the container. The four design variables are the shell thickness Ts, the head thickness Th, the inner radius R, and the length L of the cylindrical section excluding the heads, where Ts and Th are integer multiples of 0.0625 and R and L are continuous variables. The specific constraints are referred to in [43]. The schematic diagram of the optimal structure design is shown in Figure 9.
The mathematical formulation of this problem is shown below:
Consider:
$$x = [x_1 \; x_2 \; x_3 \; x_4] = [T_s \; T_h \; R \; L]$$
Minimize:
$$f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3$$
Subject to:
$$g_1(x) = -x_1 + 0.0193 x_3 \le 0,$$
$$g_2(x) = -x_2 + 0.00954 x_3 \le 0,$$
$$g_3(x) = -\pi x_3^2 x_4 - \frac{4}{3} \pi x_3^3 + 1{,}296{,}000 \le 0,$$
$$g_4(x) = x_4 - 240 \le 0$$
Variable Range:
$$0 \le x_1 \le 99, \quad 0 \le x_2 \le 99, \quad 10 \le x_3 \le 200, \quad 10 \le x_4 \le 200$$
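The formulation can be evaluated directly; the sketch below also adds a simple static-penalty wrapper, one common way (shown here as an assumption, not the paper's own constraint-handling method) to hand the constraints to an unconstrained optimizer. The test point is an arbitrary feasible design, not a reported optimum:

```python
import math

def pressure_vessel(x):
    """Cost and constraints of the pressure vessel problem; feasible when all g[i] <= 0."""
    Ts, Th, R, L = x
    cost = (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)
    g = [-Ts + 0.0193 * R,                    # shell thickness vs. radius
         -Th + 0.00954 * R,                   # head thickness vs. radius
         -math.pi * R ** 2 * L - 4.0 / 3.0 * math.pi * R ** 3 + 1296000,  # volume
         L - 240]                             # length limit
    return cost, g

def penalized(x, factor=1e6):
    """Static penalty: raw cost plus heavily weighted squared constraint violations."""
    cost, g = pressure_vessel(x)
    return cost + factor * sum(max(0.0, gi) ** 2 for gi in g)

x_feasible = (1.0, 0.5, 50.0, 150.0)          # arbitrary feasible design, not an optimum
print(round(penalized(x_feasible), 2))        # equals the raw cost: no violation
```

For a feasible point the penalty term vanishes and the wrapper returns the raw cost, so the penalized landscape agrees with the true objective inside the feasible region.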
The results of the pressure vessel design problem are shown in Table 11. For MGTOA, Ts = 0.754364, Th = 0.366375, R = 40.42809, and L = 198.5652, while for GTOA, Ts = 0.778169, Th = 0.38465, R = 40.3196, and L = 200; the variables obtained by MGTOA are clearly better. The final cost of MGTOA is 5752.402458. Compared with the other algorithms, the cost is greatly reduced, which shows that MGTOA performs excellently on this problem.

5.3. Tension/Compression Spring Design Problem

As shown in Figure 10, the purpose of the tension/compression spring design problem is to minimize the weight of the spring under four constraints: the minimum deflection (g1), the shear stress (g2), the surge frequency (g3), and the outer diameter limit (g4). The specific constraints are referred to in [48]. The decision variables are the wire diameter d, the mean coil diameter D, and the number of active coils N; f(x) is the minimum spring mass.
The mathematical formulation of this problem is shown below:
Consider:
$$x = [x_1 \; x_2 \; x_3] = [d \; D \; N]$$
Minimize:
$$f(x) = (x_3 + 2) x_2 x_1^2$$
Subject to:
$$g_1(x) = 1 - \frac{x_2^3 x_3}{71{,}785 x_1^4} \le 0,$$
$$g_2(x) = \frac{4 x_2^2 - x_1 x_2}{12{,}566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0,$$
$$g_3(x) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0,$$
$$g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0$$
Variable Range:
$$0.05 \le x_1 \le 2.0, \quad 0.25 \le x_2 \le 1.3, \quad 2.0 \le x_3 \le 15.0$$
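A direct transcription of this formulation, evaluated at a near-optimal design commonly cited in the spring-design literature (the numeric values are illustrative, not this paper's result):

```python
def spring(x):
    """Weight and constraints of the tension/compression spring problem."""
    d, D, N = x
    weight = (N + 2) * D * d ** 2
    g = [1 - D ** 3 * N / (71785 * d ** 4),                        # deflection
         (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
         + 1 / (5108 * d ** 2) - 1,                                # shear stress
         1 - 140.45 * d / (D ** 2 * N),                            # surge frequency
         (d + D) / 1.5 - 1]                                        # outer diameter
    return weight, g

# Near-optimal design commonly cited in the literature (illustrative):
weight, g = spring([0.05169, 0.35673, 11.2885])
print(round(weight, 6))  # close to the best-known weight of about 0.012665
```

The deflection and shear-stress constraints are active at the optimum, so the rounded design leaves them within a small numerical tolerance of zero.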
The results of the tension/compression spring design problem are shown in Table 12. The wire diameter d obtained by MGTOA is optimal. Although the mean coil diameter D and the number of active coils N do not reach the best values, the differences are small, and MGTOA ultimately obtains the minimum weight, which effectively indicates that MGTOA works well on this problem.

5.4. Three-Bar Truss Design Problem

The main purpose of the three-bar truss design problem is to minimize the weight of the structure under the total supporting load P. The geometry of this problem is given in Figure 11, where the cross-sectional areas are the design variables. Due to the symmetry of the system, only the cross-sectional areas A1 (=x1) and A2 (=x2) need to be determined. The constraints are taken from the literature [4].
The mathematical formulation of this problem is shown below:
Consider:
$$x = [x_1 \; x_2] = [A_1 \; A_2]$$
Minimize:
$$f(x) = (2 \sqrt{2} x_1 + x_2) \cdot l$$
Subject to:
$$g_1(x) = \frac{\sqrt{2} x_1 + x_2}{\sqrt{2} x_1^2 + 2 x_1 x_2} P - \sigma \le 0,$$
$$g_2(x) = \frac{x_2}{\sqrt{2} x_1^2 + 2 x_1 x_2} P - \sigma \le 0,$$
$$g_3(x) = \frac{1}{x_1 + \sqrt{2} x_2} P - \sigma \le 0,$$
$$l = 100 \ \mathrm{cm}, \quad P = 2 \ \mathrm{kN/cm^2}, \quad \sigma = 2 \ \mathrm{kN/cm^2}$$
Variable Range:
$$0 \le x_1, x_2 \le 1$$
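The symmetric truss makes the formulation compact; the sketch below evaluates it at the well-known near-optimal design x1 ≈ 0.7887, x2 ≈ 0.4082 (values taken from the general literature on this benchmark, not from this paper's table):

```python
import math

def three_bar_truss(x, l=100.0, P=2.0, sigma=2.0):
    """Volume and constraints of the three-bar truss problem; feasible when all g[i] <= 0."""
    x1, x2 = x
    f = (2 * math.sqrt(2) * x1 + x2) * l
    denom = math.sqrt(2) * x1 ** 2 + 2 * x1 * x2
    g = [(math.sqrt(2) * x1 + x2) / denom * P - sigma,   # stress in bar 1
         x2 / denom * P - sigma,                         # stress in bar 2
         P / (x1 + math.sqrt(2) * x2) - sigma]           # stress in bar 3
    return f, g

# Well-known near-optimal design from the general literature (illustrative):
f, g = three_bar_truss([0.788675, 0.408248])
print(round(f, 4))  # close to the best-known volume of about 263.8958
```

Only the first stress constraint is active at the optimum; the other two carry substantial slack, which is why the problem is sensitive to x1.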
The results of the three-bar truss design problem are shown in Table 13. The results of the individual algorithms differ considerably, which indicates that this problem is difficult to optimize well. Nevertheless, the table shows that MGTOA achieves the best result among these algorithms and offers a clear improvement over the others.

5.5. Car Crashworthiness Design Problem

The frequently used car crashworthiness design problem was first proposed by Gu et al. It is a minimization problem with eleven variables subject to ten constraints. Figure 12 shows the finite element model of this problem. The decision variables are the inner thickness of the B-pillar, the thickness of the B-pillar reinforcement, the inner thickness of the floor, the thickness of the cross beam, the thickness of the door beam, the thickness of the door beltline reinforcement, the thickness of the roof longitudinal beam, the inner material of the B-pillar, the inner material of the floor, the height of the barrier, and the impact position of the barrier. The constraints are the abdominal load, the upper, middle, and lower viscous criteria, the upper, middle, and lower rib deflections, the pubic symphysis force, the B-pillar midpoint velocity, and the B-pillar front-door velocity. The constraints are taken from the literature [49].
The mathematical formulation of this problem is shown below:
Minimize:
$$f(x) = \text{Weight},$$
Subject to:
$$g_1(x) = F_a \ (\text{load in abdomen}) \le 1 \ \mathrm{kN},$$
$$g_2(x) = V \times C_u \ (\text{dummy upper chest}) \le 0.32 \ \mathrm{m/s},$$
$$g_3(x) = V \times C_m \ (\text{dummy middle chest}) \le 0.32 \ \mathrm{m/s},$$
$$g_4(x) = V \times C_l \ (\text{dummy lower chest}) \le 0.32 \ \mathrm{m/s},$$
$$g_5(x) = \Delta_{ur} \ (\text{upper rib deflection}) \le 32 \ \mathrm{mm},$$
$$g_6(x) = \Delta_{mr} \ (\text{middle rib deflection}) \le 32 \ \mathrm{mm},$$
$$g_7(x) = \Delta_{lr} \ (\text{lower rib deflection}) \le 32 \ \mathrm{mm},$$
$$g_8(x) = F_p \ (\text{pubic force}) \le 4 \ \mathrm{kN},$$
$$g_9(x) = V_{MBP} \ (\text{velocity of B-pillar at middle point}) \le 9.9 \ \mathrm{mm/ms},$$
$$g_{10}(x) = V_{FD} \ (\text{velocity of front door at B-pillar}) \le 15.7 \ \mathrm{mm/ms}$$
Variable Range:
$$0.5 \le x_1, \ldots, x_7 \le 1.5, \quad x_8, x_9 \in \{0.192, 0.345\}, \quad -30 \le x_{10}, x_{11} \le 30$$
Table 14 shows the results of the car crashworthiness design problem. In MGTOA, the variables x1, x3, x4, and x7 all reach the lower bound of 0.5, and the final weight is the best solution compared with the other algorithms.

5.6. Gear Train Design Problem

The gear train design problem aims to minimize the squared deviation of the achievable gear ratio from a required ratio of 1/6.931. This problem has four parameters. The gear ratio is defined as follows:
$$\text{Gear ratio} = \frac{\text{angular velocity of output shaft}}{\text{angular velocity of input shaft}}$$
The parameters of this problem are discrete with an increment of one, because they are the numbers of teeth of the four gears (nA, nB, nC, nD). The constraints refer to the literature [60]. A specific schematic diagram is shown in Figure 13.
The mathematical formulation of this problem is shown below:
Consider:
$$x = [x_1 \; x_2 \; x_3 \; x_4] = [n_A \; n_B \; n_C \; n_D]$$
Minimize:
$$f(x) = \left( \frac{1}{6.931} - \frac{x_3 x_2}{x_1 x_4} \right)^2$$
Variable Range:
$$12 \le x_1, x_2, x_3, x_4 \le 60$$
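Because the variables are integers, the objective can be checked directly at any tooth combination; the sketch below evaluates a best-known combination reported in the gear train literature (assumed here for illustration, not taken from Table 15):

```python
def gear_ratio_error(x):
    """Squared deviation of the achieved gear ratio from the target 1/6.931."""
    nA, nB, nC, nD = x
    return (1.0 / 6.931 - (nC * nB) / (nA * nD)) ** 2

# Best-known tooth combination reported in the literature (assumed, illustrative):
best = (49, 16, 19, 43)
err = gear_ratio_error(best)
print(err < 1e-11)  # squared error on the order of 1e-12
```

Since the search space contains only 49 integer values per variable, the problem is small enough that exhaustive enumeration is also feasible as a cross-check.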
Table 15 shows the results of MGTOA and other comparison algorithms in gear train design. It can be seen from the table that MGTOA has obtained the optimal solution for the design of the gear train, which is greatly improved compared with GTOA.

6. Conclusions

This paper proposes a modified GTOA (MGTOA), which improves the student phase of the algorithm according to the different learning motivations of different students and adopts random opposition-based learning and a restart strategy to enhance the optimization performance. The effect of MGTOA was tested on 23 standard benchmark functions and the CEC2014 test functions and compared with nine other state-of-the-art algorithms, and the results were further verified by the Wilcoxon rank sum test. The data analysis illustrates that MGTOA has an excellent optimization effect: compared with GTOA, its optimization performance is greatly improved, with better optimization capability and lower error. However, the exploitation capability of MGTOA is weaker than that of the BTLBO algorithm, and we will strengthen it in future work. After that, MGTOA will be applied to the three-dimensional path planning of UAVs, text clustering, feature selection, scheduling in cloud computing, parameter estimation, image segmentation, intrusion detection, and other problems.

Author Contributions

Conceptualization, H.R. and H.J.; methodology, H.R.; software, H.R. and H.J.; validation, H.J. and D.W.; formal analysis, H.R., H.J., and D.W.; investigation, C.W. and S.L.; resources, Q.L. and C.W.; data curation, H.R. and L.A.; writing—original draft preparation, H.R. and H.J.; writing—review and editing, D.W. and L.A.; visualization, H.R., H.J. and D.W.; supervision, H.J. and D.W.; funding acquisition, H.J. and L.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by Sanming University National Natural Science Foundation Breeding Project (PYT2105), Fujian Natural Science Foundation Project (2021J011128), Fujian University Students’ Innovation and Entrepreneurship Training Program. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4320277DSR09).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the support of Fujian Key Lab of Agriculture IOT Application and IOT Application Engineering Research Center of Fujian Province Colleges and Universities, as well as the anonymous reviewers for helping us to improve the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Fearn, T. Particle swarm optimization. NIR News 2014, 25, 27.
2. Assiri, A.S.; Hussien, A.G.; Amin, M. Ant lion optimization: Variants, hybrids, and applications. IEEE Access 2020, 8, 77746–77764.
3. Yang, X.; Hossein Gandomi, A. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483.
4. Hussien, A.G. An enhanced opposition-based salp swarm algorithm for global optimization and engineering problems. J. Ambient Intell. Humaniz. Comput. 2022, 13, 129–150.
5. Dorigo, M.; Birattari, M.; Stützle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
6. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471.
7. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
8. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4831–4845.
9. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
10. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665.
11. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73.
12. Beyer, H.G.; Schwefel, H.P. Evolution strategies–A comprehensive introduction. Nat. Comput. 2002, 1, 3–52.
13. Banzhaf, W.; Koza, J.R. Genetic programming. IEEE Intell. Syst. 2000, 15, 74–84.
14. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713.
15. Sinha, N.; Chakrabarti, R.; Chattopadhyay, P. Evolutionary programming techniques for economic load dispatch. IEEE Trans. Evol. Comput. 2003, 7, 83–94.
16. Storn, R.; Price, K. Differential evolution–A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
17. Jaderyan, M.; Khotanlou, H. Virulence optimization algorithm. Appl. Soft Comput. 2016, 43, 596–618.
18. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
19. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248.
20. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184.
21. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2015, 27, 495–513.
22. Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray optimization. Comput. Struct. 2012, 112, 283–294.
23. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84.
24. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 2, 60–68.
25. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-learning-based optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1–15.
26. Satapathy, S.; Naik, A. Social group optimization (SGO): A new population evolutionary optimization technique. Complex Intell. Syst. 2016, 2, 173–203.
27. Naser, G.; Ebrahim, B. Exchange market algorithm. Appl. Soft Comput. 2014, 19, 177–187.
28. Zhang, Y.; Jin, Z. Group teaching optimization algorithm: A novel metaheuristic method for solving global optimization problems. Expert Syst. Appl. 2020, 148, 113246.
29. Zhang, Y.; Chi, A. Group teaching optimization algorithm with information sharing for numerical optimization and engineering optimization. J. Intell. Manuf. 2021, 1–25.
30. Ahandani, M.A.; Alavi-Rad, H. Opposition-based learning in the shuffled differential evolution algorithm. Soft Comput. 2012, 16, 1303–1337.
31. Shang, J.; Sun, Y.; Li, J.; Zheng, C.; Zhang, J. An improved opposition-based learning particle swarm optimization for the detection of SNP-SNP interactions. BioMed Res. Int. 2015, 12, 524821.
32. Wang, H.; Wu, Z.; Rahnamayan, S.; Liu, Y.; Ventresca, M. Enhancing particle swarm optimization using generalized opposition-based learning. Inf. Sci. 2011, 181, 4699–4714.
33. Liu, Q.; Li, N.; Jia, H.; Qi, Q.; Abualigah, L. Modified remora optimization algorithm for global optimization and multilevel thresholding image segmentation. Mathematics 2022, 10, 1014.
34. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Opposition-based differential evolution. IEEE Trans. Evol. Comput. 2008, 12, 64–79.
35. Zhou, Y.; Hao, J.K.; Duval, B. Opposition-based memetic search for the maximum diversity problem. IEEE Trans. Evol. Comput. 2017, 21, 731–745.
36. Zhang, H.; Wang, Z.; Chen, W.; Heidari, A.A.; Wang, M.; Zhao, X.; Liang, G.; Chen, H.; Zhang, X. Ensemble mutation-driven salp swarm algorithm with restart mechanism: Framework and fundamental analysis. Expert Syst. Appl. 2021, 165, 113897.
37. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133.
38. Alsattar, H.A.; Zaidan, A.A.; Zaidan, B.B. Novel meta-heuristic bald eagle search optimisation algorithm. Artif. Intell. Rev. 2020, 53, 2237–2264.
39. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
40. Ahmad, T.; Keyvan, R.Z.; Ravipudi, V.R. An efficient balanced teaching-learning-based optimization algorithm with individual restarting strategy for solving global optimization problems. Inf. Sci. 2021, 576, 68–104.
41. Babalik, A.; Cinar, A.C.; Kiran, M.S. A modification of tree-seed algorithm using Deb's rules for constrained optimization. Appl. Soft Comput. 2018, 63, 289–305.
42. Hussien, A.G.; Amin, M.; Abd El Aziz, M. A comprehensive review of moth-flame optimisation: Variants, hybrids, and applications. J. Exp. Theor. Artif. Intell. 2020, 32, 705–725.
43. Wen, C.; Jia, H.; Wu, D.; Rao, H.; Li, S.; Liu, Q.; Abualigah, L. Modified remora optimization algorithm with multistrategies for global optimization problem. Mathematics 2022, 10, 3604.
44. He, Q.; Wang, L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 2007, 20, 89–99.
45. He, Q.; Wang, L. A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Appl. Math. Comput. 2007, 186, 1407–1422.
46. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35.
47. Laith, A.; Dalia, Y.; Mohamed, A.E.; Ahmed, A.E.; Mohammed, A.A.A.; Amir, H.G. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250.
  47. Laith, A.; Dalia, Y.; Mohamed, A.E.; Ahmed, A.E.; Mohammed, A.A.A.; Amir, H.G. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar]
  48. Liu, Q.; Li, N.; Jia, H.; Qi, Q.; Abualigah, L.; Liu, Y. A hybrid arithmetic optimization and golden sine algorithm for solving industrial engineering design problems. Mathematics 2022, 10, 1567. [Google Scholar] [CrossRef]
  49. Zheng, R.; Jia, H.; Abualigah, L.; Wang, S.; Wu, D. An improved remora optimization algorithm with autonomous foraging mechanism for global optimization problems. Math. Biosci. Eng. 2022, 19, 3994–4037. [Google Scholar] [CrossRef]
  50. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  51. Hui, L.; Cai, Z.; Yong, W. Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization. Appl. Soft Comput. 2010, 10, 629–640. [Google Scholar]
  52. Tsai, J.-F. Global optimization of nonlinear fractional programming problems in engineering design. Eng. Optimiz. 2005, 37, 399–409. [Google Scholar] [CrossRef]
  53. Min, Z.; Luo, W.; Wang, X. Differential evolution with dynamic stochastic selection for constrained optimization. Inform. Sci. 2008, 178, 3043–3074. [Google Scholar]
  54. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef] [Green Version]
  55. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Zong, W.G.; Gandomi, A.H. Reptile search algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2021, 191, 116158. [Google Scholar] [CrossRef]
  56. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine predators algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  57. Houssein, E.H.; Neggaz, N.; Hosney, M.E.; Mohamed, W.M.; Hassaballah, M. Enhanced Harris hawks optimization with genetic operators for selection chemical descriptors and compounds activities. Neural Comput. Appl. 2021, 33, 13601–13618. [Google Scholar] [CrossRef]
  58. Long, W.; Jiao, J.; Liang, X.; Cai, S.; Xu, M. A random opposition-based learning grey wolf optimizer. IEEE Access 2019, 7, 113810–113825. [Google Scholar] [CrossRef]
  59. Wang, S.; Sun, K.; Zhang, W.; Jia, H. Multilevel thresholding using a modified ant lion optimizer with opposition-based learning for color image segmentation. Math. Biosci. Eng. MBE 2021, 18, 3092–3143. [Google Scholar] [CrossRef] [PubMed]
  60. Absalom, E.E.; Jeffrey, O.A.; Laith, A.; Seyedali, M.; Amir, H.G. Prairie Dog Optimization Algorithm. Neural Comput. Appl. 2022, 1–49. [Google Scholar]
  61. Sadollah, A.; Bahreininejad, A.; Eskandar, H.; Hamdi, M. Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Appl. Softw. Comput. 2013, 13, 2592–2612. [Google Scholar] [CrossRef]
Figure 1. Limit schematic diagram.
Figure 2. Flowchart for proposed MGTOA.
Figure 3. Results of MGTOA in 23 benchmark functions.
Figure 4. Convergence curves for the optimization algorithms for standard benchmark functions (F1–F13) with dim = 30.
Figure 5. Convergence curves for the optimization algorithms for standard benchmark functions (F1–F13) with dim = 500.
Figure 6. Convergence curves for the optimization algorithms for standard benchmark functions (F14–F23).
Figure 7. Convergence curves for the optimization algorithms for test functions on CEC2014.
Figure 8. The welded beam design.
Figure 9. The pressure vessel design.
Figure 10. Tension/Compression Spring Design.
Figure 11. Three-Bar Truss Design.
Figure 12. Car Crashworthiness Design.
Figure 13. Gear train design problem.
Table 1. Parameter settings for the comparative algorithms.

Algorithm | Parameter | Value
BTLBO [40] | TF | 1 or 2
BTLBO [40] | θ | 0 or 1
TLBO [25] | TF | 1 or 2
WOA [9] | Coefficient vector A | 1
WOA [9] | Coefficient vector C | [−1, 1]
WOA [9] | Helical parameter b | 0.75
WOA [9] | Helical parameter l | [−1, 1]
AOA [39] | MOP_Max | 1
AOA [39] | MOP_Min | 0.2
AOA [39] | A | 5
AOA [39] | Mu | 0.499
ROA [10] | C | 0.1
BES [38] | α | [1.5, 2.0]
BES [38] | r | [0, 1]
SCA [37] | α | 2
GA [11] | Type | Real coded
GA [11] | Selection | Roulette wheel (proportionate)
GA [11] | Crossover | Whole arithmetic (probability = 0.7)
GA [11] | Mutation | Gaussian (probability = 0.01)
GTOA [28] | — | —
MGTOA | Limit | lg(t)
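The MGTOA row above sets the restart threshold Limit to lg(t), the common logarithm of the current iteration t (see Figure 1). A minimal sketch of such a limit-based restart is shown below; this is our illustration rather than the authors' code, and the names `maybe_restart`, `trial`, `lb`, and `ub` are assumptions.

```python
import math
import random

# Sketch (assumed names, not the authors' code): each individual carries a
# stagnation counter `trial`; once it exceeds Limit = lg(t), the individual
# is reinitialized uniformly inside the search range [lb, ub].
def maybe_restart(position, trial, t, lb, ub):
    limit = math.log10(max(t, 2))  # Limit = lg(t); guard keeps the limit positive
    if trial > limit:
        # restart: fresh random position, stagnation counter reset
        return [random.uniform(lb, ub) for _ in position], 0
    return position, trial

pos, trial = maybe_restart([1.0, 2.0], trial=5, t=100, lb=-10, ub=10)
print(trial)  # prints 0: the counter was reset by the restart
```

With t = 100 the limit is lg(100) = 2, so a counter of 5 triggers the restart; an individual that improved recently (counter below the limit) is left untouched.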
Table 2. Details of 23 benchmark functions.

Type | Function | dim | Range | Fmin
Unimodal | $F_1(x)=\sum_{i=1}^{n}x_i^2$ | 30/100/500 | [−100, 100] | 0
Unimodal | $F_2(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i|$ | 30/100/500 | [−10, 10] | 0
Unimodal | $F_3(x)=\sum_{i=1}^{n}\big(\sum_{j=1}^{i}x_j\big)^2$ | 30/100/500 | [−100, 100] | 0
Unimodal | $F_4(x)=\max_i\{|x_i|,\ 1\le i\le n\}$ | 30/100/500 | [−100, 100] | 0
Unimodal | $F_5(x)=\sum_{i=1}^{n-1}\big[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\big]$ | 30/100/500 | [−30, 30] | 0
Unimodal | $F_6(x)=\sum_{i=1}^{n}(\lfloor x_i+0.5\rfloor)^2$ | 30/100/500 | [−100, 100] | 0
Unimodal | $F_7(x)=\sum_{i=1}^{n}i\,x_i^4+\mathrm{random}[0,1)$ | 30/100/500 | [−1.28, 1.28] | 0
Multimodal | $F_8(x)=\sum_{i=1}^{n}-x_i\sin(\sqrt{|x_i|})$ | 30/100/500 | [−500, 500] | −418.9829 × dim
Multimodal | $F_9(x)=\sum_{i=1}^{n}\big[x_i^2-10\cos(2\pi x_i)+10\big]$ | 30/100/500 | [−5.12, 5.12] | 0
Multimodal | $F_{10}(x)=-20\exp\big(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\big)-\exp\big(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\big)+20+e$ | 30/100/500 | [−32, 32] | 0
Multimodal | $F_{11}(x)=\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\big(\tfrac{x_i}{\sqrt{i}}\big)+1$ | 30/100/500 | [−600, 600] | 0
Multimodal | $F_{12}(x)=\tfrac{\pi}{n}\big\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2[1+10\sin^2(\pi y_{i+1})]+(y_n-1)^2\big\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\tfrac{x_i+1}{4}$ and $u(x_i,a,k,m)=k(x_i-a)^m$ if $x_i>a$; $0$ if $-a<x_i<a$; $k(-x_i-a)^m$ if $x_i<-a$ | 30/100/500 | [−50, 50] | 0
Multimodal | $F_{13}(x)=0.1\big\{\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2[1+\sin^2(3\pi x_i+1)]+(x_n-1)^2[1+\sin^2(2\pi x_n)]\big\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30/100/500 | [−50, 50] | 0
Fixed-dimension multimodal | $F_{14}(x)=\big(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\big)^{-1}$ | 2 | [−65, 65] | 1
Fixed-dimension multimodal | $F_{15}(x)=\sum_{i=1}^{11}\big[a_i-\tfrac{x_1(b_i^2+b_ix_2)}{b_i^2+b_ix_3+x_4}\big]^2$ | 4 | [−5, 5] | 0.00030
Fixed-dimension multimodal | $F_{16}(x)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316
Fixed-dimension multimodal | $F_{17}(x)=\big(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\big)^2+10\big(1-\tfrac{1}{8\pi}\big)\cos x_1+10$ | 2 | [−5, 5] | 0.398
Fixed-dimension multimodal | $F_{18}(x)=\big[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\big]\times\big[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\big]$ | 2 | [−2, 2] | 3
Fixed-dimension multimodal | $F_{19}(x)=-\sum_{i=1}^{4}c_i\exp\big(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\big)$ | 3 | [−1, 2] | −3.86
Fixed-dimension multimodal | $F_{20}(x)=-\sum_{i=1}^{4}c_i\exp\big(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\big)$ | 6 | [0, 1] | −3.32
Fixed-dimension multimodal | $F_{21}(x)=-\sum_{i=1}^{5}\big[(X-a_i)(X-a_i)^T+c_i\big]^{-1}$ | 4 | [0, 10] | −10.1532
Fixed-dimension multimodal | $F_{22}(x)=-\sum_{i=1}^{7}\big[(X-a_i)(X-a_i)^T+c_i\big]^{-1}$ | 4 | [0, 10] | −10.4028
Fixed-dimension multimodal | $F_{23}(x)=-\sum_{i=1}^{10}\big[(X-a_i)(X-a_i)^T+c_i\big]^{-1}$ | 4 | [0, 10] | −10.5363
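As a quick cross-check of Table 2, the simpler entries can be coded directly from their formulas. The sketch below (our illustration, not the authors' code) implements F1, F9, and F10, each of which attains its listed Fmin of 0 at the origin.

```python
import math

# Direct implementations of three Table 2 benchmarks.
def sphere(x):      # F1: sum of squares
    return sum(v * v for v in x)

def rastrigin(x):   # F9: sum(x^2 - 10*cos(2*pi*x) + 10)
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def ackley(x):      # F10: -20*exp(-0.2*sqrt(mean(x^2))) - exp(mean(cos(2*pi*x))) + 20 + e
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

print(sphere([0.0] * 30), rastrigin([0.0] * 30))  # 0.0 0.0 at the global optimum
```

Ackley evaluates to zero at the origin only up to floating-point round-off, which is why convergence curves for F10 typically flatten near 10^−16 rather than at exactly 0.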
Table 7. Details of 30 CEC2014 benchmark functions.

Name | No. | Function | Fmin
Unimodal Functions | CEC 1 | Rotated High Conditioned Elliptic Function | 100
Unimodal Functions | CEC 2 | Rotated Bent Cigar Function | 200
Unimodal Functions | CEC 3 | Rotated Discus Function | 300
Simple Multimodal Functions | CEC 4 | Shifted and Rotated Rosenbrock’s Function | 400
Simple Multimodal Functions | CEC 5 | Shifted and Rotated Ackley’s Function | 500
Simple Multimodal Functions | CEC 6 | Shifted and Rotated Weierstrass Function | 600
Simple Multimodal Functions | CEC 7 | Shifted and Rotated Griewank’s Function | 700
Simple Multimodal Functions | CEC 8 | Shifted Rastrigin’s Function | 800
Simple Multimodal Functions | CEC 9 | Shifted and Rotated Rastrigin’s Function | 900
Simple Multimodal Functions | CEC 10 | Shifted Schwefel’s Function | 1000
Simple Multimodal Functions | CEC 11 | Shifted and Rotated Schwefel’s Function | 1100
Simple Multimodal Functions | CEC 12 | Shifted and Rotated Katsuura Function | 1200
Simple Multimodal Functions | CEC 13 | Shifted and Rotated HappyCat Function | 1300
Simple Multimodal Functions | CEC 14 | Shifted and Rotated HGBat Function | 1400
Simple Multimodal Functions | CEC 15 | Shifted and Rotated Expanded Griewank’s plus Rosenbrock’s Function | 1500
Simple Multimodal Functions | CEC 16 | Shifted and Rotated Expanded Scaffer’s F6 Function | 1600
Hybrid Functions | CEC 17 | Hybrid Function 1 (N = 3) | 1700
Hybrid Functions | CEC 18 | Hybrid Function 2 (N = 3) | 1800
Hybrid Functions | CEC 19 | Hybrid Function 3 (N = 4) | 1900
Hybrid Functions | CEC 20 | Hybrid Function 4 (N = 4) | 2000
Hybrid Functions | CEC 21 | Hybrid Function 5 (N = 5) | 2100
Hybrid Functions | CEC 22 | Hybrid Function 6 (N = 5) | 2200
Composition Functions | CEC 23 | Composition Function 1 (N = 5) | 2300
Composition Functions | CEC 24 | Composition Function 2 (N = 3) | 2400
Composition Functions | CEC 25 | Composition Function 3 (N = 3) | 2500
Composition Functions | CEC 26 | Composition Function 4 (N = 5) | 2600
Composition Functions | CEC 27 | Composition Function 5 (N = 5) | 2700
Composition Functions | CEC 28 | Composition Function 6 (N = 5) | 2800
Composition Functions | CEC 29 | Composition Function 7 (N = 3) | 2900
Composition Functions | CEC 30 | Composition Function 8 (N = 3) | 3000
Search range: [−100, 100]^dim
Table 8. Results of algorithms on the CEC2014 test suite.

CEC | Metric | MGTOA | GTOA [28] | GA [11] | SCA [37] | BES [38] | ROA [10] | AOA [39] | WOA [9] | BTLBO [40] | TLBO [25]
CEC 1 | min | 1.13 × 10^7 | 3.30 × 10^8 | 4.18 × 10^8 | 3.58 × 10^8 | 5.35 × 10^8 | 1.81 × 10^8 | 8.42 × 10^8 | 1.27 × 10^8 | 1.76 × 10^6 | 1.35 × 10^6
CEC 1 | mean | 7.02 × 10^7 | 7.36 × 10^8 | 9.95 × 10^8 | 5.44 × 10^8 | 9.45 × 10^8 | 4.23 × 10^8 | 1.36 × 10^9 | 2.56 × 10^8 | 5.36 × 10^6 | 6.54 × 10^6
CEC 1 | std | 4.06 × 10^7 | 3.16 × 10^8 | 3.86 × 10^8 | 1.84 × 10^8 | 3.72 × 10^8 | 2.1 × 10^8 | 4.95 × 10^8 | 1.08 × 10^8 | 5.46 × 10^6 | 4.13 × 10^6
CEC 2 | min | 1.69 × 10^8 | 3.19 × 10^10 | 3.12 × 10^10 | 2.43 × 10^10 | 4.8 × 10^10 | 2.11 × 10^10 | 5.93 × 10^10 | 5.05 × 10^9 | 3.79 × 10^3 | 6.05 × 10^3
CEC 2 | mean | 4.21 × 10^9 | 5.18 × 10^10 | 4.36 × 10^10 | 3.1 × 10^10 | 6.67 × 10^10 | 3.2 × 10^10 | 7.33 × 10^10 | 7.99 × 10^9 | 9.18 × 10^5 | 9.76 × 10^5
CEC 2 | std | 3.56 × 10^9 | 1.3 × 10^10 | 9.53 × 10^9 | 5.62 × 10^9 | 1.54 × 10^10 | 1.12 × 10^10 | 1.12 × 10^10 | 3.85 × 10^9 | 4.92 × 10^6 | 5.05 × 10^6
CEC 3 | min | 1.80 × 10^4 | 6.36 × 10^4 | 6.11 × 10^4 | 6.04 × 10^4 | 8.35 × 10^4 | 5.72 × 10^4 | 7.61 × 10^4 | 6.93 × 10^4 | 1.37 × 10^3 | 2.14 × 10^4
CEC 3 | mean | 4.61 × 10^4 | 7.68 × 10^4 | 1.74 × 10^6 | 7.68 × 10^4 | 1.82 × 10^5 | 6.89 × 10^4 | 8.55 × 10^4 | 1.46 × 10^5 | 6.31 × 10^3 | 3.44 × 10^4
CEC 3 | std | 6.52 × 10^3 | 1.86 × 10^4 | 6.76 × 10^6 | 1.81 × 10^4 | 1.46 × 10^5 | 8.82 × 10^3 | 1.59 × 10^4 | 7.68 × 10^4 | 4.53 × 10^3 | 1.13 × 10^4
CEC 4 | min | 4.06 × 10^2 | 4.33 × 10^3 | 3.31 × 10^3 | 2.04 × 10^3 | 7.52 × 10^3 | 1.38 × 10^3 | 8.54 × 10^3 | 9.5 × 10^2 | 4.82 × 10^2 | 4.78 × 10^2
CEC 4 | mean | 5.23 × 10^2 | 8.75 × 10^3 | 6.11 × 10^3 | 3.01 × 10^3 | 1.34 × 10^4 | 3.57 × 10^3 | 1.54 × 10^4 | 1.47 × 10^3 | 5.38 × 10^2 | 5.34 × 10^2
CEC 4 | std | 33.4 | 3.32 × 10^3 | 3.13 × 10^3 | 1.07 × 10^3 | 3.91 × 10^3 | 1.93 × 10^3 | 4.11 × 10^3 | 5.22 × 10^2 | 48.1 | 40
CEC 5 | min | 5.2 × 10^2 | 5.21 × 10^2 | 5.2 × 10^2 | 5.21 × 10^2 | 5.21 × 10^2 | 5.21 × 10^2 | 5.21 × 10^2 | 5.21 × 10^2 | 5.21 × 10^2 | 5.21 × 10^2
CEC 5 | mean | 5.2 × 10^2 | 5.21 × 10^2 | 5.21 × 10^2 | 5.21 × 10^2 | 5.21 × 10^2 | 5.21 × 10^2 | 5.21 × 10^2 | 5.21 × 10^2 | 5.21 × 10^2 | 5.21 × 10^2
CEC 5 | std | 1.26 × 10^−1 | 8.36 × 10^−2 | 1.35 × 10^−1 | 8.9 × 10^−2 | 7.53 × 10^−2 | 9.95 × 10^−2 | 6.16 × 10^−2 | 1.1 × 10^−1 | 6.04 × 10^−2 | 6.73 × 10^−2
CEC 6 | min | 6.2 × 10^2 | 6.34 × 10^2 | 6.32 × 10^2 | 6.35 × 10^2 | 6.38 × 10^2 | 6.33 × 10^2 | 6.37 × 10^2 | 6.35 × 10^2 | 6.18 × 10^2 | 6.16 × 10^2
CEC 6 | mean | 6.31 × 10^2 | 6.38 × 10^2 | 6.35 × 10^2 | 6.38 × 10^2 | 6.41 × 10^2 | 6.35 × 10^2 | 6.39 × 10^2 | 6.4 × 10^2 | 6.21 × 10^2 | 6.2 × 10^2
CEC 6 | std | 2.6 | 3.07 | 3 | 2.67 | 3.17 | 3.91 | 2.79 | 3.85 | 3.27 | 3.17
CEC 7 | min | 7.02 × 10^2 | 1.02 × 10^3 | 9.78 × 10^2 | 8.99 × 10^2 | 1.1 × 10^3 | 7.95 × 10^2 | 1.25 × 10^3 | 7.3 × 10^2 | 7 × 10^2 | 7 × 10^2
CEC 7 | mean | 7.25 × 10^2 | 1.21 × 10^3 | 1.07 × 10^3 | 9.69 × 10^2 | 1.28 × 10^3 | 9.09 × 10^2 | 1.39 × 10^3 | 7.51 × 10^2 | 7 × 10^2 | 7.01 × 10^2
CEC 7 | std | 22.1 | 1.35 × 10^2 | 87.4 | 53.2 | 1.33 × 10^2 | 92 | 1.29 × 10^2 | 25.7 | 1.32 | 3.21
CEC 8 | min | 8.98 × 10^2 | 1.03 × 10^3 | 1.07 × 10^3 | 1.06 × 10^3 | 1.1 × 10^3 | 1 × 10^3 | 1.11 × 10^3 | 9.97 × 10^2 | 8.62 × 10^2 | 8.6 × 10^2
CEC 8 | mean | 9.69 × 10^2 | 1.07 × 10^3 | 1.11 × 10^3 | 1.09 × 10^3 | 1.13 × 10^3 | 1.05 × 10^3 | 1.16 × 10^3 | 1.06 × 10^3 | 8.91 × 10^2 | 8.91 × 10^2
CEC 8 | std | 23.8 | 31.9 | 30.6 | 28.1 | 26.8 | 35.2 | 37.9 | 51.2 | 21.6 | 19
CEC 9 | min | 9.83 × 10^2 | 1.16 × 10^3 | 1.16 × 10^3 | 1.19 × 10^3 | 1.23 × 10^3 | 1.14 × 10^3 | 1.2 × 10^3 | 1.15 × 10^3 | 9.86 × 10^2 | 9.75 × 10^2
CEC 9 | mean | 1.05 × 10^3 | 1.2 × 10^3 | 1.21 × 10^3 | 1.22 × 10^3 | 1.25 × 10^3 | 1.17 × 10^3 | 1.23 × 10^3 | 1.23 × 10^3 | 1.02 × 10^3 | 1.01 × 10^3
CEC 9 | std | 28.7 | 33.2 | 34.5 | 26.8 | 38 | 32.8 | 27.8 | 68.1 | 27.3 | 35.4
CEC 10 | min | 2.81 × 10^3 | 6.52 × 10^3 | 6 × 10^3 | 7.37 × 10^3 | 7.6 × 10^3 | 5.49 × 10^3 | 6.8 × 10^3 | 5.8 × 10^3 | 3.79 × 10^3 | 3.06 × 10^3
CEC 10 | mean | 4.33 × 10^3 | 7.3 × 10^3 | 6.54 × 10^3 | 8.18 × 10^3 | 8.36 × 10^3 | 6.44 × 10^3 | 7.56 × 10^3 | 6.6 × 10^3 | 4.96 × 10^3 | 4.93 × 10^3
CEC 10 | std | 7.19 × 10^2 | 7.33 × 10^2 | 6.39 × 10^2 | 4.82 × 10^2 | 5.9 × 10^2 | 7.79 × 10^2 | 6.27 × 10^2 | 8.51 × 10^2 | 9.58 × 10^2 | 1.71 × 10^3
CEC 11 | min | 1.26 × 10^3 | 1.83 × 10^3 | 2.64 × 10^3 | 2.32 × 10^3 | 2.32 × 10^3 | 1.73 × 10^3 | 1.8 × 10^3 | 1.82 × 10^3 | 1.33 × 10^3 | 1.42 × 10^3
CEC 11 | mean | 1.96 × 10^3 | 2.19 × 10^3 | 3.08 × 10^3 | 2.59 × 10^3 | 2.68 × 10^3 | 2.18 × 10^3 | 2.17 × 10^3 | 2.39 × 10^3 | 1.56 × 10^3 | 2.04 × 10^3
CEC 11 | std | 2.79 × 10^2 | 3.24 × 10^2 | 3.38 × 10^2 | 2.65 × 10^2 | 3.04 × 10^2 | 4.18 × 10^2 | 3.04 × 10^2 | 3.53 × 10^2 | 1.41 × 10^2 | 4.34 × 10^2
CEC 12 | min | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3
CEC 12 | mean | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3 | 1.2 × 10^3
CEC 12 | std | 9.13 × 10^−2 | 3.46 × 10^−1 | 8.91 × 10^−1 | 3.53 × 10^−1 | 3.9 × 10^−1 | 4.11 × 10^−1 | 3.44 × 10^−1 | 4.41 × 10^−1 | 1.32 × 10^−1 | 3.2 × 10^−1
CEC 13 | min | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3
CEC 13 | mean | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3 | 1.3 × 10^3
CEC 13 | std | 7.62 × 10^−2 | 9.25 × 10^−1 | 1.06 | 1.8 × 10^−1 | 1.27 | 8.07 × 10^−1 | 1.27 | 2.31 × 10^−1 | 6.65 × 10^−2 | 9.29 × 10^−2
CEC 14 | min | 1.4 × 10^3 | 1.4 × 10^3 | 1.41 × 10^3 | 1.4 × 10^3 | 1.41 × 10^3 | 1.4 × 10^3 | 1.42 × 10^3 | 1.4 × 10^3 | 1.4 × 10^3 | 1.4 × 10^3
CEC 14 | mean | 1.4 × 10^3 | 1.41 × 10^3 | 1.42 × 10^3 | 1.4 × 10^3 | 1.42 × 10^3 | 1.4 × 10^3 | 1.43 × 10^3 | 1.4 × 10^3 | 1.4 × 10^3 | 1.4 × 10^3
CEC 14 | std | 8.69 × 10^−2 | 6.63 | 9.5 | 1.21 | 9.72 | 5.12 | 12.2 | 3.22 × 10^−1 | 1.63 × 10^−1 | 1.42 × 10^−1
CEC 15 | min | 1.5 × 10^3 | 1.5 × 10^3 | 1.6 × 10^3 | 1.51 × 10^3 | 1.53 × 10^3 | 1.5 × 10^3 | 1.64 × 10^3 | 1.5 × 10^3 | 1.5 × 10^3 | 1.5 × 10^3
CEC 15 | mean | 1.5 × 10^3 | 1.64 × 10^3 | 2.38 × 10^4 | 1.57 × 10^3 | 4.03 × 10^3 | 1.63 × 10^3 | 4.77 × 10^3 | 1.51 × 10^3 | 1.5 × 10^3 | 1.5 × 10^3
CEC 15 | std | 1 | 5.46 × 10^2 | 1.1 × 10^5 | 2.2 × 10^2 | 5.63 × 10^3 | 4.47 × 10^2 | 6.16 × 10^3 | 7.08 | 7.64 × 10^−1 | 8.25 × 10^−1
CEC 16 | min | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3
CEC 16 | mean | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3 | 1.6 × 10^3
CEC 16 | std | 1.8 × 10^−1 | 4.4 × 10^−1 | 3.08 × 10^−1 | 2.87 × 10^−1 | 2.99 × 10^−1 | 3.03 × 10^−1 | 3.38 × 10^−1 | 4.32 × 10^−1 | 4.31 × 10^−1 | 3.3 × 10^−1
CEC 17 | min | 1.89 × 10^3 | 2.47 × 10^3 | 2.12 × 10^6 | 1.6 × 10^4 | 5 × 10^4 | 4.42 × 10^3 | 8.19 × 10^4 | 1.83 × 10^4 | 1.98 × 10^3 | 2.65 × 10^3
CEC 17 | mean | 5.49 × 10^3 | 7.59 × 10^3 | 1.13 × 10^7 | 1.03 × 10^5 | 1.05 × 10^6 | 1.55 × 10^5 | 5.67 × 10^5 | 4.66 × 10^5 | 2.5 × 10^3 | 4.47 × 10^3
CEC 17 | std | 2.83 × 10^3 | 1.55 × 10^4 | 1.48 × 10^7 | 1.57 × 10^5 | 2.75 × 10^6 | 2.25 × 10^5 | 5.48 × 10^5 | 7.91 × 10^5 | 1.01 × 10^3 | 2.12 × 10^3
CEC 18 | min | 1.86 × 10^3 | 1.89 × 10^3 | 1.02 × 10^6 | 1.19 × 10^4 | 1.49 × 10^4 | 2.94 × 10^3 | 2.35 × 10^3 | 2.81 × 10^3 | 1.83 × 10^3 | 1.95 × 10^3
CEC 18 | mean | 5.73 × 10^3 | 1.97 × 10^3 | 5.55 × 10^7 | 7.88 × 10^4 | 1.15 × 10^6 | 1.41 × 10^4 | 1.76 × 10^4 | 1.93 × 10^4 | 1.89 × 10^3 | 5.9 × 10^3
CEC 18 | std | 3.25 × 10^3 | 7.04 × 10^1 | 5.88 × 10^7 | 1.05 × 10^5 | 4.68 × 10^6 | 9.68 × 10^3 | 1.26 × 10^4 | 3.48 × 10^4 | 65.4 | 4.92 × 10^3
CEC 19 | min | 1.9 × 10^3 | 1.9 × 10^3 | 1.91 × 10^3 | 1.91 × 10^3 | 1.91 × 10^3 | 1.9 × 10^3 | 1.91 × 10^3 | 1.9 × 10^3 | 1.9 × 10^3 | 1.9 × 10^3
CEC 19 | mean | 1.9 × 10^3 | 1.91 × 10^3 | 1.95 × 10^3 | 1.91 × 10^3 | 1.92 × 10^3 | 1.91 × 10^3 | 1.95 × 10^3 | 1.91 × 10^3 | 1.9 × 10^3 | 1.9 × 10^3
CEC 19 | std | 7.98 × 10^−1 | 2.7 | 3.22 × 10^1 | 1.04 | 14.2 | 9.77 | 30.4 | 2.1 | 9.63 × 10^−1 | 8.29 × 10^−1
CEC 20 | min | 2.04 × 10^3 | 2.1 × 10^3 | 3.63 × 10^4 | 3.21 × 10^3 | 4.53 × 10^3 | 3.79 × 10^3 | 5.79 × 10^3 | 3.18 × 10^3 | 2.02 × 10^3 | 2.1 × 10^3
CEC 20 | mean | 5.12 × 10^3 | 6.43 × 10^3 | 3.6 × 10^7 | 1.16 × 10^4 | 3.03 × 10^5 | 1.03 × 10^4 | 1.37 × 10^4 | 1.78 × 10^4 | 2.08 × 10^3 | 2.55 × 10^3
CEC 20 | std | 2.65 × 10^3 | 1.15 × 10^4 | 6.32 × 10^7 | 9.7 × 10^3 | 9.12 × 10^5 | 4.92 × 10^3 | 9.2 × 10^3 | 2.44 × 10^4 | 61.3 | 6.62 × 10^2
CEC 21 | min | 2.28 × 10^3 | 2.55 × 10^3 | 2.8 × 10^5 | 7.79 × 10^3 | 7.91 × 10^3 | 3.58 × 10^3 | 6.76 × 10^3 | 1.54 × 10^4 | 2.12 × 10^3 | 2.27 × 10^3
CEC 21 | mean | 6.23 × 10^3 | 4.47 × 10^3 | 5.37 × 10^6 | 2.07 × 10^4 | 4.94 × 10^5 | 1.84 × 10^4 | 1.74 × 10^6 | 9.71 × 10^5 | 2.29 × 10^3 | 2.52 × 10^3
CEC 21 | std | 4.38 × 10^3 | 4.9 × 10^3 | 7.59 × 10^6 | 1.03 × 10^4 | 9.91 × 10^5 | 4.06 × 10^4 | 2.64 × 10^6 | 2.23 × 10^6 | 1.51 × 10^2 | 2.11 × 10^2
CEC 22 | min | 2.22 × 10^3 | 2.24 × 10^3 | 2.42 × 10^3 | 2.25 × 10^3 | 2.3 × 10^3 | 2.23 × 10^3 | 2.27 × 10^3 | 2.24 × 10^3 | 2.21 × 10^3 | 2.22 × 10^3
CEC 22 | mean | 2.32 × 10^3 | 2.33 × 10^3 | 2.63 × 10^3 | 2.29 × 10^3 | 2.44 × 10^3 | 2.29 × 10^3 | 2.43 × 10^3 | 2.33 × 10^3 | 2.23 × 10^3 | 2.24 × 10^3
CEC 22 | std | 5.34 × 10^1 | 8.78 × 10^1 | 1.79 × 10^2 | 4.29 × 10^1 | 1.21 × 10^2 | 90.3 | 1.24 × 10^2 | 96.9 | 6.67 | 30.3
CEC 23 | min | 2.5 × 10^3 | 2.5 × 10^3 | 2.66 × 10^3 | 2.64 × 10^3 | 2.5 × 10^3 | 2.5 × 10^3 | 2.5 × 10^3 | 2.63 × 10^3 | 2.63 × 10^3 | 2.63 × 10^3
CEC 23 | mean | 2.5 × 10^3 | 2.5 × 10^3 | 2.74 × 10^3 | 2.65 × 10^3 | 2.6 × 10^3 | 2.5 × 10^3 | 2.5 × 10^3 | 2.64 × 10^3 | 2.63 × 10^3 | 2.63 × 10^3
CEC 23 | std | 0 | 5.32 × 10^−1 | 1.1 × 10^2 | 9.53 | 95.5 | 0 | 2.66 × 10^−1 | 26.9 | 2.42 × 10^−12 | 2.02 × 10^−12
CEC 24 | min | 2.51 × 10^3 | 2.54 × 10^3 | 2.57 × 10^3 | 2.55 × 10^3 | 2.57 × 10^3 | 2.6 × 10^3 | 2.57 × 10^3 | 2.56 × 10^3 | 2.51 × 10^3 | 2.51 × 10^3
CEC 24 | mean | 2.59 × 10^3 | 2.58 × 10^3 | 2.60 × 10^3 | 2.56 × 10^3 | 2.6 × 10^3 | 2.6 × 10^3 | 2.59 × 10^3 | 2.58 × 10^3 | 2.52 × 10^3 | 2.53 × 10^3
CEC 24 | std | 2.31 × 10^1 | 2.94 × 10^1 | 2.06 × 10^1 | 1.18 × 10^1 | 9.04 | 14.2 | 18.5 | 27.5 | 16.4 | 35.5
CEC 25 | min | 2.63 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.69 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.69 × 10^3 | 2.63 × 10^3 | 2.63 × 10^3
CEC 25 | mean | 2.7 × 10^3 | 2.7 × 10^3 | 2.71 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.66 × 10^3 | 2.66 × 10^3
CEC 25 | std | 5.77 | 1.09 × 10^1 | 6.1 | 5.58 | 6.77 | 0 | 1.38 | 13.4 | 30.6 | 30.9
CEC 26 | min | 2.7 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3
CEC 26 | mean | 2.7 × 10^3 | 2.71 × 10^3 | 2.71 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.72 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3
CEC 26 | std | 6.25 × 10^−2 | 3.03 × 10^1 | 3.11 × 10^1 | 2.91 × 10^−1 | 1.31 | 8.1 | 33.4 | 18.2 | 5.84 × 10^−2 | 7.31 × 10^−2
CEC 27 | min | 2.7 × 10^3 | 2.9 × 10^3 | 3.13 × 10^3 | 2.73 × 10^3 | 2.88 × 10^3 | 2.9 × 10^3 | 2.9 × 10^3 | 3.11 × 10^3 | 2.7 × 10^3 | 2.7 × 10^3
CEC 27 | mean | 2.89 × 10^3 | 3 × 10^3 | 3.25 × 10^3 | 3.04 × 10^3 | 3.17 × 10^3 | 2.88 × 10^3 | 2.91 × 10^3 | 3.16 × 10^3 | 2.75 × 10^3 | 2.96 × 10^3
CEC 27 | std | 4.86 × 10^1 | 1.45 × 10^2 | 1.04 × 10^2 | 1.47 × 10^2 | 1.86 × 10^2 | 64.3 | 80.2 | 1.72 × 10^2 | 1.1 × 10^2 | 1.75 × 10^2
CEC 28 | min | 3 × 10^3 | 3 × 10^3 | 3.65 × 10^3 | 3.24 × 10^3 | 3 × 10^3 | 3 × 10^3 | 3 × 10^3 | 3.23 × 10^3 | 3.18 × 10^3 | 3.18 × 10^3
CEC 28 | mean | 3 × 10^3 | 3.16 × 10^3 | 3.99 × 10^3 | 3.32 × 10^3 | 3.47 × 10^3 | 3 × 10^3 | 3.09 × 10^3 | 3.45 × 10^3 | 3.22 × 10^3 | 3.24 × 10^3
CEC 28 | std | 0 | 2.07 × 10^2 | 3.04 × 10^2 | 7.3 × 10^1 | 1.92 × 10^2 | 0 | 2.71 × 10^2 | 2.02 × 10^2 | 47.3 | 71.2
CEC 29 | min | 3.1 × 10^3 | 3.33 × 10^3 | 5.69 × 10^5 | 5.56 × 10^3 | 8.13 × 10^3 | 3.41 × 10^3 | 3.1 × 10^3 | 3.54 × 10^3 | 3.22 × 10^3 | 3.37 × 10^3
CEC 29 | mean | 3.1 × 10^3 | 1.87 × 10^6 | 1.06 × 10^7 | 3.17 × 10^4 | 1.63 × 10^6 | 3.17 × 10^5 | 2.53 × 10^6 | 4.71 × 10^5 | 3.38 × 10^3 | 2.55 × 10^5
CEC 29 | std | 5.47 | 3.59 × 10^6 | 1.49 × 10^7 | 4.6 × 10^4 | 3.1 × 10^6 | 8.49 × 10^5 | 1.11 × 10^7 | 1.26 × 10^6 | 2.03 × 10^2 | 6.54 × 10^5
CEC 30 | min | 3.2 × 10^3 | 4.13 × 10^3 | 1.26 × 10^4 | 4.56 × 10^3 | 5.93 × 10^3 | 3.99 × 10^3 | 5.49 × 10^3 | 4.42 × 10^3 | 3.53 × 10^3 | 3.5 × 10^3
CEC 30 | mean | 3.2 × 10^3 | 1.76 × 10^4 | 1.2 × 10^5 | 5.73 × 10^3 | 2.7 × 10^4 | 5.49 × 10^3 | 1.58 × 10^5 | 7.62 × 10^3 | 3.86 × 10^3 | 3.8 × 10^3
CEC 30 | std | 5.44 | 4.25 × 10^4 | 1.64 × 10^5 | 1.5 × 10^3 | 5.38 × 10^4 | 2.21 × 10^3 | 7 × 10^5 | 9.8 × 10^3 | 3.12 × 10^2 | 4.13 × 10^2
Table 9. Experimental results of Wilcoxon rank sum test on the CEC2014 test suite (p-values; each column compares MGTOA with the named algorithm).

CEC | vs. GTOA [28] | vs. GA [11] | vs. SCA [37] | vs. BES [38] | vs. ROA [10] | vs. AOA [39] | vs. WOA [9] | vs. BTLBO [40] | vs. TLBO [25]
CEC 1 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.88 × 10^−6 | 1.73 × 10^−6 | 3.72 × 10^−5 | 1.73 × 10^−6 | 1.73 × 10^−6
CEC 2 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.13 × 10^−6 | 1.73 × 10^−6 | 6.16 × 10^−4 | 1.73 × 10^−6 | 1.73 × 10^−6
CEC 3 | 3.88 × 10^−6 | 2.13 × 10^−6 | 6.98 × 10^−6 | 1.73 × 10^−6 | 4.29 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6
CEC 4 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.13 × 10^−6 | 1.73 × 10^−6 | 1.36 × 10^−4 | 1.73 × 10^−6 | 1.73 × 10^−6
CEC 5 | 1.73 × 10^−6 | 3.18 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6 | 2.35 × 10^−6 | 9.63 × 10^−4
CEC 6 | 7.51 × 10^−5 | 5.71 × 10^−4 | 2.6 × 10^−5 | 2.35 × 10^−6 | 3.06 × 10^−4 | 8.47 × 10^−6 | 2.88 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6
CEC 7 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 2.30 × 10^−2 | 1.73 × 10^−6 | 1.73 × 10^−6
CEC 8 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.35 × 10^−6 | 1.73 × 10^−6 | 9.32 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
CEC 9 | 2.88 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 3.18 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
CEC 10 | 1.73 × 10^−6 | 4.29 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.6 × 10^−4 | 1.73 × 10^−6 | 3.88 × 10^−6 | 4.29 × 10^−6 | 3.72 × 10^−5
CEC 11 | 1.31 × 10^−1 | 2.13 × 10^−6 | 6.98 × 10^−6 | 6.32 × 10^−5 | 6.29 × 10^−1 | 4.72 × 10^−2 | 4.72 × 10^−2 | 3.52 × 10^−6 | 3.29 × 10^−1
CEC 12 | 2.88 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 4.29 × 10^−6 | 8.94 × 10^−4 | 1.92 × 10^−6 | 4.72 × 10^−2 | 1.92 × 10^−6
CEC 13 | 2.61 × 10^−4 | 1.73 × 10^−6 | 3.88 × 10^−6 | 1.73 × 10^−6 | 1.89 × 10^−4 | 1.73 × 10^−6 | 3.32 × 10^−4 | 5.75 × 10^−6 | 4.11 × 10^−3
CEC 14 | 1.83 × 10^−3 | 1.73 × 10^−6 | 1.49 × 10^−5 | 1.73 × 10^−6 | 1.29 × 10^−3 | 1.73 × 10^−6 | 2.13 × 10^−1 | 7.66 × 10^−1 | 5.44 × 10^−1
CEC 15 | 3.52 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.02 × 10^−5 | 1.73 × 10^−6 | 7.69 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
CEC 16 | 4.49 × 10^−2 | 1.73 × 10^−6 | 6.89 × 10^−5 | 5.79 × 10^−5 | 1.65 × 10^−1 | 4.29 × 10^−6 | 2.22 × 10^−4 | 1.73 × 10^−6 | 3.18 × 10^−6
CEC 17 | 5.67 × 10^−3 | 1.73 × 10^−6 | 2.13 × 10^−6 | 1.73 × 10^−6 | 5.32 × 10^−3 | 1.73 × 10^−6 | 2.60 × 10^−6 | 6.34 × 10^−6 | 7.16 × 10^−4
CEC 18 | 1.92 × 10^−6 | 1.73 × 10^−6 | 2.13 × 10^−6 | 1.73 × 10^−6 | 3 × 10^−2 | 3.71 × 10^−1 | 3.68 × 10^−2 | 1.73 × 10^−6 | 4.53 × 10^−4
CEC 19 | 3.11 × 10^−5 | 1.73 × 10^−6 | 3.88 × 10^−6 | 1.92 × 10^−6 | 2.26 × 10^−3 | 1.73 × 10^−6 | 1.80 × 10^−5 | 1.92 × 10^−6 | 2.35 × 10^−6
CEC 20 | 2.43 × 10^−2 | 1.73 × 10^−6 | 3.50 × 10^−2 | 1.64 × 10^−5 | 9.84 × 10^−3 | 4.72 × 10^−2 | 7.73 × 10^−3 | 1.73 × 10^−6 | 2.60 × 10^−6
CEC 21 | 2.16 × 10^−5 | 1.73 × 10^−6 | 3.16 × 10^−3 | 5.75 × 10^−6 | 2.96 × 10^−3 | 1.83 × 10^−3 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
CEC 22 | 1.04 × 10^−2 | 1.92 × 10^−6 | 4.49 × 10^−2 | 4.11 × 10^−3 | 8.94 × 10^−1 | 3.72 × 10^−5 | 4.41 × 10^−1 | 1.73 × 10^−6 | 8.47 × 10^−6
CEC 23 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.22 × 10^−4 | 1 | 1 | 5.61 × 10^−6 | 4.32 × 10^−8 | 4.32 × 10^−8
CEC 24 | 8.59 × 10^−2 | 2.29 × 10^−1 | 3.72 × 10^−5 | 6.89 × 10^−1 | 6.45 × 10^−2 | 4.54 × 10^−1 | 3.49 × 10^−1 | 1.73 × 10^−6 | 3.52 × 10^−6
CEC 25 | 7.73 × 10^−3 | 5.75 × 10^−6 | 2.41 × 10^−3 | 3.91 × 10^−2 | 4.38 × 10^−1 | 1.88 × 10^−1 | 9.91 × 10^−1 | 1.73 × 10^−6 | 1.80 × 10^−5
CEC 26 | 1.48 × 10^−4 | 3.11 × 10^−5 | 3.11 × 10^−5 | 3.11 × 10^−5 | 2.22 × 10^−4 | 2.84 × 10^−5 | 1.83 × 10^−3 | 1.73 × 10^−6 | 2.88 × 10^−6
CEC 27 | 1.32 × 10^−2 | 7.69 × 10^−6 | 4.73 × 10^−6 | 7.03 × 10^−6 | 3.75 × 10^−1 | 1.88 × 10^−1 | 2.35 × 10^−6 | 6.34 × 10^−6 | 4.99 × 10^−3
CEC 28 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 3.79 × 10^−6 | 1 | 2.50 × 10^−1 | 3.79 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6
CEC 29 | 1.64 × 10^−5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 4.73 × 10^−6 | 1.86 × 10^−2 | 1.73 × 10^−6 | 1.11 × 10^−2 | 8.19 × 10^−5
CEC 30 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6 | 2.35 × 10^−6 | 1.73 × 10^−6 | 7.52 × 10^−2 | 3.39 × 10^−1
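The p-values in Table 9 come from the two-sided Wilcoxon rank-sum test over the per-run results of two algorithms. A self-contained sketch of that test (our illustration, using the normal approximation with mid-ranks for ties and no extra tie correction) looks like:

```python
import math

# Sketch of a two-sided Wilcoxon rank-sum test: a and b would be the
# per-run results of MGTOA and one competitor on a single CEC function.
def rank_sum_p(a, b):
    n1, n2 = len(a), len(b)
    combined = list(a) + list(b)
    order = sorted(range(n1 + n2), key=lambda i: combined[i])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and combined[order[j + 1]] == combined[order[i]]:
            j += 1
        for k in range(i, j + 1):          # tied values share their mid-rank
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    w = sum(ranks[:n1])                    # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2            # mean of W under the null hypothesis
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# two clearly separated 30-run samples yield p far below 0.05
print(rank_sum_p(list(range(1, 31)), list(range(31, 61))) < 0.05)  # True
```

A p-value below 0.05 (as in most Table 9 cells) indicates a statistically significant difference between MGTOA and the compared algorithm; values near 1 (e.g., several CEC 23 and CEC 28 cells) indicate no detectable difference.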
Table 10. Experimental results of Welded Beam Design.

Algorithm | h | l | t | b | Best Weight
MGTOA | 0.205351 | 3.268419 | 9.069875 | 0.205621 | 1.701633939
GTOA [28] | 0.20573 | 3.470489 | 9.036624 | 0.20573 | 1.724852
TSA [41] | 0.244157 | 6.223066 | 8.29555 | 0.244405 | 2.38241101
MFO [42] | 0.2057 | 3.4703 | 9.0364 | 0.2057 | 1.72452
MVO [21] | 0.205463 | 3.473193 | 9.044502 | 0.205695 | 1.72645
RO [22] | 0.203687 | 3.528467 | 9.004233 | 0.207241 | 1.735344
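The "Best Weight" column can be re-derived from the variable columns with the standard welded-beam fabrication-cost objective, f = 1.10471 h²l + 0.04811 tb(14 + l). A sketch (constraint handling omitted):

```python
# Standard welded-beam cost objective; useful for re-checking Table 10's
# "Best Weight" column from the four design variables.
def welded_beam_cost(h, l, t, b):
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

# MGTOA row of Table 10 reproduces its reported weight:
print(round(welded_beam_cost(0.205351, 3.268419, 9.069875, 0.205621), 4))  # 1.7016
```

The same check works for every row of the table; the constraints on shear stress, bending stress, buckling load, and deflection are not evaluated here.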
Table 11. Experimental results of Pressure Vessel Design.

Algorithm | Ts | Th | R | L | Best Cost
MGTOA | 0.754364 | 0.366375 | 40.42809 | 198.5652 | 5752.402458
GTOA [28] | 0.778169 | 0.38465 | 40.3196 | 200 | 5885.333
CPSO [44] | 0.8125 | 0.4375 | 42.0913 | 176.7465 | 6061.0777
HPSO [45] | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 6059.7143
CS [46] | 0.8125 | 0.4375 | 42.09845 | 176.6366 | 6059.714335
AO [47] | 1.054 | 0.182806 | 59.6219 | 39.805 | 5949.2258
Table 12. Experimental results of Tension/Compression Spring Design.

Algorithm | d | D | V | Best Weight
MGTOA | 0.05 | 0.374396 | 8.549078 | 0.009875
IROA [49] | 0.053799 | 0.46951 | 5.811 | 0.010614
HHO [50] | 0.051796 | 0.359305 | 11.13886 | 0.012665
GWO [7] | 0.05169 | 0.356737 | 11.28885 | 0.012666
MFO [42] | 0.051994 | 0.364109 | 10.86842 | 0.012667
DE [16] | 0.051609 | 0.354714 | 11.41083 | 0.01267
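For this problem the standard objective is f = (N + 2)Dd², where d is the wire diameter, D the mean coil diameter, and N the number of active coils; we read the third Table 12 column (printed as "V") as N. A sketch for re-checking the "Best Weight" column:

```python
# Standard tension/compression spring weight objective (constraints on
# deflection, shear stress, surge frequency, and diameter are omitted).
# The third Table 12 column, printed as "V", is treated here as the
# number of active coils N -- an assumption about the table's labeling.
def spring_weight(d, D, N):
    return (N + 2.0) * D * d ** 2

print(round(spring_weight(0.05, 0.374396, 8.549078), 6))  # MGTOA row, ~0.00987
```

The small discrepancies in the last digit against the table come from the rounding of the reported design variables.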
Table 13. Experimental results of Three-Bar Truss Design.

Algorithm | x1 | x2 | Best Weight
MGTOA | 0.788413 | 0.408121 | 263.8523
PSO-DE [51] | 0.788675 | 0.408248 | 263.8958
Tsai [52] | 0.788 | 0.408 | 263.68
DEDS [53] | 0.788675 | 0.408248 | 263.8958
GOA [54] | 0.788898 | 0.40762 | 263.8959
RSA [55] | 0.78873 | 0.40805 | 263.8928
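The standard three-bar truss objective is f = (2√2 x1 + x2)·l with l = 100 cm, so the "Best Weight" column can be re-checked from the two variables (a sketch; the stress constraints are not evaluated here):

```python
import math

# Standard three-bar truss volume/weight objective with l = 100 cm.
def truss_weight(x1, x2, l=100.0):
    return (2.0 * math.sqrt(2.0) * x1 + x2) * l

print(round(truss_weight(0.788675, 0.408248), 4))  # PSO-DE / DEDS row -> 263.8958
```

The PSO-DE and DEDS rows reproduce their reported weights to four decimals; small deviations in other rows reflect rounding of the reported variables.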
Table 14. Experimental results of Car Crashworthiness Design.

Variable | MGTOA | GTOA | MPA [56] | HHOCM [57] | ROLGWO [58] | MALO [59]
x1 | 0.5 | 0.662833 | 0.5 | 0.500164 | 0.501255 | 0.5
x2 | 1.227894 | 1.217247 | 1.22823 | 1.248612 | 1.245551 | 1.2281
x3 | 0.5 | 0.734238 | 0.5 | 0.659558 | 0.500046 | 0.5
x4 | 1.203472 | 1.11266 | 1.2049 | 1.098515 | 1.180254 | 1.2126
x5 | 0.5 | 0.613197 | 0.5 | 0.757989 | 0.500035 | 0.5
x6 | 1.065913 | 0.670197 | 1.2393 | 0.767268 | 1.16588 | 1.308
x7 | 0.5 | 0.615694 | 0.5 | 0.500055 | 0.500088 | 0.5
x8 | 0.345 | 0.271734 | 0.34498 | 0.343105 | 0.344895 | 0.3449
x9 | 0.192 | 0.23194 | 0.192 | 0.192032 | 0.299583 | 0.2804
x10 | 0.367345 | 0.174933 | 0.44035 | 2.898805 | 3.59508 | 0.4242
x11 | 0.969872 | 0.462294 | 1.78504 | - | 2.29018 | 4.6565
Best Weight | 23.19125 | 25.70607 | 23.19982 | 24.48358 | 23.22243 | 23.2294
Table 15. Experimental results of gear train design.

Algorithm | nA | nB | nC | nD | Best Gear Ratio
MGTOA | 43.90536 | 16.01273 | 19.59159 | 49.11997 | 2.70086 × 10^−12
GTOA | 54.68955 | 37.07689 | 12 | 57.13786 | 8.88761 × 10^−10
CS [46] | 43 | 16 | 19 | 49 | 2.7009 × 10^−12
GA [11] | 49 | 16 | 19 | 43 | 2.7019 × 10^−12
ABC [6] | 49 | 16 | 19 | 43 | 2.7009 × 10^−12
MBA [61] | 43 | 16 | 19 | 49 | 2.7009 × 10^−12
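The gear train objective is the squared error between the target transmission ratio 1/6.931 and the ratio produced by the four tooth counts, f = (1/6.931 − (nB·nC)/(nA·nD))². A sketch for re-checking the last column:

```python
# Standard gear-train design objective: squared deviation of the achieved
# ratio (nB*nC)/(nA*nD) from the target ratio 1/6.931.
def gear_cost(nA, nB, nC, nD):
    return (1.0 / 6.931 - (nB * nC) / (nA * nD)) ** 2

print(gear_cost(43, 16, 19, 49))  # CS / MBA row, on the order of 10^-12
```

With the integer solution (43, 16, 19, 49) the cost is about 2.7 × 10^−12, matching the CS and MBA rows; the MGTOA row reports fractional tooth counts, which would need rounding in a physical design.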
Rao, H.; Jia, H.; Wu, D.; Wen, C.; Li, S.; Liu, Q.; Abualigah, L. A Modified Group Teaching Optimization Algorithm for Solving Constrained Engineering Optimization Problems. Mathematics 2022, 10, 3765. https://doi.org/10.3390/math10203765