Article

Fuzzy Strategy Grey Wolf Optimizer for Complex Multimodal Optimization Problems

College of Computer and Electronic Information Engineering, Guangxi University, Nanning 530004, China
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(17), 6420; https://doi.org/10.3390/s22176420
Submission received: 18 July 2022 / Revised: 14 August 2022 / Accepted: 22 August 2022 / Published: 25 August 2022
(This article belongs to the Special Issue Intelligent Monitoring, Control and Optimization in Industries 4.0)

Abstract

Traditional grey wolf optimizers (GWOs) have difficulty balancing convergence and diversity when used for multimodal optimization problems (MMOPs), resulting in low-quality solutions and slow convergence. To address these drawbacks of GWOs, a fuzzy strategy grey wolf optimizer (FSGWO) is proposed in this paper. A bivariate joint normal distribution is used as a fuzzy method to realize the adaptive adjustment of the control parameters of the FSGWO. Next, a fuzzy mutation operator and a fuzzy crossover operator are designed to generate new individuals based on the fuzzy control parameters. Moreover, a noninferior selection strategy is employed to update the grey wolf population, which makes the entire population available for estimating the location of the optimal solution. Finally, the FSGWO is verified on 30 test functions of IEEE CEC2014 and five engineering application problems. Compared with state-of-the-art competitive algorithms, FSGWO is superior. Specifically, for the 50D test functions of CEC2014, the average calculation accuracy of FSGWO is 33.63%, 46.45%, 62.94%, 64.99%, and 59.82% higher than those of the equilibrium optimizer algorithm, modified particle swarm optimization, the original GWO, hybrid particle swarm optimization and GWO, and selective opposition-based GWO, respectively. For the 30D and 50D test functions of CEC2014, the results of the Wilcoxon signed-rank test show that FSGWO is better than the competitive algorithms.

1. Introduction

Many complex optimization problems in industrial applications have multiple global optimal solutions or near-optimal solutions that provide decision-makers with different decision preferences; such problems are often referred to as multimodal optimization problems (MMOPs) [1]. Air service network design is an example of an MMOP: it requires all feasible routes for transporting goods to the destination, so that if the current route cannot be executed due to weather conditions, an alternative route with a similar cost can be selected from the feasible routes [2]. Other examples of MMOPs include structural damage detection [3], image segmentation [4], flight control systems [5], job shop scheduling [6], truss structure optimization [7], protein structure prediction [8], and electromagnetic design [9].
Most MMOPs are nonconvex and nonlinear. Classical numerical optimization methods are sensitive to nonconvexity and nonlinearity and therefore encounter difficulties in solving MMOPs. In contrast, evolutionary algorithms are not sensitive to the nonconvexity and nonlinearity of optimization problems and have been widely used to solve MMOPs; examples include genetic algorithms [10], evolutionary algorithms [11], particle swarm optimization [12], ant colony optimization [13], cuckoo search algorithms [14], memetic algorithms [15], niching chaos optimization [16], grey wolf optimization [17], harmony search algorithms [18], fireworks algorithms [19], and gravitational search algorithms [20]. Among these evolutionary algorithms, grey wolf optimizers (GWOs) have the advantages of easy implementation and few parameters [21]. Moreover, GWOs use three leading wolves to guide the search and escape local optima more easily, making them competitive for solving MMOPs [22,23,24]. However, because the solution space of MMOPs is very complex, GWOs have difficulty balancing convergence and diversity [25,26]; they cannot easily estimate the position of the optimal solution, and the obtained solutions are relatively poor [27].
To address these drawbacks, many improved GWO variants have been developed, which fall into two categories. The first category improves the control parameters of GWOs. GWOs have two control parameters, a and C; the former is a linearly attenuated search-step parameter, the latter is the neighborhood radius coefficient, and both can be used to control the diversity and convergence of GWOs. In [28,29,30,31], methods such as nonlinear functions and chaotic sequences are proposed to construct parameter a, which further enhances the diversity of individuals and improves the ability of GWOs to escape local optima. In most GWOs, a and C are treated as independent parameters; in contrast, [32] argues that they are related and proposes a method to calculate C from a, which further enhances the diversity of GWOs. The second category designs new individual update strategies. Update strategies such as Levy flight [33,34], the Cauchy operator [35], opposition-based learning [36], refraction learning [37], and chaotic opposition-based approaches [38] can change some individuals in the population greatly, thereby increasing the diversity of GWOs.
Update strategies borrowed from other evolutionary algorithms, such as the whale optimization algorithm (WOA) [39], the covariance matrix adaptation evolution strategy (CMA-ES) [40], the minimum conflict algorithm (MCA) [41], and the grasshopper optimization algorithm (GOA) [42], have also been applied to improve the individual update strategies of GWOs and can effectively improve their diversity and convergence.
When GWOs are used to solve MMOPs, the convergence speed and the quality of solutions must be further improved, and there are three main reasons for these defects. (i) The absolute value operation in the search direction discards negative signs in some dimensions, which misdirects the search and slows convergence. (ii) New GWO individuals are mutated in all dimensions, a form of search divergence that degrades both convergence and solution quality. (iii) GWOs allow worse new individuals to enter the population, so the population cannot effectively estimate the region where the optimal solution is located, which reduces the convergence speed of the algorithm.
To address these drawbacks, a fuzzy strategy grey wolf optimizer (FSGWO) for MMOPs is proposed in this paper. The role of the fuzzy strategy is to automatically adjust the control parameters of the algorithm to realize an adaptive balance of diversity and convergence. Based on the fuzzy control parameters, new evolutionary operations are designed to strengthen the algorithm's abilities to explore new regions and perform local search, and to improve the quality of the solutions to MMOPs. The main contributions of this paper are as follows:
(i) A new grey wolf individual update strategy is proposed. First, both global and local search information is added to the fuzzy search direction, which guides individual mutation and enhances an individual's ability to detect the optimal solution. Second, a fuzzy crossover operator is applied to generate new individuals and avoid mutation in all dimensions, which controls the divergence of the individual search. Finally, a noninferior selection strategy is employed to update the population, allowing only better new individuals into the population; this improves the ability of the grey wolf population to estimate the location of the optimal solution and helps accelerate convergence.
(ii) A bivariate joint normal distribution is used as a fuzzy method to realize the adaptive adjustment of the control parameters of FSGWO. The two control parameters of the FSGWO are considered to have an intrinsic correlation, which is modeled by the bivariate joint normal distribution. In the iterative process, the parameters of the distribution are adaptively updated with information about the current optimal solutions to automatically control the convergence speed.
(iii) The FSGWO is verified on the 30 30D and 50D test functions of IEEE CEC2014 and five engineering application problems and compared with state-of-the-art competitive algorithms. The results show that the proposed algorithm has advantages over the competitive algorithms in solving MMOPs and that the proposed improvements are feasible for balancing the diversity and convergence of traditional GWOs. Specifically, for the 50D test functions of CEC2014, the average calculation accuracy of FSGWO is 33.63%, 46.45%, 62.94%, 64.99%, and 59.82% higher than those of the equilibrium optimizer algorithm (EO), modified particle swarm optimization (MPSO), the original GWO, the hybrid particle swarm optimization and grey wolf optimizer (HPSOGWO), and the selective opposition-based GWO (SOGWO), respectively, which indicates that FSGWO can significantly improve the calculation accuracy when solving high-dimensional MMOPs. For the 30D and 50D test functions of CEC2014, the results of the Wilcoxon signed-rank test show that the proposed algorithm is better than the competitive algorithms.
The remainder of this paper is arranged as follows: Section 2 introduces the principle of the original GWO. Section 3 provides the key details and the flowchart of FSGWO. Section 4 presents the experimental results, and Section 5 is the discussion. Finally, Section 6 summarizes the study.

2. Related Work

2.1. Algorithm Flow of the Original Grey Wolf Optimizer

The grey wolf population is denoted by X = {X_1, X_2, …, X_M}, where M is the number of grey wolves, and X_p is an individual in X. In the original GWO [21], the three best solutions found during the iterative process are called the three leading wolves and are denoted by X_α, X_β, and X_δ. The three leading wolves guide the other grey wolf individuals in rounding up the prey, which is also known as updating the individuals.
The search direction D_α from X_p to X_α is calculated as

$$D_\alpha = \left| C_1 \odot X_\alpha - X_p \right| \tag{1}$$

where C_1 is the neighborhood radius coefficient, a random vector with components in (0, 2); the operator ⊙ denotes element-wise vector multiplication, so C_1 ⊙ X_α is a neighborhood point of X_α; and |·| is the element-wise absolute value operator.
A mutant individual of X_p generated by X_α is written as

$$X_1 = X_\alpha - A_1 \odot D_\alpha \tag{2}$$

where A_1 is the search step, a random vector with components in (−2, 2).
Similarly, a mutant individual of X_p generated by X_β can be presented as

$$X_2 = X_\beta - A_2 \odot D_\beta \tag{3}$$

where D_β = |C_2 ⊙ X_β − X_p| is the corresponding search direction, C_2 is a random vector with components in (0, 2), and A_2 is a random vector with components in (−2, 2).
In the same way, a mutant individual of X_p generated by X_δ is

$$X_3 = X_\delta - A_3 \odot D_\delta \tag{4}$$

where D_δ = |C_3 ⊙ X_δ − X_p| is the corresponding search direction, C_3 is a random vector with components in (0, 2), and A_3 is a random vector with components in (−2, 2).
The new individual X_u, generated from the above three mutant individuals, can be expressed as

$$X_u = \frac{X_1 + X_2 + X_3}{3} \tag{5}$$

Finally, X_p in X is replaced with X_u, completing the update of the individual X_p.
This update strategy of GWO has two drawbacks. (i) Compared with X_p, all dimensions of X_u are mutated, which leads to divergence when solving high-dimensional MMOPs and reduces the quality of solutions. (ii) GWO uses only the three leading wolves as heuristic information to guide individuals toward the optimal solution. Because the distribution of the optimal solutions of MMOPs is complex, the three leading wolves cannot quickly estimate the region of the optimal solution, so the algorithm converges slowly.
The pseudocode of the original GWO algorithm is shown in Algorithm 1.
Algorithm 1. Pseudocode of the original GWO algorithm [21].
Initialize the grey wolf population X_i (i = 1, 2, …, M)
Initialize a, A, and C
Calculate the fitness of each search agent
X_α = the best search agent
X_β = the second-best search agent
X_δ = the third-best search agent
while (t < maximum number of iterations)
   for each search agent
    Update the position of X_i
   end for
   Update a, A, and C
   Calculate the fitness of all search agents
   Update X_α, X_β, and X_δ
   t = t + 1
end while
return X_α
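For readers who prefer code to pseudocode, one iteration of the update of Equations (1)-(5) can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation; the function and variable names are ours, and a is assumed to decay linearly from 2 to 0 outside this function, as in the original GWO paper [21].

```python
import numpy as np

def gwo_update(X, alpha, beta, delta, a):
    """One position update of the original GWO (Equations (1)-(5)).

    X                  : (M, d) array, the grey wolf population.
    alpha, beta, delta : (d,) arrays, the three leading wolves.
    a                  : scalar search-step parameter, decaying linearly from 2 to 0.
    """
    M, d = X.shape
    X_new = np.empty_like(X)
    for p in range(M):
        mutants = []
        for leader in (alpha, beta, delta):
            C = 2.0 * np.random.rand(d)          # C in (0, 2)
            A = 2.0 * a * np.random.rand(d) - a  # A in (-a, a)
            D = np.abs(C * leader - X[p])        # Eq. (1): search direction
            mutants.append(leader - A * D)       # Eqs. (2)-(4): mutant individuals
        X_new[p] = sum(mutants) / 3.0            # Eq. (5): averaging
    return X_new
```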

2.2. Fuzzy Adaptive Control Parameters

For GWOs, the design of adaptive control parameters is a challenge. In [28,29,30,31], nonlinear functions are used to design adaptive control parameters for GWOs, but such adaptive parameters are not effective in solving MMOPs, and the quality of the obtained solutions is not high. The main reason is that a predefined nonlinear function knows nothing about the complexity of the solution space of an MMOP and cannot use information from the current iteration to estimate the position of the optimal solution.
In recent studies, fuzzy methods have been used to address the design of adaptive control parameters for evolutionary algorithms. To achieve an optimal balance of exploitation and exploration in the chicken swarm optimization algorithm [43], a fuzzy system is applied to adaptively adjust the number of chickens and the random factors. In [44], a fuzzy system is used to design the crossover rate control parameter of the differential evolution algorithm, which improves the diversity of the population. In [45], a fuzzy inference system is employed to automatically tune the control parameters of the whale optimization algorithm, which improves the convergence of the algorithm. The common feature of these fuzzy methods is that the control parameters are updated with information about the optimal solution in the current iteration.
Fuzzy methods thus provide new ideas for the design of adaptive control parameters for GWOs. Inspired by this, in this study, a bivariate joint normal distribution is used as a fuzzy method to design adaptive control parameters for the GWO, and new evolutionary operators are designed based on these fuzzy control parameters. Finally, the improved GWO is employed to solve MMOPs.

3. The Proposed Algorithm

The flowchart of the FSGWO is shown in Figure 1. The algorithm parameters and the population are initialized first; the fitness of each individual in the initial population is calculated, and the three individuals with the best fitness values are selected as the three initial leading wolves. In the iterative part of the algorithm, new control parameters are obtained by sampling the bivariate joint normal distribution, and a new individual X_u is then generated through the mutation and crossover operations. If X_u is better than X_p, X_p is replaced with X_u; otherwise, it is kept. Finally, the parameters of the bivariate joint normal distribution are adaptively updated, and the algorithm continues to the next iteration. After the iterations end, the best leading wolf is output as the optimal solution.
The key points of the FSGWO are described below.

3.1. Mutation Strategy with a Fuzzy Search Direction

The mutation of a grey wolf is realized by adding a fuzzy search direction to the grey wolf. The term fuzzy search direction refers to the product of the fuzzy step r_a and the search direction D_c; its calculation is described as follows.
First, the three leading wolves are used to estimate the current position X_c of the prey, which is given by

$$X_c = \frac{X_\alpha + X_\beta + X_\delta}{3} \tag{6}$$
The search direction D_c of X_p is defined as

$$D_c = (X_c - X_p) + (X_{p1} - X_{p2}) \tag{7}$$

where X_{p1} and X_{p2} are two individuals randomly selected from X, with X_p ≠ X_{p1} ≠ X_{p2}. The term X_c − X_p represents the search information from X_p toward the prey and belongs to the global search information. The term X_{p1} − X_{p2} represents the search information exchanged between individuals and belongs to the local search information.
There are three differences between Equation (7) and Equation (1). (i) Equation (7) has no absolute value operator and retains the heuristic effect of negative signs on the search. (ii) In the early stage of the iterations, the positions of the three leading wolves are generally scattered. Equation (7) therefore uses the average of the three leading wolves to estimate the position of the prey, which reduces the adverse effects caused by the dispersion of the guiding positions and helps accelerate convergence. (iii) Grey wolves hunt collectively and surround the prey by exchanging information on its location. Equation (7) uses X_{p1} − X_{p2} to realize this exchange of prey location information between grey wolves, whereas Equation (1) has no such mechanism.
The mutant individual X_ν of X_p is generated with the fuzzy search direction, written as

$$X_\nu = X_p + r_a \odot D_c \tag{8}$$

where r_a is the fuzzy step, a random vector with components in (0, 1); r_a is a control parameter of FSGWO, and its generation method is described later. The product r_a ⊙ D_c is the fuzzy search direction, and X_p + r_a ⊙ D_c starts from X_p and searches for the prey along r_a ⊙ D_c.
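As an illustration, Equations (6)-(8) might be implemented as follows. This is a sketch under the assumption that the population is stored as a NumPy array; the function and variable names are ours, and r_a is the per-dimension fuzzy step sampled as described in Section 3.2.

```python
import numpy as np

def fuzzy_mutation(X, p, alpha, beta, delta, r_a):
    """Generate the mutant X_v of individual p (Equations (6)-(8))."""
    M = X.shape[0]
    X_c = (alpha + beta + delta) / 3.0      # Eq. (6): estimated prey position
    # Two distinct random indices p1, p2, both different from p
    candidates = [i for i in range(M) if i != p]
    p1, p2 = np.random.choice(candidates, size=2, replace=False)
    D_c = (X_c - X[p]) + (X[p1] - X[p2])    # Eq. (7): global + local information
    return X[p] + r_a * D_c                 # Eq. (8): fuzzy search direction
```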

3.2. Fuzzy Crossover Operator

All dimensions of X_ν are mutated. To make the search stable, it is necessary to select the values in some dimensions of X_ν and copy them into the corresponding dimensions of X_u. This is achieved by a fuzzy crossover operator, i.e., a crossover operator that uses the fuzzy crossover factor r_b.
Let j denote the jth dimension of X_p and X_ν. The crossover operation in the jth dimension can be expressed as

$$X_u^j = \begin{cases} X_\nu^j, & \text{if } r \le r_b^j \\ X_p^j, & \text{otherwise} \end{cases} \tag{9}$$

where r_b is the fuzzy crossover factor, a random vector with components in (0, 1); r_b^j is the value in the jth dimension of r_b; and r is a random scalar that follows the standard uniform distribution. The condition r ≤ r_b^j means that the value in the jth dimension of X_ν is copied to the jth dimension of X_u with a roulette strategy.
After completing the operation of Equation (9), a dimension ω is randomly specified, and the assignment X_u^ω = X_ν^ω is performed, which guarantees that at least one dimension of the new individual X_u comes from X_ν.
The term fuzzy control parameter means that the control parameters r_a and r_b are not directly related through formulas, yet both affect the diversity and the convergence of FSGWO, so there is an inherent fuzzy correlation between them. Therefore, the two control parameters are treated as related in this paper, and a bivariate joint normal distribution is used to describe that fuzzy relationship. Let r_c^j = [r_a^j, r_b^j] be a binary variable, where r_a^j and r_b^j are the values in the jth dimension of r_a and r_b, respectively. r_c^j follows a bivariate joint normal distribution with mean μ and covariance matrix Σ, denoted as

$$r_c^j \sim N(\mu, \Sigma) \tag{10}$$
where μ = [μ_{ra}, μ_{rb}], and μ_{ra} and μ_{rb} are both scalars. The covariance matrix is defined as

$$\Sigma = \begin{bmatrix} s_1 \times s_2 & 0 \\ 0 & s_1 \times s_3 \end{bmatrix} \tag{11}$$

where s_1 is a random scalar following the standard uniform distribution, and s_2 and s_3 are random scalars following the standard normal distribution. The values of s_2 and s_3 obtained by sampling the standard normal distribution may be greater than 1, so the diagonal elements of Σ are diverse, which also makes r_c^j diverse.
By sampling Equation (10), a matrix r_c with d rows and two columns can be obtained as follows:

$$r_c = \begin{bmatrix} r_c^1 \\ r_c^2 \\ \vdots \\ r_c^d \end{bmatrix} = [r_a, r_b] = \begin{bmatrix} r_a^1 & r_b^1 \\ r_a^2 & r_b^2 \\ \vdots & \vdots \\ r_a^d & r_b^d \end{bmatrix} \tag{12}$$

where d is the dimensionality of the MMOP. The vectors r_a and r_b are fuzzily related through Equation (10), so the crossover operation of Equation (9) is referred to as the fuzzy crossover operation. The control parameters in r_c are exactly those used by one d-dimensional individual in a complete mutation and crossover operation.
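A sketch of sampling Equations (10)-(12) and applying the crossover of Equation (9) is given below. Because Σ in Equation (11) is diagonal, the two columns can be sampled coordinate-wise; the absolute value guard against negative sampled variances and the clipping to (0.001, 0.999) follow our reading of the correction rules in Section 3.4, and the names are ours.

```python
import numpy as np

def sample_control_parameters(mu, d):
    """Sample the (d, 2) control parameter matrix r_c (Equations (10)-(12)).

    mu : length-2 array, the current mean [mu_ra, mu_rb].
    d  : problem dimensionality.
    """
    s1 = np.random.rand()            # standard uniform scalar
    s2, s3 = np.random.randn(2)      # standard normal scalars
    # Eq. (11): Sigma = diag(s1*s2, s1*s3); |.| guards against negative variances.
    std = np.sqrt(np.abs(np.array([s1 * s2, s1 * s3])))
    r_c = mu + std * np.random.randn(d, 2)
    return np.clip(r_c, 0.001, 0.999)        # boundary correction (Section 3.4)

def fuzzy_crossover(X_p, X_v, r_b):
    """Eq. (9) plus the forced copy on one random dimension."""
    d = X_p.size
    mask = np.random.rand(d) <= r_b          # roulette selection per dimension
    X_u = np.where(mask, X_v, X_p)
    w = np.random.randint(d)                 # at least one dimension from X_v
    X_u[w] = X_v[w]
    return X_u
```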

3.3. Updated Parameters of the Bivariate Joint Normal Distribution

To improve the diversity of r_a and r_b, μ and Σ must be updated before each iteration of FSGWO.

3.3.1. Update of μ

Updating μ with a fuzzy perturbation is described as follows.
Let X_p^{old} and X_p^{new} denote X_p before and after the update, with fitness values f(X_p^{old}) and f(X_p^{new}), respectively. The absolute value |f(X_p^{old}) − f(X_p^{new})| measures the change in the fitness value caused by the update of X_p. In population X, the individual with the largest |f(X_p^{old}) − f(X_p^{new})| is denoted by X_m, where m is the ID of X_m:

$$X_m = \arg\max_{X_p \in X} \left| f(X_p^{old}) - f(X_p^{new}) \right| \tag{13}$$
The control parameters of X_m are stored in the mth row of r_c, denoted by r_c^m = [r_a^m, r_b^m]. r_c^m can be regarded as heuristic information for updating μ, namely, the fuzzy perturbation. Updating μ with r_c^m can be written as

$$\mu = (1 - c) \times \mu + c \times r_c^m \tag{14}$$

where c is a conversion factor, a constant in (0, 1). The term c × r_c^m takes part of r_c^m as heuristic information to update μ. To avoid an excessive perturbation that would cause the algorithm to diverge, c is usually 0.1 or 0.2.

3.3.2. Update of ∑

The update of Σ is relatively simple. First, s_1 is obtained by sampling the standard uniform distribution, and s_2 and s_3 are sampled from the standard normal distribution. Then, a new Σ is obtained by substituting s_1, s_2, and s_3 into Equation (11).
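Both updates can be combined in a single routine, as in the sketch below. We read Section 3.3.1 as storing, for each individual X_m, the control parameter pair [r_a^m, r_b^m] in row m of r_c; the clipping of μ to (0.01, 0.99) follows the rule in Section 3.4, and the function name is ours.

```python
import numpy as np

def update_distribution(mu, r_c, f_old, f_new, c=0.1):
    """Update mu and Sigma before the next iteration (Eqs. (11), (13), (14)).

    mu           : length-2 array, current mean of the bivariate distribution.
    r_c          : array whose row m holds [r_a^m, r_b^m] for individual X_m.
    f_old, f_new : fitness vectors before and after the population update.
    c            : conversion factor, typically 0.1 or 0.2.
    """
    m = np.argmax(np.abs(f_old - f_new))     # Eq. (13): largest fitness change
    mu = (1.0 - c) * mu + c * r_c[m]         # Eq. (14): fuzzy perturbation
    mu = np.clip(mu, 0.01, 0.99)             # boundary rule of Section 3.4
    s1 = np.random.rand()                    # resample Sigma (Section 3.3.2)
    s2, s3 = np.random.randn(2)
    Sigma = np.diag([s1 * s2, s1 * s3])      # Eq. (11)
    return mu, Sigma
```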

3.4. Steps of FSGWO

The steps of FSGWO are shown in Algorithm 2.
Algorithm 2. Steps of the FSGWO algorithm.
Input: grey wolf population size M, maximum number of iterations T, dimension d of the problem, upper bound ub and lower bound lb of the variables.
Output: optimal solution X_α.
1: Initialize the wolf population X and the parameters c, μ, and Σ.
2: Calculate the fitness value f(X_p) of each grey wolf; the fitness of the entire population is denoted by f(X_old). Set f(X_new) = f(X_old).
3: Initialize the three leading wolves X_α, X_β, and X_δ.
4: for t = 1 to T
4.1:   Generate the control parameter matrix r_c with Equation (12).
4.2:   for p = 1 to M
4.2.1:     Generate the mutant individual X_ν with Equation (8).
4.2.2:     Generate X_u with Equation (9); then randomly specify a dimension ω and perform the assignment X_u^ω = X_ν^ω to produce the new individual X_u.
4.2.3:     Calculate the fitness value f(X_u) of X_u.
4.2.4:     if f(X_u) < f(X_p) then
             X_p = X_u
             in f(X_new), set f(X_p) = f(X_u)
           end if
4.2.5:   end for
4.3:   According to f(X_new), update X_α, X_β, and X_δ.
4.4:   Calculate X_m with Equation (13) and extract r_c^m from r_c; update μ with Equation (14) and Σ with Equation (11).
4.5:   f(X_old) = f(X_new).
4.6: end for
5: Output the optimal solution X_α.
Some details of the algorithm steps in Algorithm 2 are described below.
(i) In step 1, the initial value of μ is [0.5, 0.5], and the initial value of Σ is [0.1, 0; 0, 0.1]. The value of c is 0.1 or 0.2, which means taking 10% or 20% of r_c^m as the fuzzy perturbation.
(ii) In step 4.1, the values of the elements in r_c, which are sampled from the bivariate joint normal distribution N(μ, Σ), may fall outside (0, 1); an element's value is corrected to 0.999 if it crosses the upper bound or to 0.001 if it crosses the lower bound.
(iii) In steps 4.2.1 and 4.2.2, the value of each element in X_ν and X_u must lie within [lb, ub]; if the value of an element exceeds the upper bound, it is corrected to rand × ub, or to rand × lb if it exceeds the lower bound, where rand is a random number following the standard uniform distribution (see the sketch after this list).
(iv) In step 4.4, the value of each element in μ must lie within (0, 1); if the value of an element in μ exceeds the upper bound, it is corrected to 0.99, or to 0.01 if it exceeds the lower bound.
(v) Comparing the FSGWO algorithm flow in Algorithm 2 with the original GWO flow in Algorithm 1 shows that the operations absent from Algorithm 1 mainly include the fuzzy control parameters in step 4.1, the fuzzy crossover operation in step 4.2.2, the noninferior selection in step 4.2.4, and the fuzzy perturbation in step 4.4.
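The boundary correction in item (iii) amounts to a simple clamping rule; a minimal sketch for scalar bounds lb and ub follows (the function name is ours).

```python
import numpy as np

def correct_solution(x, lb, ub):
    """Item (iii): an element crossing a bound is replaced by rand * bound."""
    x = x.copy()
    too_high = x > ub                        # assumes scalar bounds lb, ub
    too_low = x < lb
    x[too_high] = np.random.rand(too_high.sum()) * ub
    x[too_low] = np.random.rand(too_low.sum()) * lb
    return x
```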

3.5. Analysis of Computational Complexity

Let M be the population size, d the dimension of the problem, and T the number of iterations.
According to the steps in Algorithm 2, the computational cost of FSGWO is concentrated in the iterative part. The cost of one iteration mainly comprises the population mutation of O(M × d), the population crossover of O(M × d), the calculation of the population fitness of O(M), the update of the leading wolves of O(M), and the update of the parameters of O(1). The cost of T iterations is O(T × (M × d + M × d + M + M + 1)); therefore, the computational complexity of FSGWO is O(T × M × d).
According to the steps in Algorithm 1, the computational cost of the original GWO is likewise concentrated in the iterative part. The cost of one iteration mainly comprises the population mutation guided by the three leading wolves of O(M × d + M × d + M × d), the calculation of the population fitness of O(M), the update of the leading wolves of O(M), and the update of the parameters of O(1). The cost of T iterations is O(T × (3 × M × d + 2 × M + 1)); therefore, the computational complexity of the original GWO is O(T × M × d).
According to the above analysis, the computational complexity of FSGWO is the same as that of the original GWO.

4. Results

In this section, the FSGWO algorithm is verified on 30 test functions of IEEE CEC2014 [46] and 5 engineering application problems.
The compared algorithms include GWO [21], HPSOGWO [47], SOGWO [36], EO [48], and MPSO [49]. GWO is the original GWO. HPSOGWO is an improved GWO with a particle swarm individual update strategy. SOGWO uses selective opposition to enhance the diversity of GWO, and the convergence speed is faster. EO is inspired by control volume mass balance models used to estimate both dynamic and equilibrium states. MPSO is a particle swarm optimization algorithm using chaotic nonlinear inertia weights and has a good balance of diversity and convergence.
The key parameters of the competitive algorithms are shown in Table 1; the source codes of those algorithms were provided by the original papers. The parameter values of the competitive algorithms were taken from the original papers and the default settings of the source codes. The setting of the control parameter values of FSGWO is described in Algorithm 2. In Table 1, N is the population size, which is uniformly set to 50 in this paper. In EO, a1 is a constant that controls the exploration ability, a2 is a constant that manages the exploitation ability, and GP is a parameter used to balance exploration and exploitation. In MPSO, c1 and c2 are the acceleration factors, and ω1 and ω2 are inertia weights used to balance exploration and exploitation. In GWO and SOGWO, a is the linearly attenuated search-step parameter defined in Section 1. In HPSOGWO, rand is a random number in (0, 1), and ω is an inertia weight. In FSGWO, c is the conversion factor in (0, 1), and r_a and r_b are the adaptive control parameters.

4.1. Results of the Test Functions of CEC2014

The IEEE Congress on Evolutionary Computation 2014 (CEC2014) test suite has 30 complex optimization functions [46], where F1–F3 are unimodal functions and F4–F30 are multimodal functions. In this paper, the proposed algorithm was verified on all 30 functions (F1–F30) of CEC2014. According to the requirements of the competition, the range of each decision variable was [−100, 100], and the maximum number of fitness function evaluations was d × 10^4. Each experiment was repeated 51 times. The absolute value |f(x) − f(x*)| was the final result of one run, where f(x) is the optimal value of the function obtained by the algorithm and f(x*) is its theoretical optimal value. The smaller the value of |f(x) − f(x*)|, the closer the obtained optimum was to the theoretical optimum; if |f(x) − f(x*)| < 10^−8, the result was recorded as 0.
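The scoring rule can be written compactly as follows; this is a sketch of the protocol quoted above, not code from the paper.

```python
def cec2014_error(f_x, f_star):
    """Final result of one run: |f(x) - f(x*)|, zeroed below 1e-8."""
    err = abs(f_x - f_star)
    return 0.0 if err < 1e-8 else err

# Budget per run: d * 10**4 fitness evaluations (e.g., 3e5 for d = 30),
# and each (algorithm, function) pair is repeated 51 times.
```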

4.1.1. Results of 30-Dimensional Test Functions

Table 2 shows the calculation results of the related algorithms, in which the mean (Mean) and standard deviation (STD) were calculated from the results of 51 runs of each algorithm. In Table 2, the optimal mean and standard deviation for each function are highlighted with a bold font and gray background.
From the mean results in Table 2, the results of FSGWO on the unimodal functions F1–F3 were at least four orders of magnitude better than those of GWO, HPSOGWO, and SOGWO. For F1, the exponent of the FSGWO result was e+03, while the exponents of the GWO, HPSOGWO, and SOGWO results were all e+07. For F2, the exponent of the FSGWO result was e+00, while those of GWO, HPSOGWO, and SOGWO were e+09, e+09, and e+08, respectively. For F3, the exponent of the FSGWO result was e+00, while those of GWO, HPSOGWO, and SOGWO were all e+04. The results on the unimodal functions show that FSGWO had an excellent local optimization ability.
From the mean in Table 2, it can be seen that the calculation results of the proposed algorithm for 24 functions (F1–F5, F8–F12, F14, F16–F23, and F26–F30) were better than those of the competitive algorithms, accounting for 80.0%, which indicates that FSGWO could obtain high-quality solutions for MMOPs.
From the STD in Table 2, it can be seen that the results of the proposed algorithm for 23 functions (F1–F5, F8–F12, F14, F16–F23, F26, and F28–F30) were better than those of the competitive algorithms, accounting for 76.7%, which indicates that the FSGWO algorithm had good stability and convergence.
The mean results in Table 2 were analyzed with the Wilcoxon signed-rank test at a significance level of 0.05; the results are shown in Table 3. All p values in Table 3 were less than 0.05, which means that the mean values of FSGWO were significantly different from those of the other algorithms. The results in Table 2 and Table 3 show that, for the 30 30D test functions, the quality of the solutions obtained by FSGWO was better than that of the competitive algorithms.
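Such a comparison can be reproduced with SciPy as sketched below, assuming the 30 per-function mean errors of two algorithms are available as arrays; the data here are placeholders, not the values from Table 2.

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder per-function mean errors over 51 runs (30 values each).
fsgwo_means = np.random.rand(30)
gwo_means = fsgwo_means + np.random.rand(30)

stat, p_value = wilcoxon(fsgwo_means, gwo_means)
print(f"W = {stat:.1f}, p = {p_value:.4e}")  # p < 0.05: significant difference
```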
According to the data in Table 2, the percentage improvement in calculation accuracy of FSGWO over each competing algorithm can be calculated; the results are shown in Table 4. In Table 4, a negative number means that the calculation accuracy of FSGWO for that test function was not as good as that of the competitive algorithm, and the term Average represents the average percentage improvement of FSGWO over the competitive algorithm across the 30 test functions. Table 4 shows that for the 30 30D test functions, the average calculation accuracy of FSGWO was 46.98%, 54.35%, 64.84%, 69.02%, and 62.27% higher than those of EO, MPSO, GWO, HPSOGWO, and SOGWO, respectively. The calculation accuracy of FSGWO was thus significantly higher than those of the competitive algorithms.
Figure 2 shows box plots of the related algorithms for the 30 test functions, drawn from the 51 calculation results of each algorithm. In Figure 2, the short red line represents the median; the black box spans the upper quartile (Q3) and the lower quartile (Q1); and the blue solid prisms represent outliers.
For 27 functions (F1–F5, F7–F23, and F26–F30), the box length of FSGWO was shorter than those of the competitive algorithms or comparable to them, which meant that the proposed algorithm had good convergence; therefore, the results of 51 runs were relatively concentrated, and the box length was shorter.
For 28 functions (F1–F23 and F26–F30), the median of the proposed algorithm was smaller than those of the competitive algorithms or comparable to them, which indicated that the proposed algorithm had good diversity and local optimization ability and could find high-quality solutions.
For 28 functions (F1–F25 and F28–F30), the number of outliers of FSGWO was less than those of the compared algorithms, or the outliers were mainly distributed around the median, which indicated that the calculation results of FSGWO were close to the normal distribution, and the robustness of FSGWO was better than those of the compared algorithms.
Figure 3 shows the convergence curves of the related algorithms for the 30 30D test functions, where the abscissa t is the number of iterations, and the ordinate f(x) is the average fitness of 51 independent experiments of each algorithm.
From the perspective of the changing tendencies of the convergence curves in the early stage of the iterations, the fitness values of the proposed algorithm decreased faster than those of the competitive algorithms for 29 functions (F1–F5 and F7–F30), which indicated that FSGWO had good diversity and convergence and could quickly locate the optimal solution in the early stage of the iterations.
From the overall change tendencies of the curves and the final convergence positions, for the 25 functions (F1–F5, F8–F12, F14–F23, and F26–F30), the convergence curves of FSGWO were better than those of the competitive algorithms.

4.1.2. Results of 50-Dimensional Test Functions

The IEEE CEC2014 test functions were set to 50 dimensions. Table 5 shows the calculation results of the related algorithms for the 50-dimensional functions. In Table 5, the optimal mean and standard deviation for each function are highlighted with a bold font and gray background. As the dimensions increased, so did the complexity of the MMOPs. For 23 functions (F1–F5, F7–F12, F14, F16–F18, F20–F23, F26, and F28–F30), the mean values of FSGWO were better than those of the competitive algorithms, accounting for 76.7%.
The Wilcoxon signed-rank test was used to analyze the mean values in Table 5, with a significance level of 0.05. The results of the test are shown in Table 6. Table 6 shows that all p values were less than 0.05, which meant that the calculation results of FSGWO for the 50-dimensional functions were significantly different from those of the competitive algorithms.
According to the data in Table 5, the percentage improvement in calculation accuracy of FSGWO over each competing algorithm can be calculated; the results are shown in Table 7. In Table 7, a negative number means that the calculation accuracy of FSGWO for that test function was not as good as that of the competitive algorithm, and the term Average represents the average percentage improvement of FSGWO over the competitive algorithm across the 30 test functions. Table 7 shows that for the 30 50-dimensional test functions, the average calculation accuracy of FSGWO was 33.63%, 46.45%, 62.94%, 64.99%, and 59.82% higher than those of EO, MPSO, GWO, HPSOGWO, and SOGWO, respectively. The calculation accuracy of FSGWO was significantly higher than those of the competitive algorithms for the 50D test functions.
Figure 4 shows the convergence curves of the related algorithms for the 50-dimensional functions, which were plotted with an average of 51 calculations of each algorithm. From the changing tendencies of the convergence curves and the final convergence positions, the convergence tendencies of the proposed algorithm for 25 test functions (F1–F5, F7–F14, F16–F18, F20–F23, and F26–F30) were better than those of the competitive algorithms.

4.1.3. Verification of the Validity of the Fuzzy Control Parameters

In this paper, the correlation between the control parameters r_a and r_b is modeled by the bivariate joint normal distribution N(μ, Σ). A comparative experiment was conducted to verify the effectiveness of this modeling idea: the control parameters r_a and r_b were instead treated as independent random variables, each following the standard uniform distribution, while the other parts of the FSGWO algorithm remained unchanged. This variant is denoted as FSGWO1.
The 30-dimensional functions (F1–F30) were solved by FSGWO and FSGWO1, and the calculation results are shown in Table 8. In Table 8, the optimal mean and standard deviation for each function are highlighted with a bold font and gray background. For 24 complex multimodal functions (F1–F5, F7–F12, F14–F16, F18, F19, F21–F23, F25, F26, and F28–F30), the mean values of FSGWO were better than those of FSGWO1, accounting for 80.0%. The Wilcoxon signed-rank test was used to analyze the mean values in Table 8 at a significance level of 0.05; the results are shown in Table 9. The p value (2.7610e−03) was less than the significance level, indicating a significant difference in the mean between FSGWO and FSGWO1.
Figure 5 demonstrates the convergence curves of FSGWO and FSGWO1 for the 30 30D test functions, plotted with the averages of 51 runs of the two algorithms. The change tendencies and the final convergence positions of FSGWO were better than or comparable to those of FSGWO1 for 29 functions (F1–F5 and F7–F30).
According to the results of Table 8, Table 9, and Figure 5, the optimization ability of FSGWO was better than that of FSGWO1. Therefore, using a bivariate joint normal distribution as a fuzzy method to realize the adaptive adjustment of the control parameters r_a and r_b is valid, and this fuzzy method helps improve the convergence speed of FSGWO and the quality of the solutions.

4.1.4. Verification of the Effectiveness of the Fuzzy Perturbation Strategy

In Equation (14), the fuzzy perturbation r_c^m is used to update the mean μ of the bivariate joint normal distribution. A comparative experiment was used to verify that r_c^m is an effective design: r_c^m is extracted from r_c using the index m of Equation (13); in the variant, m was generated randomly instead of with Equation (13), while the other parts of FSGWO remained unchanged. This variant is denoted as FSGWO2.
The 30-dimensional functions were solved with FSGWO and FSGWO2, and the results are shown in Table 10. In Table 10, the optimal mean and standard deviation for each function are highlighted with a bold font and gray background. The mean values of FSGWO were better than those of FSGWO2 for 24 functions (F1–F5, F7–F12, F14, F15, F17–F19, F21–F24, F26, and F28–F30), accounting for 80.0%. Table 11 shows the result of the Wilcoxon test on the mean values of Table 10 at a significance level of 0.05. The p value was 2.9719e−03, less than the significance level, indicating a significant difference in the mean between FSGWO and FSGWO2.
Figure 6 shows the convergence curves of FSGWO and FSGWO2, plotted with the averages of 51 runs of the two algorithms. FSGWO converged faster than, or comparably to, FSGWO2 for 29 functions (F1–F5 and F7–F30), accounting for 96.7%.
The results in Table 10 and Table 11 and Figure 6 show that the design idea of fuzzy perturbation is effective.

4.2. Results for Economic Load Dispatch Problems of Power Systems

Economic load dispatch (ELD) is a complex optimization problem with constraints in power systems [50,51]. The task of ELD is to reasonably dispatch the load of the system to each generator to minimize the fuel cost of the system and satisfy the relevant constraints.

4.2.1. The Basic Model of the ELD Problem

The economic load dispatch of thermal power units is discussed in this paper. The fuel cost of an ELD can be approximately expressed as

$$\min F = \sum_{i=1}^{N_G} F_i(P_i) = \sum_{i=1}^{N_G} \left[ a_i P_i^2 + b_i P_i + c_i + \left| e_i \sin\left( f_i \left( P_i^{\min} - P_i \right) \right) \right| \right] \tag{15}$$

where i is the ID of a generator unit and N_G is the total number of generators, which is also the dimension of the ELD. For the ith unit, P_i is the output power, P_i^min is the minimum output power, F_i(P_i) is the generation cost function, and a_i, b_i, c_i, e_i, and f_i are the power generation cost coefficients. The absolute value operator |·| maps the negative half of the sine function into the positive domain, generating multimodality; thus, the ELD is a constrained high-dimensional multimodal optimization problem.
The main constraints of ELDs are as follows.
(i)
Power balance constraints.

$$\sum_{i=1}^{N_G} P_i = P_D + P_L \tag{16}$$

These constraints require that the total power generated by all units equal the system load P_D plus the transmission loss P_L.
(ii)
Generating capacity constraints.

$$P_i^{\min} \le P_i \le P_i^{\max} \tag{17}$$

where P_i^min and P_i^max are the lower and upper limits of the output power of the ith unit, respectively; these constraints require that P_i lie within [P_i^min, P_i^max].
(iii)
Ramp rate limits.

$$-\Delta P_i \le P_i^t - P_i^{t-1} \le \Delta P_i \tag{18}$$

where P_i^{t−1} and P_i^t are the output powers of the ith unit in the (t − 1)th and tth periods, respectively, and ΔP_i is the maximum allowed change in the output power of the ith unit between two adjacent periods. When multiperiod load dispatch is involved, |P_i^t − P_i^{t−1}| cannot exceed ΔP_i.
(iv)
Prohibited operating zones.

$$P_i \le \underline{P}_i^{pz} \quad \text{or} \quad P_i \ge \overline{P}_i^{pz} \tag{19}$$

where P̲_i^pz and P̄_i^pz are the lower and upper limits of the prohibited operating zone of the ith unit, respectively. Because of physical limitations of generator components and unstable factors such as steam valve operation or bearing vibration, the output power of the ith unit is prohibited in some zones.
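In practice, Equations (15)-(19) are usually folded into a single penalized fitness function for the optimizer. The following is a minimal sketch handling only the fuel cost and the power balance constraint; the penalty scheme and its weight are our simplifications, not the paper's.

```python
import numpy as np

def eld_cost(P, a, b, c, e, f, P_min, P_D, P_L=0.0, penalty=1e6):
    """Penalized fuel cost of Eq. (15) with the balance constraint of Eq. (16).

    P             : (NG,) output powers of the units (decision variables).
    a, b, c, e, f : (NG,) cost coefficients of each unit.
    P_min         : (NG,) minimum output powers.
    P_D, P_L      : system load and transmission loss.
    """
    fuel = np.sum(a * P**2 + b * P + c + np.abs(e * np.sin(f * (P_min - P))))
    balance_violation = abs(np.sum(P) - (P_D + P_L))
    return fuel + penalty * balance_violation
```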

4.2.2. Results of ELD Cases

The data of the static ELD cases were taken from the IEEE CEC2011 competition dataset [51]; the numbers of units were 40 and 140, respectively. According to the requirements of the competition, the maximum number of fitness function evaluations was 15,000. The best result (Best), mean (Mean), median (Median), worst result (Worst), and standard deviation (STD) were calculated from 25 independent runs of each algorithm.
In addition to the six algorithms shown in Table 1, the compared algorithms included the island-based harmony search (iHS) [52], the intellects-masses optimizer (IMO) [53], the modified intellects-masses optimizer (MIMO) [53], the adaptive population-based simplex (APS 9) [54], the enhanced salp swarm algorithm (ESSA) [55], and the genetic algorithm with a new multiparent crossover (GA-MPC) [56]. iHS is a multipopulation evolutionary algorithm with an island-based harmony search. IMO is a dual-population culture algorithm whose parameters hardly need to be adjusted. MIMO is an IMO algorithm with a trust-region reflection strategy and strong local search abilities. APS 9 is an improved adaptive population-based simplex method. ESSA is a multistrategy enhanced salp swarm algorithm. GA-MPC is a genetic algorithm whose crossover uses three consecutive parents.
Table 12 shows the results of the related algorithms for the 40-unit case. The results of the 1st–6th algorithms were calculated and are presented in this paper, and the results of the 7th–12th algorithms were taken from the original papers. In Table 12, the best values of the indicators are highlighted with a bold font and gray background; the symbol "—" indicates that the data are not provided in the original paper.
From the Mean index in Table 12, the exponents of the results of the related algorithms were all e+05, and the coefficients were also very close, which means that all algorithms could approximate the optimal solution; the small differences among the results were mainly caused by the different local optimization abilities of the algorithms. The Best, Mean, and Median of FSGWO were 1.2260e+05, 1.2514e+05, and 1.2519e+05, respectively, which were better than those of the competitive algorithms, indicating that FSGWO had relatively strong local optimization ability and stability.
Table 13 shows the results of the related algorithms for the 140-unit case. In Table 13, the best values of the indicators are highlighted with a bold font and gray background. The Best, Mean, and Median of FSGWO were 1.7551e+06, 1.8119e+06, and 1.8107e+06, respectively, which were again better than those of the competitive algorithms, indicating that the proposed algorithm retained its excellent optimization performance on the high-dimensional ELD problem. The simplex of ESSA and the trust region of MIMO are both classical numerical optimization strategies; Table 13 shows that the fuzzy search strategy of FSGWO was competitive with those numerical strategies when used to solve high-dimensional ELD problems.
According to the results of Table 12 and Table 13, when FSGWO was used to solve high-dimensional ELD problems, its optimization ability was better than those of the competitive algorithms, and higher-quality solutions could be obtained by FSGWO.

4.3. Design of Three-Bar Truss

In structural engineering, a truss is a triangulated system that provides an efficient way to span long distances. Because the members of a truss carry only axial force, the purpose of truss design is to use less material while maintaining the effectiveness of the entire system. A reduction in the amount of material used is usually expressed as a reduction in the diameters of the members. A three-bar planar truss structure is shown in Figure 7 [57]. In this problem, x1, x2, and x3 are the normalized diameters of the three members, and x3 has the same diameter as x1. The aim is to minimize the volume of the three-bar truss by minimizing the values of x1 and x2.
The problem shown in Figure 7 can be expressed as an optimization problem:

$$\begin{aligned} \min\ & f(x_1, x_2) = \left( 2\sqrt{2}\,x_1 + x_2 \right) \times l \\ \text{s.t.}\ & \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\, r - \rho \le 0 \\ & \frac{x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\, r - \rho \le 0 \\ & \frac{1}{x_1 + \sqrt{2}\,x_2}\, r - \rho \le 0 \\ & l = 100\ \mathrm{cm},\quad r = 2\ \mathrm{kN/cm^2},\quad \rho = 2\ \mathrm{kN/cm^2} \end{aligned} \tag{20}$$

where x_1 and x_2 are both in [0, 1].
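The objective and constraints of Equation (20) can be written as a single penalized function suitable for FSGWO; a sketch follows, where the penalty scheme is ours, not the paper's.

```python
import numpy as np

def three_bar_truss(x, l=100.0, r=2.0, rho=2.0, penalty=1e6):
    """Penalized objective for Eq. (20); x = [x1, x2], both in [0, 1]."""
    x1, x2 = x
    volume = (2.0 * np.sqrt(2.0) * x1 + x2) * l
    g = [
        (np.sqrt(2.0) * x1 + x2) / (np.sqrt(2.0) * x1**2 + 2.0 * x1 * x2) * r - rho,
        x2 / (np.sqrt(2.0) * x1**2 + 2.0 * x1 * x2) * r - rho,
        1.0 / (x1 + np.sqrt(2.0) * x2) * r - rho,
    ]
    return volume + penalty * sum(max(0.0, gi) for gi in g)
```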
FSGWO was applied to solve the optimization problem of Equation (20). The competitive algorithms included the memory-based grey wolf optimizer (m-GWO) [58], the modified sine cosine algorithm (m-SCA) [59], moth-flame optimization (MFO) [60], and cuckoo search (CS) [57]. Table 14 shows the results of the related algorithms for the three-bar truss design problem. In Table 14, the best objective function value is highlighted with a bold font and gray background; the data for the competitive algorithms are taken from the original papers. From Table 14, the objective function value of FSGWO is 263.8958, which is better than those of the competitive algorithms.

4.4. Design of Pressure Vessel

Figure 8 shows a cylindrical pressure vessel with hemispherical heads at the ends, designed according to the ASME boiler and pressure vessel code [57]. This problem has four decision variables: the thickness of the shell (Ts), the thickness of the head (Th), the inner radius (R), and the length of the cylindrical section excluding the head (L). The goal is to minimize the production cost of the vessel while satisfying the relevant constraints.
The four decision variables of the pressure vessel are represented by x1, x2, x3, and x4. The problem shown in Figure 8 can be expressed as an optimization problem:

$$\begin{aligned} \min\ & f(x_1, x_2, x_3, x_4) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^2 + 3.1661\,x_1^2 x_4 + 19.84\,x_1^2 x_3 \\ \text{s.t.}\ & -x_1 + 0.0193\,x_3 \le 0 \\ & -x_2 + 0.00954\,x_3 \le 0 \\ & -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1296000 \le 0 \\ & x_4 - 240 \le 0 \end{aligned} \tag{21}$$

where x_1 and x_2 are in [0, 99], and x_3 and x_4 are in [10, 200].
FSGWO was applied to solve the optimization problem of Equation (21). The competitive algorithms included the grey wolf optimization method based on a beetle antennae strategy (BGWO) [61], the improved grey wolf optimizer (I-GWO) [62], moth-flame optimization with orthogonal learning and Broyden-Fletcher-Goldfarb-Shanno (BFGSOLMFO) [63], and the slime mould algorithm (SMA) [64]. Table 15 shows the results of the related algorithms for the pressure vessel problem. In Table 15, the best objective function value is highlighted with a bold font and gray background; the data for the competitive algorithms are taken from the original papers. From Table 15, the objective function value of FSGWO is 5885.3328, which is better than those of the competitive algorithms.

4.5. Design of Gear Train

Figure 9 shows a gear train design problem involving four gears A, B, C, and D [59]. The numbers of teeth of the four gears are represented by the variables x1, x2, x3, and x4; each is an integer in [12, 60]. The goal is to minimize the deviation of the gear ratio from the target value of 1/6.931.
The problem of Figure 9 can be expressed as an optimization problem:

$$\min f(x_1, x_2, x_3, x_4) = \left( \frac{1}{6.931} - \frac{x_1 x_3}{x_2 x_4} \right)^2 \quad \text{s.t. } 12 \le x_i \le 60,\ x_i \in \mathbb{Z}^+,\ i = 1, 2, 3, 4 \tag{22}$$
FSGWO was applied to solve the optimization problem of Equation (22). The competitive algorithms included m-SCA [59], CS [57], the linear prediction evolution algorithm (LPE) [65], and the hybrid grey wolf optimizer and sine cosine algorithm (GWOSCA) [66]. Table 16 shows the results of the related algorithms for the gear train design problem. In Table 16, the best objective function values are highlighted with a bold font and gray background; the data for the competitive algorithms are taken from the original papers. From Table 16, the objective function value of FSGWO is 2.7009e−12, which is as good as those of m-SCA, CS, and LPE. Moreover, Table 16 shows that gear train design is a typical multimodal optimization problem: for the objective function value of 2.7009e−12, there are three different nearly optimal solutions corresponding to different gear train designs. According to the cost, volume, weight, and reliability of the gear train, decision-makers can select a design that meets their requirements from these different solutions.

4.6. Design of Cantilever Beam

Figure 10 shows a cantilever beam design problem with five nodes [59]. Each node in Figure 10 is regarded as a square hollow cross-section with constant thickness. The first node is fixedly supported, and an external vertical force acts at the end of the fifth node. The variable x_i represents the width of the cross-section of the ith node, and its value is in [0.01, 100]. The goal is to minimize the weight of the cantilever beam.
The problem of Figure 10 can be expressed as an optimization problem:

$$\begin{aligned} \min\ & f(x_1, x_2, x_3, x_4, x_5) = 0.0624 \times \sum_{i=1}^{5} x_i \\ \text{s.t.}\ & \frac{61}{x_1^3} + \frac{37}{x_2^3} + \frac{19}{x_3^3} + \frac{7}{x_4^3} + \frac{1}{x_5^3} \le 1,\quad 0.01 \le x_i \le 100,\ i = 1, \dots, 5 \end{aligned} \tag{23}$$
FSGWO was applied to solve the optimization problem of Equation (23). The competitive algorithms included CS [57], BGWO [61], m-SCA [59], and MFO [60]. Table 17 shows the results of the related algorithms for the cantilever beam design problem. In Table 17, the best objective function values are highlighted with a bold font and gray background; the data for the competitive algorithms are taken from the original papers. From Table 17, the objective function value of FSGWO is 1.33996, which is as good as that of BGWO.

5. Discussion

The convergence curves in Figure 3 and Figure 4 show that the fitness values of the FSGWO algorithm decreased faster than those of the competitive algorithms in the early stage of the iterations. This advantage is related to the improvement of FSGWO in population updating. Step 4.2.4 in Algorithm 2 uses the noninferior selection strategy for population updating, which allows only better new individuals into the population. As the entire grey wolf population can be used to estimate the position of the optimal solution, the probability of detecting the region where the optimal solution is located increases, and FSGWO converges faster in the early stage of the iterations. In contrast, the traditional GWO uses only the three leading wolves to estimate the region of the optimal solution, so the probability of finding the optimal solution is relatively low and the convergence curve decreases slowly.
The results in Tables 2–17 and Figure 2 show that the FSGWO algorithm can obtain higher-quality solutions, which indicates that the proposed algorithm has strong optimization ability and stability. These advantages come from the following improvements. (i) The fuzzy search direction D_c carries both global and local search information, which enhances the ability of FSGWO to approximate the optimal solution, thereby producing high-quality solutions. (ii) The new individual X_u generated by the fuzzy crossover operator does not mutate in all dimensions, which effectively controls the divergence of the algorithm and improves the stability and robustness of FSGWO. (iii) The bivariate joint normal distribution and the fuzzy perturbation adaptively adjust the control parameters r_a and r_b of FSGWO, which not only reduces the blindness of control parameter selection but also helps improve the local search ability and stability of FSGWO.
Other evolutionary algorithms also have control parameters, and the proposed modeling idea for control parameters is also suitable for those algorithms. For example, suppose an evolutionary algorithm has four control parameters, denoted r_1, r_2, r_3, and r_4. The internal relation of these four control parameters can be modeled by a quaternary joint normal distribution N(μ, Σ), where μ is written as

$$\mu = [\mu_1, \mu_2, \mu_3, \mu_4] \tag{24}$$

where μ_1, μ_2, μ_3, and μ_4 are scalars in (0, 1) whose initial values are 0.5. The covariance matrix can be expressed as

$$\Sigma = \begin{bmatrix} s_1 \times s_2 & 0 & 0 & 0 \\ 0 & s_1 \times s_3 & 0 & 0 \\ 0 & 0 & s_1 \times s_4 & 0 \\ 0 & 0 & 0 & s_1 \times s_5 \end{bmatrix} \tag{25}$$

where s_1 is a random number following the standard uniform distribution, and s_2–s_5 are random numbers following the standard normal distribution; the initial values of s_1–s_5 are all 0.1. A set of control parameters (r_1, r_2, r_3, r_4) can be obtained by sampling the quaternary joint normal distribution N(μ, Σ). In the iterative process, μ and Σ are updated in the same way as in this study.
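A sketch of this generalization, sampling d rows of four control parameters from the quaternary distribution of Equations (24) and (25), is given below; the coordinate-wise sampling and the clipping mirror the bivariate case, and the names are ours.

```python
import numpy as np

def sample_four_controls(mu, d):
    """Sample d rows of four control parameters from N(mu, Sigma) (Eqs. (24)-(25))."""
    s1 = np.random.rand()              # standard uniform scalar
    s = np.random.randn(4)             # s2..s5, standard normal scalars
    std = np.sqrt(np.abs(s1 * s))      # diagonal Sigma: per-coordinate std
    r = np.asarray(mu) + std * np.random.randn(d, 4)
    return np.clip(r, 0.001, 0.999)    # keep parameters inside (0, 1)
```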
In summary, the experimental results show that the improvements of FSGWO in balancing diversity and convergence are feasible and effective. The proposed algorithm can produce high-quality solutions when used to solve high-dimensional complex MMOPs and has good convergence and stability.

6. Conclusions

To address the issue that the traditional GWO solves high-dimensional MMOPs with a slow convergence speed and low-quality solutions, a fuzzy strategy grey wolf optimizer (FSGWO) is proposed in this paper; its key improvements are as follows. (i) A new individual mutation strategy is proposed, which uses both global and local search information in the fuzzy search direction of the mutation and enhances the ability of grey wolf individuals to find the optimal solutions. (ii) A fuzzy crossover operator prevents new individuals from mutating in all dimensions, which effectively improves the local search ability of FSGWO and the quality of the solutions. (iii) The noninferior selection strategy is applied to update the population, and only better new individuals are allowed into the population; therefore, the entire grey wolf population can be used to estimate the region where the optimal solution is located, which speeds up the convergence of FSGWO. (iv) The two control parameters of FSGWO are modeled by a bivariate joint normal distribution whose parameters are adaptively updated by a fuzzy perturbation, which effectively reduces the blindness of control parameter selection and improves the stability of the proposed algorithm. Finally, FSGWO is verified on 30 complex test functions of IEEE CEC2014 and 5 engineering application problems; the results show that the convergence of the proposed algorithm and the quality of its solutions are better than those of the competitive algorithms, which means that the improvements of FSGWO are feasible and effective.
Recent studies have shown that multiple populations have advantages over a single population in maintaining diversity and convergence [67,68]. In future work, we are interested in novel topics on GWOs with multiple populations, such as methods for exchanging optimal solution information between populations and the design of individual search directions across multiple populations. In addition, state-of-the-art evolutionary algorithms, such as the self-adaptive quasi-oppositional stochastic fractal search [69] and the combined social engineering particle swarm optimization [70], contain many creative population update strategies that can inform future improvements of FSGWO.

Author Contributions

Conceptualization, H.Q.; methodology, H.Q.; software, T.M.; validation, T.M.; resources, Y.C.; data curation, Y.C.; writing—original draft preparation, T.M.; writing—review and editing, H.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (ID: 61762009).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We are grateful to the anonymous reviewers for their valuable suggestions and comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Nekouie, N.; Yaghoobi, M. A new method in multimodal optimization based on firefly algorithm. Artif. Intell. Rev. 2016, 46, 267–287.
2. De Oliveira, R.P.; Lohmann, G.; Oliveira, A.V.M. A systematic review of the literature on air transport networks (1973–2021). J. Air Transp. Manag. 2022, 103, 102248.
3. Chen, D.C.; Li, Y.Y. A development on multimodal optimization technique and its application in structural damage detection. Appl. Soft Comput. 2020, 91, 106264.
4. Farshi, T.R.; Drake, J.H.; Ozcan, E. A multimodal particle swarm optimization-based approach for image segmentation. Expert Syst. Appl. 2020, 149, 113233.
5. Bian, Q.; Nener, B.; Wang, X.M. A quantum inspired genetic algorithm for multimodal optimization of wind disturbance alleviation flight control system. Chin. J. Aeronaut. 2019, 32, 2480–2488.
6. Perez, E.; Posada, M.; Herrera, F. Analysis of new niching genetic algorithms for finding multiple solutions in the job shop scheduling. J. Intell. Manuf. 2012, 23, 341–356.
7. Mashayekhi, M.; Yousefi, R. Topology and size optimization of truss structures using an improved crow search algorithm. Struct. Eng. Mech. 2021, 77, 779–795.
8. Nazmul, R.; Chetty, M.; Chowdhury, A.R. Multimodal memetic framework for low-resolution protein structure prediction. Swarm Evol. Comput. 2020, 52, 100608.
9. Fahad, S.; Yang, S.Y.; Khan, R.A.; Khan, S.; Khan, S.A. A multimodal smart quantum particle swarm optimization for electromagnetic design optimization problems. Energies 2021, 14, 4613.
10. Tutkun, N. Optimization of multimodal continuous functions using a new crossover for the real-coded genetic algorithms. Expert Syst. Appl. 2009, 36, 8172–8177.
11. Rajabi, A.; Witt, C. Self-adjusting evolutionary algorithms for multimodal optimization. Algorithmica 2022, 84, 1694–1723.
12. Seo, J.H.; Im, C.H.; Heo, C.G.; Kim, J.K.; Jung, H.K.; Lee, C.G. Multimodal function optimization based on particle swarm optimization. IEEE Trans. Magn. 2006, 42, 1095–1098.
13. Yang, Q.; Chen, W.N.; Yu, Z.T.; Gu, T.L.; Li, Y.; Zhang, H.X.; Zhang, J. Adaptive multimodal continuous ant colony optimization. IEEE Trans. Evol. Comput. 2017, 21, 191–205.
14. Cuevas, E.; Reyna-Orta, A. A Cuckoo search algorithm for multimodal optimization. Sci. World J. 2014, 1, 497514.
15. Nguyen, P.T.H.; Sudholt, D. Memetic algorithms outperform evolutionary algorithms in multimodal optimisation. Artif. Intell. 2020, 287, 103345.
16. Rim, C.; Piao, S.; Li, G.; Pak, U. A niching chaos optimization algorithm for multimodal optimization. Soft Comput. 2018, 22, 621–633.
17. Arora, A.; Miri, R. Cryptography and tay-grey wolf optimization based multimodal biometrics for effective security. Multimed. Tools Appl. 2022, in press.
18. Kumar, V.; Chhabra, J.K.; Kumar, D. Variance-based harmony search algorithm for unimodal and multimodal optimization problems with application to clustering. Cybern. Syst. 2014, 45, 486–511.
19. Li, J.Z.; Tan, Y. Loser-out tournament-based fireworks algorithm for multimodal function optimization. IEEE Trans. Evol. Comput. 2018, 22, 679–691.
20. Bala, I.; Yadav, A. Comprehensive learning gravitational search algorithm for global optimization of multimodal functions. Neural Comput. Appl. 2020, 32, 7347–7382.
21. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Software 2014, 69, 46–61.
22. Ahmed, R.; Nazir, A.; Mahadzir, S.; Shorfuzzaman, M.; Islam, J. Niching grey wolf optimizer for multimodal optimization problems. Appl. Sci. 2021, 11, 4975.
23. Rajakumar, R.; Sekaran, K.; Hsu, C.H.; Kadry, S. Accelerated grey wolf optimization for global optimization problems. Technol. Forecast. Soc. Chang. 2021, 169, 120824.
24. Yu, X.B.; Xu, W.Y.; Li, C.L. Opposition-based learning grey wolf optimizer for global optimization. Knowl.-Based Syst. 2021, 226, 107139.
25. Rodriguez, A.; Camarena, O.; Cuevas, E.; Aranguren, I.; Valdivia-G, A.; Morales-Castaneda, B.; Zaldivar, D.; Perez-Cisneros, M. Group-based synchronous-asynchronous grey wolf optimizer. Appl. Math. Modell. 2021, 93, 226–243.
26. Deshmukh, N.; Vaze, R.; Kumar, R.; Saxena, A. Quantum entanglement inspired grey wolf optimization algorithm and its application. Evol. Intell. 2022, in press.
27. Hu, J.; Chen, H.L.; Heidari, A.A.; Wang, M.J.; Zhang, X.Q.; Chen, Y.; Pan, Z.F. Orthogonal learning covariance matrix for defects of grey wolf optimizer: Insights, balance, diversity, and feature selection. Knowl.-Based Syst. 2021, 213, 106684.
28. Mittal, N.; Singh, U.; Sohi, B.S. Modified grey wolf optimizer for global engineering optimization. Appl. Comput. Intell. Soft Comput. 2016, 2016, 7950348.
29. Saxena, A.; Kumar, R.; Mirjalili, S. A harmonic estimator design with evolutionary operators equipped grey wolf optimizer. Expert Syst. Appl. 2020, 145, 113125.
30. Hu, P.; Chen, S.Y.; Huang, H.X.; Zhang, G.Y.; Liu, L. Improved alpha-guided grey wolf optimizer. IEEE Access 2019, 7, 5421–5437.
31. Saxena, A.; Kumar, R.; Das, S. Beta-chaotic map enabled grey wolf optimizer. Appl. Soft Comput. 2019, 75, 84–105.
32. Long, W.; Jiao, J.J.; Liang, X.M.; Cai, S.H.; Xu, M. A random opposition-based learning grey wolf optimizer. IEEE Access 2019, 7, 113810–113825.
33. Heidari, A.A.; Pahlavani, P. An efficient modified grey wolf optimizer with Lévy flight for optimization tasks. Appl. Soft Comput. 2017, 60, 115–134.
34. Gupta, S.; Deep, K. Enhanced leadership-inspired grey wolf optimizer for global optimization problems. Eng. Comput. 2020, 36, 1777–1800.
35. Gupta, S.; Deep, K. Cauchy grey wolf optimiser for continuous optimisation problems. J. Exp. Theor. Artif. Intell. 2018, 30, 1051–1075.
36. Dhargupta, S.; Ghosh, M.; Mirjalili, S.; Sarkar, R. Selective opposition based grey wolf optimization. Expert Syst. Appl. 2020, 151, 113389.
37. Long, W.; Wu, T.B.; Cai, S.H.; Liang, X.M.; Jiao, J.J.; Xu, M. A novel grey wolf optimizer algorithm with refraction learning. IEEE Access 2019, 7, 57805–57819.
38. Ibrahim, R.A.; Abd Elaziz, M.; Lu, S.F. Chaotic opposition-based grey-wolf optimization algorithm based on differential evolution and disruption operator for global optimization. Expert Syst. Appl. 2018, 108, 1–27.
39. Mohammed, H.; Rashid, T. A novel hybrid GWO with WOA for global numerical optimization and solving pressure vessel design. Neural Comput. Appl. 2020, 32, 14701–14718.
40. Zhao, Y.T.; Li, W.G.; Liu, A. Improved grey wolf optimization based on the two-stage search of hybrid CMA-ES. Soft Comput. 2020, 24, 1097–1115.
41. Makhadmeh, S.N.; Khader, A.T.; Al-Betar, M.A.; Naim, S.; Abasi, A.K.; Alyasseri, Z.A.A. A novel hybrid grey wolf optimizer with min-conflict algorithm for power scheduling problem in a smart home. Swarm Evol. Comput. 2021, 60, 100793.
42. Purushothaman, R.; Rajagopalan, S.P.; Dhandapani, G. Hybridizing gray wolf optimization (GWO) with grasshopper optimization algorithm (GOA) for text feature selection and clustering. Appl. Soft Comput. 2020, 96, 106651.
43. Wang, Z.W.; Qin, C.; Wan, B.T.; Song, W.W.; Yang, G.Q. An adaptive fuzzy chicken swarm optimization algorithm. Math. Probl. Eng. 2021, 2021, 8896794.
44. Brindha, S.; Amali, S.M.J. A robust and adaptive fuzzy logic based differential evolution algorithm using population diversity tuning for multi-objective optimization. Eng. Appl. Artif. Intell. 2021, 102, 104240.
45. Ferrari, A.C.K.; da Silva, C.A.G.; Osinski, C.; Pelacini, D.A.F.; Leandro, G.V.; Coelho, L.D. Tuning of control parameters of the whale optimization algorithm using fuzzy inference system. J. Intell. Fuzzy Syst. 2022, 42, 3051–3066.
46. Liang, J.; Suganthan, P.N. Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization. In Technical Report 201311; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China, 2013.
47. Dahmani, S.; Yebdri, D. Hybrid algorithm of particle swarm optimization and grey wolf optimizer for reservoir operation management. Water Resour. Manag. 2020, 34, 4545–4560.
48. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190.
49. Liu, H.; Zhang, X.; Tu, L. A modified particle swarm optimization using adaptive strategy. Expert Syst. Appl. 2020, 152, 113353.
50. Sinha, N.; Chakrabarti, R.; Chattopadhyay, P.K. Evolutionary programming techniques for economic load dispatch. IEEE Trans. Evol. Comput. 2003, 7, 83–94.
51. Das, S.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems; Technical Report; Jadavpur University: Kolkata, India, 2011.
52. Al-Betar, M.A. Island-based harmony search algorithm for non-convex economic load dispatch problems. J. Electr. Eng. Technol. 2021, 16, 1985–2015.
53. Omran, M.G.H.; Alsharhan, S.; Clerc, M. A modified intellects-masses optimizer for solving real-world optimization problems. Swarm Evol. Comput. 2018, 41, 159–166.
54. Omran, M.G.H.; Clerc, M. APS 9: An improved adaptive population-based simplex method for real-world engineering optimization problems. Appl. Intell. 2018, 48, 1596–1608.
55. Zhang, H.; Cai, Z.; Ye, X.; Wang, M.; Kuang, F.; Chen, H.; Li, C.; Li, Y. A multi-strategy enhanced salp swarm algorithm for global optimization. Eng. Comput. 2022, 38, 1177–1203.
56. Elsayed, S.M.; Sarker, R.A.; Essam, D.L. GA with a new multi-parent crossover for solving IEEE-CEC2011 competition problems. In Proceedings of the 2011 IEEE Congress of Evolutionary Computation (CEC), New Orleans, LA, USA, 5–8 June 2011; pp. 1034–1040.
57. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35.
58. Gupta, S.; Deep, K. A memory-based grey wolf optimizer for global optimization tasks. Appl. Soft Comput. 2020, 93, 106367.
59. Gupta, S.; Deep, K. A hybrid self-adaptive sine cosine algorithm with opposition based learning. Expert Syst. Appl. 2019, 119, 210–230.
60. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249.
61. Fan, Q.S.; Huang, H.S.; Li, Y.T.; Han, Z.G.; Hu, Y.; Huang, D. Beetle antenna strategy based grey wolf optimization. Expert Syst. Appl. 2021, 165, 113882.
62. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917.
63. Zhang, H.L.; Li, R.; Cai, Z.N.; Gu, Z.Y.; Heidari, A.A.; Wang, M.J.; Chen, H.L.; Chen, M.Y. Advanced orthogonal moth flame optimization with Broyden-Fletcher-Goldfarb-Shanno algorithm: Framework and real-world problems. Expert Syst. Appl. 2020, 159, 113617.
64. Li, S.M.; Chen, H.L.; Wang, M.J.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comp. Syst. 2020, 111, 300–323.
65. Gao, C.; Hu, Z.B.; Tong, W.Y. Linear prediction evolution algorithm: A simplest evolutionary optimizer. Memet. Comput. 2021, 13, 319–339.
66. Singh, N.; Singh, S.B. A novel hybrid GWO-SCA approach for optimization problems. Eng. Sci. Technol. 2017, 20, 1586–1601.
67. Tian, Y.; Liu, R.C.; Zhang, X.Y.; Ma, H.P.; Tan, K.C.; Jin, Y.C. A multipopulation evolutionary algorithm for solving large-scale multimodal multiobjective optimization problems. IEEE Trans. Evol. Comput. 2021, 25, 405–418.
68. Ma, H.P.; Fei, M.R.; Jiang, Z.H.; Li, L.; Zhou, H.Y.; Crookes, D. A multipopulation-based multiobjective evolutionary algorithm. IEEE Trans. Cybern. 2020, 50, 689–702.
69. Alkayem, N.F.; Shen, L.; Asteris, P.G.; Sokol, M.; Xin, Z.Q.; Cao, M.S. A new self-adaptive quasi-oppositional stochastic fractal search for the inverse problem of structural damage assessment. Alex. Eng. J. 2022, 61, 1922–1936.
70. Alkayem, N.F.; Cao, M.S.; Shen, L.; Fu, R.H.; Sumarac, D. The combined social engineering particle swarm optimization for real-world engineering problems: A case study of model-based structural health monitoring. Appl. Soft Comput. 2022, 123, 108919.
Figure 1. Flowchart of FSGWO.
Figure 2. The box plots of the related algorithms for the 30-dimensional functions.
Figure 3. Convergence curves of the related algorithms for the 30-dimensional functions.
Figure 4. Convergence curves of the related algorithms for the 50-dimensional functions.
Figure 5. Convergence curves of FSGWO and FSGWO1.
Figure 6. Convergence curves of FSGWO and FSGWO2.
Figure 7. Three-bar truss design.
Figure 8. Pressure vessel design.
Figure 9. Gear train design.
Figure 10. Cantilever beam design.
Table 1. Key parameters of the competitive algorithms.

Algorithm | Parameters
EO | N = 50, a1 = 2, a2 = 1, GP = 0.5
MPSO | N = 50, w1 = 0.9, w2 = 0.4, c1 = 2, c2 = 2
GWO | N = 50, a = 2
HPSOGWO | N = 50, w = 0.5 + rand
SOGWO | N = 50, a = 2
FSGWO | N = 50, c = 0.2, r_c^j = [r_a^j, r_b^j] ~ N(μ, Σ)
Table 2. Results of the related algorithms for 30-dimensional test functions.

Function | Index | EO | MPSO | GWO | HPSOGWO | SOGWO | FSGWO
F1 | Mean | 4.30e+05 | 6.80e+06 | 5.05e+07 | 4.01e+07 | 3.43e+07 | 3.29e+03
F1 | STD | 2.80e+05 | 7.50e+06 | 4.11e+07 | 6.88e+07 | 2.36e+07 | 3.46e+03
F2 | Mean | 6.31e-01 | 5.91e+07 | 1.26e+09 | 1.77e+09 | 3.79e+08 | 0.00e+00
F2 | STD | 7.96e-01 | 1.72e+08 | 1.07e+09 | 5.18e+09 | 5.15e+08 | 0.00e+00
F3 | Mean | 8.97e+00 | 2.36e+03 | 3.11e+04 | 3.26e+04 | 2.28e+04 | 0.00e+00
F3 | STD | 9.08e+00 | 3.23e+03 | 9.49e+03 | 3.33e+04 | 9.80e+03 | 0.00e+00
F4 | Mean | 3.75e+01 | 6.79e+01 | 2.11e+02 | 3.21e+02 | 1.84e+02 | 0.00e+00
F4 | STD | 4.27e+01 | 3.08e+01 | 5.54e+01 | 5.24e+02 | 4.45e+01 | 0.00e+00
F5 | Mean | 2.04e+01 | 2.08e+01 | 2.09e+01 | 2.08e+01 | 2.09e+01 | 2.00e+01
F5 | STD | 1.25e-01 | 9.25e-02 | 5.86e-02 | 2.94e-01 | 5.46e-02 | 3.28e-03
F6 | Mean | 7.35e+00 | 1.29e+01 | 1.29e+01 | 1.48e+01 | 1.11e+01 | 8.33e+00
F6 | STD | 2.58e+00 | 2.76e+00 | 2.79e+00 | 7.42e+00 | 3.16e+00 | 3.57e+00
F7 | Mean | 5.36e-03 | 8.34e-03 | 1.41e+01 | 6.93e+00 | 6.70e+00 | 7.86e-03
F7 | STD | 8.73e-03 | 1.43e-02 | 1.20e+01 | 1.30e+01 | 4.77e+00 | 1.21e-02
F8 | Mean | 4.60e+01 | 3.64e+01 | 7.28e+01 | 8.03e+01 | 6.45e+01 | 0.00e+00
F8 | STD | 1.04e+01 | 1.15e+01 | 1.62e+01 | 3.57e+01 | 1.69e+01 | 0.00e+00
F9 | Mean | 8.84e+01 | 6.99e+01 | 9.19e+01 | 9.65e+01 | 8.34e+01 | 3.71e+01
F9 | STD | 2.74e+01 | 1.83e+01 | 2.37e+01 | 6.92e+01 | 1.57e+01 | 8.27e+00
F10 | Mean | 1.56e+03 | 9.51e+02 | 2.19e+03 | 2.64e+03 | 1.90e+03 | 1.35e+01
F10 | STD | 5.46e+02 | 4.86e+02 | 6.26e+02 | 1.25e+03 | 5.31e+02 | 3.73e+00
F11 | Mean | 3.26e+03 | 2.93e+03 | 2.73e+03 | 3.64e+03 | 2.67e+03 | 1.98e+03
F11 | STD | 7.43e+02 | 6.43e+02 | 6.29e+02 | 1.67e+03 | 7.22e+02 | 3.09e+02
F12 | Mean | 9.18e-01 | 4.82e-01 | 1.64e+00 | 1.40e+00 | 2.23e+00 | 1.84e-01
F12 | STD | 3.90e-01 | 2.51e-01 | 1.08e+00 | 1.17e+00 | 7.53e-01 | 3.67e-02
F13 | Mean | 2.19e-01 | 4.90e-01 | 3.89e-01 | 4.47e-01 | 3.33e-01 | 2.78e-01
F13 | STD | 6.95e-02 | 1.10e-01 | 1.87e-01 | 2.79e-01 | 6.63e-02 | 6.68e-02
F14 | Mean | 2.55e-01 | 4.47e-01 | 2.26e+00 | 3.31e+00 | 9.11e-01 | 2.10e-01
F14 | STD | 1.10e-01 | 1.96e-01 | 4.25e+00 | 8.81e+00 | 2.24e+00 | 4.68e-02
F15 | Mean | 4.48e+00 | 7.28e+00 | 5.38e+01 | 1.55e+04 | 2.88e+01 | 4.89e+00
F15 | STD | 1.27e+00 | 3.12e+00 | 1.07e+02 | 9.43e+04 | 5.52e+01 | 1.31e+00
F16 | Mean | 1.11e+01 | 1.21e+01 | 1.10e+01 | 1.17e+01 | 1.08e+01 | 1.03e+01
F16 | STD | 8.75e-01 | 5.60e-01 | 7.16e-01 | 1.07e+00 | 7.09e-01 | 3.73e-01
F17 | Mean | 1.87e+05 | 2.24e+04 | 1.43e+06 | 9.24e+05 | 8.78e+05 | 2.32e+03
F17 | STD | 1.27e+05 | 2.40e+04 | 1.81e+06 | 9.99e+05 | 8.59e+05 | 4.37e+03
F18 | Mean | 2.85e+03 | 5.40e+02 | 7.14e+06 | 1.79e+06 | 3.95e+06 | 7.26e+01
F18 | STD | 4.01e+03 | 5.91e+02 | 1.95e+07 | 7.19e+06 | 1.45e+07 | 3.08e+01
F19 | Mean | 9.13e+00 | 8.14e+00 | 3.87e+01 | 4.43e+01 | 2.06e+01 | 3.95e+00
F19 | STD | 1.16e+01 | 8.81e+00 | 2.53e+01 | 5.07e+01 | 1.39e+01 | 1.13e+00
F20 | Mean | 3.55e+02 | 2.77e+02 | 1.52e+04 | 1.45e+04 | 1.05e+04 | 5.73e+01
F20 | STD | 1.19e+02 | 1.97e+02 | 1.06e+04 | 2.03e+04 | 5.75e+03 | 3.14e+01
F21 | Mean | 8.76e+04 | 2.10e+04 | 7.68e+05 | 8.95e+05 | 3.03e+05 | 4.10e+02
F21 | STD | 8.12e+04 | 4.71e+04 | 1.46e+06 | 1.74e+06 | 3.25e+05 | 2.49e+02
F22 | Mean | 3.31e+02 | 4.15e+02 | 3.39e+02 | 4.93e+02 | 2.97e+02 | 1.44e+02
F22 | STD | 1.53e+02 | 1.76e+02 | 1.47e+02 | 2.60e+02 | 1.12e+02 | 7.41e+01
F23 | Mean | 3.15e+02 | 3.15e+02 | 3.32e+02 | 3.41e+02 | 3.28e+02 | 3.15e+02
F23 | STD | 1.50e-12 | 1.71e-12 | 8.54e+00 | 4.23e+01 | 7.58e+00 | 5.05e-13
F24 | Mean | 2.00e+02 | 2.00e+02 | 2.00e+02 | 2.38e+02 | 2.00e+02 | 2.31e+02
F24 | STD | 6.37e-04 | 1.58e-04 | 8.09e-04 | 5.42e+01 | 7.42e-04 | 5.73e+00
F25 | Mean | 2.01e+02 | 2.00e+02 | 2.10e+02 | 2.11e+02 | 2.10e+02 | 2.08e+02
F25 | STD | 2.55e+00 | 0.00e+00 | 4.03e+00 | 7.30e+00 | 3.62e+00 | 3.42e+00
F26 | Mean | 1.30e+02 | 1.25e+02 | 1.30e+02 | 1.44e+02 | 1.34e+02 | 1.06e+02
F26 | STD | 4.59e+01 | 4.22e+01 | 4.58e+01 | 4.96e+01 | 4.74e+01 | 2.37e+01
F27 | Mean | 5.09e+02 | 6.21e+02 | 6.24e+02 | 7.63e+02 | 6.01e+02 | 4.51e+02
F27 | STD | 6.92e+01 | 1.74e+02 | 1.20e+02 | 2.23e+02 | 8.60e+01 | 7.28e+01
F28 | Mean | 9.71e+02 | 1.14e+03 | 1.05e+03 | 1.38e+03 | 9.22e+02 | 7.07e+02
F28 | STD | 1.43e+02 | 2.69e+02 | 2.46e+02 | 6.95e+02 | 1.08e+02 | 1.05e+02
F29 | Mean | 1.77e+06 | 1.73e+05 | 4.24e+05 | 2.82e+06 | 2.00e+05 | 5.29e+02
F29 | STD | 3.62e+06 | 1.22e+06 | 1.94e+06 | 5.32e+06 | 1.25e+06 | 1.62e+02
F30 | Mean | 3.15e+03 | 3.00e+03 | 3.99e+04 | 2.91e+04 | 2.19e+04 | 8.70e+02
F30 | STD | 9.36e+02 | 1.13e+03 | 2.88e+04 | 4.56e+04 | 1.27e+04 | 2.32e+02
Table 3. Results of the Wilcoxon test for the mean values in Table 2.

FSGWO vs. | EO | MPSO | GWO | HPSOGWO | SOGWO
p value | 8.3606e-05 | 9.4199e-06 | 2.7389e-06 | 9.1269e-07 | 3.3270e-06
Table 4. Comparison of the calculation accuracy of the related algorithms for the 30 test functions (30D).

FSGWO vs. | EO | MPSO | GWO | HPSOGWO | SOGWO
F1 | 0.99233 | 0.99951 | 0.99993 | 0.99991 | 0.9999
F2 | 1 | 1 | 1 | 1 | 1
F3 | 1 | 1 | 1 | 1 | 1
F4 | 1 | 1 | 1 | 1 | 1
F5 | 0.01935 | 0.03823 | 0.04495 | 0.03886 | 0.04480
F6 | -0.13328 | 0.35274 | 0.35262 | 0.43869 | 0.25254
F7 | -0.46719 | 0.05726 | 0.99944 | 0.99886 | 0.99882
F8 | 1 | 1 | 1 | 1 | 1
F9 | 0.57994 | 0.46875 | 0.59616 | 0.61525 | 0.55491
F10 | 0.99135 | 0.98582 | 0.99384 | 0.99488 | 0.99289
F11 | 0.39112 | 0.32372 | 0.27480 | 0.45486 | 0.25834
F12 | 0.79911 | 0.61698 | 0.88735 | 0.86818 | 0.91712
F13 | -0.26746 | 0.43291 | 0.28450 | 0.37861 | 0.16395
F14 | 0.17936 | 0.53175 | 0.90744 | 0.93668 | 0.77010
F15 | -0.09313 | 0.32768 | 0.90905 | 0.99968 | 0.83011
F16 | 0.07097 | 0.14846 | 0.06430 | 0.12162 | 0.04669
F17 | 0.98757 | 0.89615 | 0.99838 | 0.99748 | 0.99735
F18 | 0.97451 | 0.86556 | 0.99998 | 0.99995 | 0.99998
F19 | 0.56795 | 0.51507 | 0.89798 | 0.91101 | 0.80831
F20 | 0.83864 | 0.79354 | 0.99622 | 0.99604 | 0.99456
F21 | 0.99531 | 0.98043 | 0.99946 | 0.99954 | 0.99864
F22 | 0.56651 | 0.65413 | 0.57670 | 0.70879 | 0.51583
F23 | 0 | 0 | 0.05049 | 0.07450 | 0.03775
F24 | -0.15584 | -0.15584 | -0.15584 | 0.02768 | -0.15584
F25 | -0.03369 | -0.03851 | 0.01218 | 0.01622 | 0.00905
F26 | 0.18118 | 0.14955 | 0.18180 | 0.26123 | 0.20560
F27 | 0.11363 | 0.27257 | 0.27601 | 0.40817 | 0.24936
F28 | 0.27247 | 0.38170 | 0.32701 | 0.48875 | 0.23333
F29 | 0.99970 | 0.99695 | 0.99875 | 0.99981 | 0.99735
F30 | 0.72346 | 0.70966 | 0.97818 | 0.97013 | 0.96032
Average | 0.4698 | 0.5435 | 0.6484 | 0.6902 | 0.6227
Table 5. Results of the related algorithms for 50-dimensional functions.

Function | Index | EO | MPSO | GWO | HPSOGWO | SOGWO | FSGWO
F1 | Mean | 1.32e+06 | 1.80e+07 | 8.42e+07 | 4.34e+07 | 6.77e+07 | 2.66e+04
F1 | STD | 5.23e+05 | 1.58e+07 | 5.08e+07 | 5.51e+07 | 3.84e+07 | 1.75e+04
F2 | Mean | 7.35e+03 | 1.72e+09 | 7.67e+09 | 5.63e+09 | 4.14e+09 | 4.16e-05
F2 | STD | 8.57e+03 | 1.68e+09 | 3.29e+09 | 1.12e+10 | 3.21e+09 | 1.56e-04
F3 | Mean | 8.80e+02 | 6.43e+03 | 5.82e+04 | 4.92e+04 | 4.72e+04 | 1.14e-02
F3 | STD | 6.76e+02 | 5.41e+03 | 1.08e+04 | 3.19e+04 | 1.06e+04 | 3.64e-02
F4 | Mean | 8.15e+01 | 2.11e+02 | 7.64e+02 | 5.68e+02 | 4.50e+02 | 8.76e+00
F4 | STD | 3.63e+01 | 2.67e+02 | 3.33e+02 | 8.90e+02 | 2.02e+02 | 1.80e+01
F5 | Mean | 2.05e+01 | 2.11e+01 | 2.11e+01 | 2.10e+01 | 2.11e+01 | 2.00e+01
F5 | STD | 1.19e-01 | 5.14e-02 | 4.44e-02 | 2.90e-01 | 4.43e-02 | 8.68e-03
F6 | Mean | 2.06e+01 | 3.00e+01 | 2.98e+01 | 2.81e+01 | 2.62e+01 | 2.37e+01
F6 | STD | 3.65e+00 | 4.68e+00 | 3.98e+00 | 1.04e+01 | 3.62e+00 | 4.21e+00
F7 | Mean | 7.04e-03 | 8.68e-03 | 7.85e+01 | 5.64e+01 | 4.17e+01 | 6.42e-03
F7 | STD | 1.01e-02 | 1.13e-02 | 3.44e+01 | 1.37e+02 | 3.30e+01 | 7.64e-03
F8 | Mean | 1.28e+02 | 6.97e+01 | 1.88e+02 | 1.75e+02 | 1.69e+02 | 0.00e+00
F8 | STD | 2.51e+01 | 1.66e+01 | 3.04e+01 | 7.77e+01 | 2.30e+01 | 0.00e+00
F9 | Mean | 1.63e+02 | 1.50e+02 | 2.02e+02 | 2.62e+02 | 1.82e+02 | 1.03e+02
F9 | STD | 3.64e+01 | 3.13e+01 | 3.13e+01 | 1.54e+02 | 4.81e+01 | 1.75e+01
F10 | Mean | 3.67e+03 | 2.13e+03 | 5.70e+03 | 6.37e+03 | 5.02e+03 | 2.74e+01
F10 | STD | 9.87e+02 | 6.28e+02 | 8.02e+02 | 2.35e+03 | 7.68e+02 | 6.31e+00
F11 | Mean | 6.66e+03 | 5.88e+03 | 5.70e+03 | 6.46e+03 | 5.27e+03 | 4.33e+03
F11 | STD | 8.95e+02 | 9.05e+02 | 1.37e+03 | 2.68e+03 | 1.32e+03 | 4.57e+02
F12 | Mean | 1.49e+00 | 4.53e-01 | 1.92e+00 | 1.88e+00 | 2.55e+00 | 1.86e-01
F12 | STD | 3.89e-01 | 1.98e-01 | 1.66e+00 | 1.56e+00 | 1.43e+00 | 3.26e-02
F13 | Mean | 4.42e-01 | 5.85e-01 | 7.46e-01 | 8.03e-01 | 5.92e-01 | 4.69e-01
F13 | STD | 8.05e-02 | 2.92e-01 | 5.09e-01 | 8.17e-01 | 7.78e-02 | 8.04e-02
F14 | Mean | 2.98e-01 | 4.13e-01 | 1.48e+01 | 8.34e+00 | 4.40e+00 | 2.88e-01
F14 | STD | 1.16e-01 | 1.69e-01 | 1.33e+01 | 2.49e+01 | 7.24e+00 | 7.33e-02
F15 | Mean | 1.15e+01 | 2.57e+01 | 1.65e+03 | 4.39e+03 | 5.33e+02 | 2.44e+01
F15 | STD | 3.81e+00 | 8.01e+00 | 2.36e+03 | 2.31e+04 | 9.96e+02 | 6.05e+00
F16 | Mean | 2.02e+01 | 2.16e+01 | 2.00e+01 | 2.09e+01 | 1.98e+01 | 1.88e+01
F16 | STD | 9.71e-01 | 5.97e-01 | 8.13e-01 | 1.11e+00 | 1.06e+00 | 5.18e-01
F17 | Mean | 2.58e+05 | 3.75e+05 | 4.55e+06 | 1.97e+06 | 3.00e+06 | 4.68e+04
F17 | STD | 1.40e+05 | 8.94e+05 | 5.24e+06 | 2.10e+06 | 1.83e+06 | 4.13e+04
F18 | Mean | 2.53e+03 | 1.72e+03 | 6.52e+07 | 8.26e+07 | 2.77e+07 | 4.84e+02
F18 | STD | 1.32e+03 | 2.13e+03 | 1.27e+08 | 3.36e+08 | 6.07e+07 | 5.70e+02
F19 | Mean | 1.73e+01 | 2.56e+01 | 8.25e+01 | 7.04e+01 | 7.35e+01 | 2.57e+01
F19 | STD | 9.38e+00 | 1.47e+01 | 2.84e+01 | 4.70e+01 | 2.36e+01 | 2.05e+01
F20 | Mean | 5.69e+02 | 5.51e+02 | 1.51e+04 | 1.86e+04 | 1.01e+04 | 2.76e+02
F20 | STD | 1.43e+02 | 2.18e+02 | 7.77e+03 | 3.12e+04 | 6.06e+03 | 3.02e+02
F21 | Mean | 1.76e+05 | 1.79e+05 | 2.62e+06 | 1.77e+06 | 2.18e+06 | 1.96e+04
F21 | STD | 1.10e+05 | 2.00e+05 | 2.76e+06 | 3.37e+06 | 1.85e+06 | 3.94e+04
F22 | Mean | 8.33e+02 | 1.11e+03 | 8.07e+02 | 9.67e+02 | 7.23e+02 | 4.59e+02
F22 | STD | 3.12e+02 | 3.19e+02 | 2.80e+02 | 5.13e+02 | 3.05e+02 | 1.68e+02
F23 | Mean | 3.45e+02 | 3.45e+02 | 4.37e+02 | 3.94e+02 | 4.15e+02 | 3.44e+02
F23 | STD | 1.01e-03 | 1.09e-12 | 4.22e+01 | 6.97e+01 | 3.00e+01 | 8.44e-13
F24 | Mean | 2.01e+02 | 2.20e+02 | 2.01e+02 | 2.63e+02 | 2.00e+02 | 2.86e+02
F24 | STD | 5.47e-04 | 3.13e+01 | 6.47e-04 | 8.22e+01 | 4.88e-04 | 5.50e+00
F25 | Mean | 2.00e+02 | 2.01e+02 | 2.27e+02 | 2.27e+02 | 2.23e+02 | 2.28e+02
F25 | STD | 2.90e-13 | 4.06e+00 | 9.21e+00 | 1.61e+01 | 6.45e+00 | 7.02e+00
F26 | Mean | 1.80e+02 | 1.91e+02 | 1.88e+02 | 1.94e+02 | 1.80e+02 | 1.04e+02
F26 | STD | 4.00e+01 | 2.94e+01 | 4.43e+01 | 8.37e+01 | 4.98e+01 | 1.95e+01
F27 | Mean | 8.55e+02 | 1.20e+03 | 1.06e+03 | 1.17e+03 | 9.35e+02 | 9.67e+02
F27 | STD | 9.36e+01 | 1.51e+02 | 1.11e+02 | 3.21e+02 | 1.10e+02 | 8.51e+01
F28 | Mean | 1.57e+03 | 2.32e+03 | 2.18e+03 | 2.51e+03 | 1.88e+03 | 1.53e+03
F28 | STD | 3.56e+02 | 7.20e+02 | 5.22e+02 | 1.71e+03 | 4.86e+02 | 2.05e+02
F29 | Mean | 1.71e+07 | 2.25e+06 | 4.98e+06 | 2.12e+07 | 1.07e+06 | 9.36e+02
F29 | STD | 2.08e+07 | 1.17e+07 | 7.89e+06 | 3.84e+07 | 2.45e+06 | 1.17e+02
F30 | Mean | 1.12e+04 | 2.25e+04 | 1.44e+05 | 6.35e+04 | 1.01e+05 | 1.09e+04
F30 | STD | 1.89e+03 | 1.44e+04 | 9.16e+04 | 8.40e+04 | 4.86e+04 | 1.06e+03
Table 6. Results of the Wilcoxon test for the mean values of Table 5.

FSGWO vs. | EO | MPSO | GWO | HPSOGWO | SOGWO
p value | 6.2062e-04 | 1.3587e-05 | 4.0348e-06 | 2.7389e-06 | 1.4875e-05
Table 7. Comparison of the calculation accuracy of the related algorithms for the 30 test functions (50D).

FSGWO vs. | EO | MPSO | GWO | HPSOGWO | SOGWO
F1 | 0.97980 | 0.99851 | 0.99968 | 0.99938 | 0.99960
F2 | 0.99999 | 0.99999 | 0.99999 | 0.99999 | 0.99999
F3 | 0.99998 | 0.99999 | 0.99999 | 0.99999 | 0.99999
F4 | 0.89254 | 0.95845 | 0.98853 | 0.98457 | 0.98052
F5 | 0.02433 | 0.04952 | 0.05230 | 0.04873 | 0.05284
F6 | -0.15302 | 0.21058 | 0.20577 | 0.15643 | 0.09618
F7 | 0.08817 | 0.25980 | 0.99991 | 0.99988 | 0.99984
F8 | 1 | 1 | 1 | 1 | 1
F9 | 0.36540 | 0.31038 | 0.48843 | 0.60660 | 0.43276
F10 | 0.99251 | 0.98709 | 0.99518 | 0.99569 | 0.99453
F11 | 0.34982 | 0.26415 | 0.24015 | 0.32972 | 0.17841
F12 | 0.87485 | 0.58948 | 0.90300 | 0.90099 | 0.92698
F13 | -0.06079 | 0.19856 | 0.37149 | 0.41602 | 0.20706
F14 | 0.03271 | 0.30191 | 0.98054 | 0.96542 | 0.93453
F15 | -1.12418 | 0.05005 | 0.98515 | 0.99442 | 0.95411
F16 | 0.06779 | 0.12730 | 0.05831 | 0.09673 | 0.04623
F17 | 0.81835 | 0.87511 | 0.98971 | 0.97621 | 0.98438
F18 | 0.80887 | 0.71945 | 0.99999 | 0.99999 | 0.99998
F19 | -0.48750 | -0.00178 | 0.68877 | 0.63547 | 0.65086
F20 | 0.51465 | 0.49930 | 0.98170 | 0.98517 | 0.97274
F21 | 0.88885 | 0.89059 | 0.99250 | 0.98892 | 0.99102
F22 | 0.44841 | 0.58529 | 0.43070 | 0.525 | 0.36459
F23 | 0 | 0 | 0.21257 | 0.12794 | 0.17198
F24 | -0.42928 | -0.30017 | -0.42928 | -0.08550 | -0.42928
F25 | -0.14177 | -0.13853 | -0.00643 | -0.00486 | -0.02340
F26 | 0.42134 | 0.45188 | 0.44399 | 0.46109 | 0.41925
F27 | -0.13037 | 0.19254 | 0.08732 | 0.17541 | -0.03359
F28 | 0.02665 | 0.34062 | 0.29742 | 0.38918 | 0.18338
F29 | 0.99994 | 0.99958 | 0.99981 | 0.99995 | 0.99912
F30 | 0.02232 | 0.51392 | 0.92418 | 0.82759 | 0.89163
Average | 0.3363 | 0.4645 | 0.6294 | 0.6499 | 0.5982
Table 8. Results of FSGWO and FSGWO1 for 30-dimensional functions.

Function | Index | FSGWO | FSGWO1 | Function | Index | FSGWO | FSGWO1
F1 | Mean | 3.29e+03 | 5.74e+04 | F16 | Mean | 1.03e+01 | 1.07e+01
F1 | STD | 3.46e+03 | 1.06e+05 | F16 | STD | 3.73e-01 | 3.98e-01
F2 | Mean | 0.00e+00 | 0.00e+00 | F17 | Mean | 2.32e+03 | 1.97e+03
F2 | STD | 0.00e+00 | 0.00e+00 | F17 | STD | 4.37e+03 | 1.10e+03
F3 | Mean | 0.00e+00 | 0.00e+00 | F18 | Mean | 7.26e+01 | 8.39e+01
F3 | STD | 0.00e+00 | 0.00e+00 | F18 | STD | 3.08e+01 | 2.74e+01
F4 | Mean | 0.00e+00 | 2.86e+01 | F19 | Mean | 3.95e+00 | 8.36e+00
F4 | STD | 0.00e+00 | 3.58e+01 | F19 | STD | 1.13e+00 | 8.44e+00
F5 | Mean | 2.00e+01 | 2.05e+01 | F20 | Mean | 5.73e+01 | 4.86e+01
F5 | STD | 3.28e-03 | 5.22e-02 | F20 | STD | 3.14e+01 | 2.26e+01
F6 | Mean | 8.33e+00 | 4.23e+00 | F21 | Mean | 4.10e+02 | 6.00e+02
F6 | STD | 3.57e+00 | 1.38e+00 | F21 | STD | 2.49e+02 | 5.80e+02
F7 | Mean | 7.86e-03 | 1.35e-02 | F22 | Mean | 1.44e+02 | 1.78e+02
F7 | STD | 1.21e-02 | 1.37e-02 | F22 | STD | 7.41e+01 | 6.66e+01
F8 | Mean | 0.00e+00 | 1.48e+01 | F23 | Mean | 3.15e+02 | 3.15e+02
F8 | STD | 0.00e+00 | 2.83e+00 | F23 | STD | 5.05e-13 | 5.94e-13
F9 | Mean | 3.71e+01 | 4.94e+01 | F24 | Mean | 2.31e+02 | 2.29e+02
F9 | STD | 8.27e+00 | 7.51e+00 | F24 | STD | 5.73e+00 | 5.55e+00
F10 | Mean | 1.35e+01 | 3.50e+02 | F25 | Mean | 2.08e+02 | 2.09e+02
F10 | STD | 3.73e+00 | 1.01e+02 | F25 | STD | 3.42e+00 | 2.83e+00
F11 | Mean | 1.98e+03 | 3.00e+03 | F26 | Mean | 1.06e+02 | 1.24e+02
F11 | STD | 3.09e+02 | 2.77e+02 | F26 | STD | 2.37e+01 | 4.27e+01
F12 | Mean | 1.84e-01 | 5.66e-01 | F27 | Mean | 4.51e+02 | 4.28e+02
F12 | STD | 3.67e-02 | 8.91e-02 | F27 | STD | 7.28e+01 | 5.05e+01
F13 | Mean | 2.78e-01 | 2.64e-01 | F28 | Mean | 7.07e+02 | 8.76e+02
F13 | STD | 6.68e-02 | 5.31e-02 | F28 | STD | 1.05e+02 | 4.81e+01
F14 | Mean | 2.10e-01 | 2.29e-01 | F29 | Mean | 5.29e+02 | 7.69e+02
F14 | STD | 4.68e-02 | 4.60e-02 | F29 | STD | 1.62e+02 | 1.39e+02
F15 | Mean | 4.89e+00 | 7.18e+00 | F30 | Mean | 8.70e+02 | 1.99e+03
F15 | STD | 1.31e+00 | 1.49e+00 | F30 | STD | 2.32e+02 | 6.60e+02
Table 9. Results of the Wilcoxon test for the mean values of Table 8.

FSGWO vs. | FSGWO1
p value | 2.7610e-03
Table 10. Results of FSGWO and FSGWO2 for 30-dimensional functions.

Function | Index | FSGWO | FSGWO2 | Function | Index | FSGWO | FSGWO2
F1 | Mean | 3.29e+03 | 1.09e+04 | F16 | Mean | 1.03e+01 | 1.02e+01
F1 | STD | 3.46e+03 | 7.88e+03 | F16 | STD | 3.73e-01 | 3.38e-01
F2 | Mean | 0.00e+00 | 0.00e+00 | F17 | Mean | 2.32e+03 | 5.21e+03
F2 | STD | 0.00e+00 | 0.00e+00 | F17 | STD | 4.37e+03 | 8.12e+03
F3 | Mean | 0.00e+00 | 0.00e+00 | F18 | Mean | 7.26e+01 | 7.55e+01
F3 | STD | 0.00e+00 | 0.00e+00 | F18 | STD | 3.08e+01 | 2.73e+01
F4 | Mean | 0.00e+00 | 1.37e+00 | F19 | Mean | 3.95e+00 | 8.66e+00
F4 | STD | 0.00e+00 | 9.10e+00 | F19 | STD | 1.13e+00 | 1.18e+01
F5 | Mean | 2.00e+01 | 2.02e+01 | F20 | Mean | 5.73e+01 | 4.03e+01
F5 | STD | 3.28e-03 | 3.82e-02 | F20 | STD | 3.14e+01 | 2.64e+01
F6 | Mean | 8.33e+00 | 3.28e+00 | F21 | Mean | 4.10e+02 | 5.49e+02
F6 | STD | 3.57e+00 | 2.56e+00 | F21 | STD | 2.49e+02 | 7.45e+02
F7 | Mean | 7.86e-03 | 1.08e-02 | F22 | Mean | 1.44e+02 | 1.67e+02
F7 | STD | 1.21e-02 | 1.41e-02 | F22 | STD | 7.41e+01 | 6.17e+01
F8 | Mean | 0.00e+00 | 7.08e-02 | F23 | Mean | 3.15e+02 | 3.15e+02
F8 | STD | 0.00e+00 | 5.05e-01 | F23 | STD | 5.05e-13 | 5.35e-13
F9 | Mean | 3.71e+01 | 4.48e+01 | F24 | Mean | 2.31e+02 | 2.31e+02
F9 | STD | 8.27e+00 | 7.67e+00 | F24 | STD | 5.73e+00 | 6.83e+00
F10 | Mean | 1.35e+01 | 9.25e+01 | F25 | Mean | 2.08e+02 | 2.07e+02
F10 | STD | 3.73e+00 | 3.84e+01 | F25 | STD | 3.42e+00 | 2.78e+00
F11 | Mean | 1.98e+03 | 2.63e+03 | F26 | Mean | 1.06e+02 | 1.20e+02
F11 | STD | 3.09e+02 | 2.28e+02 | F26 | STD | 2.37e+01 | 4.00e+01
F12 | Mean | 1.84e-01 | 3.75e-01 | F27 | Mean | 4.51e+02 | 4.25e+02
F12 | STD | 3.67e-02 | 5.87e-02 | F27 | STD | 7.28e+01 | 4.79e+01
F13 | Mean | 2.78e-01 | 2.46e-01 | F28 | Mean | 7.07e+02 | 8.74e+02
F13 | STD | 6.68e-02 | 5.89e-02 | F28 | STD | 1.05e+02 | 5.15e+01
F14 | Mean | 2.10e-01 | 2.13e-01 | F29 | Mean | 5.29e+02 | 6.58e+02
F14 | STD | 4.68e-02 | 5.06e-02 | F29 | STD | 1.62e+02 | 1.66e+02
F15 | Mean | 4.89e+00 | 6.04e+00 | F30 | Mean | 8.70e+02 | 1.69e+03
F15 | STD | 1.31e+00 | 1.26e+00 | F30 | STD | 2.32e+02 | 5.83e+02
Table 11. Results of the Wilcoxon test for the mean values of Table 10.

FSGWO vs. | FSGWO2
p value | 2.9719e-03
Table 12. Results of the related algorithms for the 40-unit ELD case.

No. | Algorithm | Best | Mean | Median | Worst | STD
1 | FSGWO | 1.2260e+05 | 1.2514e+05 | 1.2519e+05 | 1.2758e+05 | 9.8099e+02
2 | GWO | 1.2602e+05 | 1.2762e+05 | 1.2751e+05 | 1.3002e+05 | 9.4627e+02
3 | HPSOGWO | 1.2474e+05 | 1.2614e+05 | 1.2606e+05 | 1.2795e+05 | 1.0180e+03
4 | SOGWO | 1.2626e+05 | 1.2804e+05 | 1.2799e+05 | 1.3113e+05 | 1.3027e+03
5 | EO | 1.2591e+05 | 1.2725e+05 | 1.2688e+05 | 1.2930e+05 | 1.0265e+03
6 | MPSO | 1.2617e+05 | 1.2741e+05 | 1.2737e+05 | 1.2920e+05 | 7.4119e+02
7 | iHS [52] | 1.2980e+05 | 1.3375e+05 | 1.3392e+05 | 1.3701e+05 | 1.6258e+03
8 | IMO [53] | 1.3073e+05 | 1.3465e+05 | 1.3428e+05 | 1.3846e+05 | 2.2328e+03
9 | MIMO [53] | 1.2960e+05 | 1.3306e+05 | 1.3297e+05 | 1.3698e+05 | 2.1161e+03
10 | APS 9 [54] | 1.2390e+05 | 1.2553e+05 | 1.2544e+05 | 1.2715e+05 | 8.0868e+02
11 | ESSA [55] | 1.2885e+05 | 1.3061e+05 | — | 1.3355e+05 | 1.0434e+03
12 | GA-MPC [56] | 1.2921e+05 | 1.3323e+05 | 1.3319e+05 | 1.3606e+05 | 1.8788e+03
Table 13. Results of the related algorithms for the 140-unit ELD case.

No. | Algorithm | Best | Mean | Median | Worst | STD
1 | FSGWO | 1.7551e+06 | 1.8119e+06 | 1.8107e+06 | 1.8598e+06 | 2.3481e+04
2 | GWO | 1.8935e+06 | 1.9313e+06 | 1.9317e+06 | 1.9582e+06 | 1.6986e+04
3 | HPSOGWO | 1.8611e+06 | 1.9210e+06 | 1.9234e+06 | 1.9728e+06 | 3.0071e+04
4 | SOGWO | 1.8828e+06 | 1.9303e+06 | 1.9284e+06 | 1.9660e+06 | 2.0406e+04
5 | EO | 1.8645e+06 | 1.9187e+06 | 1.9190e+06 | 1.9639e+06 | 2.3944e+04
6 | MPSO | 1.8756e+06 | 1.9173e+06 | 1.9133e+06 | 1.9746e+06 | 2.4482e+04
7 | iHS [52] | 1.9142e+06 | 2.0538e+06 | 1.9942e+06 | 2.5494e+06 | 1.5139e+05
8 | IMO [53] | 1.9061e+06 | 1.9338e+06 | 1.9322e+06 | 1.9639e+06 | 1.6741e+04
9 | MIMO [53] | 1.8957e+06 | 1.9181e+06 | 1.9184e+06 | 1.9373e+06 | 1.1350e+04
10 | APS 9 [54] | 1.8540e+06 | 2.0666e+06 | 1.9268e+06 | 2.9513e+06 | 3.1309e+05
11 | ESSA [55] | 1.9087e+06 | 1.9350e+06 | — | 1.9541e+06 | 1.3123e+04
12 | GA-MPC [56] | 1.9203e+06 | 1.9533e+06 | 1.9567e+06 | 1.9707e+06 | 1.4084e+04
Table 14. Results of related algorithms for the three-bar truss design problem.

Algorithm | x1 | x2 | f(x1, x2)
FSGWO | 0.7886751 | 0.4082485 | 263.8958
m-GWO [58] | 0.7885845 | 0.4085071 | 263.8961
m-SCA [59] | 0.81915 | 0.36956 | 263.8972
MFO [60] | 0.78824477 | 0.40946691 | 263.8960
CS [57] | 0.78867 | 0.40902 | 263.9716
Table 15. Results of related algorithms for the pressure vessel design problem.

Algorithm | x1 | x2 | x3 | x4 | f(x1, x2, x3, x4)
FSGWO | 0.7782 | 0.3846 | 40.3196 | 200.0000 | 5885.3328
BGWO [61] | 0.7783 | 0.3847 | 40.3197 | 200.0000 | 5886.4955
I-GWO [62] | 0.779031 | 0.3855014 | 40.36313 | 199.4017 | 5888.3400
BFGSOLMFO [63] | 0.778675 | 0.385392 | 40.342876 | 199.754805 | 5889.7080
SMA [64] | 0.7931 | 0.3932 | 40.6711 | 196.2178 | 5994.1857
Table 16. Results of related algorithms for the gear train design problem.

Algorithm | x1 | x2 | x3 | x4 | f(x1, x2, x3, x4)
FSGWO | 19 | 43 | 16 | 49 | 2.7009e-12
m-SCA [59] | 43 | 16 | 19 | 49 | 2.7009e-12
CS [57] | 43 | 16 | 19 | 49 | 2.7009e-12
LPE [65] | 19 | 49 | 16 | 43 | 2.7009e-12
GWOSCA [66] | 26 | 51 | 15 | 53 | 2.3078e-11
Table 17. Results of related algorithms for the cantilever beam design problem.

Algorithm | x1 | x2 | x3 | x4 | x5 | f(x1, x2, x3, x4, x5)
FSGWO | 6.0160 | 5.3092 | 4.4943 | 3.5015 | 2.1527 | 1.33996
CS [57] | 6.0089 | 5.3049 | 4.5023 | 3.5077 | 2.1504 | 1.33999
BGWO [61] | 6.0130 | 5.3112 | 4.4953 | 3.5079 | 2.1461 | 1.33996
m-SCA [59] | 6.0089 | 5.3049 | 4.5023 | 3.5077 | 2.1504 | 1.33999
MFO [60] | 5.9849 | 5.3167 | 4.4973 | 3.5136 | 2.1616 | 1.33999
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
