Article

A Modified jSO Algorithm for Solving Constrained Engineering Problems

School of Software, Yunnan University, Kunming 650000, China
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(1), 63; https://doi.org/10.3390/sym13010063
Submission received: 14 December 2020 / Revised: 26 December 2020 / Accepted: 29 December 2020 / Published: 31 December 2020

Abstract

Proposing new strategies to improve the optimization performance of differential evolution (DE) is an important line of research. The jSO algorithm won the Congress on Evolutionary Computation (CEC) 2017 competition on numerical optimization and is the state-of-the-art algorithm in the SHADE (Success-History based Adaptive Differential Evolution) series. However, the jSO algorithm converges prematurely in search spaces of different dimensions, is prone to falling into local optima during evolution, and suffers from decreasing population diversity. In this paper, a modified jSO algorithm (MjSO) is proposed to address these problems; it is based on cosine similarity with parameter adaptation and a novel opposition-based learning restart mechanism that incorporates symmetry. Moreover, it is well known that parameter settings have a significant impact on the performance of an algorithm and that the search process can be divided into two symmetrical parts. Hence, a parameter control strategy based on a symmetric search process is introduced in MjSO. The effectiveness of these designs is supported by a population clustering analysis and a population diversity measure. To evaluate the performance of the proposed algorithm, three state-of-the-art DE variants (EBLSHADE, ELSHADE-SPACMA, and SALSHADE-cnEPSin) and two original algorithms (jSO and LSHADE) are compared with it on 30 CEC'17 benchmark functions and three classical engineering design problems. The experimental results and analysis reveal that the proposed algorithm outperforms its competitors in terms of convergence speed and solution quality. Promisingly, the proposed method can serve as an effective and efficient auxiliary tool for more complex optimization models and scenarios.

1. Introduction

In the real world, there are many engineering optimization problems that need to be solved. Design-cost problems in engineering involve practical constraints and one or more objective functions [1]. These problems often demand methods that find the optimum among a large number of available solutions without wasting effort searching sub-optimal regions. Because of their abundance and practical application value, such problems have become a research hotspot in recent years; they are characterized mainly by their large scale and high difficulty. When traditional optimization algorithms solve such large-scale problems, they suffer from inherent drawbacks, such as stagnation in local optima and the need for derivatives over the search space. Therefore, much attention has been paid to stochastic optimization methods [2] in recent decades [3,4]. Stochastic optimization algorithms avoid requiring derivatives of the mathematical model because they only adjust the inputs and monitor the outputs of a given system, treating the optimization task as a black box [5]. In addition, stochastic optimization algorithms solve optimization tasks randomly [6], which gives them an intrinsic advantage over conventional optimization algorithms in avoiding local optima. Based on the number of solutions generated at each iteration, stochastic optimization algorithms can be divided into population-based algorithms, which generate many random solutions and improve them throughout the optimization, and individual-based algorithms, which maintain only one candidate solution throughout the optimization.
Numerous stochastic optimization algorithms have been proposed based on evolutionary phenomena, swarm-based intelligence techniques, physical rules, and human-related concepts. Evolutionary algorithms include the genetic algorithm (GA) [7], particle swarm optimization (PSO) [8], differential evolution (DE) [9], and the evolutionary strategy (ES) [10]. Swarm-based intelligence algorithms usually simulate the social hierarchy, cooperation, and collective behavior of living beings and then abstract mathematical optimization models for the problems to be optimized.
The differential evolution (DE) algorithm was proposed by Storn and Price [9] in 1995, and it laid the foundation for a series of successful continuous optimization algorithms. The survey in [11] summarizes the extensive research on DE in recent years. DE-based algorithms [12,13,14,15,16,17,18,19,20,21,22,23,24,25,26] have earned very high rankings in the competitions held annually at the IEEE (Institute of Electrical and Electronics Engineers) Congress on Evolutionary Computation (CEC). Therefore, DE-based algorithms are generally considered effective and popular population-based evolutionary algorithms for single-objective continuous optimization problems [27,28].
In recent years, many modified algorithms based on DE have been derived [29]. In addition to combining DE with other algorithms, the main modifications of DE concern the exploitation of mutation strategies to control the movement of individuals and the adaptive adjustment of the DE control parameters during a run. SHADE is an adaptive DE that employs success-history-based parameter adaptation and is one of the most advanced DE algorithms [30]. LSHADE extends SHADE with linear population size reduction (LPSR), which continuously reduces the population size according to a linear function [31]. The jSO algorithm, which mainly introduces a new weighted version of the mutation strategy and some new parameter settings, is an improved variant of LSHADE [32]. ELSHADE-SPACMA is an improved method based on LSHADE-SPACMA: in LSHADE-SPACMA, the value of p, which controls the greediness of the mutation strategy, is constant, while in ELSHADE-SPACMA it is dynamic; its performance was further improved by integrating another directed mutation strategy into the hybridization framework [33,34]. The improved SALSHADE algorithm for global optimization linearly reduces the scaling factor and exponentially reduces the crossover rate, together with a final Levy-distribution step size, to adapt the improvement of the frequency component [35]. These classical algorithms are reviewed in Section 2.
Furthermore, one of the disadvantages of differential evolutionary algorithms is the loss of population diversity, which may cause the algorithm to converge prematurely to a local optimum. A good algorithm needs sufficiently high population diversity in the early stage of the evolutionary process while reducing the population diversity in the late stage to improve the convergence rate. To solve the problems of premature convergence and population stagnation, Ming Yang et al. [36] studied the adaptability of population diversity across dimensions and proposed a mechanism called automatically enhanced population diversity (AEPD) to enhance population diversity automatically. Cheng Hongtan et al. [37] focused on controlling the value of the search parameter P; in their proposal, after the population diversity is normalized, each individual selects its own search area according to the diversity conditions. Jing-zhong Wang et al. [38] proposed a mutation strategy for the differential evolutionary algorithm (DE) with a fuzzy inference system (FIS), which controls population diversity well by controlling the ratio of mutation with respect to the globally optimal individual. All previously mentioned algorithms share the same idea of balancing exploration and exploitation abilities and try to achieve it in various ways, but a general issue remains: they cannot maintain a long exploration phase or the ability to jump out of local optima.
Nevertheless, many improved DE-based algorithms have made great progress on engineering problems. Xiaoyu He et al. [39] proposed a novel DE variant with covariance matrix self-adaptation, which produces good solutions for three classic constrained engineering design problems (the pressure vessel design problem, the tension/compression spring design problem, and the welded beam design problem). Noor H. Awad et al. [40] introduced an effective surrogate model to assist the differential evolution algorithm in generating competitive solutions during the search process and solved the welded beam and pressure vessel design problems. Improved algorithms hybridized with DE have also performed well on engineering problems: K.R. Subhashini et al. [41] integrated the basic animal migration optimization (AMO) algorithm with DE and applied it to engineering problems, with results significantly better than those of the original AMO algorithm.
Therefore, this study proposes a modified jSO algorithm for solving engineering problems. The MjSO algorithm uses a novel cosine-similarity-based parameter adaptation mechanism in place of the original weighted-mean adaptation of the scaling factor and crossover rate, and it adopts a parameter control strategy based on a symmetric search process. Thus, it maintains high population diversity and a longer exploration phase. In addition, a novel opposition-based learning restart mechanism is applied to jump out of local optima. Finally, extensive experiments on 30 unconstrained benchmark functions [42] and 3 constrained engineering problems demonstrate the superiority of the MjSO algorithm. The experimental results show that the proposed MjSO algorithm obtains more convincing optima than the other optimization algorithms on the unconstrained benchmark functions and achieves lower-cost designs than the other algorithms on the constrained engineering problems.
The rest of this article is arranged as follows: in Section 2, several of the more famous DE variants are reviewed. The MjSO algorithm is described in detail in Section 3. Section 4 describes the experimental setup. Section 5 presents the analysis of the experimental results. The proposed MjSO algorithm is applied to engineering problems in Section 6. Finally, the conclusion is given in Section 7.

2. Related Work

In this section, several state-of-the-art DE variants are reviewed, including JADE [43], SHADE [30], LSHADE [31], iLSHADE [44], and jSO [32], because our proposed MjSO algorithm is a further extension of these algorithms. In addition, these algorithms illustrate a clear line of development for DE. By introducing these state-of-the-art DE variants, readers can better understand how the MjSO algorithm works.

2.1. JADE

JADE, proposed by Zhang and Sanderson, was the first algorithm to introduce the trial vector generation strategy with an optional external archive called "DE/current-to-pbest/1/bin". By integrating the external archive, a good balance is achieved between population diversity and convergence rate. In addition to the mutation strategy, the control parameters F and Cr in JADE follow a Cauchy distribution and a Gaussian distribution, respectively, and the population size (PS) remains constant throughout the whole evolution process. In terms of control parameter adaptation, JADE further extends jDE's insight that "better control parameters lead to better individuals, and these control parameters should therefore be retained in the next generation" by using these better control parameters to update their distributions.

2.2. SHADE

The SHADE algorithm was developed from JADE; therefore, the two share many mechanisms, and their main differences are the historical memories MF and MCR and their updating mechanisms. SHADE is very representative of the state-of-the-art DE variants, and how it works is described in detail below. The basic steps of SHADE are as follows:
Step 1. The initial population P consists of randomly generated solution vectors. Each solution vector x is generated according to the lower and upper bounds of the solution space:
$$P = \{ x_1, \ldots, x_i, \ldots, x_{NP} \}, \quad i = 1, \ldots, NP$$
$$x_{i,j} = \mathrm{rand}(x_{min}, x_{max}), \quad i = 1, \ldots, NP; \; j = 1, \ldots, D$$
where i indexes the individual, j indexes the dimension, and rand() is a uniform random number. D denotes the dimension of the problem, NP is the population size, and $X_{min} = (x_{min}^1, \ldots, x_{min}^D)$ and $X_{max} = (x_{max}^1, \ldots, x_{max}^D)$ are the lower and upper bounds of the search range, respectively.
Step 2. There are two additional components in SHADE, the historical memory and the external archive, which also need to be initialized. The control parameter scaling factor and crossover rate stored in the historical memory are initialized to 0.5.
$$M_{CR,i} = M_{F,i} = 0.5, \quad i = 1, \ldots, H$$
where H is the capacity of the user-defined historical memory. The index k used for historical memory updates is initialized to 1. In addition, the external archive A, which stores eliminated solutions, is initialized to empty, $A = \emptyset$.
Step 3. SHADE uses the "current-to-pbest/1" mutation strategy. The mutation formula is shown in (4):
$$v_{i,G} = x_{i,G} + F_i \cdot \left( x_{pbest,G} - x_{i,G} \right) + F_i \cdot \left( x_{r1,G} - x_{r2,G} \right)$$
$$p_i = \mathrm{rand}(p_{min}, 0.2)$$
$$p_{min} = \frac{2}{NP}$$
where $x_{i,G}$ is the current individual and $x_{pbest,G}$ is randomly selected from the top $NP \times p_i$ (where $p_i \in [0, 1]$) individuals of the current generation. The vector $x_{r1,G}$ is a randomly selected individual from the current population, $x_{r2,G}$ is a randomly selected individual from the union of the current population and the external archive A, with $r1 \ne r2 \ne i$; $F_i$ is the scaling factor, NP is the population size, rand() is a uniform random number, G is the current iteration, and $v_{i,G}$ is the generated mutant vector. In "current-to-pbest/1", the greediness of the mutation strategy depends on the control parameter $p_i$, calculated as shown in (5) and (6); it balances exploration and exploitation (a smaller $p_i$ is greedier). The scaling factor $F_i$ is generated by the following formula:
$$F_i = \mathrm{randc}_i \left( M_{F,r_i}, 0.1 \right)$$
where randc_i() denotes a Cauchy distribution, $M_{F,r_i}$ is randomly selected from the historical memory $M_F$, and $r_i$ is a uniformly distributed random integer in [1, H]. If the generated $F_i > 1$, then $F_i = 1$; if $F_i \le 0$, Equation (7) is repeated until a valid value is generated.
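To make Step 3 concrete, the following is a minimal Python sketch of the current-to-pbest/1 mutation under the definitions above; it assumes minimization and NumPy arrays, and all function and variable names are illustrative assumptions, not code from the SHADE authors.

```python
import numpy as np

rng = np.random.default_rng()

def current_to_pbest_1(pop, fitness, archive, i, F_i, p_i):
    """Sketch of the current-to-pbest/1 mutation of Eqs. (4)-(6).
    pop: (NP, D) array; fitness: (NP,) array; archive: list of D-vectors."""
    NP = pop.shape[0]
    # x_pbest: drawn at random from the best NP * p_i individuals
    n_best = max(2, int(round(NP * p_i)))
    pbest = pop[rng.choice(np.argsort(fitness)[:n_best])]
    # r1 from the population, r2 from population ∪ archive, all distinct
    r1 = rng.choice([k for k in range(NP) if k != i])
    union = np.vstack([pop] + [np.atleast_2d(a) for a in archive]) if archive else pop
    r2 = rng.choice([k for k in range(len(union)) if k not in (i, r1)])
    return pop[i] + F_i * (pbest - pop[i]) + F_i * (pop[r1] - union[r2])
```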
If a component $v_{j,i,G}$ of the mutant vector falls outside the search range $[x_{min}, x_{max}]$, the following correction is performed, as shown in Formula (8):
$$v_{j,i,G} = \begin{cases} \dfrac{x_{min} + x_{j,i,G}}{2} & \text{if } v_{j,i,G} < x_{min} \\[4pt] \dfrac{x_{max} + x_{j,i,G}}{2} & \text{if } v_{j,i,G} > x_{max} \end{cases}$$
where i represents the individual and j represents the dimension.
Step 4. SHADE conducts the crossover operation using binomial crossover to generate the trial vector u. The crossover formula is as follows:
$$u_{j,i,G} = \begin{cases} v_{j,i,G} & \text{if } \mathrm{rand}(0,1) \le CR_i \text{ or } j = j_{rand} \\ x_{j,i,G} & \text{otherwise} \end{cases}$$
where i indexes the individual, j indexes the dimension, rand() is a uniform random number, and $j_{rand}$ is a decision variable index drawn uniformly at random from [1, D]. When rand(0, 1) is less than or equal to the crossover rate $CR_i$ or $j = j_{rand}$, the corresponding dimension of the trial vector u inherits from the mutant vector v; otherwise, it inherits from the original vector x. The purpose of $j = j_{rand}$ is to provide a protection mechanism guaranteeing that at least one dimension of the trial vector is inherited from the mutant vector.
The crossover rate CRi is generated using the following formula:
$$CR_i = \mathrm{randn}_i \left( M_{CR,r_i}, 0.1 \right)$$
where randn_i() denotes a Gaussian distribution, $M_{CR,r_i}$ is randomly selected from the historical memory $M_{CR}$, and $r_i$ is a uniformly distributed random integer in [1, H]. If the generated $CR_i > 1$, then $CR_i = 1$; if $CR_i < 0$, then $CR_i = 0$.
Step 5. SHADE follows a selection step to ensure that the optimization moves toward better solutions: only individuals whose objective function values are better than, or at least equal to, those of their parents survive into the next generation. The selection formula is as follows:
$$x_{i,G+1} = \begin{cases} u_{i,G} & \text{if } f(u_{i,G}) \le f(x_{i,G}) \\ x_{i,G} & \text{otherwise} \end{cases}$$
where f () represents the fitness function, G represents the current generation, and G + 1 represents the next generation.
SHADE also updates the external archive during the selection process. If a trial individual better than the original individual is generated, the original individual $x_{i,G}$ is stored in the external archive of eliminated solutions A. If the archive exceeds its capacity, a randomly chosen element is deleted to make room for subsequent eliminated solutions.
Step 6. Update of the historical memory. The historical memories $M_F$ and $M_{CR}$ are initialized according to Formula (3), but their contents change as the algorithm iterates. These memories store "successful" scaling factors F and crossover rates CR, where "successful" means that the trial vector u, rather than the original vector x, was selected during selection to become part of the next generation. In each iteration, these successful F and CR values are first stored in the arrays $S_F$ and $S_{CR}$, respectively. After each iteration, one cell of the historical memories $M_F$ and $M_{CR}$ is updated. The updated cell is indexed by k, which is initialized to 1, incremented by 1 after each iteration, and reset to 1 when it exceeds the memory capacity H. The k-th cell of the historical memory is updated using the following formulas:
$$M_{CR,k,G+1} = \begin{cases} \mathrm{mean}_{WA}(S_{CR}) & \text{if } S_{CR} \ne \emptyset \\ M_{CR,k,G} & \text{otherwise} \end{cases}$$
$$M_{F,k,G+1} = \begin{cases} \mathrm{mean}_{WL}(S_F) & \text{if } S_F \ne \emptyset \\ M_{F,k,G} & \text{otherwise} \end{cases}$$
When all the individuals in iteration G fail to produce a trial vector better than the original one, i.e., when $S_F = S_{CR} = \emptyset$, the historical memory is not updated. The weighted arithmetic mean $\mathrm{mean}_{WA}$ and the weighted Lehmer mean $\mathrm{mean}_{WL}$ are calculated by the following formulas:
$$\mathrm{mean}_{WA}(S_{CR}) = \sum_{k=1}^{|S_{CR}|} w_k \cdot S_{CR,k}$$
$$\mathrm{mean}_{WL}(S_F) = \frac{\sum_{k=1}^{|S_F|} w_k \cdot S_{F,k}^2}{\sum_{k=1}^{|S_F|} w_k \cdot S_{F,k}}$$
In order to improve the adaptability of parameters, the weight vector w is calculated based on the absolute value of the difference of the objective function value between the trial individual and the original individual in the current generation G, as shown below:
$$w_k = \frac{\Delta f_k}{\sum_{k=1}^{|S_{CR}|} \Delta f_k}, \qquad \Delta f_k = \left| f(u_{k,G}) - f(x_{k,G}) \right|$$
Step 7. Repeat Steps 2 to 6 until a stopping criterion is met.
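As an illustration of Step 6, the following sketch refreshes one memory cell using the weighted mean of Equation (15), the weighted Lehmer mean of Equation (16), and the weights of Equation (17); `S_F`, `S_CR`, and `delta_f` are assumed to be the per-generation success records defined above, and all names are illustrative.

```python
import numpy as np

def update_memory(M_F, M_CR, S_F, S_CR, delta_f, k):
    """Sketch of SHADE's success-history memory update for cell k."""
    if len(S_CR) == 0:                  # S_F = S_CR = ∅: memory unchanged
        return k
    w = np.asarray(delta_f, float)
    w /= w.sum()                        # weights w_k, Eq. (17)
    M_CR[k] = np.dot(w, S_CR)           # weighted arithmetic mean, Eq. (15)
    S_F = np.asarray(S_F, float)
    M_F[k] = np.dot(w, S_F ** 2) / np.dot(w, S_F)  # weighted Lehmer mean, Eq. (16)
    return (k + 1) % len(M_F)           # advance the cell index, wrapping at H
```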

2.3. LSHADE

LSHADE, proposed by Tanabe and Fukunaga, is an improved version of the SHADE algorithm that introduces linear population size reduction (LPSR). In LSHADE, the new population size is calculated after each iteration, as shown in Formula (18). When the new population size $NP_{new}$ is smaller than the current population size NP, the population is sorted according to the objective function values, and the worst $NP - NP_{new}$ individuals are discarded. The size of the external archive A also decreases as the population size decreases.
$$NP_{new} = \mathrm{round}\left( NP_{init} - \frac{FES}{MAX_{FES}} \left( NP_{init} - NP_f \right) \right)$$
where $NP_{init}$ is the initial population size and $NP_f$ is the final population size. FES is the current number of fitness function evaluations, $MAX_{FES}$ is the maximum number of fitness function evaluations, and round() is the rounding function.
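A minimal sketch of LPSR follows; it assumes minimization and illustrative array names, combining the size formula above with the worst-individuals trimming just described.

```python
import numpy as np

def apply_lpsr(pop, fitness, NP_init, NP_f, FES, MAX_FES):
    """Sketch of linear population size reduction (Formula (18))."""
    NP_new = round(NP_init - (FES / MAX_FES) * (NP_init - NP_f))
    if NP_new < len(pop):
        keep = np.argsort(fitness)[:NP_new]   # the best NP_new individuals survive
        pop, fitness = pop[keep], fitness[keep]
    return pop, fitness
```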

2.4. iLSHADE

The iLSHADE algorithm is an improvement on the LSHADE algorithm, and it includes the following modifications:
  • iLSHADE uses a larger memory value MF = 0.8 in the initialization phase of the evolution and a smaller initial population size $NP^{init} = 12 \cdot D$.
  • In the iLSHADE algorithm, the last entry in the H-entry pool records constant control parameter pairs, which are MF = 0.9 and MCR = 0.9, respectively. These two parameters remain unchanged throughout evolution.
  • At different stages of the evolution, the F and CR values of each individual are bounded by stage-dependent limits, as shown in Equations (18) and (19).
  • The value of the greediness control parameter p of the mutation strategy in iLSHADE increases linearly as the number of fitness function evaluations increases (see Equation (20)).
$$F_i = \begin{cases} \min(F_i,\, 0.7) & \text{if } FES < 0.25\,MAX_{FES} \\ \min(F_i,\, 0.8) & \text{if } FES < 0.5\,MAX_{FES} \\ \min(F_i,\, 0.9) & \text{if } FES < 0.75\,MAX_{FES} \end{cases}$$
$$Cr_i = \begin{cases} \max(Cr_i,\, 0.5) & \text{if } FES < 0.25\,MAX_{FES} \\ \max(Cr_i,\, 0.25) & \text{if } FES < 0.5\,MAX_{FES} \end{cases}$$
$$p = p_{min} + \frac{FES}{MAX_{FES}} \left( p_{max} - p_{min} \right)$$

2.5. jSO

The jSO algorithm is an improved version of the iLSHADE algorithm, and it won the CEC 2017 single-objective real-parameter optimization competition [42]. In the jSO algorithm, $p_{max} = 0.25$, $p_{min} = p_{max}/2$, the initial population size is $NP^{init} = 25 \cdot \sqrt{D} \cdot \log D$, and the historical memory capacity is H = 5. In addition, all initial values in the historical memories $M_F$ and $M_{CR}$ are set to 0.3 and 0.8, respectively, and the weighted mutation strategy current-to-pBest-w/1 is used:
$$v_{i,G} = x_{i,G} + F_w \cdot \left( x_{pbest,G} - x_{i,G} \right) + F_i \cdot \left( x_{r1,G} - x_{r2,G} \right)$$
Fw is calculated by the following formula:
$$F_w = \begin{cases} 0.7\,F_i & \text{if } FES < 0.2\,MAX_{FES} \\ 0.8\,F_i & \text{if } FES < 0.4\,MAX_{FES} \\ 1.2\,F_i & \text{otherwise} \end{cases}$$
The distributions of the control parameters F and CR are the same as in iLSHADE. Unlike iLSHADE, jSO's control parameters F and CR are further adjusted, as shown in Equations (23) and (24).
$$F_i = \begin{cases} 0.7 & \text{if } F_i > 0.7 \text{ and } FES < 0.6\,MAX_{FES} \\ F_i & \text{otherwise} \end{cases}$$
$$Cr_i = \begin{cases} 0.7 & \text{if } Cr_i < 0.7 \text{ and } FES < 0.25\,MAX_{FES} \\ 0.6 & \text{if } Cr_i < 0.6 \text{ and } FES \in [0.25,\, 0.5)\,MAX_{FES} \\ Cr_i & \text{otherwise} \end{cases}$$
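The stage-dependent handling of the Fw formula above and Equations (23) and (24) can be sketched in a single helper; this is an illustrative rendering, not reference code from the jSO authors.

```python
def jso_parameter_schedule(F_i, Cr_i, FES, MAX_FES):
    """Sketch of jSO's budget-dependent weight Fw and parameter adjustments."""
    # Weight used in current-to-pBest-w/1
    if FES < 0.2 * MAX_FES:
        Fw = 0.7 * F_i
    elif FES < 0.4 * MAX_FES:
        Fw = 0.8 * F_i
    else:
        Fw = 1.2 * F_i
    # Eq. (23): cap F at 0.7 during the first 60% of the budget
    if FES < 0.6 * MAX_FES and F_i > 0.7:
        F_i = 0.7
    # Eq. (24): raise low Cr values in the early stages
    if Cr_i < 0.7 and FES < 0.25 * MAX_FES:
        Cr_i = 0.7
    elif Cr_i < 0.6 and FES < 0.5 * MAX_FES:
        Cr_i = 0.6
    return Fw, F_i, Cr_i
```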

3. MjSO

In this section, a modified algorithm called MjSO is proposed, in which three modifications are made: (1) a parameter control strategy based on a symmetric search process, (2) a novel parameter adaptive mechanism based on cosine similarity, and (3) a novel opposition-based learning restart mechanism. The MjSO pseudocode is shown in Algorithm 2.

3.1. A Parameter Control Strategy Based on a Symmetric Search Process

It is well known that the search process of an effective evolutionary algorithm can be divided into two symmetrical parts: exploration and exploitation. During exploration, individuals search an unknown area while avoiding falling into local optima. During exploitation, on the other hand, individuals should search around the current optimal solution to find a better one. Each problem has its own appropriate parameter values for balancing exploration and exploitation [45]. Therefore, a symmetric parameter control strategy is added to the jSO algorithm, and the parameter F is randomly generated using a uniform distribution within a specific range.
The optimization process of the algorithm is divided empirically into two symmetrical stages. When $FES < MAX_{FES}/2$, the scaling factor F is generated using Equation (25). When $FES \ge MAX_{FES}/2$, the scaling factor F is generated using the Cauchy distribution of Equation (7), where FES is the current number of fitness function evaluations, $MAX_{FES}$ is the maximum number of fitness function evaluations, and rand() is a random number in [0, 1].
$$F_i = 0.45 + 0.1 \cdot \mathrm{rand}()$$
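A minimal sketch of this two-stage generation is given below, assuming the usual Cauchy truncate-and-retry rule of Equation (7) for the second stage; the names are illustrative.

```python
import numpy as np

rng = np.random.default_rng()

def sample_F(M_F_ri, FES, MAX_FES):
    """Sketch of MjSO's symmetric two-stage scaling-factor generation."""
    if FES < MAX_FES / 2:
        return 0.45 + 0.1 * rng.random()           # exploration stage, Eq. (25)
    while True:                                    # exploitation stage, Eq. (7)
        F = M_F_ri + 0.1 * rng.standard_cauchy()   # randc(M_F_ri, 0.1)
        if F > 0:
            return min(F, 1.0)                     # truncate at 1, retry if <= 0
```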

3.2. A Novel Parameter Adaptive Mechanism Based on Cosine Similarity

The original adaptive mechanism for the scaling factor and crossover rate uses the weighted means of Equations (15) and (16), where the weights are based on the improvement of the objective function value, as in Equation (17). This approach promotes exploitation rather than exploration; it can therefore lead to premature convergence, which is a problem especially in higher dimensions.
The idea behind the proposed approach is simple. Whether one wants to improve the search ability of the algorithm or to maintain it over a longer period of time, individuals must explore the search space more intensively and maintain high population diversity. To achieve this through any kind of adaptation, it is beneficial to find parameter settings that enable an individual to achieve the desired behavior.
In the original weight calculation of Equation (17), the difference between objective function values can reach very large values, especially in high-dimensional search spaces [46,47]; replacing the fitness difference with the cosine similarity avoids this. Moreover, the cosine similarity method of Equation (26) only considers the direction of an individual within the search space, which greatly reduces the computational complexity.
In this modification, the scaling factor and crossover rate values associated with individuals having the greatest directional differences receive the highest weights. This works because, in n-dimensional space, any vector can be viewed as a directed line segment pointing from the origin $(0, 0, \ldots, 0)$, and there is a definite included angle, with a corresponding cosine, between any two such segments. Cosine similarity comparison is a mathematical method that measures the difference between vectors using the cosine of the angle between them; it captures the difference in direction rather than in position, which matters especially when solving higher-dimensional problems. As shown in Figure 1, if the position of point A is kept unchanged and point B moves away from the origin along its original direction, the cosine similarity remains unchanged (because the included angle does not change) while the distance between points A and B changes.
$$w_k = \frac{\cos(u_{k,G},\, x_{k,G})}{\sum_{m=1}^{|S_{CR}|} \cos(u_{m,G},\, x_{m,G})}, \qquad \cos\theta = \frac{\sum_{j=1}^{D} u_{k,j,G}\, x_{k,j,G}}{\sqrt{\sum_{j=1}^{D} u_{k,j,G}^2}\, \sqrt{\sum_{j=1}^{D} x_{k,j,G}^2}}$$
As a result, the ability to explore is rewarded by avoiding premature convergence in higher-dimensional target spaces.
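Following the reconstruction of Equation (26) above, a minimal sketch of the weight computation is given below; U and X are assumed to stack the successful trial vectors and their parent vectors row by row, and the names are illustrative.

```python
import numpy as np

def cosine_weights(U, X):
    """Sketch of the cosine-similarity weights of Eq. (26).
    Each vector is treated as a directed segment from the origin (Figure 1)."""
    num = np.sum(U * X, axis=1)                               # per-row dot products
    den = np.linalg.norm(U, axis=1) * np.linalg.norm(X, axis=1)
    cos = num / den                                           # cos(u_k, x_k)
    return cos / cos.sum()                                    # normalize to weights
```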

3.3. A Novel Opposition-Based Learning Restart Mechanism

During evolution, individuals in the population may lack diversity and therefore fail to improve the results of the algorithm. Therefore, when certain conditions are met, the novel opposition-based learning (OBL) restart mechanism is executed. The pseudocode is shown in Algorithm 1.
Algorithm 1: A novel OBL restart mechanism
1: if λ < ξ then
2:  for i = 1 : NP do
3:   Generate the opposite vector OP_i using Equation (27)
4:   Calculate the fitness value of OP_i
5:   FES++
6:   Replace P_i with the fitter of P_i and OP_i
7:  end for
8: end if
In general, an evolutionary algorithm first obtains feasible solutions and then searches for the optimal solution in the solution space according to the fitness of these feasible solutions. Various evolutionary algorithms search in different ways, but they all essentially involve a random factor. In recent years, opposition-based learning has been widely applied in evolutionary algorithms [48,49]. The main principle of opposition-based learning (OBL) is to replace purely random search with symmetric search, which can significantly improve the search ability of the algorithm [50,51,52,53].
The inventors of OBL argue that, given a random candidate, its opposite is more likely to be near the solution than another random point in the search space. Therefore, by comparing a candidate with its opposite, the convergence time for finding the best solution can be reduced. For example, if the current solution X is −1 and the global optimum is 2, then the solution X* symmetric to X about the origin is 1; X is at distance 3 from the global optimum, whereas X* is at distance only 1. Thus, X* is closer to the global optimum, as shown in Figure 2.
Although opposition-based learning improves the search capability of the algorithm to a great extent, the standard opposite point is too rigid, and its fine-tuning effect in small subspaces is poor. Therefore, in this paper, an innovative variant of the original OBL is proposed that adds a random weight factor σ. For each individual $p_i$, a different individual $p_{i1}$ is randomly selected from the population, where $i \ne i1$ and $i, i1 \in \{1, 2, \ldots, ps\}$. The two points are then recombined to obtain a new point, and the opposite operation is performed on this merged point. In this way, the resulting position is not a fixed symmetric position but a random position between the symmetric position and the central position, as shown in Equation (27).
$$OP_i^j = a_j(t) + b_j(t) - \left( \sigma P_i^j + (1 - \sigma) P_{i1}^j \right)$$
Here, $OP_i^j$ is the opposite of $P_i^j$, and $P_i^j$ is the j-th component of the i-th individual in the population. $a_j(t)$ and $b_j(t)$ are the dynamic interval boundaries $[a_j(t), b_j(t)]$, i.e., the minimum and maximum values of the j-th dimension in the current search space, respectively. $t = 1, 2, \ldots$ denotes the generation, and σ is a random number in the range $[0, 1]$.
To judge search stagnation, the variation coefficient λ is introduced to determine whether the population has low diversity or the algorithm has converged to a local optimum. The variation coefficient λ of the error values of all individuals is defined as follows:
$$\lambda = \frac{f_{std}}{f_{mean}}$$
Here, $f_{std}$ denotes the standard deviation of the error values of all individuals and $f_{mean}$ denotes their average. The restart policy is enacted when the condition in Equation (29) is met:
$$\lambda < \xi$$
where ξ is the predetermined parameter used to control the restart.
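The restart mechanism of Algorithm 1 and Equations (27)-(29) might be sketched as follows; it assumes minimization, nonnegative error values with a positive mean, and that the "current search space" boundaries are the per-dimension extremes of the current population, all of which are interpretive assumptions.

```python
import numpy as np

rng = np.random.default_rng()

def obl_restart(pop, fitness, fobj, xi):
    """Sketch of the opposition-based restart; fobj is the objective,
    xi the predetermined restart threshold."""
    lam = fitness.std() / fitness.mean()       # variation coefficient, Eq. (28)
    if lam >= xi:                              # condition of Eq. (29) not met
        return pop, fitness
    a = pop.min(axis=0)                        # dynamic interval boundaries a_j(t)
    b = pop.max(axis=0)                        # and b_j(t)
    for i in range(len(pop)):
        i1 = rng.choice([k for k in range(len(pop)) if k != i])
        sigma = rng.random()                   # random weight factor
        op = a + b - (sigma * pop[i] + (1 - sigma) * pop[i1])   # Eq. (27)
        f_op = fobj(op)
        if f_op < fitness[i]:                  # keep the fitter of the pair
            pop[i], fitness[i] = op, f_op
    return pop, fitness
```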
Algorithm 2: MjSO
1: g ← 1; archive A ← ∅; FES ← 0
2: Initialize population P_g = (x_1,g, ..., x_NP,g) randomly
3: Set all values in M_F to 0.5
4: Set all values in M_CR to 0.5
5: k ← 1 //index counter
6: while the termination criteria are not met do
7:  S_CR ← ∅, S_F ← ∅
8:  for i = 1 to NP do
9:   r_i ← select from [1, H] randomly
10:   if r_i = H then
11:    M_F,ri ← 0.9
12:    M_CR,ri ← 0.9
13:   end if
14:   if M_CR,ri < 0 then
15:    CR_i,g ← 0
16:   else
17:    CR_i,g ← N_i(M_CR,ri, 0.1)
18:   end if
19:   if FES < 0.25 MAX_FES then
20:    CR_i,g ← max(CR_i,g, 0.7)
21:   else if FES < 0.5 MAX_FES then
22:    CR_i,g ← max(CR_i,g, 0.6)
23:   end if
24:   if FES < MAX_FES/2 then
25:    F_i,g ← 0.45 + 0.1·rand()
26:   else
27:    F_i,g ← C_i(M_F,ri, 0.1)
28:    if FES < 0.6 MAX_FES and F_i,g > 0.7 then
29:     F_i,g ← 0.7
30:    end if
31:   end if
32:   u_i,g ← current-to-pBest-w/1/bin using Equation (21)
33:  end for
34:  for i = 1 to NP do
35:   if f(u_i,g) ≤ f(x_i,g) then
36:    x_i,g+1 ← u_i,g
37:   else
38:    x_i,g+1 ← x_i,g
39:   end if
40:   if f(u_i,g) < f(x_i,g) then
41:    x_i,g → A, CR_i,g → S_CR, F_i,g → S_F
42:   end if
43:  end for
44:  Shrink A, if necessary
45:  Update M_CR and M_F
46:  Apply LPSR strategy //linear population size reduction
47:  Apply Algorithm 1
48:  Update p using Equation (20)
49:  g ← g + 1
50: end while

4. The Experimental Setup

Usually, owing to the lack of theoretical proofs, it is difficult to evaluate how "good" an optimization algorithm is; benchmark functions therefore play an important role in the performance evaluation of these algorithms. The performance of the MjSO algorithm proposed in this paper is evaluated using the CEC 2017 real-parameter single-objective optimization competition benchmark. The benchmark includes 30 test functions with various characteristics. D is the dimension of the problem, and each problem is tested with $D = 10$, $D = 30$, $D = 50$, and $D = 100$. Functions 1-3 are unimodal functions, functions 4-10 are multimodal functions, functions 11-20 are hybrid functions, and functions 21-30 are composition functions. These unconstrained benchmark functions are listed in Table 1. The iteration of an algorithm terminates when the maximum number of fitness function evaluations ($MAX_{FES} = D \times 10{,}000$) is reached or when the error value falls below $10^{-8}$ (in which case the optimum is considered found). The search range is $[x_{min}, x_{max}] = [-100, 100]$, and 51 independent repeated runs are conducted for each function. To statistically compare the solution quality of the different algorithms, two nonparametric statistical hypothesis tests are used: (1) the Friedman test to rank all compared algorithms [54]; (2) the Wilcoxon signed-rank test with significance level α = 0.05.
Each algorithm has its own parameter-tuning method to control the influence of its parameters. LSHADE uses ParamILS [55], a versatile and efficient automatic parameter tuner. The parameters of jSO are kept the same as in LSHADE, except for p, $NP^{init}$, H, and $M_F$. The authors of EBLSHADE consider it difficult to determine optimal control parameter values for a variety of problems with different characteristics at different stages of evolution; therefore, to achieve good performance, EBLSHADE uses the parameter adaptation method of LSHADE. SALSHADE-cnEPSin introduces adaptive parameters via a Weibull-distribution-based scaling factor and an exponentially decreasing crossover rate. ELSHADE-SPACMA performs semi-parameter adaptation (SPA) for F and Cr, proposing a semi-parameter adaptation strategy to address the parameter setting problem. The parameter settings of each algorithm are consistent with the values recommended in the original papers [31,32,34,56,57], as listed in Table 2.

4.1. Experimental Environment

All these experiments were conducted on a PC with an Intel(R) Core (TM) i7-9700 CPU @ 3.0 GHz, 32 GB of RAM and the Windows 10 Professional Edition operating system; and all these algorithms were implemented using Matlab 2017b.

4.2. Clustering Analysis

The clustering algorithm selected in this paper is density-based spatial clustering of applications with noise (DBSCAN) [58], a density-based clustering algorithm that can find clusters of arbitrary shape. The DBSCAN algorithm requires two control parameters and a distance measure, set as follows (a configuration sketch is given after the list):
  • The core point distance Eps equals 1% of the decision space. For the CEC 2017 benchmark set, Eps = 2.
  • The minimum number of points MinPts = 4 (the minimum number of individuals forming a cluster).
  • The distance measure is the Chebyshev distance [59]. If the distance between any corresponding attributes of two individuals is greater than 1% of the decision space, they are not considered directly density-reachable.
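A minimal sketch of this configuration, assuming scikit-learn's DBSCAN implementation, is shown below; `pop` is a hypothetical (NP, D) array of individuals, and Eps = 2 corresponds to 1% of the [−100, 100] decision space.

```python
import numpy as np
from sklearn.cluster import DBSCAN

pop = np.random.uniform(-100, 100, size=(50, 10))   # illustrative population
labels = DBSCAN(eps=2, min_samples=4, metric="chebyshev").fit_predict(pop)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)   # -1 marks noise
```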

4.3. Population Diversity

The population diversity (PD) measure used in this paper is taken from reference [60]; it is calculated from the square root of the sum, over dimensions, of each individual's squared deviation from the corresponding mean, as shown in Equations (30) and (31):
$$\bar{x}_j = \frac{1}{NP} \sum_{i=1}^{NP} x_{j,i}$$
$$PD = \frac{1}{NP} \sum_{i=1}^{NP} \sqrt{ \sum_{j=1}^{D} \left( x_{j,i} - \bar{x}_j \right)^2 }$$
where i indexes the individuals of the population and j indexes the dimensions.
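Equations (30) and (31) translate directly into the following sketch: the mean, over individuals, of each individual's Euclidean distance from the population centroid.

```python
import numpy as np

def population_diversity(pop):
    """Sketch of the PD measure of Eqs. (30)-(31) for a (NP, D) population."""
    centroid = pop.mean(axis=0)                                    # Eq. (30)
    return np.mean(np.sqrt(((pop - centroid) ** 2).sum(axis=1)))   # Eq. (31)
```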

5. Experimental Results and Analysis

This section presents the experimental results of this paper. Tables 3-7 show the comparison between MjSO, the latest LSHADE variant algorithms (EBLSHADE, ELSHADE-SPACMA, and SALSHADE-cnEPSin), and their original versions (jSO and LSHADE) for $D = 10$, $D = 30$, $D = 50$, and $D = 100$. All experimental results are reported as error values. Three symbols (>, <, and ≈) indicate that MjSO performed significantly better (>), significantly worse (<), or not significantly different (≈) compared with the other algorithms according to the Wilcoxon rank-sum test (α = 0.05). For each benchmark function from F1 to F30, the best value among the six algorithms is shown in bold (with ties also bolded).
As shown in Table 3, the six DE variants compared performed equally well on benchmarks f1, f2, f3, f4, f6, and f9, and the global optimal values for these benchmarks were always found within 51 independent runs. In addition, compared with EBLSHADE, the proposed MjSO algorithm achieved better performance for 14 and similar performance for 13 out of 30 benchmarks. Compared with SALSHADE-cnEPSin, our proposed MjSO algorithm achieved better performance 8 times and similar performance 20 times. Compared to jSO, our algorithm achieved better performance 13 times and similar performance 16 times. Compared to LSHADE, our algorithm achieved better performance 16 times and similar performance 13 times. Compared to ELSHADE-SPACMA, our algorithm achieved better performance 13 times and similar performance 17 times. In summary, the proposed MjSO algorithm achieved the best performance for D = 10 optimization.
It can be seen from Table 4 that the six DE variants compared performed equally well on the benchmarks f1, f2, f3, and f9 and that the global optimal values for these benchmarks were always found within 51 separate runs. The global optimal value of f6 was found by SALSHADE-cnEPSin, ELSHADE-SPACMA, and MjSO in every run. In addition, compared with EBLSHADE, the proposed MjSO algorithm achieved better performance in 16 and similar performance in 14 out of 30 benchmarks. Compared with SALSHADE-cnEPSin, our algorithm achieved better performance 19 times and similar performance 10 times. Compared to jSO, our algorithm achieved better performance 20 times and similar performance 10 times. Compared to LSHADE, our algorithm achieved better performance 17 times and similar performance 13 times. Compared to ELSHADE-SPACMA, our algorithm achieved better performance 18 times and similar performance 12 times. In summary, the MjSO algorithm achieved the best performance for D = 30 optimization.
In Table 5, the six DE variants compared performed equally well on the benchmarks f1, f2, f3, and f9 and the global optimal values for these benchmarks were always found within 51 separate runs. In addition, compared with EBLSHADE, the proposed MjSO algorithm achieved better performance in 23 and similar performance in 7 out of 30 benchmarks. Compared with SALSHADE-cnEPSin, our algorithm achieved better performance 26 times and similar performance 4 times. Compared with jSO, our algorithm achieved better performance 20 times and similar performance 10 times. Compared with LSHADE, our algorithm achieved better performance 20 times and similar performance 10 times. Compared with ELSHADE-SPACMA, our algorithm achieved better performance 7 times and similar performance 12 times. In summary, the MjSO algorithm achieved the best performance on D = 50 optimization.
Table 6 shows that the EBLSHADE, jSO, LSHADE, ELSHADE-SPACMA, and MjSO algorithms found the global optimal value of f1 in every run, and the global optimal value of f9 was found by SALSHADE-cnEPSin, ELSHADE-SPACMA, and MjSO every time the algorithms were run. In addition, compared with EBLSHADE, the proposed MjSO algorithm achieved better performance in 24 and similar performance in 2 out of 30 benchmarks. Compared with SALSHADE-cnEPSin, our algorithm achieved better performance 25 times and similar performance 1 time. Compared to jSO, our algorithm achieved better performance 24 times and similar performance 2 times. Compared to LSHADE, our algorithm achieved better performance 25 times and similar performance 2 times. Compared to ELSHADE-SPACMA, our algorithm achieved better performance 23 times and similar performance 3 times. In summary, the proposed MjSO algorithm achieved the best performance for D = 100 optimization. The modified MjSO algorithm thus achieved better results on high-dimensional problems than the other five algorithms.
In addition, the effectiveness of the proposed MjSO algorithm with respect to convergence speed was also verified. The convergence diagrams are shown in Figures 3-10: Figures 3 and 4 show the convergence curves of selected test functions for D = 10 for MjSO and the five comparison algorithms; Figures 5 and 6 show the corresponding curves for D = 30; Figures 7 and 8 for D = 50; and Figures 9 and 10 for D = 100. In most cases, MjSO converges similarly to the original algorithm in the early stage of the optimization process, but in the middle stage it maintains a longer search state and achieves better objective function values in the middle and late stages of the optimization. The convergence of MjSO (the red curve) is usually slower, but it reaches better objective function values.
Tables A1-A4 show, for the proposed MjSO algorithm and its original version jSO, the number of independent runs (#runs) in which clustering occurred, the mean iteration (mean CO) of the first clustering event during those runs, and the mean population diversity (mean PD) at the corresponding iterations.
It can be seen from the tables that, in most cases, the proposed MjSO algorithm exhibited fewer clustering occurrences (#runs), later clustering (mean CO), and greater population diversity (mean PD). In summary, in most cases, MjSO maintained the diversity of the population and prolonged the search stage of the optimization process, which confirms the effectiveness of the modifications we introduced.
According to the Friedman rankings in Tables A5-A8, the MjSO algorithm performed best in all dimensions and ranked first. Table A9 shows that the null hypothesis was rejected in all dimensions, which supports the above Friedman rankings. Thus, for the $D = 10$, $D = 30$, $D = 50$, and $D = 100$ problems, the proposed MjSO is superior to the original jSO and LSHADE algorithms and has clear advantages over more advanced algorithms such as EBLSHADE, SALSHADE-cnEPSin, and ELSHADE-SPACMA.

6. MjSO for Engineering Problems

In this section, three constrained engineering problems are employed to demonstrate the performance of MjSO, namely, the pressure vessel design problem, the tension/compression spring design problem, and the welded beam design problem [61,62]. These engineering problems are inspired by real-world cases and involve real constraint conditions and one objective function each. Therefore, transforming them into constrained optimization problems is the general way to handle them.

6.1. Pressure Vessel Design Problem

The goal of the pressure vessel design problem is to minimize the total cost of a cylindrical pressure vessel [33]. It has four design variables: the shell thickness ($T_s$), head thickness ($T_h$), inner radius of the cylindrical shell ($R$), and length of the shell ($L$). These four optimization variables $(T_s, T_h, R, L)$ constitute a four-dimensional constrained optimization problem with $x = (x_1, x_2, x_3, x_4) = (T_s, T_h, R, L)$. Figure 11 shows the pressure vessel and the parameters involved in the design. The mathematical model of the pressure vessel design problem can be expressed as follows:
Minimize
$$f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3$$
Subject to
$$\begin{aligned} g_1(x) &= -x_1 + 0.0193 x_3 \le 0 \\ g_2(x) &= -x_2 + 0.0954 x_3 \le 0 \\ g_3(x) &= -\pi x_3^2 x_4 - \tfrac{4}{3} \pi x_3^3 + 1{,}296{,}000 \le 0 \\ g_4(x) &= x_4 - 240 \le 0 \end{aligned}$$
where
$$0.0625 \le x_i \le 6.1875, \; i = 1, 2; \qquad 10 \le x_i \le 200, \; i = 3, 4$$
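For illustration, the model above can be coded as follows. The paper does not state which constraint-handling technique is used; the static penalty below (with an assumed, illustrative penalty weight rho) is one common way to turn the constrained problem into an unconstrained one that an algorithm such as MjSO can optimize.

```python
import numpy as np

def pressure_vessel_cost(x):
    """Objective of the pressure vessel problem; x = (Ts, Th, R, L)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def penalized_cost(x, rho=1e6):
    """Sketch of a static-penalty transformation (rho is an assumption)."""
    x1, x2, x3, x4 = x
    g = np.array([
        -x1 + 0.0193 * x3,
        -x2 + 0.0954 * x3,
        -np.pi * x3**2 * x4 - (4 / 3) * np.pi * x3**3 + 1296000,
        x4 - 240,
    ])
    return pressure_vessel_cost(x) + rho * np.sum(np.maximum(g, 0))
```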
Table 8 shows the results of the seven tested algorithms, each run independently 51 times, on the pressure vessel design problem: the best value found by each algorithm and the corresponding best solution (the optimal design values of the optimization variables). As can be seen from the table, compared with all other algorithms, MjSO achieved the best result on the pressure vessel design problem, with an objective function value of 5885.522643154713.

6.2. Tension/Compression Spring Design Problem

The goal of this design problem is to minimize the weight of the tension/compression spring shown in Figure 12. It has three design variables: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N), which are subject to one linear and three nonlinear inequality constraints on shear stress, surge frequency, and deflection. The mathematical model of the tension/compression spring design problem can be expressed as follows:
Consider
$$x = (x_1, x_2, x_3) = (N, D, d)$$
minimize
$$f(x) = (x_1 + 2)\, x_2 x_3^2$$
subject to
$$\begin{aligned} g_1(x) &= 1 - \frac{x_1 x_2^3}{71785 x_3^4} \le 0 \\ g_2(x) &= \frac{4 x_2^2 - x_2 x_3}{12566 \left( x_2 x_3^3 - x_3^4 \right)} + \frac{1}{5108 x_3^2} - 1 \le 0 \\ g_3(x) &= 1 - \frac{140.45 x_3}{x_2^2 x_1} \le 0 \\ g_4(x) &= \frac{x_2 + x_3}{1.5} - 1 \le 0 \end{aligned}$$
where
$$2 \le x_1 \le 15, \quad 0.25 \le x_2 \le 1.3, \quad 0.05 \le x_3 \le 2$$
As can be seen from Table 9, compared with all other algorithms, MjSO achieved the best result for the tension/compression spring design, with an objective function value of 0.012665255009.

6.3. Welded Beam Design Problem

The purpose of the welded beam design problem is to minimize the cost of the welded beam shown in Figure 13. It contains four design variables: the thickness of the weld (h), the length of the clamped bar (l), the height of the bar (t), and the thickness of the bar (b), which are subject to two linear and five nonlinear inequality constraints on shear stress, bending stress in the beam, buckling load, and end deflection of the beam. The design problem is expressed as follows:
Consider
$$x = (x_1, x_2, x_3, x_4) = (h, l, t, b)$$
Minimize
$$f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 \left( 14.0 + x_2 \right)$$
Subject to
$$\begin{aligned} g_1(x) &= \tau(x) - 13600 \le 0 \\ g_2(x) &= \sigma(x) - 30000 \le 0 \\ g_3(x) &= x_1 - x_4 \le 0 \\ g_4(x) &= 6000 - P_C(x) \le 0 \\ g_5(x) &= 0.10471 x_1^2 + 0.04811 x_3 x_4 \left( 14.0 + x_2 \right) - 5.0 \le 0 \\ g_6(x) &= \delta(x) - 0.25 \le 0 \\ g_7(x) &= 0.125 - x_1 \le 0 \end{aligned}$$
where
$$\tau(x) = \sqrt{ (\tau')^2 + 2 \tau' \tau'' \frac{x_2}{2R} + (\tau'')^2 }, \qquad \tau' = \frac{P}{\sqrt{2}\, x_1 x_2}, \qquad \tau'' = \frac{M R}{J}, \qquad M = P \left( L + \frac{x_2}{2} \right)$$
$$R = \sqrt{ \frac{x_2^2}{4} + \left( \frac{x_1 + x_3}{2} \right)^2 }, \qquad J = 2 \left\{ \sqrt{2}\, x_1 x_2 \left[ \frac{x_2^2}{12} + \left( \frac{x_1 + x_3}{2} \right)^2 \right] \right\}$$
$$\sigma(x) = \frac{6 P L}{x_4 x_3^2}, \qquad \delta(x) = \frac{4 P L^3}{E x_4 x_3^3}, \qquad P_C(x) = \frac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2} \left( 1 - \frac{x_3}{2L} \sqrt{\frac{E}{4G}} \right)$$
$$P = 6000\ \mathrm{lb}, \quad L = 14\ \mathrm{in}, \quad E = 30 \times 10^6\ \mathrm{psi}, \quad G = 12 \times 10^6\ \mathrm{psi}$$
$$0.1 \le x_1 \le 2, \quad 0.1 \le x_2 \le 10, \quad 0.1 \le x_3 \le 10, \quad 0.1 \le x_4 \le 2$$
Table 10 presents the best weights and the optimal values of the decision variables obtained by MjSO and several other algorithms. As can be seen from the table, when the four parameters h, l, t, and b are set to 0.2057, 3.4704, 9.0366, and 0.2057, respectively, the minimum manufacturing cost achieved by MjSO is 1.7248. The solution from MjSO is better than the solutions from the other algorithms.
In this section, the proposed MjSO algorithm was used to solve three kinds of engineering problems and was compared with six other algorithms. Several observations follow from the results: (1) on all three engineering problems, MjSO achieved the best results, significantly better than the basic DE-series algorithms (DE, LSHADE). (2) Regarding the latest LSHADE variants, on the pressure vessel design problem the result of MjSO was close to that of EBLSHADE; on the tension/compression spring design problem and the welded beam design problem, the results of MjSO did not differ much from those of SALSHADE-cnEPSin and EBLSHADE. (3) From Table 3 and Tables 8-10, it can be seen that the performance of MjSO, EBLSHADE, and SALSHADE-cnEPSin on the engineering problems is consistent with their performance on the D = 10 CEC'17 benchmark functions, which supports the effectiveness of the algorithm. (4) Considering the accuracy of the results produced by the algorithms on these models [40], although the numerical results are relatively close, the improvement achieved by MjSO is still significant.

7. Conclusions

This article proposes an improved jSO algorithm (MjSO) for engineering optimization design problems. In the proposed MjSO algorithm, first, a parameter control strategy based on a symmetric search process is applied to the scaling factor to balance exploration and exploitation. In addition, to counteract the loss of population diversity during the evolutionary process, a parameter adaptation mechanism based on cosine similarity is proposed, and a novel opposition-based learning restart mechanism is added to MjSO to jump out of local optima. In comparative experiments on 30 CEC 2017 benchmark functions, MjSO ranked first in the Friedman test across all tested dimensions, proving the effectiveness and competitiveness of the proposed algorithm. Finally, the comparative analysis of the optimization design results of the six comparison algorithms clearly shows that the improved algorithm in this paper has good solution quality and effectiveness in solving engineering optimization design problems.
In future work, we will continue to improve the optimization performance of the algorithm and improve its robustness, apply it to more engineering optimization design problems, and improve its ability to solve problems in engineering practice.

Author Contributions

Conceptualization, H.K.; methodology, H.K.; project administration, Z.L. and Y.S.; software, X.S.; validation, X.S. and Q.C.; visualization, Z.L. and Q.C.; formal analysis, H.K.; investigation, Q.C.; resources, Y.S.; data curation, Z.L.; writing—original draft preparation, Z.L.; writing—review and editing, H.K. and X.S.; supervision, X.S.; funding acquisition, Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant numbers 61663046 and 1876166), the Open Foundation of the Key Laboratory of Software Engineering of Yunnan Province (grant numbers 2020SE308 and 2020SE309), and the Science Research Foundation of the Yunnan Education Committee of China (grant number 2020Y0004).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Clustering and population diversity of jSO and MjSO on the CEC2017 in D = 10.
No | jSO #runs | jSO mean CO | jSO mean PD | MjSO #runs | MjSO mean CO | MjSO mean PD
f1 | 51 | 5.67E+01 | 3.93E+01 | 51 | 4.58E+01 | 3.17E+01
f2 | 51 | 8.84E+01 | 3.06E+01 | 51 | 6.99E+01 | 2.92E+01
f3 | 51 | 8.64E+01 | 7.00E+00 | 51 | 6.97E+01 | 7.19E+00
f4 | 51 | 6.01E+01 | 1.18E+01 | 51 | 4.70E+01 | 1.13E+01
f5 | 48 | 1.19E+03 | 3.28E+01 | 50 | 1.11E+03 | 3.11E+01
f6 | 51 | 9.35E+01 | 8.47E+00 | 51 | 7.21E+01 | 8.84E+00
f7 | 49 | 1.40E+03 | 1.05E+01 | 46 | 1.32E+03 | 1.02E+01
f8 | 50 | 1.19E+03 | 3.40E+01 | 51 | 1.06E+03 | 3.32E+01
f9 | 51 | 9.00E+01 | 7.45E+00 | 51 | 6.95E+01 | 7.80E+00
f10 | 47 | 1.37E+03 | 8.31E+01 | 50 | 1.29E+03 | 7.08E+01
f11 | 51 | 3.90E+02 | 2.13E+01 | 51 | 3.35E+02 | 1.98E+01
f12 | 51 | 2.76E+02 | 4.31E+01 | 51 | 1.86E+02 | 3.40E+01
f13 | 51 | 8.30E+02 | 1.35E+01 | 51 | 6.52E+02 | 1.35E+01
f14 | 51 | 8.12E+02 | 2.75E+01 | 47 | 6.06E+02 | 3.25E+01
f15 | 51 | 3.26E+02 | 1.81E+01 | 51 | 2.62E+02 | 1.80E+01
f16 | 49 | 8.87E+02 | 1.77E+01 | 41 | 5.95E+02 | 2.61E+01
f17 | 1 | 1.89E+03 | 1.47E+01 | 2 | 1.72E+03 | 2.55E+01
f18 | 51 | 3.00E+02 | 2.57E+01 | 51 | 2.40E+02 | 2.40E+01
f19 | 51 | 5.11E+02 | 3.04E+01 | 51 | 3.77E+02 | 2.78E+01
f20 | 47 | 7.52E+02 | 3.69E+01 | 42 | 6.50E+02 | 3.44E+01
f21 | 51 | 5.70E+02 | 4.52E+01 | 51 | 5.78E+02 | 5.50E+01
f22 | 51 | 5.68E+01 | 7.83E+00 | 51 | 4.50E+01 | 1.11E+01
f23 | 51 | 8.08E+02 | 2.96E+01 | 51 | 6.38E+02 | 2.44E+01
f24 | 51 | 8.42E+02 | 2.89E+01 | 51 | 6.77E+02 | 3.64E+01
f25 | 51 | 7.75E+01 | 1.85E+01 | 51 | 6.37E+01 | 2.34E+01
f26 | 51 | 6.12E+01 | 6.77E+00 | 51 | 5.09E+01 | 7.10E+00
f27 | 51 | 1.05E+02 | 1.74E+01 | 51 | 8.44E+01 | 1.98E+01
f28 | 51 | 8.25E+01 | 2.42E+01 | 51 | 6.89E+01 | 2.71E+01
f29 | 24 | 1.59E+03 | 4.41E+01 | 24 | 1.48E+03 | 3.85E+01
f30 | 51 | 1.60E+02 | 1.32E+01 | 51 | 1.24E+02 | 1.31E+01
Table A2. Clustering and population diversity of jSO and MjSO on the CEC2017 in D = 30.
No | jSO #runs | jSO mean CO | jSO mean PD | MjSO #runs | MjSO mean CO | MjSO mean PD
f1 | 51 | 1.07E+02 | 2.37E+01 | 51 | 9.13E+01 | 2.82E+01
f2 | 51 | 2.81E+02 | 1.77E+01 | 51 | 2.21E+02 | 1.77E+01
f3 | 51 | 1.90E+02 | 7.05E+00 | 51 | 1.72E+02 | 6.93E+00
f4 | 51 | 1.40E+02 | 8.45E+00 | 51 | 1.11E+02 | 8.20E+00
f5 | 47 | 2.32E+03 | 4.46E+01 | 48 | 2.34E+03 | 4.15E+01
f6 | 51 | 1.82E+02 | 7.46E+00 | 51 | 1.52E+02 | 7.31E+00
f7 | 34 | 2.49E+03 | 1.47E+01 | 36 | 2.54E+03 | 1.29E+01
f8 | 48 | 2.26E+03 | 5.00E+01 | 42 | 3.34E+03 | 4.45E+01
f9 | 51 | 1.77E+02 | 7.19E+00 | 51 | 1.47E+02 | 7.05E+00
f10 | 17 | 2.76E+03 | 1.42E+02 | 22 | 2.59E+03 | 1.48E+02
f11 | 51 | 1.17E+03 | 2.70E+01 | 51 | 1.22E+03 | 2.87E+01
f12 | 51 | 4.77E+02 | 1.85E+01 | 51 | 5.09E+02 | 2.00E+01
f13 | 51 | 8.72E+02 | 9.56E+00 | 51 | 9.66E+02 | 9.88E+00
f14 | 29 | 2.01E+03 | 2.38E+01 | 26 | 2.88E+03 | 1.80E+01
f15 | 51 | 9.12E+02 | 1.45E+01 | 51 | 1.05E+03 | 1.71E+01
f16 | 30 | 2.86E+03 | 1.61E+01 | 29 | 2.80E+03 | 1.65E+01
f17 | 12 | 2.82E+03 | 5.36E+01 | 7 | 3.74E+03 | 7.99E+01
f18 | 20 | 2.50E+03 | 6.02E+00 | 15 | 3.20E+03 | 8.67E+00
f19 | 51 | 1.41E+03 | 1.46E+01 | 51 | 1.42E+03 | 1.30E+01
f20 | 14 | 2.93E+03 | 3.33E+01 | 8 | 2.85E+03 | 6.11E+01
f21 | 48 | 2.36E+03 | 4.28E+01 | 50 | 2.25E+03 | 4.27E+01
f22 | 51 | 1.15E+02 | 6.68E+00 | 51 | 9.66E+01 | 6.54E+00
f23 | 50 | 2.01E+03 | 4.25E+01 | 51 | 1.86E+03 | 3.62E+01
f24 | 51 | 1.82E+03 | 4.62E+01 | 51 | 1.79E+03 | 3.94E+01
f25 | 51 | 1.27E+02 | 7.39E+00 | 51 | 1.04E+02 | 7.14E+00
f26 | 51 | 1.68E+03 | 3.07E+01 | 51 | 1.15E+03 | 1.80E+01
f27 | 51 | 3.44E+02 | 1.59E+01 | 51 | 2.53E+02 | 2.03E+01
f28 | 51 | 1.58E+02 | 1.15E+01 | 51 | 1.31E+02 | 9.79E+00
f29 | 29 | 2.77E+03 | 5.36E+01 | 24 | 2.72E+03 | 4.26E+01
f30 | 51 | 2.85E+02 | 1.26E+01 | 51 | 2.15E+02 | 1.34E+01
Table A3. Clustering and population diversity of jSO and MjSO on the CEC2017 in D = 50.
No | jSO #runs | jSO mean CO | jSO mean PD | MjSO #runs | MjSO mean CO | MjSO mean PD
f1 | 51 | 1.20E+02 | 9.52E+00 | 51 | 1.32E+02 | 1.74E+01
f2 | 51 | 3.95E+02 | 1.28E+01 | 51 | 3.68E+02 | 1.32E+01
f3 | 51 | 2.43E+02 | 7.88E+00 | 51 | 2.52E+02 | 7.55E+00
f4 | 51 | 1.84E+02 | 9.18E+00 | 51 | 1.70E+02 | 8.41E+00
f5 | 44 | 2.97E+03 | 5.62E+01 | 44 | 2.83E+03 | 5.57E+01
f6 | 51 | 1.95E+02 | 7.93E+00 | 51 | 1.98E+02 | 7.67E+00
f7 | 35 | 2.99E+03 | 1.86E+01 | 34 | 3.03E+03 | 1.77E+01
f8 | 45 | 3.00E+03 | 5.46E+01 | 44 | 2.90E+03 | 5.47E+01
f9 | 51 | 1.90E+02 | 7.75E+00 | 51 | 1.89E+02 | 7.50E+00
f10 | 14 | 3.49E+03 | 1.29E+02 | 17 | 3.43E+03 | 1.54E+02
f11 | 51 | 1.84E+03 | 3.05E+01 | 51 | 1.85E+03 | 2.97E+01
f12 | 51 | 3.67E+02 | 1.08E+01 | 51 | 4.43E+02 | 1.25E+01
f13 | 51 | 7.42E+02 | 1.12E+01 | 51 | 9.27E+02 | 1.15E+01
f14 | 51 | 1.67E+03 | 1.33E+01 | 50 | 1.84E+03 | 1.43E+01
f15 | 51 | 8.34E+02 | 1.20E+01 | 51 | 1.13E+03 | 1.65E+01
f16 | 41 | 3.47E+03 | 8.42E+00 | 37 | 3.41E+03 | 1.12E+01
f17 | 15 | 3.58E+03 | 7.67E+00 | 10 | 3.52E+03 | 2.62E+01
f18 | 51 | 8.67E+02 | 9.25E+00 | 51 | 1.14E+03 | 9.77E+00
f19 | 51 | 1.46E+03 | 1.23E+01 | 51 | 1.63E+03 | 1.37E+01
f20 | 24 | 3.43E+03 | 2.04E+01 | 20 | 4.44E+03 | 2.19E+01
f21 | 47 | 2.94E+03 | 5.73E+01 | 43 | 2.93E+03 | 5.60E+01
f22 | 37 | 1.64E+03 | 3.82E+01 | 36 | 9.58E+02 | 3.10E+01
f23 | 48 | 2.69E+03 | 5.35E+01 | 47 | 2.64E+03 | 4.69E+01
f24 | 51 | 2.59E+03 | 4.76E+01 | 50 | 2.12E+03 | 3.38E+01
f25 | 51 | 1.94E+02 | 8.44E+00 | 51 | 1.69E+02 | 7.77E+00
f26 | 51 | 2.36E+03 | 3.69E+01 | 51 | 1.40E+03 | 1.43E+01
f27 | 51 | 3.00E+02 | 1.05E+01 | 51 | 2.15E+02 | 8.77E+00
f28 | 51 | 1.71E+02 | 9.04E+00 | 51 | 2.54E+02 | 1.03E+01
f29 | 32 | 3.34E+03 | 5.63E+01 | 23 | 3.34E+03 | 5.09E+01
f30 | 51 | 3.28E+02 | 1.11E+01 | 51 | 4.22E+02 | 3.08E+01
Table A4. Clustering and population diversity of jSO and MjSO on the CEC2017 in D = 100.
No | jSO #runs | jSO Mean CO | jSO Mean PD | MjSO #runs | MjSO Mean CO | MjSO Mean PD
f1 | 51 | 1.37E+02 | 9.05E+00 | 51 | 1.94E+02 | 1.15E+01
f2 | 51 | 5.10E+02 | 1.09E+01 | 51 | 5.80E+02 | 1.04E+01
f3 | 51 | 3.75E+02 | 9.86E+00 | 51 | 4.08E+02 | 9.16E+00
f4 | 51 | 2.01E+02 | 9.29E+00 | 51 | 2.32E+02 | 9.04E+00
f5 | 50 | 3.93E+03 | 7.09E+01 | 49 | 4.03E+03 | 5.59E+01
f6 | 51 | 2.06E+02 | 9.48E+00 | 51 | 2.79E+02 | 8.85E+00
f7 | 46 | 4.12E+03 | 2.19E+01 | 43 | 4.02E+03 | 2.07E+01
f8 | 45 | 4.04E+03 | 6.76E+01 | 51 | 3.90E+03 | 6.33E+01
f9 | 51 | 2.00E+02 | 9.04E+00 | 51 | 2.62E+02 | 8.70E+00
f10 | 13 | 4.46E+03 | 3.06E+02 | 13 | 4.52E+03 | 2.13E+02
f11 | 51 | 5.27E+02 | 9.79E+00 | 51 | 1.17E+03 | 1.14E+01
f12 | 51 | 3.30E+02 | 1.02E+01 | 51 | 4.53E+02 | 9.08E+00
f13 | 51 | 7.08E+02 | 1.34E+01 | 51 | 9.23E+02 | 1.39E+01
f14 | 51 | 1.01E+03 | 1.02E+01 | 51 | 1.28E+03 | 1.30E+01
f15 | 51 | 4.84E+02 | 1.03E+01 | 51 | 9.30E+02 | 9.53E+00
f16 | 49 | 4.43E+03 | 1.16E+01 | 47 | 4.60E+03 | 9.92E+00
f17 | 33 | 4.66E+03 | 2.45E+01 | 49 | 4.04E+03 | 7.65E+01
f18 | 51 | 5.82E+02 | 9.80E+00 | 51 | 5.88E+02 | 1.40E+01
f19 | 51 | 6.38E+02 | 1.07E+01 | 51 | 1.48E+03 | 1.20E+01
f20 | 26 | 4.64E+03 | 1.16E+01 | 26 | 4.54E+03 | 7.67E+01
f21 | 49 | 3.87E+03 | 6.44E+01 | 51 | 2.41E+02 | 9.42E+00
f22 | 14 | 4.39E+03 | 2.67E+01 | 49 | 3.95E+03 | 6.22E+01
f23 | 50 | 1.72E+03 | 2.38E+01 | 51 | 7.70E+02 | 9.25E+00
f24 | 51 | 1.01E+03 | 1.18E+01 | 51 | 2.12E+02 | 8.49E+00
f25 | 51 | 2.11E+02 | 9.56E+00 | 51 | 2.53E+02 | 9.72E+00
f26 | 51 | 7.98E+02 | 9.50E+00 | 51 | 2.68E+02 | 8.62E+00
f27 | 51 | 3.31E+02 | 9.28E+00 | 51 | 2.93E+02 | 8.93E+00
f28 | 51 | 2.23E+02 | 9.42E+00 | 51 | 2.31E+02 | 8.99E+00
f29 | 51 | 4.38E+03 | 4.21E+01 | 51 | 4.13E+03 | 4.22E+01
f30 | 51 | 5.09E+02 | 1.08E+01 | 51 | 4.55E+02 | 1.01E+01
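Tables A1–A4 pair a clustering analysis of the final populations (#runs and mean CO) with a population diversity (PD) measure. As a rough, non-authoritative illustration of how such statistics can be obtained from a final population, the sketch below counts DBSCAN clusters (DBSCAN is reference [58]) and uses the mean distance to the population centroid as a diversity proxy; the eps/min_samples settings and the exact PD definition are our assumptions, not necessarily those used in the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
pop = rng.normal(size=(100, 10))  # placeholder final population (NP x D)

# Count DBSCAN clusters; the label -1 marks noise points, not a cluster.
labels = DBSCAN(eps=0.5, min_samples=4).fit(pop).labels_
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)

# One common population diversity measure: mean distance to the centroid.
centroid = pop.mean(axis=0)
diversity = float(np.linalg.norm(pop - centroid, axis=1).mean())
print(n_clusters, diversity)
```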
Table A5. The Friedman ranks of comparative algorithms on CEC2017 in D = 10.
Rank | Name | F-Rank
0 | MjSO | 2.62
1 | jSO | 3.43
2 | SALSHADE-cnEPSin | 3.55
3 | EBLSHADE | 3.6
4 | ELSHADE-SPACMA | 3.82
5 | LSHADE | 3.98
Table A6. The Friedman ranks of comparative algorithms on CEC2017 in D = 30.
Rank | Name | F-Rank
0 | MjSO | 2.1
1 | jSO | 3.4
2 | EBLSHADE | 3.43
3 | SALSHADE-cnEPSin | 3.95
4 | LSHADE | 4.05
5 | ELSHADE-SPACMA | 4.07
Table A7. The Friedman ranks of comparative algorithms on CEC2017 in D = 50.
Rank | Name | F-Rank
0 | MjSO | 1.9
1 | ELSHADE-SPACMA | 3.27
2 | jSO | 3.4
3 | SALSHADE-cnEPSin | 4.05
4 | LSHADE | 4.15
5 | EBLSHADE | 4.23
Table A8. The Friedman ranks of comparative algorithms on CEC2017 in D = 100.
Rank | Name | F-Rank
0 | MjSO | 1.87
1 | ELSHADE-SPACMA | 3.35
2 | SALSHADE-cnEPSin | 3.4
3 | jSO | 3.78
4 | EBLSHADE | 3.93
5 | LSHADE | 4.67
Table A9. Statistical values obtained from the Friedman test at α = 0.05.
D | Chi-sq | Prob > Chi-sq (p) | Critical Value
10 | 13.73819163 | 1.74E-02 | 11.07
30 | 31.07891492 | 9.04E-06 | 11.07
50 | 38.65745856 | 2.78E-07 | 11.07
100 | 38.16356513 | 3.50E-07 | 11.07
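For reproducibility, the statistics in Table A9 (and the average ranks in Tables A5–A8) can be recomputed from a functions-by-algorithms error matrix. Below is a minimal SciPy sketch with a placeholder matrix; in the paper, the entries would be the mean errors behind Tables 3–6.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
errors = rng.random((30, 6))  # placeholder: 30 functions x 6 algorithms

# Friedman test: one sample per algorithm, paired across the 30 functions.
chi_sq, p_value = stats.friedmanchisquare(*errors.T)

# Average rank per algorithm (lower error -> better rank), i.e., the
# F-Rank column of Tables A5-A8.
mean_ranks = np.mean([stats.rankdata(row) for row in errors], axis=0)

# Critical value at alpha = 0.05 with k - 1 = 5 degrees of freedom (11.07).
critical_value = stats.chi2.ppf(0.95, df=errors.shape[1] - 1)
print(chi_sq, p_value, mean_ranks, critical_value)
```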

References

1. Dhiman, G.; Kumar, V. Multi-objective spotted hyena optimizer: A multi-objective optimization algorithm for engineering problems. Knowl. Based Syst. 2018, 150, 175–197.
2. Spall, J. Introduction to Stochastic Search and Optimization; Wiley Interscience: New York, NY, USA, 2003.
3. Parejo, J.A.; Ruiz-Cortés, A.; Lozano, S.; Fernandez, P. Metaheuristic optimization frameworks: A survey and benchmarking. Soft Comput. 2012, 16, 527–561.
4. Zhou, A.; Qu, B.-Y.; Li, H.; Zhao, S.-Z.; Suganthan, P.N.; Zhang, Q. Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm Evol. Comput. 2011, 1, 32–49.
5. Droste, S.; Jansen, T.; Wegener, I. Upper and lower bounds for randomized search heuristics in black-box optimization. Theory Comput. Syst. 2006, 39, 525–544.
6. Hoos, H.H.; Stützle, T. Stochastic Local Search: Foundations and Applications; Elsevier: Amsterdam, The Netherlands, 2004.
7. Holland, J. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; University of Michigan Press: Ann Arbor, MI, USA, 1975; pp. 106–111.
8. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
9. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
10. Wienholt, W. Minimizing the system error in feedforward neural networks with evolution strategy. In Proceedings of the International Conference on Artificial Neural Networks, Amsterdam, The Netherlands, 13–16 September 1993; Springer: London, UK, 1993; pp. 490–493.
11. Eltaeib, T.; Mahmood, A. Differential evolution: A survey and analysis. Appl. Sci. 2018, 8, 1945.
12. Arafa, M.; Sallam, E.A.; Fahmy, M. An enhanced differential evolution optimization algorithm. In Proceedings of the 2014 Fourth International Conference on Digital Information and Communication Technology and Its Applications (DICTAP), Bangkok, Thailand, 6–8 May 2014; pp. 216–225.
13. Liu, X.-F.; Zhan, Z.-H.; Zhang, J. Dichotomy guided based parameter adaptation for differential evolution. In Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, Madrid, Spain, 11–15 July 2015; pp. 289–296.
14. Sallam, K.M.; Sarker, R.A.; Essam, D.L.; Elsayed, S.M. Neurodynamic differential evolution algorithm and solving CEC2015 competition problems. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 1033–1040.
15. Viktorin, A.; Pluhacek, M.; Senkerik, R. Network based linear population size reduction in SHADE. In Proceedings of the 2016 International Conference on Intelligent Networking and Collaborative Systems (INCoS), Ostrava, Czech Republic, 7–9 September 2016; pp. 86–93.
16. Bujok, P.; Tvrdík, J.; Poláková, R. Evaluating the performance of SHADE with competing strategies on CEC 2014 single-parameter test suite. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 5002–5009.
17. Viktorin, A.; Pluhacek, M.; Senkerik, R. Success-history based adaptive differential evolution algorithm with multi-chaotic framework for parent selection performance on CEC2014 benchmark set. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 4797–4803.
18. Poláková, R.; Tvrdík, J.; Bujok, P. L-SHADE with competing strategies applied to CEC2015 learning-based test suite. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 4790–4796.
19. Poláková, R.; Tvrdík, J.; Bujok, P. Evaluating the performance of L-SHADE with competing strategies on CEC2014 single parameter-operator test suite. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 1181–1187.
20. Liu, Z.-G.; Ji, X.-H.; Yang, Y. Hierarchical differential evolution algorithm combined with multi-cross operation. Expert Syst. Appl. 2019, 130, 276–292.
21. Bujok, P.; Tvrdík, J. Adaptive differential evolution: SHADE with competing crossover strategies. In Proceedings of the International Conference on Artificial Intelligence and Soft Computing, Zakopane, Poland, 14–18 June 2015; pp. 329–339.
22. Viktorin, A.; Senkerik, R.; Pluhacek, M.; Kadavy, T.; Zamuda, A. Distance based parameter adaptation for differential evolution. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, USA, 27 November–1 December 2017; pp. 1–7.
23. Viktorin, A.; Senkerik, R.; Pluhacek, M.; Kadavy, T. Distance vs. Improvement Based Parameter Adaptation in SHADE. In Proceedings of the Computer Science On-Line Conference, Vsetin, Czech Republic, 25–28 April 2018; pp. 455–464.
24. Molina, D.; Herrera, F. Applying Memetic algorithm with Improved L-SHADE and Local Search Pool for the 100-digit challenge on Single Objective Numerical Optimization. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 7–13.
25. Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastian, Spain, 5–8 June 2017; pp. 372–379.
26. Zhao, F.; He, X.; Yang, G.; Ma, W.; Zhang, C.; Song, H. A hybrid iterated local search algorithm with adaptive perturbation mechanism by success-history based parameter adaptation for differential evolution (SHADE). Eng. Optim. 2019.
27. Das, S.; Mullick, S.S.; Suganthan, P.N. Recent advances in differential evolution—An updated survey. Swarm Evol. Comput. 2016, 27, 1–30.
28. Hansen, N. The CMA evolution strategy: A comparing review. In Towards a New Evolutionary Computation; Springer: Berlin/Heidelberg, Germany, 2006; pp. 75–102.
29. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2010, 15, 4–31.
30. Al-Dabbagh, R.D.; Neri, F.; Idris, N.; Baba, M.S. Algorithmic design issues in adaptive differential evolution schemes: Review and taxonomy. Swarm Evol. Comput. 2018, 43, 284–311.
31. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665.
32. Brest, J.; Maučec, M.S.; Bošković, B. Single objective real-parameter optimization: Algorithm jSO. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastian, Spain, 5–8 June 2017; pp. 1311–1318.
33. Mohamed, A.W.; Hadi, A.A.; Fattouh, A.M.; Jambi, K.M. LSHADE with semi-parameter adaptation hybrid with CMA-ES for solving CEC 2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastian, Spain, 5–8 June 2017; pp. 145–152.
34. Hadi, A.A.; Wagdy, A.; Jambi, K. Single-Objective Real-Parameter Optimization: Enhanced LSHADE-SPACMA Algorithm; King Abdulaziz University: Jeddah, Saudi Arabia, 2018.
35. Salgotra, R.; Singh, U.; Singh, G. Improving the adaptive properties of LSHADE algorithm for global optimization. In Proceedings of the 2019 International Conference on Automation, Computational and Technology Management (ICACTM), London, UK, 24–26 April 2019; pp. 400–407.
36. Yang, M.; Li, C.; Cai, Z.; Guan, J. Differential evolution with auto-enhanced population diversity. IEEE Trans. Cybern. 2014, 45, 302–315.
37. Hongtan, C.; Zhaoguang, L. Improved differential evolution with parameter adaption based on population diversity. In Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), Chongqing, China, 27–29 June 2018; pp. 901–905.
38. Wang, J.-Z.; Sun, T.-Y. Control the diversity of population with mutation strategy and fuzzy inference system for differential evolution algorithm. Int. J. Fuzzy Syst. 2020, 22, 1979–1992.
39. He, X.; Zhou, Y. Enhancing the performance of differential evolution with covariance matrix self-adaptation. Appl. Soft Comput. 2018, 64, 227–243.
40. Awad, N.H.; Ali, M.Z.; Mallipeddi, R.; Suganthan, P.N. An improved differential evolution algorithm using efficient adapted surrogate model for numerical optimization. Inf. Sci. 2018, 451, 326–347.
41. Subhashini, K.; Chinta, P. An augmented animal migration optimization algorithm using worst solution elimination approach in the backdrop of differential evolution. Evol. Intell. 2019, 12, 273–303.
42. Awad, N.H.; Ali, M.Z.; Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016.
43. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958.
44. Brest, J.; Maučec, M.S.; Bošković, B. iL-SHADE: Improved L-SHADE algorithm for single objective real-parameter optimization. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 1188–1195.
45. Fan, Q.; Yan, X. Self-adaptive differential evolution algorithm with zoning evolution of control parameters and adaptive mutation strategies. IEEE Trans. Cybern. 2015, 46, 219–232.
46. Wang, D.; Lu, H.; Bo, C. Visual tracking via weighted local cosine similarity. IEEE Trans. Cybern. 2014, 45, 1838–1850.
47. Van Dongen, S.; Enright, A.J. Metric distances derived from cosine similarity and Pearson and Spearman correlations. arXiv 2012, arXiv:1208.3145.
48. Luo, J.; He, F.; Yong, J. An efficient and robust bat algorithm with fusion of opposition-based learning and whale optimization algorithm. Intell. Data Anal. 2020, 24, 581–606.
49. Shekhawat, S.; Saxena, A. Development and applications of an intelligent crow search algorithm based on opposition based learning. ISA Trans. 2020, 99, 210–230.
50. Ergezer, M.; Simon, D. Mathematical and experimental analyses of oppositional algorithms. IEEE Trans. Cybern. 2014, 44, 2178–2189.
51. Tizhoosh, H.R. Opposition-based reinforcement learning. J. Adv. Comput. Intell. Intell. Inform. 2006, 10.
52. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M. Opposition-based differential evolution. IEEE Trans. Evol. Comput. 2008, 12, 64–79.
53. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M. Quasi-oppositional differential evolution. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 2229–2236.
54. Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30.
55. Hutter, F.; Hoos, H.H.; Leyton-Brown, K.; Stützle, T. ParamILS: An automatic algorithm configuration framework. J. Artif. Intell. Res. 2009, 36, 267–306.
56. Mohamed, A.W.; Hadi, A.A.; Jambi, K.M. Novel mutation strategy for enhancing SHADE and LSHADE algorithms for global numerical optimization. Swarm Evol. Comput. 2019, 50, 100455.
57. Salgotra, R.; Singh, U.; Saha, S.; Nagar, A. New improved SALSHADE-cnEPSin algorithm with adaptive parameters. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 3150–3156.
58. Ester, M.; Kriegel, H.-P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96), Portland, OR, USA, 2–4 August 1996; pp. 226–231.
59. Deza, M.M.; Deza, E. Encyclopedia of Distances; Springer: Berlin/Heidelberg, Germany, 2009.
60. Poláková, R.; Tvrdík, J.; Bujok, P.; Matoušek, R. Population-size adaptation through diversity-control mechanism for differential evolution. In Proceedings of the MENDEL 2016, 22nd International Conference on Soft Computing, Brno, Czech Republic, 8–10 June 2016; pp. 49–56.
61. Varaee, H.; Ghasemi, M.R. Engineering optimization based on ideal gas molecular movement algorithm. Eng. Comput. 2017, 33, 71–93.
62. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12.
Figure 1. Cosine similarity.
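For reference, the quantity illustrated in Figure 1 is cos(θ) = (a · b) / (‖a‖ ‖b‖), the cosine of the angle between two vectors. A minimal NumPy sketch (the helper name is ours, not from the paper):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between vectors a and b, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(np.array([1.0, 2.0]), np.array([2.0, 4.0])))  # 1.0 (same direction)
print(cosine_similarity(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 0.0 (orthogonal)
```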
Figure 2. The opposition-based learning (OBL) principle.
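Figure 2 illustrates the standard OBL construction [51,52]: for a point x in the interval [a, b], the opposite point is x̃ = a + b − x, i.e., the reflection of x about the interval midpoint. A minimal sketch (the helper name is ours):

```python
import numpy as np

def opposite_point(x: np.ndarray, lower: np.ndarray, upper: np.ndarray) -> np.ndarray:
    """OBL: reflect x about the midpoint of the box [lower, upper], per dimension."""
    return lower + upper - x

x = np.array([1.0, 7.5])
lower, upper = np.array([0.0, 5.0]), np.array([10.0, 10.0])
print(opposite_point(x, lower, upper))  # [9.0, 7.5]
```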
Figure 3. Convergence speed comparison on D = 10 optimization.
Figure 4. Convergence speed comparison on D = 10 optimization.
Figure 5. Convergence speed comparison on D = 30 optimization.
Figure 6. Convergence speed comparison on D = 30 optimization.
Figure 7. Convergence speed comparison on D = 50 optimization.
Figure 8. Convergence speed comparison on D = 50 optimization.
Figure 9. Convergence speed comparison on D = 100 optimization.
Figure 10. Convergence speed comparison on D = 100 optimization.
Figure 11. Pressure vessel design problem.
Figure 12. Compression/tension spring design problem.
Figure 13. Welding beam design problem.
Table 1. Congress on Evolutionary Computation (CEC) 2017 unconstrained benchmark functions.
ID | Functions | Optima
F1 | Shifted and Rotated Bent Cigar Function | 100
F2 | Shifted and Rotated Sum of Differential Power Function | 200
F3 | Shifted and Rotated Zakharov Function | 300
F4 | Shifted and Rotated Rosenbrock's Function | 400
F5 | Shifted and Rotated Rastrigin's Function | 500
F6 | Shifted and Rotated Expanded Scaffer's F6 Function | 600
F7 | Shifted and Rotated Lunacek Bi_Rastrigin Function | 700
F8 | Shifted and Rotated Non-Continuous Rastrigin's Function | 800
F9 | Shifted and Rotated Levy Function | 900
F10 | Shifted and Rotated Schwefel's Function | 1000
F11 | Hybrid Function 1 (N = 3) | 1100
F12 | Hybrid Function 2 (N = 3) | 1200
F13 | Hybrid Function 3 (N = 3) | 1300
F14 | Hybrid Function 4 (N = 4) | 1400
F15 | Hybrid Function 5 (N = 4) | 1500
F16 | Hybrid Function 6 (N = 4) | 1600
F17 | Hybrid Function 6 (N = 5) | 1700
F18 | Hybrid Function 6 (N = 5) | 1800
F19 | Hybrid Function 6 (N = 5) | 1900
F20 | Hybrid Function 6 (N = 6) | 2000
F21 | Composition Function 1 (N = 3) | 2100
F22 | Composition Function 2 (N = 3) | 2200
F23 | Composition Function 3 (N = 4) | 2300
F24 | Composition Function 4 (N = 4) | 2400
F25 | Composition Function 5 (N = 5) | 2500
F26 | Composition Function 6 (N = 5) | 2600
F27 | Composition Function 7 (N = 6) | 2700
F28 | Composition Function 8 (N = 6) | 2800
F29 | Composition Function 9 (N = 3) | 2900
F30 | Composition Function 10 (N = 3) | 3000
Table 2. Parameter settings.
Parameter | Setting
MjSO | NP_init = 25 log D, NP_fin = 4, H = 5, M_F = 0.5, M_CR = 0.5, A = NP, ξ = 10^-8, P_max = 0.25, P_min = P_max/2
jSO | NP_init = 25 log D, NP_fin = 4, H = 5, M_F = 0.3, M_CR = 0.8, A = NP, P_max = 0.25, P_min = P_max/2
LSHADE | NP_init = 18D, NP_fin = 4, H = 6, M_F = 0.5, M_CR = 0.5, A = NP, P = 0.11
EBLSHADE | NP_init = 18D, NP_fin = 4, H = 5, M_F = 0.5, M_CR = 0.5, A = NP, P = 0.11
ELSHADE-SPACMA | NP_init = 18D, NP_fin = 4, H = 5, F_cp = 0.5, c = 0.8, P_init = 0.3, p_min = 0.15
SALSHADE-cnEPSin | NP_init = 18D, NP_fin = 4, H = 5, M_F = 0.5, M_CR = 0.5, freq = 0.5, ps = 0.5, pc = 0.4
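All six algorithms in Table 2 are LSHADE descendants, so NP_init and NP_fin are linked by linear population size reduction (LPSR) [31]: the population size decreases linearly from NP_init at the first function evaluation to NP_fin when the evaluation budget is exhausted. A minimal sketch of the standard LPSR schedule (the numbers in the example are illustrative only):

```python
def lpsr_population_size(nfes: int, max_nfes: int, np_init: int, np_fin: int) -> int:
    """LSHADE-style linear population size reduction: NP shrinks linearly
    from np_init (at nfes = 0) to np_fin (at nfes = max_nfes)."""
    return round((np_fin - np_init) / max_nfes * nfes + np_init)

print(lpsr_population_size(0, 100_000, 80, 4))        # 80
print(lpsr_population_size(50_000, 100_000, 80, 4))   # 42
print(lpsr_population_size(100_000, 100_000, 80, 4))  # 4
```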
Table 3. Algorithm comparison between five powerful differential evolution (DE) variants and our MjSO algorithm on D = 10 optimization under f1–f30 of our test suite. Entries are Mean (Std); the prefixes >, <, and ≈ mark functions on which MjSO performs significantly better, worse, or comparably (cf. Table 7).
NO | EBLSHADE | SALSHADE-cnEPSin | jSO | LSHADE | ELSHADE-SPACMA | MjSO
f1 | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f2 | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f3 | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f4 | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f5 | >2.52E+00 (8.98E-01) | >1.99E+00 (6.62E-01) | >1.76E+00 (7.60E-01) | >2.57E+00 (8.37E-01) | >3.87E+00 (2.02E+00) | 1.35E+00 (9.10E-01)
f6 | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f7 | >1.22E+01 (7.09E-01) | >1.19E+01 (5.67E-01) | >1.18E+01 (6.07E-01) | >1.22E+01 (7.05E-01) | >1.33E+01 (1.75E+00) | 1.15E+01 (5.99E-01)
f8 | >2.23E+00 (9.02E-01) | >1.99E+00 (7.63E-01) | >1.95E+00 (7.44E-01) | >2.52E+00 (6.99E-01) | >4.10E+00 (2.51E+00) | 1.37E+00 (5.87E-01)
f9 | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f10 | >2.50E+01 (3.99E+01) | <7.36E+00 (4.49E+01) | >3.59E+01 (5.55E+01) | >3.86E+01 (5.50E+01) | >2.27E+01 (4.84E+01) | 1.53E+01 (3.27E+01)
f11 | >2.74E-01 (5.48E-01) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | >3.65E-01 (6.94E-01) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f12 | <1.22E+01 (3.59E+01) | <1.19E+02 (7.49E+01) | <2.66E+00 (1.68E+01) | >3.89E+01 (5.68E+01) | >2.85E+01 (5.14E+01) | 2.79E+01 (5.02E+01)
f13 | >3.64E+00 (2.23E+00) | >4.83E+00 (2.30E+00) | >2.96E+00 (2.35E+00) | >3.88E+00 (2.37E+00) | >3.57E+00 (2.21E+00) | 2.49E+00 (2.50E+00)
f14 | >5.38E-01 (8.03E-01) | ≈0.00E+00 (2.36E-01) | >5.85E-02 (2.36E-01) | >7.73E-01 (9.05E-01) | >7.80E-02 (2.70E-01) | 0.00E+00 (0.00E+00)
f15 | ≈1.44E-01 (2.03E-01) | ≈2.70E-01 (2.03E+00) | ≈2.21E-01 (2.00E-01) | ≈1.99E-01 (2.12E-01) | ≈2.51E-01 (2.17E-01) | 1.92E-01 (2.20E-01)
f16 | ≈4.34E-01 (2.23E-01) | ≈6.25E-01 (2.59E-01) | ≈5.69E-01 (2.64E-01) | ≈3.83E-01 (1.64E-01) | ≈5.62E-01 (2.55E-01) | 5.64E-01 (2.76E-01)
f17 | ≈1.23E-01 (1.51E-01) | ≈1.77E-01 (2.41E-01) | ≈5.02E-01 (3.48E-01) | ≈1.06E-01 (1.31E-01) | ≈1.39E-01 (1.44E-01) | 3.54E-01 (3.11E-01)
f18 | ≈1.79E-01 (1.95E-01) | ≈4.49E-01 (5.43E+00) | ≈3.08E-01 (1.95E-01) | ≈2.15E-01 (1.97E-01) | ≈7.10E-01 (2.80E+00) | 3.06E-01 (1.99E-01)
f19 | ≈9.06E-03 (1.09E-02) | ≈1.97E-02 (3.01E-02) | ≈1.07E-02 (1.25E-02) | ≈1.05E-02 (1.10E-02) | ≈1.55E-02 (1.14E-02) | 1.60E-02 (2.30E-02)
f20 | <1.22E-02 (6.12E-02) | ≈3.12E-01 (4.01E-01) | ≈3.43E-01 (1.29E-01) | <1.22E-02 (6.06E-02) | ≈1.41E-01 (1.57E-01) | 3.12E-01 (0.00E+00)
f21 | >1.56E+02 (5.12E+01) | ≈1.00E+02 (5.13E+01) | >1.32E+02 (4.84E+01) | >1.50E+02 (5.14E+01) | ≈1.02E+02 (1.48E+01) | 1.10E+02 (3.11E+01)
f22 | ≈1.00E+02 (1.01E-01) | ≈1.00E+02 (6.26E-02) | ≈1.00E+02 (0.00E+00) | ≈1.00E+02 (4.01E-02) | ≈1.00E+02 (1.21E-01) | 1.00E+02 (8.17E-14)
f23 | >3.03E+02 (1.71E+00) | >3.01E+02 (1.43E+00) | >3.01E+02 (1.59E+00) | >3.03E+02 (1.65E+00) | >3.04E+02 (2.30E+00) | 3.00E+02 (9.63E-01)
f24 | >3.16E+02 (5.46E+01) | >3.29E+02 (7.97E+01) | >2.97E+02 (7.93E+01) | >3.21E+02 (4.52E+01) | >2.91E+02 (9.54E+01) | 2.42E+02 (1.14E+02)
f25 | >4.15E+02 (2.24E+01) | >4.43E+02 (2.22E+01) | >4.06E+02 (1.75E+01) | >4.09E+02 (1.95E+01) | >4.13E+02 (2.18E+01) | 3.95E+02 (1.37E+01)
f26 | ≈3.00E+02 (0.00E+00) | ≈3.00E+02 (0.00E+00) | ≈3.00E+02 (0.00E+00) | ≈3.00E+02 (0.00E+00) | ≈3.00E+02 (0.00E+00) | 3.00E+02 (0.00E+00)
f27 | >3.89E+02 (1.39E-01) | >3.88E+02 (1.66E+00) | >3.89E+02 (2.26E-01) | >3.89E+02 (1.78E-01) | >3.89E+02 (1.67E-01) | 3.87E+02 (1.63E+00)
f28 | >3.47E+02 (1.10E+02) | ≈3.00E+02 (1.23E+02) | >3.39E+02 (9.65E+01) | >3.58E+02 (1.18E+02) | >3.25E+02 (1.04E+02) | 3.00E+02 (0.00E+00)
f29 | >2.33E+02 (2.65E+00) | ≈2.28E+02 (1.56E+00) | >2.34E+02 (2.96E+00) | >2.34E+02 (2.78E+00) | ≈2.30E+02 (2.26E+00) | 2.30E+02 (2.10E+00)
f30 | <3.24E+04 (1.60E+05) | ≈3.94E+02 (9.42E+04) | ≈3.95E+02 (4.50E-02) | >4.05E+02 (2.08E+01) | >4.02E+02 (1.77E+01) | 3.96E+02 (9.44E+00)
Table 4. Algorithm comparison between five powerful DE variants and our MjSO algorithm on D = 30 optimization under f1–f30 of our test suite. Entries are Mean (Std); the prefixes >, <, and ≈ mark functions on which MjSO performs significantly better, worse, or comparably (cf. Table 7).
NO | EBLSHADE | SALSHADE-cnEPSin | jSO | LSHADE | ELSHADE-SPACMA | MjSO
f1 | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f2 | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f3 | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f4 | ≈5.86E+01 (3.11E-14) | <4.90E+01 (3.32E+00) | ≈5.87E+01 (7.78E-01) | ≈5.86E+01 (3.22E-14) | ≈5.86E+01 (0.00E+00) | 5.86E+01 (3.66E-14)
f5 | ≈6.26E+00 (1.29E+00) | >1.24E+01 (2.39E+00) | >8.56E+00 (2.10E+00) | ≈6.41E+00 (1.52E+00) | >1.86E+01 (8.04E+00) | 7.45E+00 (2.20E+00)
f6 | >6.04E-09 (2.71E-08) | ≈0.00E+00 (8.66E-08) | >6.04E-09 (2.71E-08) | >2.68E-08 (1.52E-07) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f7 | ≈3.73E+01 (1.44E+00) | >4.32E+01 (2.18E+00) | >3.89E+01 (1.46E+00) | ≈3.71E+01 (1.55E+00) | >3.89E+01 (3.43E+00) | 3.75E+01 (2.11E+00)
f8 | ≈6.66E+00 (1.54E+00) | >1.36E+01 (2.21E+00) | >9.09E+00 (1.84E+00) | ≈7.15E+00 (1.58E+00) | >1.61E+01 (7.46E+00) | 7.98E+00 (1.62E+00)
f9 | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f10 | ≈1.42E+03 (2.01E+02) | >1.47E+03 (2.35E+02) | >1.53E+03 (2.77E+02) | >1.50E+03 (1.73E+02) | >1.70E+03 (4.06E+02) | 1.42E+03 (2.73E+02)
f11 | >2.63E+01 (2.82E+01) | >3.93E+00 (1.76E+01) | >3.04E+00 (2.65E+00) | >3.22E+01 (2.85E+01) | >7.80E+00 (1.42E+01) | 1.56E+00 (1.36E+00)
f12 | >9.45E+02 (3.73E+02) | >3.43E+02 (2.19E+02) | >1.70E+02 (1.02E+02) | >1.00E+03 (3.59E+02) | >2.47E+02 (1.28E+02) | 1.32E+02 (8.57E+01)
f13 | >1.55E+01 (4.88E+00) | >1.70E+01 (5.24E+00) | >1.48E+01 (4.83E+00) | >1.61E+01 (4.97E+00) | >1.57E+01 (5.03E+00) | 1.25E+01 (8.99E+00)
f14 | ≈2.11E+01 (4.24E+00) | >2.20E+01 (3.85E+00) | >2.18E+01 (1.25E+00) | ≈2.14E+01 (2.98E+00) | >2.37E+01 (5.25E+00) | 2.16E+01 (4.72E+00)
f15 | >2.67E+00 (1.43E+00) | >3.65E+00 (1.75E+00) | ≈1.09E+00 (6.91E-01) | >3.22E+00 (1.32E+00) | ≈1.86E+00 (1.29E+00) | 1.80E+00 (1.27E+00)
f16 | >3.84E+01 (2.64E+01) | >1.88E+01 (3.98E+01) | >7.89E+01 (8.48E+01) | >6.52E+01 (7.46E+01) | >6.68E+01 (8.35E+01) | 1.55E+01 (5.56E+00)
f17 | >3.32E+01 (5.24E+00) | >2.83E+01 (5.88E+00) | >3.29E+01 (8.08E+00) | >3.28E+01 (6.36E+00) | >2.97E+01 (6.76E+00) | 2.70E+01 (6.12E+00)
f18 | ≈2.07E+01 (3.93E+00) | ≈2.06E+01 (9.07E-01) | ≈2.04E+01 (2.87E+00) | >2.21E+01 (9.87E-01) | ≈2.09E+01 (3.03E+00) | 2.08E+01 (3.21E-01)
f19 | >5.32E+00 (1.65E+00) | >5.91E+00 (1.89E+00) | >4.50E+00 (1.73E+00) | >5.21E+00 (1.58E+00) | >4.61E+00 (1.35E+00) | 4.22E+00 (1.32E+00)
f20 | >3.08E+01 (5.80E+00) | >3.08E+01 (5.96E+00) | >2.94E+01 (5.85E+00) | >3.10E+01 (6.54E+00) | >2.73E+01 (4.56E+00) | 2.63E+01 (6.29E+00)
f21 | >2.11E+02 (1.67E+00) | >2.13E+02 (2.07E+00) | >2.09E+02 (1.96E+00) | ≈2.07E+02 (1.47E+00) | >2.22E+02 (6.64E+00) | 2.07E+02 (1.72E+00)
f22 | ≈1.00E+02 (1.00E-13) | ≈1.00E+02 (1.00E-13) | ≈1.00E+02 (0.00E+00) | ≈1.00E+02 (1.00E-13) | ≈1.00E+02 (0.00E+00) | 1.00E+02 (0.00E+00)
f23 | >3.48E+02 (2.81E+00) | >3.54E+02 (4.11E+00) | >3.51E+02 (3.30E+00) | >3.50E+02 (3.10E+00) | >3.69E+02 (1.05E+01) | 3.45E+02 (3.66E+00)
f24 | >4.25E+02 (1.89E+00) | >4.29E+02 (2.71E+00) | >4.26E+02 (2.47E+00) | >4.26E+02 (1.44E+00) | >4.41E+02 (7.84E+00) | 4.22E+02 (2.90E+00)
f25 | ≈3.87E+02 (2.71E-02) | ≈3.87E+02 (6.82E-03) | ≈3.87E+02 (7.68E-03) | ≈3.87E+02 (2.47E-02) | ≈3.87E+02 (9.60E-03) | 3.87E+02 (5.67E-03)
f26 | >8.97E+02 (3.13E+01) | >9.51E+02 (4.74E+01) | >9.20E+02 (4.30E+01) | >9.51E+02 (3.79E+01) | >1.08E+03 (8.68E+01) | 8.91E+02 (3.48E+01)
f27 | >5.01E+02 (5.44E+00) | >5.03E+02 (4.01E+00) | >4.98E+02 (7.00E+00) | >5.05E+02 (4.81E+00) | >4.99E+02 (6.15E+00) | 4.96E+02 (5.69E+00)
f28 | >3.26E+02 (4.66E+01) | ≈3.05E+02 (4.21E+01) | >3.09E+02 (3.03E+01) | >3.33E+02 (5.24E+01) | ≈3.02E+02 (1.60E+01) | 3.02E+02 (2.49E+01)
f29 | >4.38E+02 (6.17E+00) | >4.38E+02 (1.05E+01) | >4.34E+02 (1.36E+01) | >4.34E+02 (8.45E+00) | >4.33E+02 (1.56E+01) | 4.28E+02 (1.06E+01)
f30 | ≈1.98E+03 (3.07E+01) | ≈1.97E+03 (4.42E+01) | ≈1.97E+03 (1.90E+01) | ≈1.99E+03 (5.24E+01) | ≈1.98E+03 (3.34E+01) | 1.97E+03 (1.24E+01)
Table 5. Algorithm comparison between five powerful DE variants and our MjSO algorithm on D = 50 optimization under f1–f30 of our test suite. Entries are Mean (Std); the prefixes >, <, and ≈ mark functions on which MjSO performs significantly better, worse, or comparably (cf. Table 7).
NO | EBLSHADE | SALSHADE-cnEPSin | jSO | LSHADE | ELSHADE-SPACMA | MjSO
f1 | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f2 | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f3 | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f4 | >7.00E+01 (4.58E+01) | >3.69E+01 (4.17E+01) | >5.62E+01 (4.88E+01) | >7.07E+01 (4.97E+01) | >4.36E+01 (3.62E+01) | 3.00E+01 (2.70E+01)
f5 | ≈1.42E+01 (1.78E+00) | >2.81E+01 (5.30E+00) | >1.64E+01 (3.46E+00) | ≈1.38E+01 (2.95E+00) | ≈1.39E+01 (5.55E+00) | 1.48E+01 (3.28E+00)
f6 | >6.94E-05 (3.32E-04) | >9.52E-07 (1.40E-06) | >1.09E-06 (2.62E-06) | >6.12E-05 (3.09E-04) | <0.00E+00 (0.00E+00) | 1.83E-08 (4.41E-08)
f7 | ≈6.29E+01 (1.98E+00) | >7.73E+01 (5.54E+00) | ≈6.65E+01 (3.47E+00) | ≈6.30E+01 (1.85E+00) | ≈6.15E+01 (3.86E+00) | 6.58E+01 (3.26E+00)
f8 | ≈1.22E+01 (2.00E+00) | >2.64E+01 (5.85E+00) | >1.70E+01 (3.14E+00) | ≈1.20E+01 (2.11E+00) | >1.79E+01 (7.47E+00) | 1.29E+01 (2.17E+00)
f9 | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f10 | >3.17E+03 (3.06E+02) | >3.36E+03 (3.02E+02) | >3.14E+03 (3.67E+02) | >3.15E+03 (2.59E+02) | >3.69E+03 (6.07E+02) | 2.99E+03 (3.14E+02)
f11 | >4.37E+01 (7.27E+00) | >2.81E+01 (1.89E+00) | >2.79E+01 (3.33E+00) | >4.80E+01 (8.24E+00) | >2.62E+01 (3.76E+00) | 2.44E+01 (2.96E+00)
f12 | >2.02E+03 (5.00E+02) | >1.28E+03 (3.64E+02) | >1.68E+03 (5.23E+02) | >2.24E+03 (5.22E+02) | >1.36E+03 (3.42E+02) | 8.51E+02 (3.87E+02)
f13 | >6.40E+01 (3.47E+01) | >8.68E+01 (2.97E+01) | >3.06E+01 (2.12E+01) | >6.41E+01 (2.65E+01) | >3.68E+01 (1.72E+01) | 2.56E+01 (1.96E+01)
f14 | >2.78E+01 (2.27E+00) | >2.65E+01 (2.35E+00) | ≈2.50E+01 (1.87E+00) | >2.98E+01 (3.01E+00) | >3.07E+01 (3.95E+00) | 2.52E+01 (2.53E+00)
f15 | >3.41E+01 (9.07E+00) | >2.63E+01 (3.57E+00) | >2.39E+01 (2.49E+00) | >4.01E+01 (1.06E+01) | >2.28E+01 (2.20E+00) | 2.11E+01 (1.67E+00)
f16 | >3.54E+02 (1.09E+02) | >3.29E+02 (1.11E+02) | >4.51E+02 (1.38E+02) | >3.77E+02 (1.24E+02) | >4.15E+02 (1.77E+02) | 2.85E+02 (1.22E+02)
f17 | >2.64E+02 (6.33E+01) | >2.76E+02 (5.33E+01) | >2.83E+02 (8.61E+01) | ≈2.51E+02 (5.71E+01) | ≈2.30E+02 (9.68E+01) | 2.48E+02 (8.68E+01)
f18 | >3.31E+01 (7.86E+00) | >2.50E+01 (2.09E+00) | >2.43E+01 (2.02E+00) | >3.98E+01 (8.64E+00) | >2.51E+01 (2.56E+00) | 2.24E+01 (1.45E+00)
f19 | >1.93E+01 (3.25E+00) | >1.81E+01 (3.41E+00) | >1.41E+01 (2.26E+00) | >2.32E+01 (5.94E+00) | >1.44E+01 (2.31E+00) | 1.26E+01 (2.58E+00)
f20 | >1.72E+02 (6.94E+01) | >1.31E+02 (2.64E+01) | >1.40E+02 (7.74E+01) | >1.73E+02 (7.15E+01) | >1.08E+02 (7.31E+01) | 1.00E+02 (3.35E+01)
f21 | >2.21E+02 (2.55E+00) | >2.26E+02 (6.04E+00) | >2.19E+02 (3.77E+00) | ≈2.12E+02 (2.25E+00) | >2.42E+02 (9.52E+00) | 2.14E+02 (3.99E+00)
f22 | >2.67E+03 (1.59E+03) | >1.00E+03 (1.70E+03) | >1.49E+03 (1.75E+03) | >2.68E+03 (1.62E+03) | ≈7.86E+02 (1.64E+03) | 7.67E+02 (1.42E+03)
f23 | >4.67E+02 (4.38E+00) | >4.41E+02 (7.07E+00) | >4.30E+02 (6.24E+00) | >4.30E+02 (4.91E+00) | >4.62E+02 (1.39E+01) | 4.27E+02 (5.86E+00)
f24 | >5.05E+02 (3.51E+00) | >5.14E+02 (6.05E+00) | >5.07E+02 (4.13E+00) | >5.06E+02 (2.55E+00) | >5.34E+02 (9.14E+00) | 4.98E+02 (3.50E+00)
f25 | >4.88E+02 (2.01E+01) | >4.88E+02 (1.54E+00) | ≈4.81E+02 (2.80E+00) | >4.84E+02 (1.29E+01) | ≈4.81E+02 (2.80E+00) | 4.80E+02 (1.81E-02)
f26 | >1.13E+03 (4.45E+01) | >1.25E+03 (9.13E+01) | >1.13E+03 (5.62E+01) | >1.14E+03 (4.93E+01) | >1.34E+03 (1.38E+02) | 1.05E+03 (4.64E+01)
f27 | >5.27E+02 (1.09E+01) | >5.23E+02 (8.58E+00) | ≈5.11E+02 (1.11E+01) | >5.31E+02 (1.67E+01) | ≈5.10E+02 (9.52E+00) | 5.18E+02 (1.43E+01)
f28 | >4.73E+02 (2.23E+01) | >4.67E+02 (6.78E+00) | ≈4.60E+02 (6.84E+00) | >4.71E+02 (2.15E+01) | ≈4.60E+02 (6.84E+00) | 4.59E+02 (2.91E-13)
f29 | >3.62E+02 (1.04E+01) | >3.61E+02 (1.07E+01) | >3.63E+02 (1.32E+01) | ≈3.50E+02 (1.09E+01) | >3.58E+02 (1.78E+01) | 3.53E+02 (1.21E+01)
f30 | >6.54E+05 (7.78E+04) | >6.48E+05 (5.85E+04) | ≈6.01E+05 (2.99E+04) | >6.58E+05 (8.12E+04) | ≈5.97E+05 (2.38E+04) | 6.02E+05 (3.07E+04)
Table 6. Algorithm comparison between five powerful DE variants and our MjSO algorithm on D = 100 optimization under f1–f30 of our test suite. Entries are Mean (Std); the prefixes >, <, and ≈ mark functions on which MjSO performs significantly better, worse, or comparably (cf. Table 7).
NO | EBLSHADE | SALSHADE-cnEPSin | jSO | LSHADE | ELSHADE-SPACMA | MjSO
f1 | ≈0.00E+00 (0.00E+00) | >1.36E-08 (2.95E-08) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f2 | <6.44E+00 (1.54E+01) | >1.49E+11 (7.72E+11) | <8.94E+00 (2.42E+01) | >8.42E+05 (6.01E+06) | >7.52E+07 (6.01E+07) | 3.27E+01 (5.92E+01)
f3 | >1.56E-06 (1.46E-06) | <0.00E+00 (0.00E+00) | >2.39E-06 (2.73E-06) | >6.90E-06 (6.77E-06) | ≈1.60E-07 (4.00E-07) | 8.81E-07 (7.38E-07)
f4 | ≈1.84E+02 (5.87E+01) | >2.01E+02 (7.91E+00) | ≈1.90E+02 (2.89E+01) | ≈1.97E+02 (1.58E+01) | >2.01E+02 (8.67E+00) | 1.94E+02 (7.94E+00)
f5 | >4.14E+01 (3.80E+00) | >6.19E+01 (1.05E+01) | >4.39E+01 (5.61E+00) | >3.83E+01 (4.90E+00) | >3.78E+01 (5.85E+00) | 2.82E+01 (9.53E+00)
f6 | >1.22E-02 (6.81E-03) | >5.65E-05 (3.51E-05) | >2.02E-04 (6.20E-04) | >5.71E-03 (3.43E-03) | <0.00E+00 (1.34E-08) | 1.05E-06 (9.32E-07)
f7 | >1.40E+02 (4.25E+00) | >1.71E+02 (7.36E+00) | >1.45E+02 (6.70E+00) | >1.41E+02 (4.46E+00) | >1.51E+02 (1.48E+00) | 1.34E+02 (6.19E+00)
f8 | >3.73E+01 (6.67E+00) | >6.20E+01 (9.99E+00) | >4.22E+01 (5.52E+00) | >3.86E+01 (4.47E+00) | >2.98E+01 (1.32E+01) | 2.80E+01 (1.00E+01)
f9 | >6.33E-01 (4.60E-01) | ≈0.00E+00 (0.00E+00) | >4.59E-02 (1.15E-01) | >4.86E-01 (4.83E-01) | ≈0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00)
f10 | >1.03E+04 (4.51E+02) | >1.05E+04 (5.30E+02) | >9.70E+03 (6.82E+02) | >1.04E+04 (5.45E+02) | >1.08E+04 (9.53E+02) | 9.60E+03 (7.03E+02)
f11 | >3.71E+02 (1.03E+02) | >4.54E+01 (4.79E+01) | >1.13E+02 (4.32E+01) | >4.52E+02 (8.94E+01) | >7.34E+01 (4.30E+01) | 3.01E+01 (4.79E+00)
f12 | >2.28E+04 (5.65E+03) | >6.72E+03 (9.17E+02) | >1.84E+04 (8.35E+03) | >2.49E+04 (9.97E+03) | >7.79E+03 (2.92E+03) | 5.38E+03 (1.32E+03)
f13 | >2.33E+02 (6.04E+01) | >1.04E+02 (3.72E+01) | >1.45E+02 (3.80E+01) | >5.70E+02 (4.14E+02) | >1.49E+02 (3.83E+01) | 5.58E+01 (2.54E+01)
f14 | >2.36E+02 (1.88E+01) | >5.12E+01 (6.90E+00) | >6.43E+01 (1.09E+01) | >2.51E+02 (2.94E+01) | >4.75E+01 (5.69E+00) | 3.84E+01 (4.16E+00)
f15 | >2.65E+02 (3.97E+01) | >9.37E+01 (3.19E+01) | >1.62E+02 (3.81E+01) | >2.57E+02 (4.02E+01) | >1.08E+02 (4.34E+01) | 5.55E+01 (1.54E+01)
f16 | >1.50E+03 (3.54E+02) | >1.51E+03 (1.90E+02) | >1.86E+03 (3.49E+02) | >1.66E+03 (2.78E+02) | >1.76E+03 (4.88E+02) | 1.38E+03 (3.43E+02)
f17 | >1.13E+03 (2.25E+02) | >1.14E+02 (1.75E+02) | >1.28E+03 (2.38E+02) | >1.16E+03 (1.94E+02) | >1.27E+03 (3.45E+02) | 9.17E+02 (2.21E+02)
f18 | >2.65E+02 (4.96E+01) | >6.90E+01 (1.55E+01) | >1.67E+02 (3.65E+01) | >2.41E+02 (5.66E+01) | >1.05E+02 (2.56E+01) | 6.00E+01 (1.36E+01)
f19 | >1.62E+02 (1.78E+01) | >5.71E+01 (7.30E+00) | >1.05E+02 (2.01E+01) | >1.78E+02 (2.42E+01) | >6.05E+01 (7.55E+00) | 4.73E+01 (5.00E+00)
f20 | >1.63E+03 (1.86E+02) | >1.41E+03 (1.88E+02) | >1.38E+03 (2.43E+02) | >1.56E+03 (2.04E+02) | >1.28E+03 (2.54E+02) | 1.26E+03 (2.69E+02)
f21 | >2.59E+02 (3.33E+00) | >2.88E+02 (1.45E+01) | >2.64E+02 (6.43E+00) | >2.59E+02 (6.03E+00) | >2.96E+02 (1.65E+01) | 2.51E+02 (7.79E+00)
f22 | >1.14E+04 (5.60E+02) | >1.08E+04 (5.81E+02) | >1.02E+04 (2.18E+03) | >1.13E+04 (5.65E+02) | >9.70E+03 (1.20E+03) | 3.53E+01 (6.04E+00)
f23 | <5.71E+02 (6.68E+00) | <5.92E+02 (8.64E+00) | <5.71E+02 (1.07E+01) | <5.66E+02 (9.02E+00) | <6.03E+02 (2.19E+01) | 1.13E+03 (2.49E+01)
f24 | >9.01E+02 (5.70E+00) | >9.19E+02 (1.21E+01) | >9.02E+02 (7.89E+00) | >9.20E+02 (6.78E+00) | >9.32E+02 (1.90E+01) | 2.95E+02 (2.18E+01)
f25 | >7.50E+02 (2.58E+01) | >7.21E+02 (4.62E+01) | >7.36E+02 (3.53E+01) | >7.53E+02 (2.58E+01) | >7.00E+02 (3.99E+01) | 6.68E+02 (1.80E+01)
f26 | >3.22E+03 (6.88E+01) | >3.15E+03 (1.73E+02) | >3.27E+03 (8.02E+01) | >3.43E+03 (8.34E+01) | >3.24E+03 (2.19E+02) | 3.00E+02 (3.22E-13)
f27 | <6.19E+02 (1.68E+01) | <5.88E+02 (1.75E+01) | <5.85E+02 (2.17E+01) | <6.43E+02 (1.70E+01) | <5.62E+02 (1.75E+01) | 1.67E+03 (1.35E+02)
f28 | >5.32E+02 (2.58E+01) | >5.16E+02 (1.91E+01) | >5.27E+02 (2.73E+01) | >5.27E+02 (2.15E+01) | >5.21E+02 (2.38E+01) | 5.04E+02 (1.70E+01)
f29 | >1.12E+03 (1.54E+02) | >1.14E+03 (1.40E+02) | >1.26E+03 (1.91E+02) | >1.27E+03 (1.76E+02) | >1.21E+03 (1.98E+02) | 9.89E+02 (1.84E+02)
f30 | <2.39E+03 (1.39E+02) | <2.33E+03 (1.66E+02) | <2.33E+03 (1.19E+02) | <2.41E+03 (1.52E+02) | <2.25E+03 (1.11E+02) | 3.32E+03 (8.57E+01)
Table 7. Summarized statistical testing (Wilcoxon rank-sum test at the 0.05 significance level).
MjSO vs. | Result | D = 10 | D = 30 | D = 50 | D = 100
EBLSHADE | > (better) | 14 | 16 | 23 | 24
EBLSHADE | (no sig) | 13 | 14 | 7 | 2
EBLSHADE | < (worse) | 3 | 0 | 0 | 4
SALSHADE-cnEPSin | > (better) | 8 | 19 | 26 | 25
SALSHADE-cnEPSin | (no sig) | 20 | 10 | 4 | 1
SALSHADE-cnEPSin | < (worse) | 2 | 1 | 0 | 4
jSO | > (better) | 13 | 20 | 20 | 24
jSO | (no sig) | 16 | 10 | 10 | 2
jSO | < (worse) | 1 | 0 | 0 | 4
LSHADE | > (better) | 16 | 17 | 20 | 25
LSHADE | (no sig) | 13 | 13 | 10 | 2
LSHADE | < (worse) | 1 | 0 | 0 | 3
ELSHADE-SPACMA | > (better) | 13 | 18 | 17 | 23
ELSHADE-SPACMA | (no sig) | 17 | 12 | 12 | 3
ELSHADE-SPACMA | < (worse) | 0 | 0 | 1 | 4
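Each >/(no sig)/< count in Table 7 comes from a two-sided Wilcoxon rank-sum test at the 0.05 level over the independent runs of one benchmark function. Below is a minimal SciPy sketch with placeholder samples; the sign convention ('>' when MjSO attains the smaller error) follows the table legend:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mjso_errors = rng.normal(1.0, 0.1, size=51)   # placeholder: 51 runs of MjSO
rival_errors = rng.normal(1.2, 0.1, size=51)  # placeholder: 51 runs of a rival

stat, p = stats.ranksums(mjso_errors, rival_errors)
if p >= 0.05:
    mark = "(no sig)"
else:
    mark = ">" if np.median(mjso_errors) < np.median(rival_errors) else "<"
print(stat, p, mark)
```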
Table 8. Comparison of results for pressure vessel design problem.
Algorithm | Ts | Th | R | L | Target Cost
DE | 0.8231 | 0.4453 | 42.9230 | 176.7356 | 6301.5664
LSHADE | 0.8168 | 0.4472 | 42.1412 | 177.1231 | 6138.8931
EBLSHADE | 0.7802 | 0.3856 | 40.4292 | 198.4964 | 5889.3216
ELSHADE-SPACMA | 0.8125 | 0.4375 | 42.0913 | 176.7465 | 6061.0777
SALSHADE-cnEPSin | 0.7929 | 0.3914 | 41.1773 | 188.3950 | 5912.7115
jSO | 0.8036 | 0.3972 | 41.6392 | 182.4120 | 5930.3137
MjSO | 0.7782 | 0.3847 | 40.3201 | 199.9975 | 5885.5226
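The costs in Table 8 can be spot-checked with the standard pressure vessel objective from the engineering optimization literature, f(Ts, Th, R, L) = 0.6224·Ts·R·L + 1.7781·Th·R² + 3.1661·Ts²·L + 19.84·Ts²·R (we assume this common formulation is the one used here):

```python
def pressure_vessel_cost(ts: float, th: float, r: float, l: float) -> float:
    """Standard pressure vessel design cost (shell, head, material, forming)."""
    return (0.6224 * ts * r * l + 1.7781 * th * r**2
            + 3.1661 * ts**2 * l + 19.84 * ts**2 * r)

# MjSO row of Table 8; gives ~5885.7, i.e., the reported 5885.5226
# up to the four-decimal rounding of the design variables.
print(pressure_vessel_cost(0.7782, 0.3847, 40.3201, 199.9975))
```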
Table 9. Comparison of results for compression/tension spring design problem.
Algorithm | d | D | N | Target Weight
DE | 0.0592 | 0.4983 | 8.8980 | 0.0172
LSHADE | 0.0524 | 0.3532 | 11.6824 | 0.0133
EBLSHADE | 0.0500 | 0.3171 | 14.1417 | 0.0127
ELSHADE-SPACMA | 0.0519 | 0.3487 | 11.8145 | 0.0129
SALSHADE-cnEPSin | 0.0503 | 0.3159 | 14.250 | 0.0128
jSO | 0.0562 | 0.4754 | 6.6670 | 0.0130
MjSO | 0.0516 | 0.3597 | 11.2880 | 0.0126
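Likewise, the weights in Table 9 follow the standard tension/compression spring objective f(d, D, N) = (N + 2)·D·d² (again, the common formulation from the literature is assumed):

```python
def spring_weight(d: float, coil_d: float, n: float) -> float:
    """Standard tension/compression spring weight: (N + 2) * D * d^2."""
    return (n + 2.0) * coil_d * d**2

# MjSO row of Table 9; gives ~0.0127, matching the reported 0.0126
# up to rounding of the design variables.
print(spring_weight(0.0516, 0.3597, 11.2880))
```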
Table 10. Comparison of results for welded beam design problem.
Algorithm | h | l | t | b | Target Cost
DE | 0.2389 | 3.4067 | 9.6383 | 0.2901 | 2.0701
LSHADE | 0.2134 | 3.5601 | 8.4629 | 0.2346 | 1.8561
EBLSHADE | 0.2087 | 6.7221 | 9.3673 | 0.4217 | 1.7583
ELSHADE-SPACMA | 0.1947 | 3.7831 | 9.1234 | 0.2077 | 1.7796
SALSHADE-cnEPSin | 0.2023 | 3.5442 | 9.0366 | 0.2057 | 1.7280
jSO | 0.2147 | 3.3841 | 8.8103 | 0.2195 | 1.7890
MjSO | 0.2057 | 3.4704 | 9.0366 | 0.2057 | 1.7248
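Finally, the costs in Table 10 are consistent with the standard welded beam objective f(h, l, t, b) = 1.10471·h²·l + 0.04811·t·b·(14 + l) (assumed common formulation):

```python
def welded_beam_cost(h: float, l: float, t: float, b: float) -> float:
    """Standard welded beam fabrication cost: weld volume plus bar material."""
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

# MjSO row of Table 10; reproduces the reported cost of ~1.7248.
print(welded_beam_cost(0.2057, 3.4704, 9.0366, 0.2057))
```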
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Shen, Y.; Liang, Z.; Kang, H.; Sun, X.; Chen, Q. A Modified jSO Algorithm for Solving Constrained Engineering Problems. Symmetry 2021, 13, 63. https://doi.org/10.3390/sym13010063

AMA Style

Shen Y, Liang Z, Kang H, Sun X, Chen Q. A Modified jSO Algorithm for Solving Constrained Engineering Problems. Symmetry. 2021; 13(1):63. https://doi.org/10.3390/sym13010063

Chicago/Turabian Style

Shen, Yong, Ziyuan Liang, Hongwei Kang, Xingping Sun, and Qingyi Chen. 2021. "A Modified jSO Algorithm for Solving Constrained Engineering Problems" Symmetry 13, no. 1: 63. https://doi.org/10.3390/sym13010063

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop