Article

Double-Swarm Grey Wolf Optimizer with Covariance and Dimension Learning for Engineering Optimization Problems

1
College of Continuing Education, North China University of Science and Technology, Tangshan 063210, China
2
Key Lab of Intelligent Data Information Processing and Control of Hebei Province, Key Lab of Intelligent Motion Control System of Tangshan City, Tangshan University, Tangshan 063000, China
3
School of Mathematics and Statistics, Taishan University, Tai’an 271000, China
4
College of Artificial Intelligence, North China University of Science and Technology, Tangshan 063210, China
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(12), 2030; https://doi.org/10.3390/sym17122030
Submission received: 18 October 2025 / Revised: 13 November 2025 / Accepted: 20 November 2025 / Published: 27 November 2025
(This article belongs to the Section Mathematics)

Abstract

The Grey Wolf Optimizer (GWO) has been widely applied in many fields due to its fast convergence, simple parameter settings, and easy implementation. However, as research on the algorithm has deepened, GWO has also exposed some problems, such as a tendency to become stuck in local optima and insufficient convergence accuracy. To address these issues, a double-swarm Grey Wolf Optimizer with covariance and dimension learning (CDL-DGWO) is proposed. In the CDL-DGWO, chaotic grouping first divides the grey wolf population into two sub-swarms, forming a symmetric cooperative search framework and thereby improving population diversity. Meanwhile, covariance and dimension learning strategies are utilized to improve the hunting behavior of the grey wolves, enhancing the global search capability and the stability of the algorithm. CDL-DGWO is validated on 23 benchmark problems and the CEC2017 test set. The results indicate that CDL-DGWO outperforms swarm intelligence optimization algorithms such as Particle Swarm Optimization (PSO) and Moth Flame Optimization (MFO), as well as other GWO variants. Finally, the CDL-DGWO algorithm is applied to three engineering design problems that are representative of real-world scenarios. Statistical analysis of the experimental outcomes demonstrates the feasibility and practicality of the proposed methodology.

1. Introduction

Optimization problems are prevalent across diverse real-world applications, such as image segmentation [1], economic load dispatch [2], and feature selection [3]. Traditional optimization methods often fail to solve complex, nonlinear, non-convex, discontinuous, or discrete problems effectively. In contrast, metaheuristic algorithms have gained attention due to their avoidance of unnecessary assumptions and their powerful global search capabilities. Over the past few decades, numerous metaheuristic algorithms have been developed, including Particle Swarm Optimization (PSO) [4], Grey Wolf Optimizer [5], Moth Flame Optimization (MFO) [6], Slime Mold Algorithm (SMA) [7], Whale Optimization Algorithm (WOA) [8], Mayfly Algorithm (MA) [9], and Harris Hawks Optimization (HHO) [10].
The Grey Wolf Optimizer (GWO) is a population-based algorithm that emulates the leadership hierarchy and hunting behavior of grey wolves [5]. In GWO, the population is categorized into alpha, beta, delta, and omega wolves, with the first three leading the optimization process by steering the search direction. GWO is recognized as an effective metaheuristic, with applications across various domains such as fuel cell technology [11], localization technology [12], and robotics [13].
Despite its successful applications across various fields, GWO encounters challenges including reduced population diversity, imbalanced exploration and exploitation, and premature convergence. These limitations are attributed to its dependence on the α, β and δ wolves for location updates. To overcome these limitations, researchers have conducted extensive studies. In [14,15], GWO is combined with SCA and GSA to create two hybrid algorithms, integrating the strengths of both to enhance search performance. In [16], GWO is hybridized with the Jaya algorithm to enhance task scheduling in fog computing, improving load balancing, response time, and resource efficiency. In [17], an enhanced GWO incorporating Lévy mutation (LGWO) was introduced, utilizing Lévy mutation and greedy selection strategies to refine the hunting phases. In [18], a modified version of GWO, termed RWGWO, was proposed to enhance its optimization capabilities. In [19], the “survival of the fittest” (SOF) principle and differential evolution (DE) were employed to improve GWO; this approach increases the likelihood of GWO escaping local optima. Ref. [20] introduced an improved GWO variant leveraging information entropy for dynamic position updates and a nonlinear convergence strategy, aiming to balance exploration and exploitation. Ref. [21] introduced a fuzzy-strategy Grey Wolf Optimization algorithm, utilizing fuzzy mutation, crossover operators, and a non-inferior selection strategy to refine wolf positions and enhance search accuracy. Ref. [22] introduced DI-GWOCD, a discrete version of the Improved Grey Wolf Optimizer, for effectively detecting communities in complex networks; it enhances community detection through local search strategies and binary distance calculations, demonstrating superior performance over existing algorithms in community quality assessment. However, increasing population diversity for solving practical problems requires further attention.
In [23], an Improved Adaptive Grey Wolf Optimization (IAGWO) algorithm was introduced by integrating concepts from Particle Swarm Optimization (PSO), using an Inverse Multiquadratic Function (IMF) for inertia weight adjustment, and employing a Sigmoid-based adaptive updating mechanism. Extensive experiments show that IAGWO outperforms the compared algorithms on benchmark test sets and effectively addresses 19 real-world engineering challenges, demonstrating its robustness and broad application potential in optimization problems.
Ref. [24] introduced an Artificial Bee Colony Algorithm with Adaptive Covariance Matrix (ACoM-ABC), which incorporates an adaptive covariance matrix to enhance the performance of the original Artificial Bee Colony (ABC) in addressing problems with high variable correlations. The effectiveness of ACoM-ABC was validated through extensive testing on multiple benchmark platforms. Ref. [25] proposed the Cumulative Covariance Matrix Artificial Bee Colony (CCoM-ABC) algorithm, which accumulates covariance matrix information from each generation to construct an eigen-coordinate system, thereby guiding the search direction. Additionally, it dynamically selects between the eigen-coordinate and natural coordinate systems during the search process to balance exploration and exploitation, thereby improving performance on non-separable problems. The Covariance Matrix Adapted Grey Wolf Optimizer (CMA-GWO) is an enhanced version of the Grey Wolf Optimizer (GWO) [26]. It employs a Covariance Matrix Adaptation (CMA) strategy in the initial phase to optimize the initial positions of the prey, thereby augmenting the algorithm’s exploration capability and convergence rate. This research not only provides a novel approach for modeling and optimizing the direct metal deposition additive manufacturing process but also enhances the predictive accuracy and generalization ability of the model through the improved GWO. The I-GWO algorithm effectively balances global and local search capabilities by integrating the Grey Wolf Optimizer (GWO) with a Dimension Learning-based Hunting (DLH) search strategy [27]. This integration prevents premature convergence to local optima and maintains population diversity. I-GWO demonstrates its potential in solving real-world engineering optimization problems by effectively handling constraints and identifying optimal solutions.
In light of the “No Free Lunch” theorem [28], which posits that no single optimization algorithm can be universally optimal for all problems, this study addresses several issues of the GWO, including insufficient population diversity, premature convergence, and an imbalance between exploitation and exploration. To overcome these challenges, a double-swarm Grey Wolf Optimizer with covariance and dimension learning (CDL-DGWO) is introduced. In CDL-DGWO, chaotic grouping is used to divide the grey wolf population into two sub-swarms, thereby improving the population diversity. Meanwhile, covariance and dimension learning are utilized to improve the hunting behavior of grey wolves, thereby enhancing both global search capability and algorithm stability. In summary, the main contributions of this paper are as follows:
(1)
Chaotic grouping is utilized to generate two sub-swarms of grey wolves. This strategy improves the population diversity of the CDL-DGWO algorithm.
(2)
Covariance and dimension learning strategies are utilized to improve the hunting behavior of grey wolves, which can enhance the global search capability and algorithm stability.
(3)
The performance of the CDL-DGWO algorithm is validated on 23 benchmark problems and the CEC2017 test suite. The results indicate that CDL-DGWO outperforms the compared swarm intelligence algorithms such as PSO, MFO, and GWO variants in terms of solution accuracy and convergence performance. Additionally, CDL-DGWO is applied to three engineering design problems, which fully demonstrates the practicality of the proposed methodology.
The structure of this article is as follows: Section 2 introduces the GWO; Section 3 details the CDL-DGWO; Section 4 presents simulation verification and comparative results; Section 5 applies CDL-DGWO to engineering problems; the article concludes in the final section.

2. Grey Wolf Optimizer (GWO)

The Grey Wolf Optimizer (GWO) is a metaheuristic algorithm inspired by the leadership and social behavior of grey wolves, designed to simulate their predation process [5,29]. This article provides a brief introduction to GWO.
In GWO, the population of wolves is divided into four categories based on their hunting capabilities: α, β, δ, and ω. The α, β, and δ wolves are considered the leaders and have the best fitness, guiding the rest of the pack in searching for prey. In the context of optimization, the prey represents the optimal solution.
The hunting process of grey wolves is simulated through three main steps: encircling, hunting, and attacking the prey. The mathematical model for encircling prey is given by the following formulas:
$D(t) = \mathrm{abs}\left(X_p(t) - C\,X(t)\right)$ (1)
$X(t+1) = X_p(t) - D(t)\,A$ (2)
Here, t denotes the current iteration count, X signifies any grey wolf within the population, and X_p represents the position vector of the prey. A and C are diagonal coefficient matrices that dictate the search dynamics; their calculation is described in [30]. D denotes the distance between the grey wolf and its prey, and abs(·) signifies the transformation of each vector element into its absolute value.
The hunting process is led by the α, β, and δ wolves, with the position update formulas for the other wolves being:
$D_\alpha(t) = \mathrm{abs}\left(X_\alpha(t) - C_1\,X_v(t)\right)$ (3)
$D_\beta(t) = \mathrm{abs}\left(X_\beta(t) - C_2\,X_v(t)\right)$ (4)
$D_\delta(t) = \mathrm{abs}\left(X_\delta(t) - C_3\,X_v(t)\right)$ (5)
$X_1(t+1) = X_\alpha(t) - D_\alpha(t)\,A_1$ (6)
$X_2(t+1) = X_\beta(t) - D_\beta(t)\,A_2$ (7)
$X_3(t+1) = X_\delta(t) - D_\delta(t)\,A_3$ (8)
$X_v(t+1) = \frac{X_1(t+1) + X_2(t+1) + X_3(t+1)}{3}$ (9)
Among them, X_v(t) represents the position of the v-th wolf in the previous iteration, and X_v(t+1) signifies the position of the current solution. Subsequently, other candidate grey wolves randomly adjust their positions near the prey, guided by the optimal information from α, β and δ.
Finally, when the prey is cornered, the wolves attack and capture it. This phase is simulated in GWO through the shrinking coefficient A: when |A| drops below 1, the position updates force the wolves to converge towards the prey.
The main steps and pseudo-code of GWO are presented as Algorithm 1:
Algorithm 1: Grey Wolf Optimizer (GWO).
1: Initialize t = 1, dimension D, population size N, max_FEs, FEs = 0; % max_FEs and FEs denote the maximum and current number of function evaluations, respectively.
2: Initialize population X of wolves;
3: while FEs ≤ max_FEs
4:        Calculate the fitness value of wolves and update F E s = F E s + N ;
5:         X α , X β and X δ are the first three wolves with the best fitness;
6:        for  i = 1 : N
7:                for  j = 1 : D
8:                        Update parameters a, A and C;
9:                        Update the variables in X i using Equations (3)–(9);
10:               end
11:       end
12: end while
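To make the update concrete, the following Python sketch (an illustration, not the authors' implementation) performs one GWO pass over a population stored as a NumPy array; element-wise random vectors A and C stand in for the diagonal coefficient matrices, as in the canonical GWO [5].

```python
import numpy as np

def gwo_update(X, fitness, a, rng):
    """One GWO position update pass (Equations (3)-(9)).

    X : (N, D) array of wolf positions; fitness : objective to minimize;
    a : control parameter decreasing linearly from 2 to 0 over the run.
    """
    f = np.apply_along_axis(fitness, 1, X)
    leaders = X[np.argsort(f)[:3]]               # X_alpha, X_beta, X_delta
    X_new = np.empty_like(X)
    for v, x in enumerate(X):
        candidates = []
        for leader in leaders:
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A = 2.0 * a * r1 - a                 # coefficient vector A
            C = 2.0 * r2                         # coefficient vector C
            D = np.abs(leader - C * x)           # distance terms, Eqs. (3)-(5)
            candidates.append(leader - D * A)    # leader-guided moves, Eqs. (6)-(8)
        X_new[v] = np.mean(candidates, axis=0)   # average of X1, X2, X3, Eq. (9)
    return X_new
```

In a full run, `a` would be decayed from 2 to 0 across iterations and the pass repeated until the evaluation budget is exhausted.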

3. Double-Swarm Grey Wolf Optimizer with Covariance and Dimension Learning (CDL-DGWO)

Inspired by previous research on GWO, a double-swarm Grey Wolf Optimizer with covariance and dimension learning (CDL-DGWO) is proposed. In CDL-DGWO, chaotic grouping is used to divide the grey wolf population into two sub-swarms, thereby improving population diversity. Meanwhile, covariance and dimension learning strategies are utilized to improve the hunting behavior of grey wolves, thereby enhancing the global search capability and algorithm stability. The CDL-DGWO and its pseudocode are described in detail as follows.

3.1. Chaotic Grouping and Dynamic Regrouping

In this section, the entire population is grouped using a chaotic grouping mechanism [31]. The mechanism divides the population according to a highly random, ergodic sequence generated by a chaotic map, thereby improving the grouping quality and search performance. The iterative chaotic map is used as the chaos function, with the mapping formula as follows:
$x_i = \sin\left(b\pi / x_{i-1}\right)$ (10)
where b is a constant in the range (0, 1).
The process of chaotic grouping is given in Algorithm 2, where N d denotes a chaotic sequence.
Algorithm 2: Chaotic grouping mechanism.
1: The first chaotic value ( N d ( 1 ) ) is initialized randomly in (0,1);
2: for  i = 2 : N
3:           b = r a n d ;
4:           $N_d(i) = \sin\left(b\pi / N_d(i-1)\right)$;
5: end for
6: [ A , I ] = s o r t ( N d ) ;
7: The grey wolf population is divided into two sub-swarms according to I;
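A minimal Python sketch of Algorithm 2 (illustrative only; it assumes the iterative chaotic map x_i = sin(bπ/x_{i−1}) given above, with a small guard against a degenerate zero value that the paper does not discuss):

```python
import numpy as np

def chaotic_grouping(N, rng):
    """Algorithm 2: split wolf indices 0..N-1 into two sub-swarms using a
    chaotic sequence Nd (illustrative sketch)."""
    Nd = np.empty(N)
    Nd[0] = rng.uniform(0.1, 0.9)              # random seed value in (0, 1)
    for i in range(1, N):
        b = rng.random()
        Nd[i] = np.sin(b * np.pi / Nd[i - 1])  # iterative chaotic map
        if Nd[i] == 0.0:                       # guard against division by zero later
            Nd[i] = rng.uniform(0.1, 0.9)
    I = np.argsort(Nd)                         # [A, I] = sort(Nd)
    return I[: N // 2], I[N // 2:]             # index sets of the two sub-swarms
```

The sorted permutation `I` plays the role of the index vector in line 6 of Algorithm 2; the first and second halves of `I` form the two sub-swarms.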
To strengthen the information exchange between the two sub-swarms in the search process, the two sub-swarms are reorganized every R d generations. If the value of R d is too large, timely information exchange between sub-swarms is hindered. Conversely, if the value of R d is too small, the sub-swarms cannot conduct an adequate search. In the early stage of the search, the sub-swarms should be given sufficient time to search independently. In the later stage of the search, maximizing population diversity enhances the ability of the algorithm to locate the global optimum. This paper employs the following dynamic regrouping mechanism:
$R_d = \mathrm{ceil}\left(n_s - (n_s - n_d) \times \frac{FEs}{\max\_FEs}\right)$ (11)
where $n_s = 40$ and $n_d = 10$ represent the maximum and minimum regrouping intervals, respectively, and ceil indicates rounding up to the nearest integer.
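The schedule can be expressed as a one-line helper (a sketch, with n_s and n_d taking the paper's default values):

```python
import math

def regroup_interval(FEs, max_FEs, n_s=40, n_d=10):
    """Dynamic regrouping interval R_d (Equation (11)): about n_s generations
    early in the search, shrinking linearly toward n_d near the end."""
    return math.ceil(n_s - (n_s - n_d) * FEs / max_FEs)
```

At FEs = 0 the interval is 40, leaving the sub-swarms time to search independently; as FEs approaches max_FEs it falls to 10, so regrouping (and hence information exchange) becomes more frequent.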
The choice of dividing the population into two sub-swarms establishes a symmetrical cooperative search architecture, which is based on a balance between algorithmic complexity and performance enhancement. A dual-swarm structure provides a clear and computationally efficient framework for implementing complementary search strategies: one swarm focused on exploration (via covariance learning) and the other on exploitation (via dimension learning). This dichotomy has been effectively demonstrated in other multi-swarm optimizers [32], proving sufficient to introduce beneficial population diversity and mitigate premature convergence.

3.2. Learning Strategies

In GWO, the positions of individual grey wolves are updated through hunting activities. However, the inter-variable interactions are typically not considered, which makes GWO perform poorly in solving complex optimization problems. The update of individual grey wolf positions using a covariance matrix aims to transform the original space into a feature space, thereby enhancing information sharing among variables and improving the algorithm’s overall performance. The detailed explanation is as follows:
The covariance matrix ($\Sigma_X$) is computed over the population of wolves augmented with $X_\alpha$, $X_\beta$ and $X_\delta$, with the calculation process detailed below:
$\Sigma_X = \left[\mathrm{cov}(X_j, X_k)\right]$ (12)
where the augmented population is $[X, X_\alpha, X_\beta, X_\delta]$, $j = 1, 2, \ldots, d$, $k = 1, 2, \ldots, d$, and $X_j = (x_{1j}, x_{2j}, \ldots, x_{nj})^T$, $X_k = (x_{1k}, x_{2k}, \ldots, x_{nk})^T$ are its j-th and k-th columns.
$\mathrm{cov}(X_j, X_k) = \frac{\sum_{i=1}^{n} \left(x_{ij}(t) - \bar{X}_j(t)\right)\left(x_{ik}(t) - \bar{X}_k(t)\right)}{n - 1}$ (13)
where $i = 1, 2, \ldots, n$. The formulas for $\bar{X}_j(t)$ and $\bar{X}_k(t)$ are as follows:
$\bar{X}_j(t) = \frac{1}{n}\sum_{i=1}^{n} x_{ij}(t)$ (14)
$\bar{X}_k(t) = \frac{1}{n}\sum_{i=1}^{n} x_{ik}(t)$ (15)
To rotate the original space into the eigenspace, eigen decomposition is applied to $\Sigma_X$ as follows:
$\Sigma_X = Q_X \Lambda_X Q_X^T$ (16)
where $Q_X$ is an orthogonal matrix composed of the eigenvectors of $\Sigma_X$, and $\Lambda_X$ is a diagonal matrix composed of the eigenvalues of $\Sigma_X$.
Then, the wolf individuals are transformed from the original space to the eigenspace. The conversion formula is as follows:
$X_i^{eig}(t) = X_i(t) \times Q_X$ (17)
Since the augmented population transforms as $[X^{eig}(t), X_\alpha^{eig}(t), X_\beta^{eig}(t), X_\delta^{eig}(t)]$, the eigenspace leader positions $X_\alpha^{eig}(t)$, $X_\beta^{eig}(t)$ and $X_\delta^{eig}(t)$ are obtained directly.
In the eigenspace, wolves update their positions with the following form:
$D^{eig}(t) = \mathrm{abs}\left(X_p^{eig}(t) - C\,X^{eig}(t)\right)$ (18)
$X^{eig}(t+1) = X_p^{eig}(t) - D^{eig}(t)\,A$ (19)
$X^{eig}(t+1) = \frac{X_1^{eig}(t+1) + X_2^{eig}(t+1) + X_3^{eig}(t+1)}{3}$ (20)
Among them, $p \in \{\alpha, \beta, \delta\}$. $X^{eig}(t)$ represents the position of a wolf in the previous iteration in the eigenspace, $X^{eig}(t+1)$ is the position of the current solution, and $X_1^{eig}$, $X_2^{eig}$ and $X_3^{eig}$ denote the candidate positions derived from the leaders α, β and δ.
After the wolves' positions have been updated, the wolves are transformed from the eigenspace back to the original space by the following formula:
$X_i(t+1) = X_i^{eig}(t+1) \times Q_X^T$ (21)
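The covariance learning round trip of Equations (12)–(17) and (21) can be sketched as follows (illustrative Python; `np.cov` stands in for Equations (13)–(15), and the eigen decomposition uses `numpy.linalg.eigh` since Σ_X is symmetric):

```python
import numpy as np

def covariance_rotation(X, leaders):
    """Round trip of the covariance learning transform.

    X : (n, d) positions of one sub-swarm; leaders : (3, d) alpha/beta/delta.
    Returns the orthogonal rotation Q and the eigenspace positions.
    """
    X_aug = np.vstack([X, leaders])       # population augmented with the leaders
    Sigma = np.cov(X_aug, rowvar=False)   # Eqs. (12)-(15): pairwise covariances
    lam, Q = np.linalg.eigh(Sigma)        # Eq. (16): Sigma = Q diag(lam) Q^T
    X_eig = X @ Q                         # Eq. (17): rotate into the eigenspace
    # ... the GWO-style updates of Eqs. (18)-(20) would act on X_eig here ...
    X_back = X_eig @ Q.T                  # Eq. (21): rotate back
    return Q, X_eig, X_back
```

Because Q is orthogonal, the rotation is lossless: without an intermediate update, the round trip returns the original positions exactly (up to floating-point error).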
While covariance learning enhances global search capability, dimension learning provides local refinement. The combination ensures a balanced approach to optimization.
In GWO, the α, β and δ wolves guide the other wolves to update their positions in each iteration, resulting in strong convergence pressure. Dimension learning instead updates positions based on interactions with different neighbors and randomly selected wolves. The detailed process is as follows [27]:
First, the radius $R_i$ is obtained by calculating the Euclidean distance between the current position $X_i(t)$ and the candidate position $X_i^v(t+1)$ produced by the GWO update:
$R_i(t) = \left\| X_i(t) - X_i^v(t+1) \right\|$ (22)
Then, the neighborhood $N_i(t)$ of $X_i(t)$ is defined using the radius $R_i$, where $D_i$ denotes the Euclidean distance between $X_i$ and $X_j$:
$N_i(t) = \left\{ X_j(t) \mid D_i\left(X_i(t), X_j(t)\right) \le R_i(t) \right\}$ (23)
Finally, the candidate solution $X_i^{DL}(t+1)$ obtained through dimension learning is calculated dimension by dimension:
$X_{i\eta}^{DL}(t+1) = X_{i\eta}(t+1) + \mathrm{rand} \times \left(X_{n\eta}(t) - X_{r\eta}(t)\right)$ (24)
where $X_{r\eta}(t)$ is the η-th dimension of a randomly selected wolf, and $X_{n\eta}(t)$ is the η-th dimension of a random neighbor from $N_i(t)$. The better solution between $X_i^{DL}(t+1)$ and $X_i^v(t+1)$ is selected as the final candidate $X_i(t+1)$ for the next iteration.
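A simplified Python sketch of this dimension learning step (illustrative; it draws one random neighbor for all dimensions rather than per dimension, and assumes minimization):

```python
import numpy as np

def dimension_learning(X, X_cand, fitness, rng):
    """Sketch of the dimension learning step (Equations (22)-(24)).

    X : (N, D) current positions; X_cand : (N, D) candidate positions X_v(t+1)
    produced by the GWO update; fitness : objective to minimize.
    """
    N, D = X.shape
    X_next = np.empty_like(X)
    for i in range(N):
        R = np.linalg.norm(X[i] - X_cand[i])        # Eq. (22): neighborhood radius
        dist = np.linalg.norm(X - X[i], axis=1)
        neighbors = np.where(dist <= R)[0]          # Eq. (23): N_i(t), never empty
        n_idx = rng.choice(neighbors)               # random neighbor
        r_idx = rng.integers(N)                     # random wolf
        # Eq. (24): learn from the neighbor/random-wolf difference, per dimension
        X_dl = X_cand[i] + rng.random(D) * (X[n_idx] - X[r_idx])
        # Greedy selection between the DL candidate and the GWO candidate
        X_next[i] = X_dl if fitness(X_dl) < fitness(X_cand[i]) else X_cand[i]
    return X_next
```

The greedy selection at the end guarantees that the accepted position is never worse than the plain GWO candidate.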

3.3. Framework of the CDL-DGWO

This subsection presents the detailed pseudo-code and a comprehensive flowchart that elucidate the workings of the CDL-DGWO. These are delineated in Algorithm 3 and Figure 1, respectively.
To ensure clarity and facilitate understanding, the steps of the CDL-DGWO are outlined as follows:
Step 1:
Initialize the system parameters, which include the population size N, individual dimension d, the parameters a, A and C, as well as the maximum number of function evaluations (max_FEs) and the current number of function evaluations (FEs).
Step 2:
Generate the initial population X of wolves randomly within the defined upper and lower bounds of X .
Step 3:
Divide the population into two sub-swarms using Algorithm 2.
Step 4:
Check the stopping criteria: determine whether FEs has reached max_FEs or the best fitness value meets the accuracy requirement. If either condition is met, output the position of X_α as the best approximated optimum; otherwise, proceed to Step 5.
Step 5:
Calculate and sort the fitness of each individual, updating FEs = FEs + N. Select the top three individuals as X_α, X_β and X_δ.
Step 6:
Sub-swarm 1 enhances search efficiency by integrating a dimension learning mechanism with GWO's original position update mechanism: update the position of every grey wolf individual using Equations (3)–(9), then conduct the dimension learning operation using Equations (22)–(24).
Step 7:
For the Sub-swarm 2, introduce a covariance matrix to enhance information sharing among individual variables, thereby improving the overall performance of the algorithm.
Step 8:
Perform dynamic regrouping using Equation (11) and Algorithm 2, then return to Step 4.
Step 9:
Return the best solution.
Algorithm 3: Pseudocode of CDL-DGWO.
1: Initialize t = 1, d, N, max_FEs, FEs = 0;
2: Initialize population X of wolves;
3: The population is divided into 2 sub-swarms by Algorithm 2;
4: while FEs ≤ max_FEs
5:        Calculate the fitness value of wolves, F E s = F E s + N ;
6:         X α , X β and X δ are the first three wolves with the best fitness;
       %% The Sub-swarm 1
7:        for  i = 1 : N / 2
8:                for  j = 1 : d
9:                        Update parameters a, A and C;
10:                        Update the variables in X i using Equations (3)–(9);
11:               end
12:       end
13:       Conduct dimension learning operation using Equations (22)–(24);
       %% The Sub-swarm 2
14:       Calculate the covariance matrices Σ X using Equations (12)–(15);
15:        Q X is obtained, which is composed of eigenvectors of Σ X ,
          based on eigen decomposition relation Equation (16);
16:        The individuals of wolves are transformed into eigenspace based on
          eigenvector using Equation (17);
17:        for  i = 1 : N / 2
18:                for  j = 1 : d
19:                       Update parameters a, A and C;
20:                       Update the variables in X i e i g using Equations (18)–(20);
21:                end
22:        end
23:        Convert positions of wolves to original space using Equation (21);
24:        if mod(t, R_d) == 0
25:            Regrouping using Algorithm 2;
26:        end
27: end while

3.4. Computational Complexity of the CDL-DGWO

The computational complexity of CDL-DGWO is determined by the original GWO and the added learning strategies. The original GWO primarily consists of the following stages: initialization with a complexity of O(N · D), fitness evaluation with a complexity of O(1) per individual, selection of the first three wolves with a complexity of O(N), and position updates with a complexity of T · O(N · D). Based on these stages, the overall complexity of GWO can be expressed as O(N · D) + T · O(N · D) = O(T · N · D), where N represents the population size, D denotes the dimensionality of the function, and T is the maximum number of iterations.
The computational complexity of CDL-DGWO is primarily determined by the following components per iteration:
(1)
Fitness Evaluation: O ( N ) .
(2)
Standard GWO Operations (population update, leader selection): O ( N · D ) .
(3)
Covariance Matrix Calculation (for Sub-swarm 2): the covariance matrix computation using Equations (12)–(15) requires $O(\frac{1}{2} N \cdot D^2)$ operations. The subsequent eigen decomposition in Equation (16) has a complexity of $O(D^3)$.
(4)
Dimension Learning (for Sub-swarm 1): the neighborhood search and candidate solution update in Equations (22)–(24) contribute $O(\frac{1}{2} N \cdot D)$.
(5)
Chaotic Grouping: The generation and sorting of chaotic sequences incur O ( N log N ) , but when amortized over all iterations, this becomes negligible.
Therefore, the complexity per iteration is $O(N \cdot D) + O(\frac{1}{2} N \cdot D^2) + O(D^3) + O(\frac{1}{2} N \cdot D)$. Since Big-O notation ignores constant coefficients and lower-order terms, over T iterations the computational complexity of CDL-DGWO simplifies to $O\left(T \cdot (N \cdot D^2 + D^3)\right)$.

4. Test Results and Analysis

4.1. Test Suites, Test Methods and Performance Index

To assess the performance of the CDL-DGWO presented in this article, 23 basic functions [33] and the CEC2017 test set [34] are utilized for evaluation. The former includes unimodal, multimodal, and fixed-dimension multimodal problems. The CEC2017 test suite is characterized by high nonlinearity, multimodality, high dimensionality, and non-convexity. These characteristics pose greater challenges for algorithms, which must overcome issues such as local optima and the curse of dimensionality in the solution space.
Based on the above two test sets, a comprehensive performance evaluation of CDL-DGWO is conducted. Additionally, CDL-DGWO is compared with and validated against other improved GWO variants and heuristic algorithms. The comparison algorithms include: GWO [5], MIGWO [30], IGWO [19], LGWO [17], HGWOSCA [14], RWGWO [18], EGWO [35], PSOGWO [36], PSO [37], MFO [6], SSA [38], WOA [8], HSCA [39], HCLPSO [40], PSOGSA [41], mSCA [42]. The parameters of the comparative algorithms used in the experiment are given in Table 1 and are defined as follows, adhering to their original sources: r1, r2, r, p are uniformly distributed random numbers in [0,1]; a is a control parameter that decreases linearly from 2 to 0; c1, c2 are acceleration coefficients; w is the inertia weight; b, t are constants specific to MFO; elitistcheck and rpower are control parameters as defined in [41]. All other unspecified parameters follow the standard definitions from the corresponding references.
During the experiment, the maximum number of function evaluations ( m a x _ F E s ) for the two test suites are set at 30,000 and 300,000, respectively. The dimension D of both test suites is set to 30. To ensure a fair comparison of the algorithms’ core search capabilities while maintaining statistical reliability, all algorithms were evaluated using the following procedure: For each test function, an identical set of 30 initial populations was generated. Each algorithm was then independently run 30 times, with each run starting from a corresponding population in this shared set. Different random seeds were employed across these runs to ensure the robustness of results, while the consistent initialization eliminates any performance bias that could arise from specific starting positions. The maximum number of function evaluations ( m a x _ F E s ) for the 23 benchmark functions is set to 30,000. This value is chosen in accordance with common practices in the metaheuristic optimization literature [33], providing a sufficient budget for algorithms to demonstrate convergence behaviors while maintaining computational tractability for extensive comparative studies involving multiple algorithms and independent runs.
In this paper, the Mean, Std, the multiple-problem Wilcoxon test, and the Friedman test are used to evaluate the performance of the CDL-DGWO algorithm. Mean and Std denote the average and standard deviation of 30 independent runs, respectively. The multiple-problem Wilcoxon test is applied to each test set.
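As a sketch of this evaluation protocol, the following Python snippet applies the Wilcoxon signed-rank and Friedman tests from SciPy to synthetic, hypothetical per-problem results (the arrays are illustrative, not the paper's data):

```python
import numpy as np
from scipy import stats

# Hypothetical per-problem mean errors for three algorithms over 10 problems
# (synthetic numbers for illustration only).
rng = np.random.default_rng(1)
alg_a = rng.random(10)                    # stand-in for CDL-DGWO
alg_b = alg_a + 0.5 * rng.random(10)      # a uniformly weaker competitor
alg_c = alg_a + 0.3 * rng.random(10)      # another competitor

# Multiple-problem Wilcoxon signed-rank test: paired comparison over problems.
stat, p = stats.wilcoxon(alg_a, alg_b)
significant = p < 0.05                    # p < 0.05 -> significant difference

# Friedman test: joint ranking of all algorithms across the problem set.
chi2, p_f = stats.friedmanchisquare(alg_a, alg_b, alg_c)
avg_rank = stats.rankdata(np.c_[alg_a, alg_b, alg_c], axis=1).mean(axis=0)
```

`avg_rank` corresponds to the "Ave" column reported in the Friedman tables: each algorithm is ranked per problem (1 = best) and the ranks are averaged over all problems.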

4.2. Effects of Proposed Strategies

The proposed CDL-DGWO algorithm builds upon the original GWO and enhances its performance through the introduction of hybrid chaotic grouping and dynamic regrouping. These mechanisms effectively divide the population into two distinct sub-swarms. The first sub-swarm employs a covariance strategy, while the second sub-swarm utilizes a dimension learning strategy. Both strategies are designed to improve the overall performance of the algorithm.
To evaluate the effectiveness of the proposed strategies, a comparative experiment is designed in this section. Specifically, the C-GWO algorithm represents an enhanced variant of the original GWO. In this variant, one sub-swarm incorporates the covariance learning mechanism, whereas the other sub-swarm retains the original GWO update mechanism. Similarly, the D-GWO algorithm is another variant of GWO. Here, one sub-swarm employs the dimension learning strategy, while the other sub-swarm maintains the original GWO update mechanism.
Table 2 illustrates the configuration of these strategies. Within this matrix, the binary indicator “1” signifies that a particular strategy is applied within the algorithm, while “0” indicates that the strategy is not utilized.
To evaluate the effectiveness of the introduced strategies, the performance of four algorithms (GWO, C-GWO, D-GWO, and CDL-DGWO) was assessed across 23 benchmark functions. In the simulation tests, the population size was set to 30 for all algorithms, and the stopping criterion was defined as a maximum of 30,000 function evaluations. Each algorithm was independently executed 30 times on each function. The results, including the average values (Mean) and standard deviations (Std) for each algorithm, are presented in Table 3.
The results of the multiple comparisons using the Wilcoxon signed-rank test are presented in Table 4. Additionally, Table 5 displays the Friedman test results for each algorithm across 23 benchmark functions.
Compared to the original GWO, the C-GWO algorithm achieved the smallest values on 14 out of the 23 benchmark functions, including F1 and F3. This demonstrates the effectiveness of the covariance strategy employed by C-GWO. Similarly, D-GWO outperformed GWO on 14 functions, such as F5 and F6, by achieving the smallest values. This highlights the efficacy of the dimension learning strategy utilized by D-GWO. According to the results of the Friedman test, CDL-DGWO, C-GWO, and D-GWO all ranked higher than the original GWO, with CDL-DGWO achieving the highest overall ranking. This indicates that the integration of both covariance and dimension learning strategies in CDL-DGWO yields the best performance.

4.3. Comparisons on Classical Benchmark Problems

This section presents the comparison results between CDL-DGWO and 15 other algorithms. Specifically, Table 6 shows the average and standard deviation of the 16 algorithms after 30 independent runs. Table 7 presents the results of the multiple-problem Wilcoxon test. Table 8 shows the Friedman test results for each algorithm across the benchmark problems.
The statistical outcomes for “R+” represent a positive rank, indicating the superiority of CDL-DGWO compared to its counterparts. Conversely, “R−” denotes a negative rank, suggesting that CDL-DGWO is worse in performance relative to other algorithms. The “+/−/≈” represent that the CDL-DGWO is significantly better than, worse than, and similar to its compared algorithm on the associated function, respectively. Friedman test is used to provide a ranking of all algorithms on a group of test problems, thereby providing a more transparent assessment of the algorithm’s efficacy. “Ave” represents the average ranking, and “Rank” denotes the final ranking.
At the 0.05 level, when p < 0.05, it indicates that CDL-DGWO is significantly better than the compared algorithm, while p > 0.05 indicates that there is no significant difference in performance between the two compared algorithms.
Regarding the 23 benchmark problems, Table 6 indicates that CDL-DGWO outperforms the other algorithms on F1–F4, F7, F9, F10, F11, and F12, and exhibits strong competitiveness on the remaining test problems. IGWO demonstrates superior results on problems F5 and F14–F19; SSA outperforms the others on F6, WOA on F8, and HCLPSO on F13, F21, and F23. Table 7 reveals no significant difference between CDL-DGWO and either IGWO or HCLPSO on the 23 test problems, but a significant difference when compared to the other 13 algorithms. In addition, Table 8 presents the ranking of the 16 algorithms, with an average ranking of 3.2 and a final ranking of 1 for CDL-DGWO.
For a more intuitive demonstration of CDL-DGWO’s performance, convergence curves have been plotted, comparing it with other swarm intelligence algorithms and GWO variants on selected functions. Figure 2 shows the convergence curves of CDL-DGWO for problems F1, F4, F10, F12, F21, and F23, which cover unimodal, multimodal, and fixed-dimension multimodal problems.
On unimodal functions (F1, F4), CDL-DGWO does not exhibit a superior convergence rate initially but outperforms other algorithms in the later stages, achieving higher precision. For multimodal functions (F10, F12), it demonstrates rapid early convergence, though the pace of improvement slows considerably thereafter. A similar two-phase convergence pattern—characterized by a fast start followed by gradual refinement—is also observed on functions F21 and F23.
To sum up, CDL-DGWO finds competitive solutions to the benchmark test problems. The double-swarm technique and dimension learning maintain population diversity and improve the efficiency of search-space exploration, while the covariance strategy improves information exchange between variables, thereby enhancing the performance of GWO.

4.4. Comparisons on CEC 2017

To further assess the performance of CDL-DGWO, this study analyzes its performance based on the CEC 2017 benchmark and compares it with various algorithms, including GWO, IGWO, LGWO, HGWOSCA, RWGWO, EGWO, PSOGWO, MIGWO, PSO, MFO, SSA, WOA, HSCA, PSOGSA, and mSCA. Table 9 provides the average and standard deviation of 16 algorithms running independently for 30 times. Table 10 presents the results of the multiple-problem Wilcoxon test. Table 11 shows the Friedman test results for each algorithm on CEC 2017. This test assesses the performance of each algorithm across multiple problems, providing a ranking that highlights the overall effectiveness of the CDL-DGWO relative to the other contenders.
For CEC2017, it can be seen from Table 9 that CDL-DGWO performs better than the other compared algorithms on problems f7, f10, f24, and f26, and remains strongly competitive on the other test cases. According to Table 10, there is no significant difference between CDL-DGWO and IGWO, SSA, or PSO on the CEC2017 test problems, but a notable difference is observed when compared to the other algorithms; “+” indicates the superiority of CDL-DGWO over the 11 comparative algorithms. In addition, Table 11 presents the rankings for the 16 algorithms, with CDL-DGWO achieving an average ranking of 3.03 and a final ranking of 2. In summary, CDL-DGWO is effective in solving complex optimization problems.
The achievement of near-optimal solutions on several fixed-dimension multimodal functions (e.g., F14–F16) primarily demonstrates CDL-DGWO’s efficacy in locating the global optimum on these specific static benchmarks. To assess the risk of overfitting and generalization capability, the algorithm was further tested on the more complex and modern CEC2017 test suite, which comprises functions characterized by multimodality, asymmetry, noise, and rotation properties, along with intricate variable interactions and noisy fitness landscapes. The competitive performance of CDL-DGWO on CEC2017 (as shown in Table 9, Table 10 and Table 11) suggests that its strengths—maintained population diversity and adaptive search strategies—generalize effectively beyond the simpler classical benchmarks. Furthermore, its success on the diverse set of engineering design problems in Section 5, which involve non-linear constraints and real-world variable dependencies, provides additional evidence of its robustness and general applicability.

4.5. Analysis of Population Diversity in CDL-DGWO

Population diversity reflects the distribution of solutions within a population and can be used to assess the exploration and exploitation capabilities of an algorithm during the iterative process. In this subsection, based on the method described in [43], the population diversity index of CDL-DGWO is calculated and compared with that of the GWO. The calculation method is as follows:
$$\bar{X}_d = \frac{1}{N} \sum_{s=1}^{N} X_{s,d}$$

$$Div_d = \frac{1}{N} \sum_{s=1}^{N} \left| X_{s,d} - \bar{X}_d \right|$$

$$Div = \frac{1}{D} \sum_{d=1}^{D} Div_d$$

where $\bar{X}_d$ represents the average value of the solutions in the d-th dimension, $Div_d$ is the mean absolute deviation of the population in that dimension, and $Div$ is the overall population diversity index.
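Under these definitions, the diversity index is straightforward to compute for a population stored as an N × D matrix. A minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def population_diversity(X):
    """Diversity index of a population X with shape (N, D).

    Computes the per-dimension mean, then the mean absolute deviation
    in each dimension, and finally averages over all D dimensions.
    """
    mean_d = X.mean(axis=0)                  # per-dimension mean
    div_d = np.abs(X - mean_d).mean(axis=0)  # per-dimension deviation
    return div_d.mean()                      # overall diversity index

# A population collapsed onto a single point has zero diversity,
# while a spread-out random population has strictly positive diversity.
X_converged = np.ones((30, 10))
X_spread = np.random.default_rng(1).uniform(-100.0, 100.0, size=(30, 10))
```

Tracking this scalar over the function evaluations yields curves of the kind shown in Figure 3.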
Figure 3 illustrates the dynamic changes in population diversity with respect to the number of function evaluations for CDL-DGWO and GWO on four typical functions (unimodal function f3, simple multimodal function f7, hybrid function f14, and composite function f26) from the 30-dimensional CEC2017 benchmark test suite. These functions cover the unimodal, multimodal, hybrid, and composite categories and thus reflect the performance of the algorithms across different scenarios. As observed from the figure, the population diversity of GWO remains relatively stable throughout the search process on all four test functions. However, this relatively uniform diversity pattern limits the algorithm’s ability to escape local optima in complex optimization problems and reveals its limitations in exploring new regions of the solution space; consequently, the global search capability of GWO is somewhat compromised. In contrast, the population diversity of the proposed CDL-DGWO fluctuates significantly throughout the search process and consistently remains at a higher level. This dynamic diversity pattern helps the algorithm balance exploration and exploitation, significantly enhancing its capacity to escape local optima and resulting in stronger adaptability and robustness when tackling complex optimization problems.

5. Testing of Engineering Design Problems

This section employs three practical engineering optimization problems to evaluate CDL-DGWO. In the experiments, the maximum number of function evaluations (max_FEs) is set to 15,000, and the population size is set to 30. Each experiment is conducted independently 30 times.

5.1. Constrained Problem

For the constrained problem, the objective function and constraint of this problem are as follows [44]:
$$\text{Min } F_1(x) = 5.3578547 x_3^2 + 0.8356891 x_1 x_5 + 37.293239 x_1 - 40792.141$$

$$\begin{aligned}
\text{s.t. } g_1(x) &= 85.334407 + 0.0056858 x_2 x_5 + 0.0006262 x_1 x_4 - 0.0022053 x_3 x_5 - 92 \le 0 \\
g_2(x) &= -85.334407 - 0.0056858 x_2 x_5 - 0.0006262 x_1 x_4 + 0.0022053 x_3 x_5 \le 0 \\
g_3(x) &= 80.51249 + 0.0071317 x_2 x_5 + 0.0029955 x_1 x_2 + 0.0021813 x_3^2 - 110 \le 0 \\
g_4(x) &= -80.51249 - 0.0071317 x_2 x_5 - 0.0029955 x_1 x_2 - 0.0021813 x_3^2 + 90 \le 0 \\
g_5(x) &= 9.300961 + 0.0047026 x_3 x_5 + 0.0012547 x_1 x_3 + 0.0019085 x_3 x_4 - 25 \le 0 \\
g_6(x) &= -9.300961 - 0.0047026 x_3 x_5 - 0.0012547 x_1 x_3 - 0.0019085 x_3 x_4 + 20 \le 0
\end{aligned}$$

$$78 \le x_1 \le 102, \quad 33 \le x_2 \le 45, \quad 27 \le x_i \le 45 \; (i = 3, 4, 5)$$
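To make the model concrete, the sketch below evaluates the objective and the six constraints and combines them with a simple static penalty. The penalty scheme and its coefficient rho are illustrative assumptions, not necessarily the constraint-handling technique used in the paper; the reference point x_best is the best feasible solution commonly reported in the literature for this benchmark:

```python
import numpy as np

def f1(x):
    """Objective of the constrained problem."""
    x1, x2, x3, x4, x5 = x
    return (5.3578547 * x3**2 + 0.8356891 * x1 * x5
            + 37.293239 * x1 - 40792.141)

def constraints(x):
    """Return the six inequality constraints g_i(x) <= 0 as an array."""
    x1, x2, x3, x4, x5 = x
    u = 85.334407 + 0.0056858*x2*x5 + 0.0006262*x1*x4 - 0.0022053*x3*x5
    v = 80.51249 + 0.0071317*x2*x5 + 0.0029955*x1*x2 + 0.0021813*x3**2
    w = 9.300961 + 0.0047026*x3*x5 + 0.0012547*x1*x3 + 0.0019085*x3*x4
    return np.array([u - 92.0, -u, v - 110.0, -v + 90.0, w - 25.0, -w + 20.0])

def penalized(x, rho=1e6):
    """Static-penalty fitness: objective plus rho times total violation."""
    viol = np.maximum(constraints(x), 0.0)
    return f1(x) + rho * viol.sum()

# Best feasible solution reported in the literature (approximate).
x_best = np.array([78.0, 33.0, 29.995256, 45.0, 36.775813])
```

At x_best the objective is approximately −30665.539, with g1 and g6 active (near zero), which any feasibility-aware optimizer should reproduce.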
For this minimization problem, Table 12 provides comparative results for the constrained problem. It is evident that CDL-DGWO yields the optimal solution. The results in Table 12 demonstrate that CDL-DGWO is competitive for the constrained problem.

5.2. Tension/Compression Spring Design Problem

For the tension/compression spring design problem, the goal is to achieve the lowest-cost spring by selecting a set of values, as illustrated in Figure 4. This optimization must consider constraints such as shear stress, resonant frequency, and minimum deflection requirements.
The objective function and constraint for this problem are as follows [5,45]:
$$\text{Min } F_2(x) = (x_3 + 2) x_2 x_1^2$$

$$x = (x_1, x_2, x_3) = (d, D, N)$$

$$\begin{aligned}
\text{s.t. } g_1(x) &= 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0 \\
g_2(x) &= \frac{4 x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0 \\
g_3(x) &= 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0 \\
g_4(x) &= \frac{x_1 + x_2}{1.5} - 1 \le 0
\end{aligned}$$

where N represents the number of active coils, D is the mean coil diameter, and d is the wire diameter, with $0.05 \le x_1 \le 2.00$, $0.25 \le x_2 \le 1.30$, and $2.00 \le x_3 \le 15.0$.
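The spring model translates directly into code. A minimal sketch follows, in which the near-optimal design x_star is taken from values commonly reported in the literature (illustrative, not the paper’s own result):

```python
import numpy as np

def spring_weight(x):
    """F2: spring weight, with x = [d, D, N] (wire dia., coil dia., coils)."""
    d, D, N = x
    return (N + 2.0) * D * d**2

def spring_constraints(x):
    """Return the four inequality constraints g_i(x) <= 0 as an array."""
    d, D, N = x
    g1 = 1.0 - D**3 * N / (71785.0 * d**4)
    g2 = ((4.0 * D**2 - d * D) / (12566.0 * (D * d**3 - d**4))
          + 1.0 / (5108.0 * d**2) - 1.0)
    g3 = 1.0 - 140.45 * d / (D**2 * N)
    g4 = (d + D) / 1.5 - 1.0
    return np.array([g1, g2, g3, g4])

# A near-optimal design frequently reported in the literature (approximate).
x_star = np.array([0.051689, 0.356718, 11.288966])
```

At x_star the weight is approximately 0.012665, with g1 and g2 active (near zero), matching the magnitudes reported in Table 13.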
Table 13 presents the statistical comparison between CDL-DGWO and its competitors in terms of the average and standard deviation. It is evident that CDL-DGWO achieves the minimum weight design compared to the other optimization algorithms.

5.3. Welded Beam Design Problem

Welded beam design is a standard engineering benchmark problem, as introduced in the literature [5,45]. As shown in Figure 5, the objective is to design a welded beam at the lowest possible manufacturing cost by finding the global optimal solution.
The design involves four optimization variables: the weld thickness (h), the length of the clamped bar (l), the bar height (t), and the bar thickness (b). The constraints for the welded beam design include the shear stress ( τ ) and bending stress ( σ ) in the beam, the buckling load ( P c ) on the bar, and the end deflection ( δ ) of the beam. Let $X = (x_1, x_2, x_3, x_4) = (h, l, t, b)$; the specific mathematical model is described as follows:
$$\text{Min } F_3(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14 + x_2)$$

$$\begin{aligned}
\text{s.t. } g_1(x) &= \tau(x) - \tau_{max} \le 0 \\
g_2(x) &= \sigma(x) - \sigma_{max} \le 0 \\
g_3(x) &= x_1 - x_4 \le 0 \\
g_4(x) &= 0.10471 x_1^2 + 0.04811 x_3 x_4 (14 + x_2) - 5.0 \le 0 \\
g_5(x) &= 0.125 - x_1 \le 0 \\
g_6(x) &= \delta(x) - \delta_{max} \le 0 \\
g_7(x) &= P - P_c(x) \le 0
\end{aligned}$$

$$0.1 \le x_m \le 2 \; (m = 1, 4), \quad 0.1 \le x_m \le 10 \; (m = 2, 3)$$

$$\tau(x) = \sqrt{(\tau')^2 + (\tau'')^2 + \frac{l \tau' \tau''}{\sqrt{0.25 (l^2 + (h + t)^2)}}}$$

$$\tau' = \frac{P}{\sqrt{2} x_1 x_2}, \quad \tau'' = \frac{M R}{J}, \quad M = P \left( L + \frac{x_2}{2} \right), \quad R = \sqrt{\frac{x_2^2}{4} + \left( \frac{x_1 + x_3}{2} \right)^2}$$

$$J = 2 \left\{ \sqrt{2} x_1 x_2 \left[ \frac{x_2^2}{12} + \left( \frac{x_1 + x_3}{2} \right)^2 \right] \right\}, \quad \sigma(x) = \frac{6 P L}{x_4 x_3^2}, \quad \delta(x) = \frac{4 P L^3}{E x_3^3 x_4}$$

$$P_c(x) = \frac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2} \left( 1 - \frac{x_3}{2L} \sqrt{\frac{E}{4G}} \right)$$

$$P = 6000 \text{ lb}, \; L = 14 \text{ in}, \; E = 30 \times 10^6 \text{ psi}, \; G = 12 \times 10^6 \text{ psi}, \; \tau_{max} = 13{,}600 \text{ psi}, \; \sigma_{max} = 30{,}000 \text{ psi}, \; \delta_{max} = 0.25 \text{ in}$$
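The full model translates mechanically into code. The sketch below evaluates the fabrication cost and all seven constraints; the reference design x_star is a near-optimal solution commonly reported in the literature (illustrative, not the paper’s own result):

```python
import math
import numpy as np

P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def beam_cost(x):
    """F3: fabrication cost with x = [h, l, t, b]."""
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def beam_constraints(x):
    """Return the seven inequality constraints g_i(x) <= 0 as an array."""
    h, l, t, b = x
    tau_p = P / (math.sqrt(2.0) * h * l)                          # tau'
    M = P * (L + l / 2.0)
    R = math.sqrt(l**2 / 4.0 + ((h + t) / 2.0)**2)
    J = 2.0 * (math.sqrt(2.0) * h * l * (l**2 / 12.0 + ((h + t) / 2.0)**2))
    tau_pp = M * R / J                                            # tau''
    tau = math.sqrt(tau_p**2 + tau_pp**2
                    + l * tau_p * tau_pp
                    / math.sqrt(0.25 * (l**2 + (h + t)**2)))
    sigma = 6.0 * P * L / (b * t**2)
    delta = 4.0 * P * L**3 / (E * t**3 * b)
    p_c = (4.013 * E * math.sqrt(t**2 * b**6 / 36.0) / L**2
           * (1.0 - (t / (2.0 * L)) * math.sqrt(E / (4.0 * G))))
    return np.array([tau - TAU_MAX,
                     sigma - SIGMA_MAX,
                     h - b,
                     0.10471 * h**2 + 0.04811 * t * b * (14.0 + l) - 5.0,
                     0.125 - h,
                     delta - DELTA_MAX,
                     P - p_c])

# A near-optimal design commonly reported in the literature (approximate).
x_star = np.array([0.205730, 3.470489, 9.036624, 0.205730])
```

At x_star the cost is approximately 1.7249, with the shear-stress, bending-stress, and buckling constraints active (near zero), which matches the magnitudes reported in Table 14.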
Table 14 provides comparative results for the welded beam design problem. It is evident that CDL-DGWO achieved smaller design solutions in terms of both the average and optimal values compared to the other 14 algorithms, and also obtained the smallest standard deviation. The results in Table 14 demonstrate that CDL-DGWO is competitive in solving the welded beam design problem.
Based on the strong performance of CDL-DGWO demonstrated across 23 benchmark functions, the CEC2017 test suite, and three engineering design problems, the algorithm shows particular promise in complex, real-world optimization scenarios characterized by high dimensionality, non-linearity, and multi-modality. Its enhanced population diversity, achieved through chaotic double-swarm grouping and dynamic regrouping, makes it exceptionally suitable for problems where traditional GWOs are prone to premature convergence. The integration of covariance matrix learning allows CDL-DGWO to effectively handle problems with correlated variables by transforming the search space into an eigenspace, thereby improving information sharing among dimensions. This is especially beneficial in fields such as mechanical design, structural optimization, and energy systems—where design variables often exhibit strong interactions. Furthermore, the dimension learning strategy enhances local exploration, making CDL-DGWO a robust choice for fine-tuning solutions in precision-critical applications like aeronautical engineering, robotics trajectory planning, and embedded system design.

6. Conclusions

In this study, a novel Double-swarm Grey Wolf Optimizer with Covariance and Dimension Learning (CDL-DGWO) has been proposed. This enhancement of the traditional Grey Wolf Optimizer (GWO) utilizes chaotic grouping to partition the population into two sub-swarms, which has been found to improve population diversity. Furthermore, covariance and dimension learning strategies have been incorporated to refine the hunting behavior of grey wolves, thereby significantly enhancing the global search capability and stability of the algorithm. The performance of CDL-DGWO was evaluated using 23 benchmark problems and the CEC2017 test suite. It was demonstrated that CDL-DGWO surpasses the compared swarm intelligence algorithms such as PSO, MFO, and GWO variants in terms of optimal solution identification and convergence performance. Moreover, according to the results of the Friedman test, CDL-DGWO ranks first compared to the comparison algorithms. The efficacy of CDL-DGWO in addressing real-world challenges was further confirmed through its successful application to three practical engineering design problems. The results indicate that the CDL-DGWO algorithm exhibits significant performance advantages in handling complex engineering design optimization problems, particularly in the search for global optima and the maintenance of algorithmic stability. Future research may focus on further refinements in parameter tuning, multi-group dynamic grouping strategy, integration with other optimization techniques, and testing across a broader range of practical applications.

Author Contributions

Conceptualization, S.M. and M.X.; methodology, S.M.; software, S.M.; validation, S.M., M.X. and X.Z.; formal analysis, X.Z.; investigation, X.Y.; resources, M.X.; data curation, X.Z.; writing—original draft preparation, S.M.; writing—review and editing, M.X.; visualization, S.M.; supervision, X.Y.; project administration, M.X.; funding acquisition, M.X. and X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Science Research Project of Hebei Education Department (Grant No. QN2025401), Doctoral Research Startup Fund Project of Tangshan University (Grant No. BC202415) and Natural Science Foundation of Shandong Province under Grant (Grant No. ZR2023QF044).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

Acronyms and Abbreviations
GWO: Grey Wolf Optimizer
CDL-DGWO: Double-swarm Grey Wolf Optimizer with Covariance and Dimension Learning
PSO: Particle Swarm Optimization
MFO: Moth Flame Optimization
SMA: Slime Mold Algorithm
WOA: Whale Optimization Algorithm
MA: Mayfly Algorithm
HHO: Harris Hawks Optimization
SCA: Sine Cosine Algorithm
GSA: Gravitational Search Algorithm
LGWO: Modified Lévy-embedded Grey Wolf Optimizer
RWGWO: Random Walk Grey Wolf Optimizer
SOF: Survival Of the Fittest
DE: Differential Evolution
DI-GWOCD: Discrete version of the Improved Grey Wolf Optimizer
IAGWO: Improved multi-strategy adaptive Grey Wolf Optimization
IMF: Inverse Multiquadratic Function
ACoM-ABC: Artificial Bee Colony Algorithm with Adaptive Covariance Matrix
ABC: Artificial Bee Colony
CCoM-ABC: Cumulative Covariance Matrix Artificial Bee Colony
CMA-GWO: Covariance Matrix Adapted Grey Wolf Optimizer
I-GWO: Improved Grey Wolf Optimizer
DLH: Dimension Learning-based Hunting
MIGWO: Multi-swarm Improved Grey Wolf Optimizer
HGWOSCA: Hybrid Grey Wolf Optimizer (GWO)–Sine Cosine Algorithm (SCA)
EGWO: Enhanced Grey Wolf Optimizer
PSOGWO: Hybrid Particle Swarm Optimization (PSO)–Grey Wolf Optimizer (GWO)
SSA: Salp Swarm Algorithm
HSCA: Hybrid Sine Cosine Algorithm
HCLPSO: Heterogeneous Comprehensive Learning Particle Swarm Optimization
PSOGSA: Hybrid Particle Swarm Optimization (PSO)–Gravitational Search Algorithm (GSA)
mSCA: Self-adaptive Sine Cosine Algorithm
CEC: Congress on Evolutionary Computation
C-GWO: Double-swarm Grey Wolf Optimizer with Covariance
D-GWO: Double-swarm Grey Wolf Optimizer with Dimension Learning
Mathematical Symbols
t: Current iteration number
T: Maximum number of iterations
N: Population size (number of wolves)
D: Dimensionality of the problem (number of variables)
X: Position vector of a grey wolf
X_α, X_β, X_δ: Position vectors of the alpha, beta, and delta wolves (best solutions)
X_P: Position vector of the prey
A, C: Coefficient vectors in GWO
a: Control parameter that decreases linearly from 2 to 0
D: Distance vector between a wolf and the prey
Σ_X: Covariance matrix
Q_X: Orthogonal matrix of eigenvectors
Λ_X: Diagonal matrix of eigenvalues
X_eig: Position vector in the eigenspace
R_i: Neighborhood radius for dimension learning
N_i(t): Neighborhood of the i-th wolf
X_i^DL: Candidate solution from dimension learning
R_d: Regrouping interval
n_s, n_d: Maximum and minimum values for R_d calculation
max_FEs: Maximum number of function evaluations
FEs: Current number of function evaluations
X̄_d: Mean value of all solutions in the d-th dimension
Div: Population diversity index

References

  1. Yu, X.; Wu, X. Ensemble grey wolf Optimizer and its application for image segmentation. Expert Syst. Appl. 2022, 209, 118267. [Google Scholar] [CrossRef]
2. Tai, T.C.; Lee, C.C.; Kuo, C.C. A hybrid grey wolf optimization algorithm using robust learning mechanism for large scale economic load dispatch with valve-point effect. Appl. Sci. 2023, 13, 2727. [Google Scholar] [CrossRef]
  3. Pan, H.; Chen, S.; Xiong, H. A high-dimensional feature selection method based on modified Gray Wolf Optimization. Appl. Soft Comput. 2023, 135, 110031. [Google Scholar] [CrossRef]
  4. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization: An overview. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  5. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  6. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  7. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  8. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  9. Zhang, T.; Zhou, Y.; Zhou, G.; Deng, W.; Luo, Q. Bioinspired bare bones mayfly algorithm for large-scale spherical minimum spanning tree. Front. Bioeng. Biotechnol. 2022, 10, 830037. [Google Scholar] [CrossRef]
  10. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  11. Ebrahimi, S.M.; Hasanzadeh, S.; Khatibi, S. Parameter identification of fuel cell using repairable grey wolf optimization algorithm. Appl. Soft Comput. 2023, 147, 110791. [Google Scholar] [CrossRef]
  12. Yu, X.W.; Huang, L.P.; Liu, Y.; Zhang, K.; Li, P.; Li, Y. WSN node location based on beetle antennae search to improve the gray wolf algorithm. Wirel. Netw. 2022, 28, 539–549. [Google Scholar] [CrossRef]
  13. Dereli, S. A new modified grey wolf optimization algorithm proposal for a fundamental engineering problem in robotics. Neural Comput. Appl. 2021, 33, 14119–14131. [Google Scholar] [CrossRef]
  14. Singh, N.; Singh, S. A novel hybrid GWO-SCA approach for optimization problems. Eng. Sci. Technol. Int. J. 2017, 20, 1586–1601. [Google Scholar] [CrossRef]
15. Yu, X.; Zhao, Q.; Lin, Q.; Wang, T. A grey wolf optimizer-based chaotic gravitational search algorithm for global optimization. J. Supercomput. 2023, 79, 2691–2739. [Google Scholar] [CrossRef]
  16. Keshri, R.; Vidyarthi, D.P. An ML-based task clustering and placement using hybrid Jaya-gray wolf optimization in fog-cloud ecosystem. Concurr. Comput. Pract. Exp. 2024, 36, e8109. [Google Scholar] [CrossRef]
  17. Heidari, A.A.; Pahlavani, P. An efficient modified grey wolf optimizer with Levy flight for optimization tasks. Appl. Soft Comput. 2017, 60, 115–134. [Google Scholar] [CrossRef]
  18. Gupta, S.; Deep, K. A novel random walk grey wolf optimizer. Swarm Evol. Comput. 2019, 44, 101–112. [Google Scholar] [CrossRef]
  19. Wang, J.S.; Li, S.X. An improved grey wolf optimizer based on differential evolution and elimination mechanism. Sci. Rep. 2019, 9, 7181. [Google Scholar] [CrossRef]
  20. Yao, K.; Sun, J.; Chen, C.; Cao, Y.; Xu, M.; Zhou, X.; Tang, N.; Tian, Y. An information entropy-based grey wolf optimizer. Soft Comput. 2023, 27, 4669–4684. [Google Scholar] [CrossRef]
  21. Qin, H.; Meng, T.; Cao, Y. Fuzzy strategy grey wolf optimizer for complex multimodal optimization problems. Sensors 2022, 22, 6420. [Google Scholar] [CrossRef]
  22. Nadimi-Shahraki, M.H.; Moeini, E.; Taghian, S.; Mirjalili, S. Discrete improved grey wolf optimizer for community detection. J. Bionic Eng. 2023, 20, 2331–2358. [Google Scholar] [CrossRef]
  23. Yu, M.; Xu, J.; Liang, W.; Qiu, Y.; Bao, S.; Tang, L. Improved multi-strategy adaptive Grey Wolf Optimization for practical engineering applications and high-dimensional problem solving. Artif. Intell. Rev. 2024, 57, 277. [Google Scholar] [CrossRef]
  24. Yang, J.; Cui, J.; Zhang, Y.D. Artificial bee colony algorithm with adaptive covariance matrix for hearing loss detection. Knowl.-Based Syst. 2021, 216, 106792. [Google Scholar] [CrossRef]
  25. Yang, J.; Xia, X.; Cui, J.; Zhang, Y.D. An artificial bee colony algorithm with a cumulative covariance matrix mechanism and its application in parameter optimization for hearing loss detection models. Expert Syst. Appl. 2023, 229, 120533. [Google Scholar] [CrossRef]
  26. Dhar, A.R.; Gupta, D.; Roy, S.S.; Lohar, A.K.; Mandal, N. Covariance matrix adapted grey wolf optimizer tuned eXtreme gradient boost for bi-directional modelling of direct metal deposition process. Expert Syst. Appl. 2022, 199, 116971. [Google Scholar] [CrossRef]
  27. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  28. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  29. Faris, H.; Aljarah, I.; Al-Betar, M.A.; Mirjalili, S. Grey wolf optimizer: A review of recent variants and applications. Neural Comput. Appl. 2018, 30, 413–435. [Google Scholar] [CrossRef]
  30. Ma, S.; Fang, Y.; Zhao, X.; Liu, Z. Multi-swarm improved Grey Wolf Optimizer with double adaptive weights and dimension learning for global optimization problems. Math. Comput. Simul. 2023, 205, 619–641. [Google Scholar] [CrossRef]
  31. Chen, K.; Xue, B.; Zhang, M.; Zhou, F. Novel chaotic grouping particle swarm optimization with a dynamic regrouping strategy for solving numerical optimization tasks. Knowl.-Based Syst. 2020, 194, 105568. [Google Scholar] [CrossRef]
  32. Chen, Y.; Li, L.; Peng, H.; Xiao, J.; Wu, Q. Dynamic multi-swarm differential learning particle swarm optimizer. Swarm Evol. Comput. 2018, 39, 209–221. [Google Scholar] [CrossRef]
  33. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar] [CrossRef]
  34. Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia, Spain, 5–8 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 372–379. [Google Scholar]
  35. Qais, M.H.; Hasanien, H.M.; Alghuwainem, S. Augmented grey wolf optimizer for grid-connected PMSG-based wind energy conversion systems. Appl. Soft Comput. 2018, 69, 504–515. [Google Scholar] [CrossRef]
  36. Şenel, F.A.; Gökçe, F.; Yüksel, A.S.; Yiğit, T. A novel hybrid PSO–GWO algorithm for optimization problems. Eng. Comput. 2019, 35, 1359–1373. [Google Scholar] [CrossRef]
  37. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  38. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  39. Gupta, S.; Deep, K. A novel hybrid sine cosine algorithm for global optimization and its application to train multilayer perceptrons. Appl. Intell. 2020, 50, 993–1026. [Google Scholar] [CrossRef]
  40. Lynn, N.; Suganthan, P.N. Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm Evol. Comput. 2015, 24, 11–24. [Google Scholar] [CrossRef]
  41. Mirjalili, S.; Hashim, S.Z.M. A new hybrid PSOGSA algorithm for function optimization. In Proceedings of the 2010 International Conference on Computer and Information Application, Tianjin, China, 3–5 December 2010; pp. 374–377. [Google Scholar]
  42. Gupta, S.; Deep, K. A hybrid self-adaptive sine cosine algorithm with opposition based learning. Expert Syst. Appl. 2019, 119, 210–230. [Google Scholar] [CrossRef]
  43. Cheng, S.; Shi, Y.; Qin, Q.; Zhang, Q.; Bai, R. Population diversity maintenance in brain storm optimization algorithm. J. Artif. Intell. Soft Comput. Res. 2014, 4, 83–97. [Google Scholar] [CrossRef]
  44. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm—A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166. [Google Scholar] [CrossRef]
  45. Coello, C.A.C. Use of a self-adaptive penalty approach for engineering optimization problems. Comput. Ind. 2000, 41, 113–127. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the CDL-DGWO algorithm.
Figure 2. Convergence curves on 6 functions.
Figure 3. The population diversity of the GWO and CDL-DGWO.
Figure 4. Tension/compression spring design problem.
Figure 5. Welded beam design problem.
Table 1. Parameter settings.
Algorithm | Population Size (n) | Parameters | Reference
CDL-DGWO | 30 | r1 = rand, r2 = rand, ns = 40, nd = 10 | ~
GWO | 30 | r1 = rand, r2 = rand | [5]
MIGWO | 30 | r1 = rand, r2 = rand, ns = 40, nd = 10 | [30]
IGWO | 30 | r1 = rand, r2 = rand | [27]
LGWO | 30 | r1 = rand, r2 = rand, β = 3/2 | [17]
HGWOSCA | 30 | r1 = rand, r2 = rand, r = rand | [14]
RWGWO | 30 | r1 = rand, r2 = rand | [18]
EGWO | 30 | r1 = rand, r2 = rand | [35]
PSOGWO | 30 | r1 = rand, r2 = rand, c1 = c2 = 2 | [36]
PSO | 30 | c1 = c2 = 2, ω = 0.9 | [37]
MFO | 30 | t ∈ [−1, 1], b = 1 | [6]
SSA | 30 | c2 = rand, c3 = rand | [38]
WOA | 30 | r1 = rand, r2 = rand, p = rand | [8]
HSCA | 30 | a = 2, r2 = 2·π·rand, r3 = 2·rand, r4 = rand | [39]
HCLPSO | 30 | w = 0.99→0.2, c1 = 2.5→0.5, c2 = 0.5→2.5 | [40]
PSOGSA | 30 | elitist_check = 1, rpower = 1, w = 0.3, c1 = 0.5, c2 = 1.5 | [41]
mSCA | 30 | JR = 0.1, r = rand | [42]
Table 2. Various GWOs with three strategies.
Algorithm | Chaotic Grouping and Dynamic Regrouping | Covariance | Dimension Learning
GWO | 0 | 0 | 0
C-GWO | 1 | 1 | 0
D-GWO | 1 | 0 | 1
CDL-DGWO | 1 | 1 | 1
Table 3. The test results of 4 algorithms.
GWO | C-GWO | D-GWO | CDL-DGWO
Mean | Std | Mean | Std | Mean | Std | Mean | Std
F1 2.516 × 10 58 8.746 × 10 58 1.130 × 10 59 2.335 × 10 59 9.641 × 10 30 9.372 × 10 30 2 . 351 × 10 218 0 . 000 × 10 0
F2 9.422 × 10 35 6.918 × 10 35 2.913 × 10 32 2.318 × 10 32 5.036 × 10 19 3.248 × 10 19 2 . 307 × 10 111 2 . 865 × 10 111
F3 9.987 × 10 14 5.107 × 10 13 1.057 × 10 22 2.884 × 10 22 1.174 × 10 4 3.338 × 10 4 9 . 554 × 10 191 0 . 000 × 10 0
F4 1.887 × 10 14 4.129 × 10 14 4.749 × 10 22 1.054 × 10 21 1.303 × 10 6 1.043 × 10 6 2 . 285 × 10 103 3 . 825 × 10 103
F5 2.713 × 10 + 1 9.697 × 10 1 2.573 × 10 + 1 2.279 × 10 1 2.540 × 10 + 1 5.165 × 10 1 2 . 507 × 10 + 1 1 . 789 × 10 1
F6 6.378 × 10 1 3.760 × 10 1 6 . 129 × 10 5 1 . 657 × 10 5 1.822 × 10 1 2.162 × 10 1 1.415 × 10 3 3.174 × 10 4
F7 9.346 × 10 4 4.688 × 10 4 6.456 × 10 4 3.687 × 10 4 2.159 × 10 3 1.093 × 10 3 1 . 278 × 10 4 7 . 789 × 10 5
F8 5.899 × 10 + 3 8.466 × 10 + 2 7.327 × 10 + 3 9.019 × 10 + 2 8.240 × 10 + 3 8.214 × 10 + 2 9 . 084 × 10 + 3 4 . 477 × 10 + 2
F9 1.956 × 10 1 1 . 071 × 10 0 3.553 × 10 1 1.462 × 10 0 1.370 × 10 0 2.146 × 10 0 0 . 000 × 10 0 6 . 245 × 10 0
F10 1.735 × 10 14 4.710 × 10 15 4.678 × 10 15 9.014 × 10 16 3.784 × 10 14 6.018 × 10 15 8 . 882 × 10 16 0 . 000 × 10 0
F11 1.511 × 10 3 3.457 × 10 3 0 . 000 × 10 0 0 . 000 × 10 0 1.505 × 10 3 3.984 × 10 3 0 . 000 × 10 0 0 . 000 × 10 0
F12 3.828 × 10 2 2.342 × 10 2 8 . 960 × 10 6 2 . 886 × 10 6 8.510 × 10 3 6.385 × 10 3 1.060 × 10 4 1.963 × 10 5
F13 5.752 × 10 1 1.917 × 10 1 8 . 718 × 10 4 2.772 × 10 3 1.423 × 10 1 1.072 × 10 1 1.827 × 10 3 4 . 214 × 10 4
F14 3.681 × 10 0 3.527 × 10 0 2.528 × 10 0 2.604 × 10 0 9.980 × 10 1 2.545 × 10 11 9 . 980 × 10 1 7 . 377 × 10 12
F15 2.313 × 10 3 6.120 × 10 3 6.712 × 10 4 1.472 × 10 3 3 . 075 × 10 4 3.944 × 10 8 3 . 075 × 10 4 3 . 863 × 10 8
F16 1.032 × 10 0 5.130 × 10 9 1.032 × 10 0 1.682 × 10 8 1.032 × 10 0 2.646 × 10 9 1 . 032 × 10 0 3 . 824 × 10 10
F17 3.979 × 10 1 4.034 × 10 7 3.979 × 10 1 4.539 × 10 7 3.979 × 10 1 3.274 × 10 7 3 . 979 × 10 1 1 . 914 × 10 8
F18 3.000 × 10 0 1.042 × 10 5 3.000 × 10 0 2.561 × 10 5 3.000 × 10 0 8.063 × 10 9 3 . 000 × 10 0 2 . 785 × 10 9
F19 3.862 × 10 0 2.669 × 10 3 3.863 × 10 0 5.728 × 10 4 3.863 × 10 0 1.479 × 10 6 3 . 863 × 10 0 4 . 499 × 10 7
F20 3.305 × 10 0 5.126 × 10 2 3.291 × 10 0 5.303 × 10 2 3.307 × 10 0 3.831 × 10 2 3 . 322 × 10 0 1 . 335 × 10 5
F21 9.984 × 10 0 9.225 × 10 1 7.590 × 10 0 2.037 × 10 0 9.491 × 10 0 1.715 × 10 0 1 . 015 × 10 1 6 . 410 × 10 4
F22 1 . 040 × 10 1 3.609 × 10 4 6.997 × 10 0 1.887 × 10 0 1.040 × 10 1 8.830 × 10 4 1 . 040 × 10 1 7 . 553 × 10 4
F23 1.027 × 10 1 1.481 × 10 0 7.457 × 10 0 2.201 × 10 0 1.053 × 10 1 1.030 × 10 3 1 . 054 × 10 1 9 . 043 × 10 4
Table 4. Results of multiple-problem Wilcoxon test.
CDL-DGWO vs. | +/−/≈ | R+ | R− | p-Value | α = 0.05
GWO | 22/1/0 | 267 | 9 | 0.000087 | +
C-GWO | 19/3/1 | 240 | 36 | 0.003302 | +
D-GWO | 22/1/0 | 269 | 7 | 0.000068 | +
Table 5. Friedman test results of the 4 algorithms.
GWO | C-GWO | D-GWO | CDL-DGWO
Ave | 3.1739 | 2.7609 | 2.8261 | 1.2391
Rank | 4 | 2 | 3 | 1
Table 6. The test results of 16 algorithms.
CDL-DGWO | GWO | IGWO | LGWO
Mean | Std | Mean | Std | Mean | Std | Mean | Std
F1 2 . 351 × 10 218 0 . 000 × 10 0 2.516 × 10 58 8.746 × 10 58 1.590 × 10 28 2.301 × 10 28 9.719 × 10 37 2.487 × 10 36
F2 2 . 307 × 10 111 2 . 865 × 10 111 9.422 × 10 35 6.918 × 10 35 8.090 × 10 18 7.145 × 10 18 1.191 × 10 22 2.191 × 10 22
F3 9 . 554 × 10 191 0 . 000 × 10 0 9.987 × 10 14 5.107 × 10 13 9.682 × 10 4 2.175 × 10 3 1.720 × 10 6 5.233 × 10 6
F4 2 . 285 × 10 103 3 . 825 × 10 103 1.887 × 10 14 4.129 × 10 14 1.825 × 10 5 1.473 × 10 5 3.475 × 10 9 4.666 × 10 9
F5 2.507 × 10 1 1.789 × 10 1 2.713 × 10 1 9.697 × 10 1 2 . 437 × 10 1 8 . 492 × 10 1 2.669 × 10 1 2.875 × 10 1
F6 1.415 × 10 3 3.174 × 10 4 6.378 × 10 1 3.760 × 10 1 2.225 × 10 2 6.807 × 10 2 1.252 × 10 0 3.846 × 10 1
F7 1 . 278 × 10 4 7 . 789 × 10 5 9.346 × 10 4 4.688 × 10 4 2.904 × 10 3 9.196 × 10 4 1.748 × 10 3 1.266 × 10 3
F8 9.084 × 10 3 4.477 × 10 2 5.899 × 10 3 8.466 × 10 2 8.626 × 10 3 1.475 × 10 3 3.666 × 10 3 3.742 × 10 2
F9 0 . 000 × 10 0 6 . 245 × 10 0 1.956 × 10 1 1.071 × 10 0 2.407 × 10 1 1.285 × 10 1 1.895 × 10 15 1.038 × 10 14
F10 8 . 882 × 10 16 0 . 000 × 10 0 1.735 × 10 14 4.710 × 10 15 6.093 × 10 14 9.014 × 10 15 1.024 × 10 14 3.021 × 10 15
F11 0 . 000 × 10 0 0 . 000 × 10 0 1.511 × 10 3 3.457 × 10 3 7.142 × 10 3 1.088 × 10 2 4.529 × 10 4 2.481 × 10 3
F12 1 . 060 × 10 4 1 . 963 × 10 5 3.828 × 10 2 2.342 × 10 2 3.472 × 10 3 1.894 × 10 2 1.090 × 10 1 9.437 × 10 2
F13 1.827 × 10 3 4.214 × 10 4 5.752 × 10 1 1.917 × 10 1 1.238 × 10 1 1.057 × 10 1 1.167 × 10 0 1.611 × 10 1
F14 9.980 × 10 1 7.377 × 10 12 3.681 × 10 0 3.527 × 10 0 9 . 980 × 10 1 1 . 271 × 10 16 3.356 × 10 0 3.294 × 10 0
F15 3.075 × 10 4 3.863 × 10 8 2.313 × 10 3 6.120 × 10 3 3 . 075 × 10 4 3 . 332 × 10 9 5.226 × 10 3 8.495 × 10 3
F16 1.032 × 10 0 3.824 × 10 10 1.032 × 10 0 5.130 × 10 9 1 . 032 × 10 0 6 . 046 × 10 16 1.032 × 10 0 6.109 × 10 6
F17 3.979 × 10 1 1.914 × 10 8 3.979 × 10 1 4.034 × 10 7 3 . 979 × 10 1 0 . 000 × 10 0 3.982 × 10 1 3.379 × 10 4
F18 3.000 × 10 0 2.785 × 10 9 3.000 × 10 0 1.042 × 10 5 3 . 000 × 10 0 1 . 385 × 10 15 3.000 × 10 0 1.649 × 10 5
F19 3.863 × 10 0 4.499 × 10 7 3.862 × 10 0 2.669 × 10 3 3 . 863 × 10 0 2 . 494 × 10 15 3.858 × 10 0 3.204 × 10 3
F20 3.322 × 10 0 1.335 × 10 5 3.305 × 10 0 5.126 × 10 2 3.322 × 10 0 2.500 × 10 7 3.247 × 10 0 7.994 × 10 2
F21 1.015 × 10 1 6.410 × 10 4 9.984 × 10 0 9.225 × 10 1 9.788 × 10 0 1.196 × 10 0 6.674 × 10 0 2.127 × 10 0
F22 1.040 × 10 1 7.553 × 10 4 1.040 × 10 1 3.609 × 10 4 1.040 × 10 1 1.396 × 10 8 7.482 × 10 0 1.355 × 10 0
F23 1.054 × 10 1 9.043 × 10 4 1.027 × 10 1 1.481 × 10 0 1.054 × 10 1 1.717 × 10 9 7.474 × 10 0 1.178 × 10 0
HGWOSCA | RWGWO | EGWO | PSOGWO
Mean | Std | Mean | Std | Mean | Std | Mean | Std
F1 2.110 × 10 60 4.918 × 10 60 3.610 × 10 59 6.021 × 10 59 1.148 × 10 66 5.831 × 10 66 2.294 × 10 3 8.690 × 10 3
F2 2.210 × 10 35 2.325 × 10 35 1.042 × 10 34 1.029 × 10 34 7.641 × 10 40 2.453 × 10 39 2.611 × 10 4 1.429 × 10 5
F3 3.240 × 10 15 8.878 × 10 15 1.299 × 10 14 3.241 × 10 14 5.041 × 10 11 2.690 × 10 10 7.001 × 10 3 1.655 × 10 4
F4 7.559 × 10 15 9.375 × 10 15 5.423 × 10 14 1.210 × 10 13 2.669 × 10 4 9.189 × 10 4 8.557 × 10 0 1.774 × 10 1
F5 2.702 × 10 1 7.320 × 10 1 2.677 × 10 1 8.616 × 10 1 2.781 × 10 1 8.242 × 10 1 3.980 × 10 6 1.045 × 10 7
F6 6.026 × 10 1 3.574 × 10 1 4.173 × 10 1 2.537 × 10 1 3.419 × 10 0 5.527 × 10 1 1.841 × 10 3 5.270 × 10 3
F7 7.093 × 10 4 3.457 × 10 4 8.766 × 10 4 4.871 × 10 4 3.693 × 10 3 2.237 × 10 3 1.075 × 10 1 4.728 × 10 1
F8 5.981 × 10 3 9.382 × 10 2 8.065 × 10 3 6.192 × 10 2 7.408 × 10 3 6.512 × 10 2 6.790 × 10 3 1.839 × 10 3
F9 1.556 × 10 1 8.523 × 10 1 4.169 × 10 14 1.558 × 10 13 1.666 × 10 2 4.093 × 10 1 3.975 × 10 1 2.266 × 10 1
F10 1.451 × 10 14 1.638 × 10 15 1.652 × 10 14 3.178 × 10 15 2.065 × 10 1 7.863 × 10 1 4.575 × 10 0 6.102 × 10 0
F11 5.283 × 10 4 2.894 × 10 3 2.089 × 10 3 9.132 × 10 3 8.740 × 10 3 1.466 × 10 2 5.300 × 10 1 1.058 × 10 2
F12 3.033 × 10 2 1.568 × 10 2 2.456 × 10 2 9.966 × 10 3 2.328 × 10 0 3.159 × 10 0 2.177 × 10 6 6.982 × 10 6
F13 4.724 × 10 1 2.215 × 10 1 4.490 × 10 1 2.012 × 10 1 2.464 × 10 0 3.606 × 10 1 2.975 × 10 6 1.629 × 10 7
F14 3.124 × 10 0 3.398 × 10 0 1.097 × 10 0 3.033 × 10 1 9.538 × 10 0 4.624 × 10 0 2.578 × 10 0 2.078 × 10 0
F15 2.985 × 10 3 6.933 × 10 3 6.018 × 10 4 1.607 × 10 3 1.763 × 10 3 5.070 × 10 3 4.553 × 10 3 8.066 × 10 3
F16 1.032 × 10 0 8.592 × 10 9 1.032 × 10 0 5.860 × 10 9 1.032 × 10 0 1.593 × 10 9 1.032 × 10 0 3.153 × 10 6
F17 3.979 × 10 1 4.925 × 10 7 3.979 × 10 1 2.171 × 10 5 3.979 × 10 1 5.481 × 10 8 3.980 × 10 1 5.329 × 10 4
F18 3.000 × 10 0 1.419 × 10 5 3.000 × 10 0 1.353 × 10 5 3.000 × 10 0 5.462 × 10 7 3.001 × 10 0 5.172 × 10 3
F19 3.861 × 10 0 3.020 × 10 3 3.862 × 10 0 1.336 × 10 3 3.861 × 10 0 3.446 × 10 3 3.860 × 10 0 3.767 × 10 3
F20 3.285 × 10 0 6.915 × 10 2 3.293 × 10 0 6.277 × 10 2 3.306 × 10 0 8.792 × 10 2 3.173 × 10 0 2.244 × 10 1
F21 9.744 × 10 0 1.580 × 10 0 9.648 × 10 0 1.542 × 10 0 5.706 × 10 0 2.432 × 10 0 8.159 × 10 0 2.934 × 10 0
F22 1.040 × 10 1 3.213 × 10 4 1.040 × 10 1 2.113 × 10 4 1.040 × 10 1 6.554 × 10 4 8.947 × 10 0 2.706 × 10 0
F23 1.036 × 10 1 9.872 × 10 1 1.027 × 10 1 1.481 × 10 0 9.454 × 10 0 2.805 × 10 0 1.002 × 10 1 1.309 × 10 0
PSO | MFO | SSA | WOA
Mean | Std | Mean | Std | Mean | Std | Mean | Std
F1 6.502 × 10 8 3.198 × 10 7 6.667 × 10 2 2.537 × 10 3 1.248 × 10 8 3.180 × 10 9 1.561 × 10 150 8.464 × 10 150
F2 3.143 × 10 4 4.980 × 10 4 4.333 × 10 1 1.953 × 10 1 7.704 × 10 1 9.766 × 10 1 4.748 × 10 101 2.535 × 10 100
F3 1.457 × 10 1 6.441 × 10 0 1.792 × 10 4 1.309 × 10 4 2.644 × 10 2 1.757 × 10 2 2.948 × 10 3 1.683 × 10 3
F4 6.037 × 10 1 1.161 × 10 1 6.781 × 10 1 8.798 × 10 0 6.767 × 10 0 2.862 × 10 0 4.604 × 10 1 3.510 × 10 1
F5 5.793 × 10 1 4.544 × 10 1 2.559 × 10 6 1.383 × 10 7 9.254 × 10 1 1.034 × 10 2 2.715 × 10 1 5.562 × 10 1
F6 3.142 × 10 8 6.443 × 10 8 2.693 × 10 3 5.261 × 10 3 1 . 330 × 10 8 2 . 226 × 10 9 8.895 × 10 2 9.105 × 10 2
F7 6.551 × 10 2 1.837 × 10 2 8.570 × 10 1 2.102 × 10 0 1.108 × 10 1 4.311 × 10 2 1.488 × 10 3 1.826 × 10 3
F8 5.933 × 10 3 7.651 × 10 2 8.022 × 10 3 7.484 × 10 2 7.696 × 10 3 8.065 × 10 2 1 . 139 × 10 4 1 . 664 × 10 3
F9 4.461 × 10 1 1.290 × 10 1 2.029 × 10 2 3.750 × 10 1 5.519 × 10 1 1.460 × 10 1 3.790 × 10 15 1.442 × 10 14
F10 5.493 × 10 2 3.005 × 10 1 1.986 × 10 1 2.037 × 10 1 2.161 × 10 0 8.076 × 10 1 4.086 × 10 15 2.351 × 10 15
F11 9.273 × 10 3 9.641 × 10 3 6.057 × 10 0 2.295 × 10 1 9.597 × 10 3 1.184 × 10 2 2.400 × 10 3 1.315 × 10 2
F12 6.911 × 10 3 2.630 × 10 2 8.095 × 10 1 1.280 × 10 0 6.713 × 10 0 2.662 × 10 0 9.515 × 10 3 7.152 × 10 3
F13 5.494 × 10 3 5.588 × 10 3 4.101 × 10 7 1.251 × 10 8 3.199 × 10 0 9.046 × 10 0 2.356 × 10 1 2.071 × 10 1
F14 3.891 × 10 0 2.418 × 10 0 1.655 × 10 0 1.719 × 10 0 9.980 × 10 1 2.387 × 10 16 1.847 × 10 0 2.497 × 10 0
F15 8.333 × 10 4 1.316 × 10 4 1.723 × 10 3 3.547 × 10 3 1.569 × 10 3 3.559 × 10 3 6.127 × 10 4 2.876 × 10 4
F16 1.032 × 10 0 6.712 × 10 16 1.032 × 10 0 6.775 × 10 16 1.032 × 10 0 7.214 × 10 15 1.032 × 10 0 2.096 × 10 11
F17 3.979 × 10 1 0.000 × 10 0 3.979 × 10 1 0.000 × 10 0 3.979 × 10 1 2.024 × 10 14 3.979 × 10 1 6.162 × 10 6
F18 3.000 × 10 0 7.142 × 10 16 3.000 × 10 0 1.662 × 10 15 3.000 × 10 0 6.351 × 10 14 3.000 × 10 0 2.203 × 10 5
F19 3.863 × 10 0 2.710 × 10 15 3.863 × 10 0 2.710 × 10 15 3.863 × 10 0 2.058 × 10 14 3.861 × 10 0 2.064 × 10 3
F20 3 . 322 × 10 0 1 . 355 × 10 15 3.310 × 10 0 4.740 × 10 2 3.274 × 10 0 6.015 × 10 2 3.321 × 10 0 9.200 × 10 4
F21 5.869 × 10 0 2.297 × 10 0 6.576 × 10 0 3.687 × 10 0 7.052 × 10 0 3.066 × 10 0 9.982 × 10 0 9.305 × 10 1
F22 1 . 040 × 10 1 8 . 727 × 10 16 9.512 × 10 0 2.309 × 10 0 9.461 × 10 0 2.472 × 10 0 7.908 × 10 0 2.913 × 10 0
F23 1.036 × 10 1 9.787 × 10 1 4.470 × 10 0 3.448 × 10 0 9.020 × 10 0 2.867 × 10 0 7.142 × 10 0 3.358 × 10 0
HSCA | HCLPSO | PSOGSA | mSCA
Mean | Std | Mean | Std | Mean | Std | Mean | Std
F1 3.397 × 10 12 1.276 × 10 11 1.443 × 10 7 1.468 × 10 7 1.000 × 10 3 3.051 × 10 3 3.046 × 10 43 1.078 × 10 42
F2 2.560 × 10 9 9.389 × 10 9 7.538 × 10 5 6.356 × 10 5 1.235 × 10 1 3.115 × 10 1 1.776 × 10 26 9.294 × 10 26
F3 2.468 × 10 4 2.294 × 10 4 2.144 × 10 2 1.209 × 10 2 6.785 × 10 3 6.192 × 10 3 8.700 × 10 8 4.606 × 10 7
F4 6.826 × 10 1 3.238 × 10 1 2.609 × 10 0 7.573 × 10 1 6.713 × 10 1 2.102 × 10 1 1.282 × 10 12 6.887 × 10 12
F5 2.779 × 10 1 6.040 × 10 1 5.561 × 10 1 3.740 × 10 1 6.034 × 10 3 2.283 × 10 4 2.699 × 10 1 6.103 × 10 1
F6 2.206 × 10 0 2.999 × 10 1 5.383 × 10 7 1.465 × 10 6 1.677 × 10 3 3.813 × 10 3 8.297 × 10 1 4.832 × 10 1
F7 9.681 × 10 3 1.017 × 10 2 1.960 × 10 2 9.357 × 10 3 7.324 × 10 2 3.546 × 10 2 1.982 × 10 4 1.572 × 10 4
F8 4.159 × 10 3 3.152 × 10 2 1.096 × 10 4 3.394 × 10 2 7.161 × 10 3 8.738 × 10 2 6.056 × 10 3 8.557 × 10 2
F9 8.641 × 10 0 2.245 × 10 1 1.657 × 10 1 5.686 × 10 0 1.609 × 10 2 3.848 × 10 1 0.000 × 10 0 0.000 × 10 0
F10 1.755 × 10 1 7.000 × 10 0 1.927 × 10 4 4.538 × 10 4 1.745 × 10 1 2.875 × 10 0 6.112 × 10 12 2.815 × 10 11
F11 1.150 × 10 2 2.371 × 10 2 7.714 × 10 3 9.874 × 10 3 6.094 × 10 0 2.282 × 10 1 0.000 × 10 0 0.000 × 10 0
F12 2.377 × 10 1 4.844 × 10 2 6.911 × 10 3 2.630 × 10 2 7.466 × 10 0 4.759 × 10 0 3.678 × 10 2 2.148 × 10 2
F13 1.801 × 10 0 3.950 × 10 1 1 . 100 × 10 3 3 . 352 × 10 3 1.367 × 10 7 7.487 × 10 7 6.555 × 10 1 2.280 × 10 1
F14 9.981 × 10 1 2.886 × 10 4 9.980 × 10 1 0.000 × 10 0 4.042 × 10 0 3.535 × 10 0 1.197 × 10 0 6.054 × 10 1
F15 7.572 × 10 4 1.921 × 10 4 4.352 × 10 4 9.822 × 10 5 4.955 × 10 3 7.725 × 10 3 7.723 × 10 4 3.876 × 10 4
F16 1.032 × 10 0 7.172 × 10 6 1.032 × 10 0 6.454 × 10 16 1.032 × 10 0 5.608 × 10 16 1.032 × 10 0 3.856 × 10 8
F17 3.981 × 10 1 2.335 × 10 4 3.979 × 10 1 0.000 × 10 0 3.979 × 10 1 0.000 × 10 0 3.979 × 10 1 3.559 × 10 6
F18 3.000 × 10 0 7.177 × 10 6 3.000 × 10 0 7.283 × 10 16 3.000 × 10 0 2.438 × 10 15 3.000 × 10 0 2.343 × 10 5
F19 3.856 × 10 0 2.984 × 10 3 3.863 × 10 0 2.696 × 10 15 3.863 × 10 0 2.470 × 10 15 3.862 × 10 0 1.838 × 10 3
F20 3.029 × 10 0 2.301 × 10 1 3.322 × 10 0 1.372 × 10 15 3.322 × 10 0 1.412 × 10 15 3.193 × 10 0 5.201 × 10 2
F21 3.390 × 10 0 2.672 × 10 0 1 . 015 × 10 1 6 . 019 × 10 15 3.487 × 10 0 1.161 × 10 0 4.978 × 10 0 1.894 × 10 0
F22 6.127 × 10 0 1.968 × 10 0 1.040 × 10 1 7.376 × 10 16 9.067 × 10 0 2.717 × 10 0 8.098 × 10 0 2.678 × 10 0
F23 5.985 × 10 0 2.076 × 10 0 1 . 054 × 10 1 1 . 234 × 10 15 2.441 × 10 0 8.119 × 10 2 8.732 × 10 0 2.592 × 10 0
Table 7. Results of multiple-problem Wilcoxon test.
CDL-DGWO vs. | +/=/− | R+ | R− | p-Value | α = 0.05
GWO | 22/1/0 | 267 | 9 | 0.00009 | +
IGWO | 13/10/0 | 178 | 98 | 0.22376 |
LGWO | 23/0/0 | 276 | 0 | 0.00003 | +
HGWOSCA | 22/1/0 | 267 | 9 | 0.00009 | +
RWGWO | 22/1/0 | 265 | 11 | 0.00011 | +
EGWO | 22/1/0 | 268 | 8 | 0.00008 | +
PSOGWO | 23/0/0 | 276 | 0 | 0.00003 | +
PSO | 16/7/0 | 240 | 36 | 0.00192 | +
MFO | 19/4/0 | 266 | 10 | 0.00010 | +
SSA | 17/6/0 | 251 | 25 | 0.00059 | +
WOA | 21/2/0 | 249 | 27 | 0.00074 | +
HSCA | 23/0/0 | 276 | 0 | 0.00003 | +
HCLPSO | 11/12/0 | 165 | 111 | 0.41153 |
PSOGSA | 18/5/0 | 261 | 15 | 0.00018 | +
mSCA | 21/0/2 | 276 | 0 | 0.00006 | +
Table 8. Friedman test results of the 16 algorithms.
Algorithm | Rank | Ave | Algorithm | Rank | Ave
CDL-DGWO | 1 | 3.2 | PSO | 6 | 7.3
GWO | 8 | 7.7 | MFO | 13 | 11.2
IGWO | 2 | 4.4 | SSA | 11 | 9.7
LGWO | 12 | 10.1 | WOA | 7 | 7.3
HGWOSCA | 5 | 7.3 | HSCA | 15 | 12.4
RWGWO | 4 | 6.5 | HCLPSO | 3 | 5.7
EGWO | 10 | 9.6 | PSOGSA | 14 | 11.9
PSOGWO | 16 | 13.4 | mSCA | 9 | 8.3
Table 9. Comparisons of CDL-DGWO and other algorithms on CEC 2017.
CDL-DGWO | GWO | IGWO | LGWO
Mean | Std | Mean | Std | Mean | Std | Mean | Std
f1 7.183 × 10 4 2.616 × 10 4 2.508 × 10 9 2.050 × 10 9 2.958 × 10 3 3.985 × 10 3 4.265 × 10 9 1.211 × 10 9
f3 4.513 × 10 2 9.020 × 10 1 4.053 × 10 4 1.031 × 10 4 2.794 × 10 3 2.100 × 10 3 2.628 × 10 4 7.301 × 10 3
f4 4.895 × 10 2 1.962 × 10 1 5.800 × 10 2 6.143 × 10 1 4.808 × 10 2 2.247 × 10 1 7.119 × 10 2 1.234 × 10 2
f5 5.489 × 10 2 1.563 × 10 1 5.981 × 10 2 2.627 × 10 1 5 . 415 × 10 2 3 . 279 × 10 1 7.197 × 10 2 2.275 × 10 1
f6 6.007 × 10 2 4.127 × 10 1 6.069 × 10 2 3.709 × 10 0 6.000 × 10 2 7.742 × 10 2 6.288 × 10 2 4.654 × 10 0
f7 7 . 722 × 10 2 9 . 598 × 10 0 8.602 × 10 2 4.584 × 10 1 7.830 × 10 2 4.624 × 10 1 1.017 × 10 3 3.702 × 10 1
f8 8.441 × 10 2 9.207 × 10 0 8.914 × 10 2 2.529 × 10 1 8 . 361 × 10 2 9 . 587 × 10 0 9.893 × 10 2 2.155 × 10 1
f9 9.033 × 10 2 2.461 × 10 0 1.859 × 10 3 7.011 × 10 2 9 . 000 × 10 2 1 . 415 × 10 2 3.101 × 10 3 6.791 × 10 2
f10 3 . 178 × 10 3 8 . 812 × 10 2 3.819 × 10 3 5.983 × 10 2 4.781 × 10 3 2.322 × 10 3 7.377 × 10 3 6.386 × 10 2
f11 1.215 × 10 3 3.289 × 10 1 1.752 × 10 3 6.526 × 10 2 1 . 150 × 10 3 3 . 044 × 10 1 1.681 × 10 3 3.622 × 10 2
f12 2.901 × 10 6 1.654 × 10 6 6.447 × 10 7 7.641 × 10 7 6.921 × 10 5 5.769 × 10 5 3.042 × 10 8 1.507 × 10 8
f13 1.052 × 10 5 4.797 × 10 4 1.891 × 10 7 7.078 × 10 7 5.122 × 10 4 2.883 × 10 4 1.069 × 10 8 8.052 × 10 7
f14 1.067 × 10 4 7.038 × 10 3 1.954 × 10 5 3.830 × 10 5 3.309 × 10 3 1.880 × 10 3 2.006 × 10 5 2.569 × 10 5
f15 3.917 × 10 4 2.745 × 10 4 3.476 × 10 5 7.603 × 10 5 1.135 × 10 4 1.270 × 10 4 1.647 × 10 6 1.848 × 10 6
f16 1.981 × 10 3 1.949 × 10 2 2.355 × 10 3 2.193 × 10 2 1 . 855 × 10 3 2.410 × 10 2 2.952 × 10 3 3.035 × 10 2
f17 1.813 × 10 3 7.537 × 10 1 1.962 × 10 3 1.326 × 10 2 1 . 805 × 10 3 6.059 × 10 1 2.109 × 10 3 1.296 × 10 2
f18 1.910 × 10 5 1.739 × 10 5 7.865 × 10 5 9.127 × 10 5 9.726 × 10 4 5.664 × 10 4 1.033 × 10 6 6.360 × 10 5
f19 1.604 × 10 4 1.626 × 10 4 2.001 × 10 6 5.366 × 10 6 6.462 × 10 3 5.908 × 10 3 6.249 × 10 6 3.526 × 10 6
f20 2.171 × 10 3 7.226 × 10 1 2.391 × 10 3 1.260 × 10 2 2 . 160 × 10 3 6 . 996 × 10 1 2.496 × 10 3 1.274 × 10 2
f21 2.342 × 10 3 1.074 × 10 1 2.387 × 10 3 2.914 × 10 1 2 . 337 × 10 3 3 . 047 × 10 1 2.500 × 10 3 2.145 × 10 1
f22 2.685 × 10 3 8.483 × 10 2 4.696 × 10 3 1.603 × 10 3 2.843 × 10 3 1.484 × 10 3 7.773 × 10 3 2.549 × 10 3
f23 2.693 × 10 3 2.557 × 10 1 2.765 × 10 3 4.404 × 10 1 2 . 680 × 10 3 2 . 167 × 10 1 2.886 × 10 3 2.032 × 10 1
f24 2 . 855 × 10 3 1 . 071 × 10 1 2.927 × 10 3 5.373 × 10 1 2.858 × 10 3 3.882 × 10 1 3.060 × 10 3 2.195 × 10 1
f25 2.887 × 10 3 1.075 × 10 0 3.005 × 10 3 7.306 × 10 1 2.887 × 10 3 2.033 × 10 0 3.013 × 10 3 3.559 × 10 1
f26 3 . 462 × 10 3 5 . 632 × 10 2 4.696 × 10 3 4.182 × 10 2 3.802 × 10 3 1.139 × 10 2 5.797 × 10 3 1.934 × 10 2
f27 3.207 × 10 3 8.679 × 10 0 3.251 × 10 3 2.794 × 10 1 3.198 × 10 3 1.052 × 10 1 3.262 × 10 3 1.598 × 10 1
f28 3.213 × 10 3 1.505 × 10 1 3.409 × 10 3 8.442 × 10 1 3.187 × 10 3 3.844 × 10 1 3.436 × 10 3 6.555 × 10 1
f29 3.466 × 10 3 8.141 × 10 1 3.727 × 10 3 1.568 × 10 2 3 . 400 × 10 3 6 . 621 × 10 1 4.046 × 10 3 2.016 × 10 2
f30 1.901 × 10 5 1.010 × 10 5 7.589 × 10 6 5.473 × 10 6 5.246 × 10 4 3.458 × 10 4 2.331 × 10 7 8.817 × 10 6
HGWOSCA | RWGWO | EGWO | PSOGWO
Mean | Std | Mean | Std | Mean | Std | Mean | Std
f1 1.892 × 10 9 1.124 × 10 9 8.026 × 10 4 3.752 × 10 4 1.739 × 10 10 7.129 × 10 9 2.801 × 10 9 7.624 × 10 9
f3 3.063 × 10 4 9.958 × 10 3 3.348 × 10 2 2.855 × 10 1 5.511 × 10 4 7.082 × 10 3 3.447 × 10 4 2.447 × 10 4
f4 5.927 × 10 2 8.847 × 10 1 4.871 × 10 2 1.515 × 10 1 3.540 × 10 3 2.152 × 10 3 5.957 × 10 2 1.945 × 10 2
f5 5.924 × 10 2 2.079 × 10 1 5.693 × 10 2 2.064 × 10 1 8.281 × 10 2 6.851 × 10 1 6.037 × 10 2 6.556 × 10 1
f6 6.065 × 10 2 3.887 × 10 0 6.005 × 10 2 5.531 × 10 1 6.733 × 10 2 1.018 × 10 1 6.152 × 10 2 1.771 × 10 1
f7 8.498 × 10 2 3.531 × 10 1 8.043 × 10 2 1.804 × 10 1 1.155 × 10 3 7.159 × 10 1 9.662 × 10 2 2.634 × 10 2
f8 8.848 × 10 2 2.094 × 10 1 8.620 × 10 2 1.592 × 10 1 1.046 × 10 3 4.306 × 10 1 9.244 × 10 2 8.061 × 10 1
f9 1.649 × 10 3 5.717 × 10 2 9.425 × 10 2 1.698 × 10 2 1.192 × 10 4 2.390 × 10 3 2.528 × 10 3 1.897 × 10 3
f10 3.991 × 10 3 5.381 × 10 2 4.014 × 10 3 4.445 × 10 2 5.432 × 10 3 7.264 × 10 2 4.242 × 10 3 9.858 × 10 2
f11 1.673 × 10 3 6.090 × 10 2 1.204 × 10 3 3.410 × 10 1 4.927 × 10 3 1.481 × 10 3 1.712 × 10 3 8.868 × 10 2
f12 6.010 × 10 7 6.450 × 10 7 1.722 × 10 6 1.463 × 10 6 1.845 × 10 9 1.918 × 10 9 7.160 × 10 7 3.061 × 10 8
f13 1.768 × 10 7 5.356 × 10 7 7.785 × 10 4 4.263 × 10 4 2.908 × 10 9 3.760 × 10 9 1.327 × 10 7 5.070 × 10 7
f14 1.744 × 10 5 3.292 × 10 5 2.243 × 10 4 1.653 × 10 4 2.430 × 10 5 2.373 × 10 5 2.531 × 10 5 4.549 × 10 5
f15 2.311 × 10 5 5.384 × 10 5 3.955 × 10 4 2.229 × 10 4 2.543 × 10 7 6.438 × 10 7 5.483 × 10 6 2.109 × 10 7
f16 2.368 × 10 3 2.703 × 10 2 2.268 × 10 3 3.120 × 10 2 4.010 × 10 3 7.070 × 10 2 2.689 × 10 3 6.333 × 10 2
f17 1.885 × 10 3 9.808 × 10 1 1.919 × 10 3 1.124 × 10 2 2.287 × 10 3 1.912 × 10 2 2.132 × 10 3 2.230 × 10 2
f18 5.731 × 10 5 1.235 × 10 6 2.136 × 10 5 2.494 × 10 5 9.151 × 10 5 2.240 × 10 6 2.272 × 10 6 6.050 × 10 6
f19 1.591 × 10 6 3.131 × 10 6 2.508 × 10 4 1.981 × 10 4 1.023 × 10 7 9.289 × 10 6 8.880 × 10 6 4.481 × 10 7
f20 2.365 × 10 3 1.200 × 10 2 2.311 × 10 3 1.057 × 10 2 2.795 × 10 3 1.916 × 10 2 2.390 × 10 3 1.907 × 10 2
f21 2.387 × 10 3 2.226 × 10 1 2.361 × 10 3 1.730 × 10 1 2.553 × 10 3 4.808 × 10 1 2.417 × 10 3 6.833 × 10 1
f22 4.902 × 10 3 1.335 × 10 3 4.818 × 10 3 1.346 × 10 3 7.612 × 10 3 7.756 × 10 2 3.875 × 10 3 1.808 × 10 3
f23 2.742 × 10 3 2.798 × 10 1 2.731 × 10 3 2.302 × 10 1 3.099 × 10 3 6.047 × 10 1 2.801 × 10 3 1.500 × 10 2
f24 2.915 × 10 3 4.728 × 10 1 2.911 × 10 3 1.978 × 10 1 3.278 × 10 3 8.485 × 10 1 2.985 × 10 3 1.059 × 10 2
f25 2.962 × 10 3 3.752 × 10 1 2 . 886 × 10 3 1 . 430 × 10 0 3.464 × 10 3 3.304 × 10 2 3.018 × 10 3 2.433 × 10 2
f26 4.472 × 10 3 3.028 × 10 2 4.282 × 10 3 2.011 × 10 2 9.010 × 10 3 1.005 × 10 3 5.100 × 10 3 1.090 × 10 3
f27 3.234 × 10 3 1.203 × 10 1 3.217 × 10 3 8.677 × 10 0 3.593 × 10 3 1.141 × 10 2 3.268 × 10 3 1.573 × 10 2
f28 3.340 × 10 3 5.101 × 10 1 3.212 × 10 3 1.852 × 10 1 3.586 × 10 3 2.501 × 10 2 3.318 × 10 3 6.933 × 10 1
f29 3.680 × 10 3 1.222 × 10 2 3.510 × 10 3 1.191 × 10 2 4.872 × 10 3 3.584 × 10 2 3.908 × 10 3 3.214 × 10 2
f30 4.908 × 10 6 3.804 × 10 6 2.699 × 10 5 1.556 × 10 5 1.117 × 10 7 3.277 × 10 6 4.281 × 10 6 9.186 × 10 6
MIGWO | PSO | MFO | SSA
Mean | Std | Mean | Std | Mean | Std | Mean | Std
f1 2.958 × 10 3 2.963 × 10 3 2 . 750 × 10 3 3 . 950 × 10 3 1.730 × 10 10 1.017 × 10 10 5.821 × 10 3 6.783 × 10 3
f3 1.019 × 10 4 2.951 × 10 3 3 . 000 × 10 2 6 . 908 × 10 8 9.905 × 10 4 6.987 × 10 4 3.000 × 10 2 1.080 × 10 8
f4 4.913 × 10 2 1.788 × 10 1 4 . 619 × 10 2 2 . 545 × 10 1 1.437 × 10 3 8.620 × 10 2 4.879 × 10 2 1.858 × 10 1
f5 5.860 × 10 2 2.386 × 10 1 7.112 × 10 2 2.259 × 10 1 7.114 × 10 2 5.572 × 10 1 6.218 × 10 2 3.161 × 10 1
f6 6 . 000 × 10 2 8 . 228 × 10 7 6.321 × 10 2 8.016 × 10 0 6.388 × 10 2 9.977 × 10 0 6.267 × 10 2 9.452 × 10 0
f7 8.921 × 10 2 1.109 × 10 1 8.297 × 10 2 2.944 × 10 1 1.181 × 10 3 2.654 × 10 2 8.709 × 10 2 4.392 × 10 1
f8 8.566 × 10 2 1.908 × 10 1 9.437 × 10 2 1.461 × 10 1 1.006 × 10 3 4.301 × 10 1 9.111 × 10 2 3.050 × 10 1
f9 9.001 × 10 2 3.152 × 10 1 3.710 × 10 3 6.253 × 10 2 6.880 × 10 3 1.894 × 10 3 2.482 × 10 3 7.975 × 10 2
f10 4.327 × 10 3 1.448 × 10 3 4.424 × 10 3 6.930 × 10 2 5.329 × 10 3 6.262 × 10 2 4.712 × 10 3 8.184 × 10 2
f11 1.246 × 10 3 2.813 × 10 1 1.189 × 10 3 1.962 × 10 1 3.839 × 10 3 3.764 × 10 3 1.287 × 10 3 5.469 × 10 1
f12 5.974 × 10 6 1.627 × 10 6 3 . 525 × 10 4 1 . 418 × 10 4 5.578 × 10 8 9.341 × 10 8 1.163 × 10 6 1.036 × 10 6
f13 1.543 × 10 6 9.175 × 10 5 5 . 506 × 10 3 4 . 310 × 10 3 2.877 × 10 7 1.675 × 10 8 9.051 × 10 4 6.699 × 10 4
f14 1.562 × 10 4 1.018 × 10 4 3 . 063 × 10 3 1 . 242 × 10 3 1.158 × 10 5 2.291 × 10 5 4.578 × 10 3 2.643 × 10 3
f15 7.833 × 10 4 5.184 × 10 4 5 . 369 × 10 3 2 . 742 × 10 3 6.489 × 10 4 5.183 × 10 4 6.134 × 10 4 4.507 × 10 4
f16 2.047 × 10 3 1.981 × 10 2 2.637 × 10 3 2.209 × 10 2 3.105 × 10 3 4.018 × 10 2 2.424 × 10 3 3.006 × 10 2
f17 1.830 × 10 3 4.314 × 10 1 2.190 × 10 3 2.112 × 10 2 2.606 × 10 3 3.060 × 10 2 2.055 × 10 3 1.558 × 10 2
f18 3.322 × 10 5 2.010 × 10 5 9 . 476 × 10 4 4 . 135 × 10 4 2.585 × 10 6 4.165 × 10 6 1.281 × 10 5 9.244 × 10 4
f19 8.312 × 10 4 5.671 × 10 4 2 . 481 × 10 3 5 . 620 × 10 2 2.183 × 10 7 5.219 × 10 7 2.747 × 10 5 1.671 × 10 5
f20 2.184 × 10 3 7.148 × 10 1 2.615 × 10 3 1.537 × 10 2 2.757 × 10 3 1.819 × 10 2 2.379 × 10 3 1.203 × 10 2
f21 2.372 × 10 3 2.163 × 10 1 2.449 × 10 3 2.126 × 10 1 2.483 × 10 3 4.310 × 10 1 2.400 × 10 3 2.430 × 10 1
f22 2 . 300 × 10 3 2 . 650 × 10 7 4.150 × 10 3 2.313 × 10 3 7.021 × 10 3 1.126 × 10 3 5.703 × 10 3 1.314 × 10 3
f23 2.771 × 10 3 6.283 × 10 1 3.195 × 10 3 1.096 × 10 2 2.842 × 10 3 3.893 × 10 1 2.747 × 10 3 2.801 × 10 1
f24 2.952 × 10 3 2.206 × 10 1 3.251 × 10 3 7.596 × 10 1 2.981 × 10 3 3.764 × 10 1 2.915 × 10 3 2.325 × 10 1
f25 2.889 × 10 3 8.318 × 10 0 2.887 × 10 3 1.496 × 10 1 3.052 × 10 3 2.541 × 10 2 2.902 × 10 3 2.257 × 10 1
f26 3.509 × 10 3 7.992 × 10 2 5.495 × 10 3 1.931 × 10 3 6.007 × 10 3 4.835 × 10 2 4.487 × 10 3 8.828 × 10 2
f27 3.204 × 10 3 1.078 × 10 1 3 . 173 × 10 3 8 . 508 × 10 0 3.252 × 10 3 2.324 × 10 1 3.235 × 10 3 1.768 × 10 1
f28 3 . 173 × 10 3 4 . 698 × 10 1 3.184 × 10 3 5.199 × 10 1 3.503 × 10 3 1.564 × 10 2 3.189 × 10 3 4.191 × 10 1
f29 3.562 × 10 3 6.350 × 10 1 3.570 × 10 3 1.953 × 10 2 4.163 × 10 3 3.262 × 10 2 3.883 × 10 3 1.947 × 10 2
f30 7.462 × 10 5 3.776 × 10 5 7 . 687 × 10 3 2 . 490 × 10 3 1.016 × 10 6 1.519 × 10 6 1.082 × 10 6 8.138 × 10 5
WOA | HSCA | PSOGSA | mSCA
Mean | Std | Mean | Std | Mean | Std | Mean | Std
f1 2.902 × 10 6 1.834 × 10 6 6.485 × 10 9 1.327 × 10 9 6.453 × 10 9 7.820 × 10 9 4.234 × 10 8 4.429 × 10 8
f3 1.677 × 10 5 6.343 × 10 4 6.422 × 10 4 1.927 × 10 4 5.010 × 10 4 5.324 × 10 4 4.436 × 10 4 6.226 × 10 3
f4 5.307 × 10 2 3.171 × 10 1 1.036 × 10 3 1.542 × 10 2 1.304 × 10 3 1.120 × 10 3 5.319 × 10 2 2.516 × 10 1
f5 8.284 × 10 2 5.966 × 10 1 7.783 × 10 2 2.375 × 10 1 8.090 × 10 2 6.117 × 10 1 5.822 × 10 2 1.582 × 10 1
f6 6.741 × 10 2 9.136 × 10 0 6.359 × 10 2 4.100 × 10 0 6.611 × 10 2 1.006 × 10 1 6.027 × 10 2 1.392 × 10 0
f7 1.218 × 10 3 7.427 × 10 1 1.061 × 10 3 2.761 × 10 1 1.408 × 10 3 2.361 × 10 2 8.237 × 10 2 2.059 × 10 1
f8 1.011 × 10 3 3.564 × 10 1 1.049 × 10 3 2.060 × 10 1 1.055 × 10 3 5.163 × 10 1 8.713 × 10 2 1.558 × 10 1
f9 8.313 × 10 3 2.662 × 10 3 4.892 × 10 3 1.339 × 10 3 1.057 × 10 4 2.606 × 10 3 1.028 × 10 3 1.221 × 10 2
f10 5.967 × 10 3 7.782 × 10 2 8.582 × 10 3 3.595 × 10 2 5.216 × 10 3 7.810 × 10 2 3.976 × 10 3 5.503 × 10 2
f11 1.419 × 10 3 6.189 × 10 1 1.813 × 10 3 1.887 × 10 2 3.020 × 10 3 3.723 × 10 3 1.313 × 10 3 4.849 × 10 1
f12 2.933 × 10 7 2.175 × 10 7 6.253 × 10 8 1.488 × 10 8 9.344 × 10 8 1.460 × 10 9 2.654 × 10 7 4.546 × 10 7
f13 1.473 × 10 5 7.488 × 10 4 1.816 × 10 8 6.742 × 10 7 4.712 × 10 8 8.209 × 10 8 4.060 × 10 6 2.109 × 10 7
f14 3.196 × 10 5 2.016 × 10 5 1.612 × 10 5 1.042 × 10 5 5.418 × 10 4 1.028 × 10 5 3.805 × 10 4 2.521 × 10 4
f15 3.609 × 10 4 2.452 × 10 4 3.762 × 10 6 4.250 × 10 6 2.260 × 10 7 1.428 × 10 8 6.007 × 10 4 2.998 × 10 4
f16 3.621 × 10 3 4.487 × 10 2 3.702 × 10 3 3.142 × 10 2 3.241 × 10 3 3.878 × 10 2 2.176 × 10 3 2.395 × 10 2
f17 2.620 × 10 3 2.216 × 10 2 2.474 × 10 3 2.065 × 10 2 2.811 × 10 3 2.727 × 10 2 1.862 × 10 3 9.638 × 10 1
f18 1.973 × 10 6 1.616 × 10 6 5.933 × 10 6 5.500 × 10 6 5.804 × 10 6 1.424 × 10 7 2.806 × 10 5 2.313 × 10 5
f19 4.442 × 10 6 2.701 × 10 6 1.984 × 10 7 1.306 × 10 7 9.239 × 10 7 3.052 × 10 8 9.513 × 10 5 1.878 × 10 6
f20 2.705 × 10 3 1.913 × 10 2 2.671 × 10 3 1.742 × 10 2 2.817 × 10 3 2.240 × 10 2 2.225 × 10 3 7.728 × 10 1
f21 2.558 × 10 3 3.666 × 10 1 2.562 × 10 3 2.436 × 10 1 2.541 × 10 3 3.982 × 10 1 2.362 × 10 3 2.690 × 10 1
f22 7.391 × 10 3 1.375 × 10 3 9.922 × 10 3 3.785 × 10 2 7.329 × 10 3 8.299 × 10 2 2.476 × 10 3 2.541 × 10 2
f23 3.070 × 10 3 1.055 × 10 2 2.943 × 10 3 2.290 × 10 1 3.048 × 10 3 1.014 × 10 2 2.718 × 10 3 1.887 × 10 1
f24 3.172 × 10 3 8.501 × 10 1 3.118 × 10 3 2.220 × 10 1 3.143 × 10 3 7.963 × 10 1 2.883 × 10 3 1.533 × 10 1
f25 2.934 × 10 3 2.213 × 10 1 3.083 × 10 3 4.636 × 10 1 3.179 × 10 3 4.275 × 10 2 2.927 × 10 3 1.478 × 10 1
f26 7.456 × 10 3 1.271 × 10 3 6.522 × 10 3 2.646 × 10 2 7.821 × 10 3 1.310 × 10 3 4.243 × 10 3 1.634 × 10 2
f27 3.349 × 10 3 5.435 × 10 1 3.278 × 10 3 1.870 × 10 1 3.305 × 10 3 5.616 × 10 1 3.219 × 10 3 1.055 × 10 1
f28 3.341 × 10 3 5.283 × 10 1 3.630 × 10 3 1.076 × 10 2 3.531 × 10 3 2.805 × 10 2 3.282 × 10 3 3.395 × 10 1
f29 4.878 × 10 3 3.360 × 10 2 4.645 × 10 3 2.693 × 10 2 4.331 × 10 3 2.916 × 10 2 3.549 × 10 3 1.044 × 10 2
f30 1.941 × 10 7 9.071 × 10 6 3.772 × 10 7 1.686 × 10 7 9.832 × 10 5 1.629 × 10 6 3.246 × 10 6 2.402 × 10 6
Table 10. Results of multiple-problem Wilcoxon test.
CDL-DGWO vs. | +/=/− | R+ | R− | p-Value | α = 0.05
GWO | 29/0/0 | 435 | 0 | 0.00000 | +
IGWO | 7/22/0 | 94 | 341 | 0.00757 |
LGWO | 29/0/0 | 435 | 0 | 0.00000 | +
HGWOSCA | 29/0/0 | 435 | 0 | 0.00000 | +
RWGWO | 21/8/0 | 347 | 88 | 0.00511 |
EGWO | 29/0/0 | 435 | 0 | 0.00000 | +
PSOGWO | 29/0/0 | 435 | 0 | 0.00000 | +
MIGWO | 23/6/0 | 369 | 66 | 0.00105 | +
PSO | 16/13/0 | 205 | 230 | 0.78694 |
MFO | 29/0/0 | 435 | 0 | 0.00000 | +
SSA | 21/8/0 | 293 | 142 | 0.10256 |
WOA | 28/1/0 | 417 | 18 | 0.00002 | +
HSCA | 29/0/0 | 435 | 0 | 0.00000 | +
PSOGSA | 29/0/0 | 435 | 0 | 0.00000 | +
mSCA | 28/1/0 | 417 | 18 | 0.00002 | +
Table 11. Test results of the 16 algorithms.
Algorithm | Rank | Ave | Algorithm | Rank | Ave
CDL-DGWO | 2 | 3.03 | MIGWO | 4 | 4.76
GWO | 9 | 8.55 | PSO | 6 | 5.79
IGWO | 1 | 2.21 | MFO | 12 | 12.21
LGWO | 11 | 11.45 | SSA | 7 | 6.45
HGWOSCA | 8 | 7.55 | WOA | 13 | 12.28
RWGWO | 3 | 4.21 | HSCA | 15 | 13.69
EGWO | 16 | 14.66 | PSOGSA | 14 | 13.66
PSOGWO | 10 | 9.93 | mSCA | 5 | 5.59
Table 12. Comparison of results on welded beam design problem.
 | Mean | Std | Best | Best Values for Variables: x1 | x2 | x3 | x4 | x5
CDL-DGWO | −30,939.86 | 240.84 | −31,453.63 | 58.1663 | 30.2820 | 30.9084 | 46.29 | 42.19
GWO | −30,657.62 | 3.37 | −30,663.58 | 78 | 33 | 30.01993 | 45 | 36.72181
MIGWO | −30,621.15 | 19.16 | −30,652.95 | 78.1843 | 36.3522 | 32.8490 | 35.8049 | 34.0235
IGWO | −30,653.64 | 5.15 | −30,660.87 | 78.0330 | 33.0005 | 30.0320 | 44.9935 | 36.6943
LGWO | −30,514.90 | 84.48 | −30,631.71 | 78 | 33 | 30.5260 | 45 | 35.8899
HGWOSCA | −30,655.91 | 3.64 | −30,662.48 | 78 | 33 | 30.0244 | 45 | 36.7473
RWGWO | −30,659.20 | 3.56 | −30,664.49 | 78 | 33 | 30.0067 | 45 | 36.7748
EGWO | −30,658.42 | 5.88 | −30,664.79 | 78 | 33.0182 | 30.0358 | 45 | 36.6768
PSOGWO | −30,650.96 | 27.82 | −30,665.28 | 78 | 33 | 30.0208 | 44.9763 | 36.7329
MFO | −30,642.89 | 43.32 | −30,665.54 | 78 | 33 | 29.9953 | 45 | 36.7758
SSA | −29,739.53 | 1.1053 × 10^−11 | −29,739.53 | 78.0427 | 35.6896 | 33.9524 | 41.2558 | 30.1411
WOA | −29,913.76 | 142.60 | −30,323.91 | 78 | 35.5690 | 33.7554 | 40.9303 | 29.9783
HSCA | −30,280.57 | 188.77 | −30,566.36 | 78 | 33 | 30.4282 | 45 | 36.1814
PSOGSA | −30,660.77 | 19.52 | −30,665.54 | 78 | 33 | 29.9953 | 45 | 36.7758
mSCA | −30,607.12 | 34.17 | −30,647.25 | 78 | 33.1531 | 30.1047 | 44.9989 | 36.5289
Table 13. Comparison of results on tension/compression spring design.
 | Mean | Std | Best | Best Values for Variables: x1 | x2 | x3
CDL-DGWO | 0.0127 | 0.0000 | 0.0127 | 0.0519 | 0.3599 | 11.1646
GWO | 0.0128 | 0.0000 | 0.0127 | 0.0546 | 0.4303 | 8.0324
MIGWO | 0.0128 | 0.0000 | 0.0127 | 0.0535 | 0.3814 | 11.7483
IGWO | 0.0127 | 0.0000 | 0.0127 | 0.0517 | 0.3562 | 11.3484
LGWO | 0.0132 | 0.0005 | 0.0128 | 0.0539 | 0.4088 | 9.1575
HGWOSCA | 0.0128 | 0.0002 | 0.0127 | 0.0500 | 0.3172 | 14.0638
RWGWO | 0.0128 | 0.0002 | 0.0127 | 0.0500 | 0.3172 | 14.0646
EGWO | 0.0132 | 0.0005 | 0.0127 | 0.0585 | 0.5413 | 5.2880
PSOGWO | 0.0128 | 0.0001 | 0.0127 | 0.0501 | 0.3187 | 13.9319
MFO | 0.0131 | 0.0008 | 0.0127 | 0.0500 | 0.3174 | 14.0278
SSA | 7.39 × 10^10 | 3.35 × 10^11 | 0.0185 | 0.0777 | 1.3657 | 1.4074
WOA | 0.0142 | 0.0011 | 0.0127 | 0.0657 | 0.7960 | 2.6487
HSCA | 0.0139 | 0.0009 | 0.0130 | 0.0500 | 0.3115 | 15.0000
PSOGSA | 0.0135 | 0.0009 | 0.0127 | 0.0528 | 0.3828 | 9.9089
mSCA | 0.0128 | 0.0002 | 0.0127 | 0.0507 | 0.3318 | 12.9811
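For context, the tension/compression spring problem is usually stated in the literature as minimizing coil weight over wire diameter x1, mean coil diameter x2, and number of active coils x3, subject to four constraints. The sketch below uses that standard formulation (not taken from this paper), with a static-penalty wrapper as one simple constraint-handling choice:

```python
# Standard-literature formulation of the spring design problem (assumed, not
# quoted from this paper). g_i(x) <= 0 means feasible.
def spring_weight(x1, x2, x3):
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x1, x2, x3):
    return [
        1.0 - x2 ** 3 * x3 / (71785.0 * x1 ** 4),
        (4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 ** 3 * x1 - x1 ** 4))
        + 1.0 / (5108.0 * x1 ** 2) - 1.0,
        1.0 - 140.45 * x1 / (x2 ** 2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ]

def penalized(x1, x2, x3, rho=1e6):
    # static penalty: add rho * violation^2 for each violated constraint
    f = spring_weight(x1, x2, x3)
    return f + rho * sum(max(0.0, g) ** 2 for g in spring_constraints(x1, x2, x3))

# Evaluating CDL-DGWO's best design from Table 13 gives a weight close to 0.0127.
w = spring_weight(0.0519, 0.3599, 11.1646)
print(w)
```

Under this formulation, the rounded design variables in Table 13 reproduce the reported best objective to within rounding error.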
Table 14. Comparison of results on welded beam design problem.
 | Mean | Std | Best | Best Values for Variables: x1 | x2 | x3 | x4
CDL-DGWO | 1.7249 | 1.12 × 10^−15 | 1.7249 | 0.2057 | 3.4705 | 9.0366 | 0.2057
GWO | 1.7295 | 0.0032 | 1.7262 | 0.2046 | 3.4990 | 9.0382 | 0.2058
MIGWO | 1.7410 | 0.0081 | 1.7314 | 0.2351 | 3.2464 | 8.4059 | 0.2436
IGWO | 1.7295 | 0.0015 | 1.7267 | 0.2053 | 3.4809 | 9.0614 | 0.2058
LGWO | 1.8406 | 0.0297 | 1.7622 | 0.2064 | 3.5079 | 9.0255 | 0.2170
HGWOSCA | 1.7302 | 0.0030 | 1.7263 | 0.2056 | 3.4777 | 9.0453 | 0.2060
RWGWO | 1.7290 | 0.0017 | 1.7266 | 0.2039 | 3.5142 | 9.0429 | 0.2057
EGWO | 1.7472 | 0.0404 | 1.7264 | 0.2046 | 3.4962 | 9.0349 | 0.2059
PSOGWO | 1.7444 | 0.0352 | 1.7251 | 0.2054 | 3.6264 | 9.0816 | 0.2088
MFO | 1.8213 | 0.1587 | 1.7249 | 0.3087 | 2.5470 | 7.3754 | 0.3088
SSA | 9.5473 | 3.60 × 10^−15 | 9.5473 | 1.1900 | 3.1444 | 3.4581 | 1.6228
WOA | 4.2099 | 0.9787 | 1.9154 | 0.1654 | 4.1512 | 10.0000 | 0.2050
HSCA | 1.8732 | 0.0489 | 1.7835 | 0.2130 | 4.0698 | 8.7929 | 0.2195
PSOGSA | 2.2723 | 0.2732 | 1.7472 | 0.2990 | 2.6560 | 7.3623 | 0.3099
mSCA | 1.7415 | 0.0111 | 1.7277 | 0.2026 | 3.5556 | 9.0790 | 0.2081
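The welded beam cost reported in Table 14 can be checked against the standard literature form of the objective (weld thickness x1, weld length x2, bar height x3, bar thickness x4); this is the commonly cited formulation, not quoted from this paper, and the stress/deflection/buckling constraints are omitted here for brevity:

```python
# Standard-literature welded beam cost function (assumed formulation):
# weld material cost + bar material cost.
def welded_beam_cost(x1, x2, x3, x4):
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

# Evaluating CDL-DGWO's best design from Table 14 gives a cost near the
# reported best of 1.7249 (rounding of the table's variables accounts for
# the small residual difference).
cost = welded_beam_cost(0.2057, 3.4705, 9.0366, 0.2057)
print(cost)
```

This quick re-evaluation is a useful sanity check that a reported "best" objective and its design variables are mutually consistent.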

Ma, S.; Xu, M.; Zhao, X.; Yan, X. Double-Swarm Grey Wolf Optimizer with Covariance and Dimension Learning for Engineering Optimization Problems. Symmetry 2025, 17, 2030. https://doi.org/10.3390/sym17122030
