
Enhanced Whale Optimization Algorithm with Novel Strategies for 3D TSP Problem

School of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(9), 560; https://doi.org/10.3390/biomimetics10090560
Submission received: 25 July 2025 / Revised: 16 August 2025 / Accepted: 20 August 2025 / Published: 22 August 2025
(This article belongs to the Section Biological Optimisation and Management)

Abstract

To address the insufficient global search efficiency of the original Whale Optimization Algorithm (WOA), this paper proposes an enhanced variant (ImWOA) integrating three strategies. First, a dynamic cluster center-guided search mechanism based on K-means clustering divides the population into subgroups that conduct targeted searches around dynamically updated centroids, with real-time centroid recalculation enabling evolutionary adaptation. This strategy innovatively combines global optima with local centroids, significantly improving global exploration while reducing redundant searches. Second, a dual-modal diversity-driven adaptive mutation mechanism simultaneously evaluates spatial distribution and fitness-value diversity to comprehensively characterize population heterogeneity. It dynamically adjusts mutation probability based on diversity states, enhancing robustness. Finally, a pattern search strategy (GPSPositiveBasis2N algorithm) is embedded as a periodic optimization module, synergizing WOA’s global exploration with GPSPositiveBasis2N’s local precision to boost solution quality and convergence. Evaluated on the CEC2017 benchmark against the original WOA, eight state-of-the-art metaheuristics, and five advanced WOA variants, ImWOA achieves: (1) optimal mean values for 20/29 functions in 30D tests; (2) optimal mean values for 26/29 functions in 100D tests; and (3) first rank in 3D-TSP validation, demonstrating superior capability for complex optimization.

1. Introduction

“Optimization” refers to the process of adjusting decision variables to achieve the optimal value of an objective function under given constraints. As technology advances, modern society faces increasingly complex optimization challenges across multiple domains such as engineering design [1], resource allocation [2], financial risk management [3], data mining [4], material development [5], satellite image analysis [6], energy consumption [7], epidemic control [8], path planning [9], intrusion detection [10], mechanical design [11], feature selection [12,13], machine learning model optimization [14], sustainable applications [15], and resource scheduling [16].
These problems often involve nonlinear, multimodal, high-dimensional, or dynamic scenarios that are difficult to address using traditional methods. This is where metaheuristic algorithms increasingly demonstrate their critical value as core tools for solving complex optimization problems.
The principal advantages of metaheuristic algorithms can be summarized as follows: (1) their applicability to optimization problems lacking analytical objective functions or gradient information; (2) enhanced capability in multi-objective optimization through population-based search that yields multiple solutions in a single execution, contrasting with conventional mathematical programming methods; (3) superior global search capabilities enabled by stochastic exploration; (4) inherent compatibility with parallel computing architectures; and (5) effective handling of mixed-variable optimization problems involving both integer and continuous variables.
Well-known metaheuristic algorithms include: the Barnacles Mating Optimizer (BMO) [17], Earthworm Optimization Algorithm (EOA) [18], Seagull Optimization Algorithm (SOA) [19], Tunicate Swarm Algorithm (TSA) [20], Brain Storm Optimization (BSO) [21], Heap-based optimizer (HBO) [22], Teamwork Optimization Algorithm (TOA) [23], Chaos Game Optimization (CGO) [24], Sine Cosine Algorithm (SCA) [25], Electromagnetic Field Optimization (EFO) [26], Artificial Bee Colony (ABC) [27], Cuckoo Search Algorithm (CSA) [28], Elephant Herding Optimization (EHO) [29], Sea Lion Optimization Algorithm (SLO) [30], and Whale Optimization Algorithm (WOA) [31].
The WOA has gained widespread popularity among researchers due to its strong global search capability, straightforward parameter settings, and rapid convergence rate. The WOA, however, exhibits certain limitations including its susceptibility to local optima entrapment, limited scalability in high-dimensional search spaces, and parameter sensitivity. To address these weaknesses and enhance its optimization capabilities, researchers have conducted extensive modifications through different improvement pathways.
Liang et al. [32] proposed an enhanced WOA (DGSWOA), with core improvements including: population initialization via a Sine–Tent–Cosine mapping to ensure uniform distribution of individuals in the search space; integration of the primary knowledge acquisition phase from the Gaining–Sharing Knowledge (GSK) algorithm to enhance global search capability and avoid local optima; and implementation of a Dynamic Opposition-Based Learning (DOBL) strategy to update population individuals, further increasing solution diversity. These enhancements collectively improve the algorithm’s convergence speed and optimization quality. Liu et al. [33] proposed an improved WOA (DECWOA) with core enhancements including: Sine chaotic mapping for population initialization to enhance diversity; an adaptive inertia weight strategy that dynamically adjusts weights in the position update formula to balance global exploration and local exploitation; and integration of the Differential Evolution (DE) algorithm to improve search speed and accuracy through mutation, crossover, and selection. These innovations effectively address the original algorithm’s tendencies for premature convergence and slow convergence. Chakraborty et al. [34] proposed an improved WOA, whose core enhancements lie in an iterative partitioning strategy that dedicates the first half of iterations to exploration and the latter half to exploitation. During exploration, two modified prey search strategies are introduced to enhance solution diversity. In the exploitation phase, the concept of whale “cooperative hunting” is integrated with original strategies to strengthen local search capability and the convergence rate. Sun et al. [35] proposed an improved WOA (MWOA-CS) with core enhancements including: introducing a nonlinear convergence factor and a cosine function-based inertia weight to dynamically balance the exploration–exploitation capabilities of WOA, and integrating the update mechanism of the Cross Search Optimizer (CSO). During iterations, the algorithm randomly selects each dimension of the optimization problem to execute either the modified WOA or CSO, effectively avoiding local optima while improving convergence speed and precision for large-scale global optimization problems. Shen et al. [36] proposed an improved multi-population evolution-based WOA (MEWOA). Its core innovations include dividing the whale population into three sub-populations—exploration-oriented, exploitation-oriented, and balance-oriented—each employing distinct movement strategies to emphasize global exploration, local exploitation, and exploration–exploitation balance, respectively. By integrating a population evolution strategy that alternates between position updates and evolutionary refinement during iterations, the algorithm effectively enhances population diversity and prevents premature convergence. These advancements significantly strengthen its global search capability and convergence speed in addressing global optimization and engineering design challenges. Li et al. [37] proposed an improved multi-strategy WOA (MWOA), with core enhancements including: introducing an elite opposition-based learning strategy to optimize initial populations, a nonlinear convergence factor to balance exploration and exploitation, Differential Evolution (DE) mutation strategies to strengthen global exploration capability, and a Lévy flight perturbation strategy to enhance search space diversity. These strategies collectively enhance the convergence speed and precision of the original WOA, effectively avoiding local optima while demonstrating stronger competitiveness in tackling complex global optimization problems.
While existing enhancement schemes have improved the performance of the WOA, there remains room for improvement in its global exploration capability and convergence speed. To address these limitations, this study aims to strengthen the WOA algorithm by exploring novel strategies, thereby further advancing its global search capacity, local exploitation capability, and convergence performance. The main contributions and innovations of this work are summarized as follows:
  • A Dynamic Cluster Center-guided Search Strategy Based on the K-means Clustering Algorithm: This method divides the population into multiple subgroups, with each subgroup conducting searches around its corresponding clustering center. Meanwhile, the clustering centers are recalculated in each iteration to dynamically adjust the centroids, enabling rapid adaptation to population changes and enhancing adaptability and robustness, thus avoiding premature convergence. Additionally, the position update strategy—which integrates both the position of the globally optimal individual and the centroid position of local clusters—demonstrates rapid environmental adaptability. It reduces redundant search operations while improving overall search efficiency, reliably converging to the globally optimal solution.
  • Dual-Modal Population Diversity-Driven Adaptive Mutation Strategy: When individuals in the population exhibit excessive similarity, the algorithm tends to converge prematurely to a local optimum. Traditional methods primarily assess diversity by merely measuring the distances between individuals within the population, which fails to fully capture the diversity of the population. To more comprehensively depict the heterogeneous characteristics of the population, this strategy considers both spatial distribution diversity and fitness value diversity. Additionally, this strategy develops a dynamic mutation mechanism that adjusts the mutation probability in real time based on the population’s diversity state. When diversity is low, mutations are applied with a higher probability to enhance global exploration capabilities; conversely, when diversity is high, mutations are applied with a lower probability to maintain convergence efficiency. Compared to static parameter configurations, this adaptive adjustment mechanism demonstrates stronger robustness and adaptability to specific problems.
  • Pattern Search Strategy Based on the GPSPositiveBasis2N Algorithm: WOA struggles to perform refined searches in certain complex spaces. This proposed strategy combines WOA with a pattern search method to leverage the strengths of both approaches, thereby enhancing optimization performance. Specifically, WOA can swiftly locate potential optimal regions within the global search space, while the pattern search strategy conducts refined searches within these regions, thus improving overall efficiency. Consequently, this strategy introduces a hybrid optimization framework that integrates the pattern search method, namely the “GPSPositiveBasis2N algorithm,” as a periodic optimization module within the workflow of the WOA. By combining the complementary advantages of global exploration (achieved by WOA) and local precise optimization (achieved by GPSPositiveBasis2N), this framework enhances the solution quality and convergence efficiency.
The structure of this paper is outlined as follows: Section 2 systematically outlines the core mechanisms and procedural framework of the original WOA. Section 3 elaborates on three innovative strategies of the ImWOA, including design motivations, theoretical significance, and mathematical modeling. Section 4 presents quantitative experimental results of ImWOA and benchmark algorithms on the CEC2017 test suite, with comprehensive analysis of key metrics such as convergence behavior and solution accuracy. Section 5 applies ImWOA to three-dimensional Traveling Salesman Problems (TSPs), validating its engineering practicality through performance comparisons with other WOA variants. Finally, Section 6 summarizes the research contributions and proposes potential directions for future work.

2. The Original WOA

The WOA is a novel metaheuristic algorithm proposed by Mirjalili and Lewis [31] in 2016, which simulates the hunting behavior of whale populations in nature to find optimal solutions. It primarily consists of three components: prey encirclement, bubble-net attack, and searching for prey.

2.1. Prey Encirclement

When hunting, whales form a circle around their prey and progressively reduce the size of this encirclement, causing the individual whales to converge towards the current best solution (the location of the prey). The specific mathematical equations are as follows:
$$X(t+1) = X^{*} - A \cdot D_1$$
$$D_1 = \left| C \cdot X^{*} - X(t) \right|$$
where t represents the current iteration number, $X(t)$ denotes the position of the whale at the t-th iteration, $X(t+1)$ represents the position of the whale at the (t + 1)-th iteration, and $X^{*}$ signifies the current optimal position of the whale. A and C are parameters, and their calculation formulas are as follows:
$$A = 2 \cdot a \cdot r - a$$
$$C = 2 \cdot r$$
$$a = 2 - \frac{2t}{MaxIter}$$
where r represents a random number between 0 and 1, a denotes the convergence coefficient, which decreases from 2 to 0 as iterations increase, and $MaxIter$ denotes the maximum number of iterations.

2.2. Bubble-Net Attack

During this phase, whales trap their prey by creating a spiral bubble net and simultaneously swim along a spiral path to approach the target. The specific mathematical formulations are as follows:
$$X(t+1) = D_2 \cdot e^{bl} \cdot \cos(2\pi l) + X^{*}$$
$$D_2 = \left| X^{*} - X(t) \right|$$
where the constant b governs the logarithmic shape of the spiral, and l represents a random number within the interval $[-1, 1]$.

2.3. Searching for Prey

During this phase, whales engage in random walking to expand the search range, and the algorithm randomly selects reference individuals to achieve global exploration. The specific mathematical formulations are as follows:
$$X(t+1) = X_{rand}(t) - A \cdot D_3$$
$$D_3 = \left| C \cdot X_{rand}(t) - X(t) \right|$$
where $X_{rand}(t)$ designates the positional coordinates of a stochastically selected whale. The parameters A and C are computed using the relationships defined in Equations (3) and (4), respectively.
The selection of behavioral strategies in the algorithm is controlled by parameters A and p, where p is a random number between 0 and 1. When p < 0.5 and $|A| \geq 1$, searching for prey is executed; when p < 0.5 and $|A| < 1$, the prey encirclement strategy is carried out; when $p \geq 0.5$, the bubble-net attacking strategy is implemented.
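The three behaviors and the p/|A| switching rule can be sketched as a single NumPy update step. This is an illustrative reimplementation, not the authors' code; treating A as a random vector per whale and testing its norm against 1 is one common implementation choice:

```python
import numpy as np

def woa_update(X, X_best, t, max_iter, b=1.0, rng=None):
    """One WOA position update for every whale (a sketch of Section 2).

    X        : (N, D) array of whale positions
    X_best   : (D,) current optimal position X*
    t        : current iteration; max_iter : iteration budget
    b        : logarithmic-spiral shape constant
    """
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    a = 2.0 - 2.0 * t / max_iter           # convergence coefficient: 2 -> 0
    X_new = np.empty_like(X)
    for i in range(N):
        A = 2.0 * a * rng.random(D) - a    # A = 2*a*r - a
        C = 2.0 * rng.random(D)            # C = 2*r
        p = rng.random()
        if p < 0.5:
            if np.linalg.norm(A) >= 1.0:   # searching for prey: random reference whale
                X_ref = X[rng.integers(N)]
            else:                          # prey encirclement: follow the best whale
                X_ref = X_best
            D_vec = np.abs(C * X_ref - X[i])
            X_new[i] = X_ref - A * D_vec
        else:                              # bubble-net spiral attack
            l = rng.uniform(-1.0, 1.0)
            D2 = np.abs(X_best - X[i])
            X_new[i] = D2 * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best
    return X_new
```

In a full optimizer this update would be wrapped in a loop that re-evaluates fitness and refreshes `X_best` each generation.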

3. Proposed ImWOA

This section presents an enhanced variant of the WOA (ImWOA) to address inherent limitations in the original WOA.

3.1. A Dynamic Cluster Center-Guided Search Strategy Based on the K-Means Clustering Algorithm

The k-means clustering algorithm divides the population into multiple subgroups, with each subgroup conducting searches around its corresponding cluster center. This mechanism enhances population diversity through spatial partitioning. By recalculating cluster centers in each iteration, the algorithm achieves dynamic adjustment of centroids, enabling rapid adaptation to population changes. Concurrently, it can promptly adjust search directions according to the current population distribution. This strategic framework allows the algorithm to better regulate search orientations during iterative processes, thereby improving both adaptability and robustness. Specifically, the dynamic centroid updating mechanism establishes an adaptive balance between the exploration and exploitation phases, while the subgroup partitioning structure effectively prevents premature convergence through diversified local searches.
The formula for the number of clusters is as follows:
$$K = \sqrt{N}$$
where N represents the number of individuals in the population.
For each iteration, the k-means clustering algorithm partitions the population into K clusters:
$$S^{*} = \{S_1^{*}, S_2^{*}, \ldots, S_K^{*}\}$$
Each of these K clusters corresponds to a distinct cluster centroid:
$$C^{*} = \{C_1^{*}, C_2^{*}, \ldots, C_K^{*}\}$$
For an individual $X_i$ ($i \in \{1, 2, \ldots, N\}$) assigned to cluster $S_k$ ($k \in \{1, 2, \ldots, K\}$), its corresponding cluster center is denoted as $C_k$. Let D denote the dimensionality of each individual in the population, corresponding to the number of decision variables in the D-dimensional search space.
The position update formula based on the dynamic cluster center-guided search strategy is presented as follows:
$$x_{i,j} = BestPosition_j + (BestPosition_j - C_{k,j}) \cdot (1.5 + rand) \cdot randn$$
where $i \in \{1, 2, \ldots, N\}$, $j \in \{1, 2, \ldots, D\}$, and $k \in \{1, 2, \ldots, K\}$. Here, $BestPosition_j$ denotes the j-th dimensional component of the current optimal individual in the population, and $C_{k,j}$ represents the j-th dimensional component of the cluster center $C_k$ associated with individual $X_i$. Moreover, rand represents a random number uniformly distributed in the interval [0, 1], serving to introduce stochastic perturbations that prevent premature convergence, and randn denotes a random number sampled from the standard Gaussian distribution $N(0, 1)$, whose purpose is to enhance local escape capability through Gaussian noise injection.
In summary, this dynamic cluster center-guided search strategy achieves a balance between global exploration and local exploitation by partitioning the population into multiple clusters. The efficient clustering process minimizes computational overhead, while the dynamic centroid adaptation mechanism effectively responds to evolving population distributions across iterative phases. The position update strategy—incorporating both the global best individual’s position and the local cluster centroid’s position—demonstrates rapid environmental adaptability and reduces redundant search operations while enhancing overall search efficiency. Such coordinated mechanisms empower the algorithm to reliably converge to the global optimum when addressing complex optimization problems, particularly those with non-convex or multimodal landscapes.
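A minimal sketch of this strategy, assuming $K = \lfloor\sqrt{N}\rfloor$ clusters and a few plain Lloyd iterations for the per-generation K-means step (the paper does not specify the K-means implementation, so the lightweight clustering below is an assumption made to keep the snippet self-contained):

```python
import numpy as np

def cluster_guided_update(X, best, n_clusters=None, rng=None):
    """Dynamic cluster-center-guided position update (sketch of Section 3.1).

    X    : (N, D) population positions
    best : (D,) current optimal individual (BestPosition)
    """
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    K = n_clusters or max(1, int(np.sqrt(N)))
    # --- lightweight K-means (Lloyd), recomputed every generation as in Sec. 3.1 ---
    centers = X[rng.choice(N, K, replace=False)].copy()
    for _ in range(10):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(labels == k):                 # skip empty clusters
                centers[k] = X[labels == k].mean(axis=0)
    # --- guided update around the global best and each whale's local centroid ---
    Ck = centers[labels]                            # (N, D): centroid per individual
    step = (1.5 + rng.random((N, D))) * rng.standard_normal((N, D))
    return best + (best - Ck) * step
```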

3.2. Dual-Modal Population Diversity-Driven Adaptive Mutation Strategy

In evolutionary optimization algorithms, population diversity plays a crucial role in maintaining the balance between exploration and exploitation. When population individuals exhibit excessive similarity, the algorithm tends to suffer premature convergence to local optima, failing to discover superior solutions. Traditional methods primarily evaluate diversity by measuring the distances between individuals within the population. However, such approaches fail to fully reflect the diversity of the population because they overlook the characteristic of fitness. To address this limitation, this study proposes a comprehensive diversity evaluation framework that simultaneously incorporates both spatial distribution diversity and fitness value diversity, thereby enabling more holistic characterization of population heterogeneity.
Furthermore, traditional mutation mechanisms typically employ fixed probability parameters. Building upon our theoretical analysis, we develop a dynamic mutation mechanism driven by dual-modal diversity monitoring (spatial and fitness modalities). This innovation enables real-time adjustment of mutation probability according to population diversity states: when the diversity is low, mutations are applied with a higher probability to enhance the global exploration capability; conversely, when the diversity is high, mutations are applied with a lower probability to maintain the convergence efficiency. This self-adaptive regulation mechanism demonstrates enhanced robustness and problem-specific adaptability compared to static parameter configurations.
The spatial diversity metric of the population, denoted as $Diversity_{position}$, is computed through the following procedure:
First, each parameter dimension of the population is independently normalized using the min–max scaling method, computed as:
$$x_{i,j}^{norm} = \frac{x_{i,j} - \min(X_j)}{\max(X_j) - \min(X_j)}$$
where $i \in \{1, 2, \ldots, N\}$ and $j \in \{1, 2, \ldots, D\}$. In addition, $\min(X_j)$ and $\max(X_j)$, respectively, denote the minimum and maximum values of the j-th column in the population matrix X.
Second, calculate the standard deviation of each column in the column-normalized population matrix X n o r m :
$$\sigma_j = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( x_{i,j}^{norm} - \mu_j \right)^2}$$
where $j \in \{1, 2, \ldots, D\}$, and $\mu_j$ represents the mean of the j-th column of $X^{norm}$, computed as:
$$\mu_j = \frac{1}{N} \sum_{i=1}^{N} x_{i,j}^{norm}$$
Finally, calculate the mean of the standard deviations of all columns:
$$Diversity_{position} = \sigma_{mean} = \frac{1}{D} \sum_{j=1}^{D} \sigma_j$$
The fitness diversity metric of the population, denoted as $Diversity_{fitness}$, is computed through the following procedure:
First, let $fitness_i \in \mathbb{R}$ denote the fitness value of the i-th individual in the population, where $i \in \{1, 2, \ldots, N\}$. The complete fitness set can be expressed as $Fitness = \{fitness_1, fitness_2, \ldots, fitness_N\}$.
Second, the fitness values are normalized across the population using min–max scaling to establish a unified measurement scale, computed as:
$$fitness_i^{norm} = \frac{fitness_i - \min(Fitness)}{\max(Fitness) - \min(Fitness)}$$
where $i \in \{1, 2, \ldots, N\}$, and min(Fitness) and max(Fitness), respectively, denote the minimum and maximum values within the complete fitness set Fitness.
Finally, the fitness diversity metric $Diversity_{fitness}$ is quantified as the standard deviation of the normalized fitness values across the population, computed as:
$$Diversity_{fitness} = \sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( fitness_i^{norm} - \mu \right)^2}$$
where $\mu$ denotes the mean normalized fitness value, computed as:
$$\mu = \frac{1}{N} \sum_{i=1}^{N} fitness_i^{norm}$$
The dynamic mutation probability threshold $P_m^{dynamic}$ is derived through a weighted fusion of the spatial diversity ($Diversity_{position}$) and fitness diversity ($Diversity_{fitness}$) metrics:
$$P_m^{dynamic} = w_1 \cdot Diversity_{position} + w_2 \cdot Diversity_{fitness}$$
where the weighting coefficients are empirically set as $w_1 = 2$ and $w_2 = 1$, reflecting a 2:1 priority ratio for spatial diversity over fitness diversity.
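The two metrics and their weighted fusion reduce to a few NumPy lines. This is a sketch; the small `eps` guard against constant columns (which would make min–max scaling divide by zero) is an added safeguard, not part of the paper's formulas:

```python
import numpy as np

def dynamic_mutation_probability(X, fitness, w1=2.0, w2=1.0):
    """Dual-modal diversity metrics and the dynamic threshold P_m (Sec. 3.2 sketch).

    X       : (N, D) population positions
    fitness : (N,) fitness values
    """
    eps = 1e-12                                    # guard: avoids 0/0 on constant columns
    # spatial diversity: mean per-dimension std of min-max normalized positions
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + eps)
    div_pos = Xn.std(axis=0).mean()                # np.std uses 1/N, matching the formula
    # fitness diversity: std of min-max normalized fitness values
    fn = (fitness - fitness.min()) / (fitness.max() - fitness.min() + eps)
    div_fit = fn.std()
    # weighted fusion; the paper reports this stays within [0, 1]
    return w1 * div_pos + w2 * div_fit
```

A fully homogeneous population yields a threshold of 0, so the trigger condition `rand > P_m` fires almost surely, which is exactly the intended low-diversity behavior.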
Given the normalized population position data $X^{norm}$ and fitness values $Fitness^{norm}$, combined with the behavior of standard-deviation metrics under min–max scaling, we analytically derive the ranges of the diversity metrics:
Position Diversity Bound:
$$Diversity_{position} \in [0, 0.25]$$
Fitness Diversity Bound:
$$Diversity_{fitness} \in [0, 0.5]$$
Substituting these bounds into Equation (21), the dynamic mutation probability is theoretically confined to:
$$P_m^{dynamic} \in [0, 1]$$
The mutation mechanism is activated stochastically when a uniformly distributed random number $rand \sim U(0, 1)$ exceeds the dynamic threshold $P_m^{dynamic}$, as formalized by:
$$Mutation\ Trigger\ Condition:\quad rand > P_m^{dynamic}$$
Probabilistic Interpretation:
  • Low-Diversity Regime ($P_m^{dynamic} \to 0$)
    • Implies reduced population diversity: $Diversity_{position}, Diversity_{fitness} \downarrow$
    • Higher activation probability: $P(rand > P_m^{dynamic}) = 1 - P_m^{dynamic} \to 1$
    • Promotes intensified exploration through frequent mutations.
  • High-Diversity Regime ($P_m^{dynamic} \to 1$)
    • Indicates sufficient population diversity: $Diversity_{position}, Diversity_{fitness} \uparrow$
    • Lower activation probability: $P(rand > P_m^{dynamic}) = 1 - P_m^{dynamic} \to 0$
    • Prioritizes exploitation by suppressing unnecessary mutations.
If the mutation mechanism is triggered, the mutation formula is as follows:
$$X_i = X_i + (ub - lb)^{\circ\, randn(D)} \odot rand(D; [-1, 1])$$
where $i \in \{1, 2, \ldots, N\}$. In addition, ub and lb denote the upper and lower bounds of the decision variables, respectively, both structured as 1 × D row vectors. Moreover, $randn(D)$ is a 1 × D vector with elements independently sampled from the standard normal distribution $N(0, 1)$. The superscript $\circ$ denotes element-wise exponentiation (raising each element of $ub - lb$ to the power of the corresponding element of $randn(D)$). Additionally, $rand(D; [-1, 1])$ is a 1 × D vector with elements uniformly distributed in $[-1, 1]$, and $\odot$ denotes element-wise multiplication (the Hadamard product).
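A sketch of the triggered mutation follows; the final clipping back into $[lb, ub]$ is an added safeguard for out-of-range mutants, not part of the formula itself:

```python
import numpy as np

def diversity_mutation(X, lb, ub, p_m, rng=None):
    """Adaptive mutation operator (sketch of the perturbation formula in Sec. 3.2).

    Each individual mutates only when rand > p_m, adding
    (ub - lb)^(randn(D)) ⊙ rand(D; [-1, 1]) element-wise.
    Assumes ub > lb in every dimension (so the base of the power is positive).
    """
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    span = np.asarray(ub, float) - np.asarray(lb, float)   # 1 x D range vector
    X_new = X.copy()
    for i in range(N):
        if rng.random() > p_m:                             # mutation trigger condition
            perturb = span ** rng.standard_normal(D)       # element-wise exponentiation
            X_new[i] = X[i] + perturb * rng.uniform(-1.0, 1.0, D)
    return np.clip(X_new, lb, ub)                          # safeguard, not in the paper
```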
This novel mechanism addresses the limitations of single-indicator approaches through dual integration of spatial diversity (reflecting population distribution breadth) and fitness diversity (indicating solution quality disparity). The mutation probability undergoes autonomous adjustment based on real-time population states: when the diversity metrics decline ($P_m^{dynamic} \downarrow$), intensified perturbations facilitate escaping local optima; conversely, when the diversity indicators increase ($P_m^{dynamic} \uparrow$), redundant searches are reduced to accelerate convergence. This dynamic regulation overcomes the empirical limitations of static mutation probability schemes.
In late-stage optimization phases or high-dimensional search spaces, conventional methods frequently suffer premature convergence due to population homogenization. The integrated mutation mechanism circumvents this pitfall through the targeted perturbation operator (Equation (30)), which systematically generates diversified solutions across the search space. This strategic disturbance reinvigorates global exploration capabilities while significantly mitigating the risk of local optima entrapment.

3.3. Pattern Search Strategy Based on the GPSPositiveBasis2N Algorithm

The WOA mimics the hunting behavior of humpback whales, exhibiting strong global search capabilities that enable rapid exploration of the solution space. However, in certain complex spaces, it struggles to perform fine-grained searches. By integrating WOA with a pattern search strategy, the advantages of both approaches can be leveraged to enhance optimization performance. Specifically, WOA can quickly identify potential optimal regions in the global scope, while the pattern search strategy can conduct a refined search within these regions, thereby improving overall efficiency.
The traditional pattern search strategy has deficiencies such as being prone to getting stuck in local optima and having relatively low search efficiency. As an improved variant of pattern search, the GPSPositiveBasis2N algorithm demonstrates core advantages over traditional methods in direction set completeness and robustness. A detailed comparison is outlined below.
1. Direction Set Coverage:
Traditional pattern search uses a minimal positive basis (N + 1 directions). For example, in 3D space, it might select [+x, +y, +z] and one diagonal direction, potentially failing to cover all optimization directions.
The GPSPositiveBasis2N algorithm employs 2N orthogonal directions (both positive and negative along each coordinate axis, e.g., [+x, −x, +y, −y, +z, −z] in 3D space), which ensures comprehensive directional coverage to avoid stagnation in local optima caused by missing directions, while its symmetrical design reduces sensitivity to initial orientations, thereby enhancing search stability.
2. Adaptability to Non-Smooth/Noisy Functions:
Traditional pattern search may frequently become trapped in pseudo-optimal solutions when handling noisy, discontinuous, or non-smooth objective functions due to incomplete direction sets.
The GPSPositiveBasis2N algorithm’s broader direction coverage ensures that even if some directions fail due to noise, others may still identify improved points. This reflects two key advantages: reduced dependency on function smoothness (making it suitable for black-box models or experimental data in engineering optimization) and strong robustness to maintain effective search performance in noisy environments.
3. Global Exploration Capability:
Traditional pattern search relies on step-size contraction strategies. Poor initial step-size selection may lead to premature convergence to local regions.
The GPSPositiveBasis2N algorithm explores the design space more thoroughly at larger step sizes by covering all positive and negative coordinate directions, offering two key advantages: reduced dependency on initial points to enhance global search capability, and faster escape from local optima during step-size adjustments (e.g., through step enlargement).
4. Convergence Guarantee:
While both traditional pattern search and GPSPositiveBasis2N satisfy the convergence theory of pattern search (converging to local minima for smooth functions), the latter’s 2N direction set ensures linear independence of the positive basis, guaranteeing progressive convergence even under non-smooth conditions through step-size contraction.
The GPSPositiveBasis2N algorithm follows the structured pattern search framework to iteratively refine solutions. Algorithm 1 presents the pseudocode for the GPSPositiveBasis2N algorithm.
Algorithm 1 GPSPositiveBasis2N algorithm
Require: objective function $F_{Obj}$; initial point $x_0 \in \mathbb{R}^n$; lower bound LoB, upper bound UpB; initial step size $\Delta_0 > 0$; tolerance $\tau > 0$; maximum iterations $K_{max}$
Ensure: optimal solution $x^{*}$, optimal value $f^{*}$
 1: $x_{best} \leftarrow x_0$
 2: $f_{best} \leftarrow F_{Obj}(x_{best})$
 3: $\Delta \leftarrow \Delta_0$
 4: Generate positive basis $D = [I_n, -I_n]$
 5: for $k = 1$ to $K_{max}$ do
 6:     $improved \leftarrow$ false
 7:     for each direction $d_i \in D$ do
 8:         $x_{cand} \leftarrow x_{best} + \Delta \, d_i$
 9:         $x_{cand} \leftarrow \mathrm{clip}(x_{cand}, \mathrm{LoB}, \mathrm{UpB})$
10:         $f_{cand} \leftarrow F_{Obj}(x_{cand})$
11:         if $f_{cand} < f_{best}$ then
12:             $x_{best} \leftarrow x_{cand}$
13:             $f_{best} \leftarrow f_{cand}$
14:             $improved \leftarrow$ true
15:         end if
16:     end for
17:     if $improved$ then
18:         $\Delta \leftarrow 1.5\,\Delta$                ▷ Step expansion
19:     else
20:         $\Delta \leftarrow 0.5\,\Delta$              ▷ Step contraction
21:     end if
22:     if $\Delta < \tau$ then
23:         break
24:     end if
25: end for
26: return $x_{best}$, $f_{best}$
In this study, the initial step size $\Delta_0$ is set to 1, the maximum number of iterations $K_{max}$ is assigned as 100, and the convergence tolerance $\tau$ is fixed at $1 \times 10^{-6}$. Additionally, the initial point $x_0$ is configured as the best position $x_{best}^{t}$, which represents the optimal solution identified by the population-based metaheuristic at iteration t. A refinement search is performed using the GPSPositiveBasis2N algorithm every 200 iterations, and the output $x_{best}$ from GPSPositiveBasis2N is assigned as the initial optimal position for subsequent optimization iterations.
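Algorithm 1 translates almost line-for-line into NumPy. The sketch below mirrors the pseudocode (opportunistic best-point update inside the polling loop, 1.5×/0.5× step adaptation); it is an illustrative reimplementation, not the authors' code:

```python
import numpy as np

def gps_positive_basis_2n(f_obj, x0, lb, ub, delta0=1.0, tol=1e-6, k_max=100):
    """GPSPositiveBasis2N pattern search (direct translation of Algorithm 1)."""
    x_best = np.clip(np.asarray(x0, dtype=float), lb, ub)
    f_best = f_obj(x_best)
    delta = delta0
    n = x_best.size
    # positive basis D = [I_n, -I_n]: +/- unit vector along every axis (2N directions)
    directions = np.vstack([np.eye(n), -np.eye(n)])
    for _ in range(k_max):
        improved = False
        for d in directions:                         # poll all 2N directions
            x_cand = np.clip(x_best + delta * d, lb, ub)
            f_cand = f_obj(x_cand)
            if f_cand < f_best:                      # accept any improving candidate
                x_best, f_best, improved = x_cand, f_cand, True
        delta *= 1.5 if improved else 0.5            # step expansion / contraction
        if delta < tol:
            break
    return x_best, f_best
```

With the settings used in the paper (Δ0 = 1, K_max = 100, τ = 1e-6), this routine would be called every 200 WOA iterations with `x0` set to the current population best.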
This strategy proposes a hybrid optimization framework where the GPSPositiveBasis2N algorithm is embedded as a periodic refinement module within the WOA workflow. The core idea leverages the complementary strengths of global exploration (via WOA) and local precision (via GPSPositiveBasis2N) to enhance solution quality and convergence efficiency.

3.4. Whole Framework for ImWOA

The overall framework of the proposed ImWOA is depicted in Figure 1.

3.5. Computational Complexity Analysis of Algorithms

Time complexity serves as a core metric for evaluating the computational resource demands of an algorithm. For WOA, let N denote the whale population size, T the maximum number of iterations, and D the problem dimension; the time complexity of WOA is then O(NTD). A review of ImWOA's refinements—the dynamic cluster center-guided search strategy based on K-means clustering, the dual-modal population diversity-driven adaptive mutation strategy, and the pattern search strategy based on the GPSPositiveBasis2N algorithm—shows that these improvements require neither a larger population size, nor more iterations, nor a higher problem dimension. Consequently, ImWOA retains the same O(NTD) time complexity as the standard WOA.

3.6. Convergence Analysis of ImWOA

The global convergence of ImWOA is guaranteed by the following three theoretical mechanisms, which satisfy the convergence conditions of classical optimization theory:
  • Global convergence guarantee of stochastic search
    According to the Solís–Wets stochastic optimization convergence theorem, ImWOA satisfies two key conditions for probability-1 global convergence:
    • Solution space denseness: Achieved through the diversity-guided mutation mechanism. When population diversity decreases, Gaussian mutation is triggered to ensure solution space coverage.
    • Elitism preservation strategy: The strict historical best solution update mechanism ensures the objective function value is monotonically non-increasing:
      f(x^{t+1}_{best}) ≤ min{ f(x^{t}_{best}), min_i f(x_i(t)) }
  • Convergence inheritance of local search
    The periodically invoked GPSPositiveBasis2N module conforms to the Torczon pattern search convergence framework:
    • Positive basis search direction set D = { ± e i } generates a dense tangent cone.
    • Step-size adaptation rule δ_{k+1} = γδ_k with γ ∈ {0.5, 1.5} satisfies lim_{k→∞} δ_k = 0 (contraction on unsuccessful iterations drives the step size to zero once improvements cease).
    • A selection strategy that only accepts improved solutions guarantees continuous optimization.
  • Convergence behavior of hybrid architecture
    Based on hybrid optimization theory, the periodic coupling of global exploration (WOA mechanism) and local exploitation (GPS) satisfies:
    lim_{t→∞} P(‖x_t − x*‖ < ϵ) = 1, ∀ϵ > 0
    where global exploration is controlled by an adaptive mutation parameter, and the local search interval T_GPS = 200 ensures synergistic effects between the two mechanisms.
Note: Given the solution sequence { x t } generated by ImWOA, the algorithm satisfies:
  • ∀ϵ > 0, ∃T: P(x_t ∈ R_ϵ) > 0 for all t > T (Solís–Wets condition).
  • lim inf_{k→∞} ‖∇f(x_k)‖ = 0 during the GPS phases (Torczon condition).
  • Exponential convergence in probability: P(|f(x_t) − f*| < ϵ) > 1 − e^{−κt} for t > T(ϵ).
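The elitism condition above can also be checked numerically: under the historical-best update rule, the best-so-far trace is monotonically non-increasing by construction. A minimal sketch:

```python
import numpy as np

def elitist_best_trace(fitness_matrix):
    """Running historical-best trace under the elitism rule
    f_best(t+1) = min(f_best(t), min_i f_i(t)).  fitness_matrix has one
    row per iteration and one column per individual; the returned trace
    is non-increasing by construction, which is the monotonicity used in
    the Solis-Wets convergence argument."""
    per_iter_best = fitness_matrix.min(axis=1)   # best individual at each t
    return np.minimum.accumulate(per_iter_best)  # historical best up to t
```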

4. Experimental Results and Discussions

This study employs the CEC 2017 benchmark function suite [38] to systematically evaluate algorithmic performance across two problem dimensions: 30-dimensional (low-dimensional) and 100-dimensional (high-dimensional) scenarios. The comparative algorithms comprise three categories: (1) the original WOA [31]; (2) eight state-of-the-art metaheuristics: Particle Swarm Optimization (PSO) [39], Biogeography-Based Optimization (BBO) [40], Slime Mould Algorithm (SMA) [41], Differential Evolution (DE) [42], Grey Wolf Optimizer (GWO) [43], Sparrow Search Algorithm (SSA) [44], Harris Hawks Optimization (HHO) [45], and Artificial Bee Colony (ABC) [46]; and (3) five advanced WOA variants: E-WOA [47], IWOA [48], IWOSSA [49], RAV-WOA [50], and WOAAD [51]. All algorithmic parameters strictly adhere to their original literature specifications. Detailed configurations are documented in the respective references to ensure reproducibility.
Experimental settings included a population size of 30 individuals, maximum iterations of 500 generations, and 30 independent runs to eliminate stochastic fluctuations. Algorithms were implemented in Python 3.12 and executed on a computational platform featuring Apple M1 silicon (8-core CPU/7-core GPU) with macOS Sonoma 14.4. The hardware configuration incorporated 8 GB unified memory architecture, while the software stack utilized ARM64 natively compiled scientific computing libraries (NumPy 1.26.0, SciPy 1.11.1), ensuring optimal computational efficiency.
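The per-algorithm statistics reported below (min, mean, and Std over 30 independent runs) can be collected with a small harness of the following form; the random-search optimizer is only a stand-in used to illustrate the bookkeeping, not any of the compared algorithms:

```python
import numpy as np

def summarize_runs(optimizer, n_runs=30, seed=0):
    """Run `optimizer` n_runs times independently and report the
    min / mean / Std statistics used in Tables 1 and 2.  `optimizer`
    only needs to return the scalar best fitness of one run."""
    rng = np.random.default_rng(seed)
    bests = np.array([optimizer(rng) for _ in range(n_runs)])
    return {"min": bests.min(), "mean": bests.mean(), "std": bests.std(ddof=0)}

# Illustrative stand-in: random search on a 10-D sphere function.
def random_search(rng, dim=10, evals=500):
    pts = rng.uniform(-100, 100, size=(evals, dim))
    return float(np.min(np.sum(pts**2, axis=1)))
```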

4.1. Analysis of Results of CEC2017 Test Functions

The comprehensive statistical results for the 30-dimensional (30 dim) and 100-dimensional (100 dim) benchmark functions from the CEC2017 test suite are detailed in Table 1 and Table 2, respectively. These results encompass the minimum (min), mean, and standard deviation (Std) values derived from thirty independent runs of each algorithm. Notably, the optimal mean values for each benchmark function are highlighted in bold font. Furthermore, the “Total” row at the bottom of Table 1 and Table 2 quantifies the frequency with which each algorithm achieved the optimal mean value across all benchmark functions.
Under the 30-dimensional scenario, the proposed ImWOA demonstrates exceptional global optimization capabilities. As detailed in Table 1, ImWOA achieves optimal mean values on 20 out of 29 benchmark functions, accounting for 68.97% of the total test cases. Crucially, for the remaining nine functions where it does not secure the optimal mean, ImWOA still maintains highly competitive performance: ranking 10th on F8, 5th on both F9 and F13, 2nd on F10/F18/F19, 3rd on F21/F23, and 5th on F26. It is particularly noteworthy that in non-optimal cases, ImWOA consistently ranks within the top 10, with more than half (5/9) securing top three positions. These results robustly validate that ImWOA not only effectively escapes local optima traps in complex optimization problems but also exhibits significantly superior solution stability and algorithmic robustness compared to peer algorithms.
In the more challenging 100-dimensional scenario, the ImWOA demonstrates significantly enhanced high-dimensional optimization capabilities. As presented in Table 2, ImWOA achieves optimal mean values on 26 out of 29 benchmark functions, accounting for 89.66% of the test cases—a 20-percentage-point improvement over the 30-dimensional scenario. Crucially, for the remaining three functions without optimal means, the algorithm maintains elite performance: securing second place on F9, fifth on F8, and fourth on F21. A key observation is that all non-optimal rankings are within the top five. These findings conclusively validate that as problem dimensionality increases, ImWOA exhibits substantially strengthened advantages in solution stability, dimensional scalability, and global exploration capacity, with its unique improvement mechanisms effectively mitigating the “curse of dimensionality” on algorithmic performance.
Integrating the experimental results from the 30-dimensional and 100-dimensional CEC2017 scenarios, ImWOA demonstrates progressively stronger core competencies. In the baseline 30D environment, it validates its ability to escape local optima with 68.97% optimal-function coverage (20/29) while maintaining top-10 rankings on every non-optimal function. When scaling to the more challenging 100D setting, performance improves further: the optimal-function ratio rises to 89.66% (26/29), with all three non-optimal functions ranked in the top five (peak position: second). This favorable relationship between dimensionality and relative performance demonstrates that, through its population coordination mechanism and adaptive search strategies, ImWOA not only mitigates solution degradation in high-dimensional spaces but becomes comparatively stronger as problem dimensionality grows, offering an effective approach for complex optimization problems.

4.2. Analysis of the Convergence Behavior of the Algorithms

To systematically evaluate the comprehensive performance of algorithms in terms of convergence speed and solution efficiency, this study conducts comparative analysis on the convergence characteristics curves of ImWOA versus comparison algorithms under both 30-dimensional and 100-dimensional scenarios (Figure 2 and Figure 3). The abscissa (x-axis) represents the number of iterations, while the ordinate (y-axis) precisely quantifies the mean fitness values obtained through 30 independent experimental trials. These convergence trajectories not only visually reveal the search dynamics within the solution space, but also quantitatively decode fundamental differences in exploration–exploitation balancing mechanisms through critical features such as curve gradients and steady-state plateaus.
Analysis of convergence characteristics in the 30-dimensional scenario (Figure 2) reveals ImWOA’s superior convergence dynamics. The algorithm demonstrates remarkable convergence acceleration on 11 functions (F1, F2, F3, F4, F6, F11, F12, F15, F27, F28, F29), where its convergence trajectory establishes orders-of-magnitude advantages over competitors during early iterations, ultimately locating global optima with enhanced solution quality. Crucially, on 12 functions (F5, F7, F10, F14, F16–F20, F22–F23), while not achieving absolute dominance, ImWOA maintains sustained convergence efficacy—its curves consistently reside in the top performance tier. It should be noted that the constrained search space in low-dimensional environments partially inhibits the full deployment of cooperative search mechanisms, leading to transient attenuation of algorithmic superiority on functions F8, F9, and F21. This phenomenon substantiates the nonlinear coupling between dimensionality scale and strategic effectiveness.
Convergence analysis in the 100-dimensional scenario (Figure 3) reveals ImWOA's markedly improved dimensional adaptability. The algorithm demonstrates pronounced convergence acceleration across 19 benchmark functions (F1–F4, F6, F10–F16, F18–F20, F22–F23, F28–F29), establishing dominant convergence trajectories during the initial iterations while maintaining high solution quality. For the remaining seven complex multimodal functions (F5, F7, F17, F24–F27), ImWOA exhibits sustained refinement—outperforming peer algorithms in convergence speed while consistently securing superior final solutions. Together, these results substantiate ImWOA's strong solution-discovery capability in high-dimensional search spaces.
In summary, within 30-dimensional spaces the algorithm achieves statistically significant superiority on most benchmark functions despite low-dimensional search constraints. Crucially, in 100-dimensional environments ImWOA benefits from a dimensional gain effect—demonstrating dominant convergence across 26 of the 29 functions (89.7% coverage) and markedly accelerated convergence on 19 of them—thereby counteracting the "curse of dimensionality". This scalability from low-dimensional robustness to high-dimensional dominance makes ImWOA a practical candidate for large-scale black-box optimization.

4.3. Wilcoxon Rank-Sum Test

Each algorithm was independently executed 30 times, yielding 30 optimal values per method. A Wilcoxon rank-sum test was conducted between ImWOA’s solution set and those of each comparative algorithm. When the p-value fell below the 0.05 significance threshold (visually emphasized in bold), we rejected the null hypothesis, indicating statistically significant performance differences. Conversely, p-values exceeding 0.05 denoted statistical equivalence. For cases demonstrating significant differences (p < 0.05), we further compared median values: ImWOA’s superior performance was marked “+” when its median was lower, while competitors’ advantage was denoted “−”. Cases without significant difference (p > 0.05) received “=” markers. Comprehensive statistical results are presented in Table 3 and Table 4.
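The testing procedure described above can be sketched with SciPy's rank-sum implementation; the helper name and the +/−/= marker convention follow the description in the text:

```python
import numpy as np
from scipy.stats import ranksums

def compare_to_imwoa(imwoa_vals, other_vals, alpha=0.05):
    """Wilcoxon rank-sum test between two sets of 30 best values.
    Returns the p-value and the marker used in Tables 3 and 4:
    '+' ImWOA significantly better (lower median), '-' significantly
    worse, '=' no statistically significant difference."""
    _, p = ranksums(imwoa_vals, other_vals)
    if p >= alpha:
        return p, "="
    return p, "+" if np.median(imwoa_vals) < np.median(other_vals) else "-"
```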
As evidenced in Table 3 and Table 4, ImWOA demonstrates statistically significant superiority (p < 0.05) in 89.4% of comparative cases under 30-dimensional tests, while exhibiting statistical equivalence (p > 0.05) in the remainder. Crucially, when dimensionality escalates to 100D, the performance disparity intensifies markedly: ImWOA achieves significant dominance (p < 0.05) in 98% of trials, maintaining equivalence (p > 0.05) only in rare edge cases. This dimensionality-driven performance evolution reveals that ImWOA’s inherent advantages in solution robustness and global convergence capability are substantially amplified with increasing problem complexity.

4.4. Friedman Test

To systematically evaluate performance differentials among swarm intelligence algorithms, this study employs the Friedman non-parametric test to establish a multi-algorithm comparison framework. This statistical approach quantifies comprehensive algorithm performance through mean rank values, overcoming the limitations of single-metric evaluation, particularly suited for performance ranking in high-dimensional complex optimization problems.
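As a sketch of this procedure, the mean ranks and the Friedman p-value can be computed from a functions × algorithms matrix of mean results; the helper below is illustrative, not the authors' exact script:

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def friedman_mean_ranks(results):
    """results: (n_functions, n_algorithms) matrix of mean fitness values.
    Returns each algorithm's mean rank (lower is better, as in Figures 4
    and 5) and the Friedman test p-value."""
    ranks = np.apply_along_axis(rankdata, 1, results)  # rank within each function
    mean_ranks = ranks.mean(axis=0)
    _, p = friedmanchisquare(*results.T)               # one sample per algorithm
    return mean_ranks, p
```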
In the 30-dimensional scenario (Figure 4), ImWOA demonstrates commanding dominance with its mean rank value (1.97) substantially lower than all competitors, indicating exceptional optimization stability. Specifically, SMA (3.34), RAV-WOA (4.97), and WOAAD (4.90) form the second echelon within the 3.5–5.0 rank range. Mid-tier performers include BBO (5.41), IWOSSA (6.14), and GWO (7.31). Notably, the classical PSO (5.55) underperforms relative to newer variants. In the lower tier, E-WOA (9.31), ABC (10.86), IWOA (10.66), HHO (11.31), and the baseline WOA (12.07) all exceed the 9.0 threshold, with DE (13.17) and SSA (13.03) occupying the bottom positions. This ranking (p < 0.05) rejects the null hypothesis, confirming statistically significant performance differences among algorithms and demonstrating the efficacy of ImWOA’s architectural innovations.
When dimensionality escalates to 100, the Friedman test reveals a more pronounced performance stratification (Figure 5). ImWOA consolidates its dominance with a mean rank value (1.28) substantially lower than the 1.97 recorded in the 30-dimensional scenario, demonstrating exceptional high-dimensional adaptability. Key findings include the following: (1) SMA (3.24) and BBO (3.48) ascend to the secondary tier with over 40% rank improvement; (2) WOAAD (5.34) and RAV-WOA (5.45) maintain a competitive advantage but with narrowed superiority over median performers; (3) classical algorithms bifurcate: GWO (6.41) demonstrates greater stability than PSO (7.93, a 43% rank degradation); and (4) structural reorganization occurs in the middle and lower tiers, where IWOSSA (6.00) and E-WOA (9.00) significantly outperform the variant IWOA (9.62), while the baseline WOA (13.03) and DE (13.55) remain at the bottom. This ranking (p < 0.05) confirms that dimensional expansion intensifies algorithmic divergence, highlighting ImWOA's convergence properties in high-dimensional spaces.

4.5. Sensitivity of ImWOA to Parameter Variations

Given that the weighting coefficients w 1 and w 2 in Equation (21) directly govern the balance between population diversity maintenance and convergence efficiency, sensitivity analysis of these parameters is essential for algorithmic performance evaluation. To this end, this study systematically assesses performance variations under w 1 : w 2 ratios of {1:1, 2:1, 3:1} using the CEC 2017 benchmark functions in 100-dimensional search spaces. Algorithm ranking follows the standardized average ranking method—calculated by summing ordinal rankings across all benchmark functions and dividing by the total number of functions—where the lowest mean value indicates optimal performance. As evidenced in Table 5, ImWOA achieves a significantly superior average rank at w 1 : w 2 = 2:1. These results confirm that the 2:1 ratio optimally balances exploration capability and exploitation intensity, whereas lower w 1 values impair diversity preservation and higher w 2 values increase premature convergence risks.

5. Three-Dimensional TSP

The theoretical depth and practical breadth of the 3D Traveling Salesman Problem (3D-TSP) establish it as a pivotal cross-disciplinary research vehicle. Transcending the dimensional constraints of conventional 2D path planning, it demonstrates unique value in vertical-space dynamic optimization scenarios: urban drone logistics necessitates 3D obstacle avoidance with energy-consumption equilibrium; industrial-scale additive manufacturing relies on spatial trajectory optimization to enhance resource efficiency; autonomous subsea exploration requires efficient visitation of dispersed nodes under oceanic disturbances; and surgical path planning mandates precision and biological-tissue safety. Meanwhile, evolving Urban Air Mobility (UAM) networks demand 3D airspace coordination mechanisms. Addressing these trans-domain challenges, the proposed ImWOA significantly advances global optimization capability and real-time responsiveness in complex environments through innovative 3D solution-space modeling and adaptive computational architecture. This framework provides universal methodological support for strategic fields including intelligent manufacturing, precision medicine, and smart cities, bridging theoretical 3D spatial optimization with engineered applications.
This study applies ImWOA and multiple comparative algorithms to the 3D-TSP, including the original WOA and the enhanced variants (E-WOA, IWOA, IWOSSA, RAV-WOA, WOAAD). To mitigate stochastic fluctuations, each algorithm executes 30 independent runs, with the averaged results serving as the performance benchmark, accompanied by a visualization of the optimal path corresponding to the solution closest to the mean value. The number of cities is set to 100. All city coordinates were randomly generated, with the three-dimensional coordinates (X, Y, Z) drawn independently and uniformly from the closed interval [100, 5000], ensuring a stochastic, uniformly dispersed spatial distribution.
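A minimal sketch of this instance generation, together with the closed Euclidean tour-length objective the algorithms minimize, is:

```python
import numpy as np

def make_cities(n=100, low=100.0, high=5000.0, seed=42):
    """Generate the 3D-TSP instance described above: n cities whose
    (X, Y, Z) coordinates are drawn independently and uniformly from
    [100, 5000].  The seed is an illustrative choice."""
    rng = np.random.default_rng(seed)
    return rng.uniform(low, high, size=(n, 3))

def tour_length(cities, tour):
    """Total Euclidean length of a closed tour visiting every city once."""
    ordered = cities[np.asarray(tour)]
    # Append the first city so np.diff also yields the closing edge.
    diffs = np.diff(np.vstack([ordered, ordered[:1]]), axis=0)
    return float(np.sum(np.linalg.norm(diffs, axis=1)))
```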
Figure 6 clearly illustrates the three-dimensional spatial distribution characteristics of 100 urban nodes, Table 6 systematically records the average optimal results of different algorithms from the 30 experiments, and Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 visually present the actual path planning effect closest to the average optimal solution.
As delineated in Table 6, ImWOA secures top-ranked performance in mean solution quality for three-dimensional Traveling Salesman Problems (TSPs), demonstrating a substantial advantage over benchmark algorithms. Notably, it generates high-precision routing solutions across the test, revealing exceptional exploration capability in complex discrete search spaces. This empirical evidence robustly validates that ImWOA’s unique population cooperation mechanism effectively circumvents local optima traps when handling large-scale combinatorial optimization problems, thereby underscoring its superior potential for engineering applications.

6. Conclusions

This study addresses the inherent limitations of the original WOA by proposing an innovative variant, ImWOA, which integrates three synergistic optimization strategies to significantly enhance overall performance:
First Strategy: A dynamic cluster center-guided search mechanism based on K-means clustering. By partitioning the population into multiple subgroups, each subgroup conducts targeted searches around its dynamically updated cluster center. Real-time centroid recalculation during iterations enables dynamic adaptation to population evolution, simultaneously improving adaptability and robustness. This mechanism innovatively integrates global optimum positions with local cluster centroids, effectively suppressing redundant searches while significantly enhancing global exploration efficiency.
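A schematic sketch of such a centroid-guided mechanism is given below; the K-means step is standard, while the blend coefficient r in the guided move is illustrative rather than the paper's exact update equation:

```python
import numpy as np

def kmeans_centroids(pop, k, iters=10, seed=0):
    """Plain K-means over the population positions; returns one centroid
    per subgroup.  Recomputing the centroids each generation lets the
    guidance adapt as the population evolves."""
    rng = np.random.default_rng(seed)
    centers = pop[rng.choice(len(pop), k, replace=False)]
    for _ in range(iters):
        # Assign each individual to its nearest centroid.
        labels = np.argmin(
            np.linalg.norm(pop[:, None, :] - centers[None, :, :], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pop[labels == j].mean(axis=0)
    return centers, labels

def centroid_guided_move(x, centroid, g_best, r=0.5):
    # Illustrative blend of the global best and the subgroup centroid.
    return x + r * (g_best - x) + (1 - r) * (centroid - x)
```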
Second Strategy: A dual-modal population diversity-driven adaptive mutation mechanism. Transcending traditional single-dimensional distance metrics, this strategy employs dual diversity indicators that simultaneously quantify spatial distribution and fitness value differences, comprehensively characterizing population heterogeneity. The dynamic mutation probability adjustment mechanism, constructed based on real-time diversity states, demonstrates superior environmental adaptability and robustness compared to static parameter configurations.
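The following sketch illustrates one possible dual-modal diversity measure, combining a normalized spatial-spread term with a normalized fitness-spread term under the w1:w2 weighting discussed in Section 4.5; the formulas are stand-ins for Equation (21), which is not reproduced here:

```python
import numpy as np

def dual_modal_diversity(pop, fitness, w1=2.0, w2=1.0):
    """Illustrative dual-modal diversity: a spatial term (mean distance to
    the population centroid, normalized by the maximum distance) and a
    fitness term (standard deviation normalized by the fitness range),
    combined with the w1:w2 weighting.  Returns 0 for a fully converged
    population and larger values for more heterogeneous ones."""
    centroid = pop.mean(axis=0)
    dists = np.linalg.norm(pop - centroid, axis=1)
    spatial = dists.mean() / (dists.max() + 1e-12)
    fit = fitness.std() / (np.ptp(fitness) + 1e-12)
    return (w1 * spatial + w2 * fit) / (w1 + w2)
```

A mutation controller would then raise the mutation probability when this measure falls below a threshold, restoring heterogeneity before premature convergence sets in.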
Third Strategy: A pattern search framework incorporating the GPSPositiveBasis2N algorithm. This hybrid optimization paradigm synergizes WOA’s global exploration capability with pattern search’s local refinement characteristics: WOA rapidly locates potential optimal regions, while the GPSPositiveBasis2N algorithm serves as a periodic optimization module for precision local search. This collaborative “global exploration–local optimization” framework leverages the complementary advantages of both algorithms, achieving breakthrough improvements in solution quality and convergence efficiency.
To comprehensively evaluate the efficacy of the ImWOA, this study employs the CEC2017 benchmark suite for multi-dimensional validation. The experimental design establishes a dual comparison framework: a horizontal comparison with the original WOA and eight state-of-the-art metaheuristic algorithms, alongside a vertical benchmark against five advanced WOA variants. Results demonstrate ImWOA’s exceptional optimization capabilities: in 30-dimensional testing, it achieves optimal mean solutions for 20 out of 29 benchmark functions; when scaling to 100-dimensional problems, it attains leading mean values for 26 of 29 tests. Particularly noteworthy is its first-place ranking in solving the 3D-TSP—a representative combinatorial optimization challenge. These experiments collectively verify that ImWOA not only exhibits superior performance in theoretical benchmarks but also effectively addresses real-world complex optimization problems, highlighting its robust multi-scenario adaptability and broad application potential.
Although ImWOA has demonstrated outstanding optimization performance, several avenues remain for deeper investigation. Future work will focus on three directions:
First, deepening the theoretical foundations by establishing a rigorous mathematical framework for the convergence analysis of ImWOA.
Second, expanding heterogeneous computing paradigms by exploring integration pathways between ImWOA and emerging technologies such as Graph Neural Networks and quantum computing, thereby constructing a cross-modal optimization framework for high-dimensional complex problems.
Third, building industrial-grade application ecosystems: for typical scenarios such as dynamic scheduling in smart manufacturing and real-time load allocation in smart grids, developing specialized algorithm engines and open-source platforms to promote industry–academia–research collaborative innovation.
These explorations will propel ImWOA’s evolution from algorithmic innovation into a universal intelligent optimization infrastructure.

Author Contributions

Conceptualization, Y.Z. and Z.H.; methodology, Y.Z. and Z.H.; software, Y.Z.; data curation, Y.Z.; original draft preparation, Y.Z.; writing—review and editing, Y.Z. and Z.H.; visualization, Y.Z.; funding acquisition, Z.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Fund of Ningxia (2025).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, Y.; Jin, Z. Comprehensive learning Jaya algorithm for engineering design optimization problems. J. Intell. Manuf. 2022, 33, 1229–1253. [Google Scholar] [CrossRef]
  2. Kalpana, P.; Nagendra Prabhu, S.; Polepally, V.; Rao, D.B.J. Exponentially-spider monkey optimization based allocation of resource in cloud. Int. J. Intell. Syst. 2022, 37, 2521–2542. [Google Scholar] [CrossRef]
  3. Nai, C. Energy finance risk warning model based on GABP algorithm. Front. Energy Res. 2023, 11, 1235412. [Google Scholar] [CrossRef]
  4. Rambabu, D.; Govardhan, A. Optimization assisted frequent pattern mining for data replication in cloud: Combining sealion and grey wolf algorithm. Adv. Eng. Softw. 2023, 176, 103401. [Google Scholar] [CrossRef]
  5. Phan, T.; Sell, D.; Wang, E.W.; Doshay, S.; Edee, K.; Yang, J.; Fan, J.A. High-efficiency, large-area, topology-optimized metasurfaces. Light Sci. Appl. 2019, 8, 48. [Google Scholar] [CrossRef]
  6. Berger, K.; Rivera Caicedo, J.P.; Martino, L.; Wocher, M.; Hank, T.; Verrelst, J. A survey of active learning for quantifying vegetation traits from terrestrial earth observation data. Remote Sens. 2021, 13, 287. [Google Scholar] [CrossRef]
  7. Lee, C.C.; Hussain, J.; Chen, Y. The optimal behavior of renewable energy resources and government’s energy consumption subsidy design from the perspective of green technology implementation. Renew. Energy 2022, 195, 670–680. [Google Scholar] [CrossRef]
  8. Zamir, M.; Abdeljawad, T.; Nadeem, F.; Wahid, A.; Yousef, A. An optimal control analysis of a COVID-19 model. Alex. Eng. J. 2021, 60, 2875–2884. [Google Scholar] [CrossRef]
  9. Wu, L.; Huang, X.; Cui, J.; Liu, C.; Xiao, W. Modified adaptive ant colony optimization algorithm and its application for solving path planning of mobile robot. Expert Syst. Appl. 2023, 215, 119410. [Google Scholar] [CrossRef]
  10. Taher, F.; Abdel-Salam, M.; Elhoseny, M.; El-Hasnony, I.M. Reliable machine learning model for IIoT botnet detection. IEEE Access 2023, 11, 49319–49336. [Google Scholar] [CrossRef]
  11. Llopis-Albert, C.; Rubio, F.; Zeng, S. Multiobjective optimization framework for designing a vehicle suspension system. A comparison of optimization algorithms. Adv. Eng. Softw. 2023, 176, 103375. [Google Scholar] [CrossRef]
  12. Elhoseny, M.; Abdel-Salam, M.; El-Hasnony, I.M. An improved multi-strategy Golden Jackal algorithm for real world engineering problems. Knowl.-Based Syst. 2024, 295, 111725. [Google Scholar] [CrossRef]
  13. Askr, H.; Abdel-Salam, M.; Hassanien, A.E. Copula entropy-based golden jackal optimization algorithm for high-dimensional feature selection problems. Expert Syst. Appl. 2024, 238, 121582. [Google Scholar] [CrossRef]
  14. Abdel-Salam, M.; Kumar, N.; Mahajan, S. A proposed framework for crop yield prediction using hybrid feature selection approach and optimized machine learning. Neural Comput. Appl. 2024, 36, 20723–20750. [Google Scholar] [CrossRef]
  15. Abdel-Salam, M.; Hassanien, A.E. A novel dynamic chaotic golden jackal optimization algorithm for sensor-based human activity recognition using smartphones for sustainable smart cities. In Artificial Intelligence for Environmental Sustainability and Green Initiatives; Springer: Cham, Switzerland, 2024; pp. 273–296. [Google Scholar] [CrossRef]
  16. Zhang, J.; Ning, Z.; Ali, R.H.; Waqas, M.; Tu, S.; Ahmad, I. A many-objective ensemble optimization algorithm for the edge cloud resource scheduling problem. IEEE Trans. Mob. Comput. 2023, 23, 1330–1346. [Google Scholar] [CrossRef]
  17. Sulaiman, M.H.; Mustaffa, Z.; Saari, M.M.; Daniyal, H. Barnacles mating optimizer: A new bio-inspired algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 87, 103330. [Google Scholar] [CrossRef]
  18. Wang, G.G.; Deb, S.; Coelho, L.D.S. Earthworm optimisation algorithm: A bio-inspired metaheuristic algorithm for global optimisation problems. Int. J. Bio-Inspired Comput. 2018, 12, 1–22. [Google Scholar] [CrossRef]
  19. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
  20. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  21. Shi, Y. Brain storm optimization algorithm. In Proceedings of the International Conference in Swarm Intelligence, Chongqing, China, 12–15 June 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 303–309. [Google Scholar] [CrossRef]
  22. Askari, Q.; Saeed, M.; Younas, I. Heap-based optimizer inspired by corporate rank hierarchy for global optimization. Expert Syst. Appl. 2020, 161, 113702. [Google Scholar] [CrossRef]
  23. Dehghani, M.; Trojovský, P. Teamwork optimization algorithm: A new optimization approach for function minimization/maximization. Sensors 2021, 21, 4567. [Google Scholar] [CrossRef]
  24. Talatahari, S.; Azizi, M. Chaos game optimization: A novel metaheuristic algorithm. Artif. Intell. Rev. 2021, 54, 917–1004. [Google Scholar] [CrossRef]
  25. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  26. Abedinpourshotorban, H.; Shamsuddin, S.M.; Beheshti, Z.; Jawawi, D.N. Electromagnetic field optimization: A physics-inspired metaheuristic optimization algorithm. Swarm Evol. Comput. 2016, 26, 8–22. [Google Scholar] [CrossRef]
  27. Karaboga, D. Artificial bee colony algorithm. Scholarpedia 2010, 5, 6915. [Google Scholar] [CrossRef]
28. Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing, Coimbatore, India, 9–11 December 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 210–214.
29. Wang, G.G.; Deb, S.; Coelho, L.S. Elephant herding optimization. In Proceedings of the 2015 3rd International Symposium on Computational and Business Intelligence, Bali, Indonesia, 7–9 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–5.
30. Masadeh, R.; Mahafzah, B.A.; Sharieh, A. Sea lion optimization algorithm. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 388–395.
31. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
32. Liang, Z.; Shu, T.; Ding, Z. A novel improved whale optimization algorithm for global optimization and engineering applications. Mathematics 2024, 12, 636.
33. Liu, L.; Zhang, R. Multistrategy improved whale optimization algorithm and its application. Comput. Intell. Neurosci. 2022, 2022, 3418269.
34. Chakraborty, S.; Sharma, S.; Saha, A.K.; Saha, A. A novel improved whale optimization algorithm to solve numerical optimization and real-world applications. Artif. Intell. Rev. 2022, 55, 4605–4716.
35. Sun, G.; Shang, Y.; Zhang, R. An efficient and robust improved whale optimization algorithm for large scale global optimization problems. Electronics 2022, 11, 1475.
36. Shen, Y.; Zhang, C.; Gharehchopogh, F.S.; Mirjalili, S. An improved whale optimization algorithm based on multi-population evolution for global optimization and engineering design problems. Expert Syst. Appl. 2023, 215, 119269.
37. Li, M.; Yu, X.; Fu, B.; Wang, X. A modified whale optimization algorithm with multi-strategy mechanism for global optimization problems. Neural Comput. Appl. 2023, 35, 1–14.
38. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; Technical Report; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2017; Available online: https://www.researchgate.net/profile/Guohua-Wu-5/publication/317228117_Problem_Definitions_and_Evaluation_Criteria_for_the_CEC_2017_Competition_and_Special_Session_on_Constrained_Single_Objective_Real-Parameter_Optimization/links/5982cdbaa6fdcc8b56f59104/Problem-Definitions-and-Evaluation-Criteria-for-the-CEC-2017-Competition-and-Special-Session-on-Constrained-Single-Objective-Real-Parameter-Optimization.pdf (accessed on 1 January 2020).
39. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948.
40. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713.
41. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323.
42. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
43. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
44. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34.
45. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
46. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report-TR06; Erciyes University: Kayseri, Turkey, 2005.
47. Nadimi-Shahraki, M.H.; Zamani, H.; Mirjalili, S. Enhanced whale optimization algorithm for medical feature selection: A COVID-19 case study. Comput. Biol. Med. 2022, 148, 105858.
48. Xiong, G.; Zhang, J.; Shi, D.; He, Y. Parameter extraction of solar photovoltaic models using an improved whale optimization algorithm. Energy Convers. Manag. 2018, 174, 388–405.
49. Saafan, M.M.; El-Gendy, E.M. IWOSSA: An improved whale optimization salp swarm algorithm for solving optimization problems. Expert Syst. Appl. 2021, 176, 114901.
50. Ma, G.; Yue, X. An improved whale optimization algorithm based on multilevel threshold image segmentation using the Otsu method. Eng. Appl. Artif. Intell. 2022, 113, 104960.
51. Tang, J.; Wang, L. A whale optimization algorithm based on atom-like structure differential evolution for solving engineering design problems. Sci. Rep. 2024, 14, 795.
Figure 1. Framework of ImWOA.
Figure 2. Convergence curves on the CEC2017 benchmark (Dim = 30).
Figure 3. Convergence curves on the CEC2017 benchmark (Dim = 100).
Figure 4. Friedman mean ranks of the compared algorithms (Dim = 30).
Figure 5. Friedman mean ranks of the compared algorithms (Dim = 100).
Figure 6. Three-dimensional spatial distribution heatmap of 100 urban nodes.
Figure 7. TSP path topology based on the average optimal solution (ImWOA).
Figure 8. TSP path topology based on the average optimal solution (WOA).
Figure 9. TSP path topology based on the average optimal solution (E-WOA).
Figure 10. TSP path topology based on the average optimal solution (IWOA).
Figure 11. TSP path topology based on the average optimal solution (IWOSSA).
Figure 12. TSP path topology based on the average optimal solution (RAV-WOA).
Figure 13. TSP path topology based on the average optimal solution (WOAAD).
Table 1. CEC2017 benchmark results (30 dimensions). Bold indicates optimal values.
Dim = 30
Func. | Index | ImWOA | PSO | BBO | SMA | DE | GWO | SSA | HHO | ABC | WOA | E-WOA | IWOA | IWOSSA | RAV-WOA | WOAAD
F1Min1.81 × 1021.08 × 1055.90 × 1073.52 × 1051.57 × 10102.87 × 1072.85 × 1082.51 × 1094.05 × 1097.62 × 1097.80 × 1076.89 × 1065.13 × 1032.26 × 1072.30 × 107
Mean7.90 × 1031.40 × 1089.88 × 1078.25 × 1053.65 × 10101.57 × 1091.37 × 1094.89 × 1096.17 × 1091.80 × 10101.75 × 1083.51 × 1092.27 × 1056.54 × 1079.12 × 107
Std7.00 × 1032.03 × 1082.98 × 1073.39 × 1051.21 × 10101.37 × 1092.65 × 1091.77 × 1091.38 × 1097.37 × 1096.86 × 1075.21 × 1093.04 × 1052.46 × 1074.69 × 107
Rank176315109121314811245
F2Min2.00 × 1022.00 × 1028.77 × 1032.03 × 1023.60 × 1047.38 × 1021.43 × 1041.60 × 1046.31 × 1043.34 × 1042.81 × 1032.22 × 1034.97 × 1026.97 × 1021.56 × 103
Mean2.00 × 1029.28 × 1022.31 × 1042.6 × 1029.54 × 1045.94 × 1034.60 × 1042.78 × 1048.81 × 1045.66 × 1045.87 × 1032.02 × 1042.59 × 1032.57 × 1036.14 × 103
Std1.50 × 10−91.49 × 1039.67 × 1032.83 × 1012.53 × 1043.27 × 1032.77 × 1045.78 × 1031.08 × 1041.30 × 1042.32 × 1031.82 × 1041.80 × 1031.27 × 1032.62 × 103
Rank131021571211141369548
F3Min3.03 × 1023.26 × 1023.52 × 1023.32 × 1022.68 × 1033.34 × 1025.00 × 1027.17 × 1029.36 × 1028.80 × 1023.52 × 1023.73 × 1023.30 × 1023.58 × 1023.39 × 102
Mean3.45 × 1024.29 × 1024.33 × 1023.62 × 1028.80 × 1034.54 × 1023.04 × 1031.20 × 1031.33 × 1032.57 × 1035.63 × 1029.51 × 1024.07 × 1024.62 × 1024.46 × 102
Std2.77 × 1016.59 × 1014.86 × 1013.78 × 1014.85 × 1036.69 × 1014.33 × 1033.41 × 1021.57 × 1021.46 × 1031.24 × 1021.12 × 1034.71 × 1014.71 × 1016.50 × 101
Rank145215714111213910386
F4Min4.54 × 1027.37 × 1026.62 × 1025.28 × 1021.87 × 1046.93 × 1021.05 × 1044.30 × 1035.16 × 1031.08 × 1041.19 × 1031.33 × 1035.39 × 1027.85 × 1028.14 × 102
Mean5.01 × 1021.49 × 1037.93 × 1026.21 × 1024.70 × 1042.02 × 1032.00 × 1049.22 × 1037.13 × 1033.33 × 1042.16 × 1038.68 × 1037.62 × 1021.05 × 1039.92 × 102
Std2.91 × 1016.62 × 1027.58 × 1014.70 × 1011.72 × 1041.45 × 1036.00 × 1032.70 × 1031.10 × 1031.00 × 1043.92 × 1026.78 × 1031.12 × 1021.41 × 1021.18 × 102
Rank174215813121014911365
F5Min5.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 102
Mean5.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 102
Std8.52 × 10−41.07 × 10−39.55 × 10−48.79 × 10−32.71 × 10−34.12 × 10−31.12 × 10−25.98 × 10−34.23 × 10−36.47 × 10−35.15 × 10−32.14 × 10−33.24 × 10−31.24 × 10−31.19 × 10−3
Rank132159612111413107854
F6Min6.01 × 1022.61 × 1032.32 × 1031.49 × 1031.33 × 1035.28 × 1031.00 × 1036.47 × 1032.83 × 1043.54 × 1033.24 × 1032.96 × 1031.18 × 1032.79 × 1032.21 × 103
Mean1.71 × 1033.76 × 1046.60 × 1037.07 × 1037.79 × 1032.07 × 1048.15 × 1032.41 × 1044.75 × 1041.25 × 1041.80 × 1049.72 × 1037.23 × 1031.06 × 1041.84 × 104
Std1.34 × 1031.87 × 1043.02 × 1034.27 × 1036.67 × 1039.36 × 1034.83 × 1031.26 × 1048.49 × 1037.76 × 1039.58 × 1035.16 × 1035.88 × 1035.40 × 1031.03 × 104
Rank114235126131591074811
F7Min7.00 × 1027.00 × 1027.00 × 1027.00 × 1027.00 × 1027.00 × 1027.00 × 1027.00 × 1027.01 × 1027.00 × 1027.00 × 1027.00 × 1027.00 × 1027.00 × 1027.00 × 102
Mean7.00 × 1027.01 × 1027.00 × 1027.00 × 1027.00 × 1027.01 × 1027.00 × 1027.01 × 1027.03 × 1027.01 × 1027.01 × 1027.00 × 1027.00 × 1027.00 × 1027.00 × 102
Std2.74 × 10−16.81 × 10−11.23 × 10−11.37 × 10−11.73 × 10−17.91 × 10−13.88 × 10−15.00 × 10−18.15 × 10−15.03 × 10−16.00 × 10−14.40 × 10−12.00 × 10−11.34 × 10−12.17 × 10−1
Rank110437138111512149526
F8Min8.04 × 1028.02 × 1028.01 × 1028.02 × 1028.10 × 1028.01 × 1028.07 × 1028.09 × 1028.13 × 1028.12 × 1028.04 × 1028.08 × 1028.03 × 1028.04 × 1028.01 × 102
Mean8.18 × 1028.10 × 1028.05 × 1028.09 × 1028.21 × 1028.04 × 1028.14 × 1028.15 × 1028.21 × 1028.25 × 1028.20 × 1028.20 × 1028.14 × 1028.09 × 1028.05 × 102
Std7.65 × 1004.33 × 1003.03 × 1006.07 × 1007.15 × 1002.55 × 1005.40 × 1003.43 × 1003.73 × 1009.71 × 1007.53 × 1007.18 × 1007.08 × 1003.84 × 1003.55 × 100
Rank106341418913151211752
F9Min3.62 × 1032.85 × 1033.03 × 1032.45 × 1034.30 × 1032.53 × 1034.41 × 1034.85 × 1037.16 × 1035.09 × 1034.49 × 1033.93 × 1034.13 × 1033.23 × 1034.03 × 103
Mean5.12 × 1034.61 × 1033.97 × 1034.46 × 1035.31 × 1035.41 × 1037.41 × 1036.19 × 1038.47 × 1036.57 × 1035.90 × 1035.55 × 1035.54 × 1034.73 × 1035.24 × 103
Std7.45 × 1021.35 × 1034.37 × 1026.95 × 1024.78 × 1021.92 × 1031.44 × 1039.51 × 1023.59 × 1021.07 × 1037.88 × 1029.94 × 1028.71 × 1029.30 × 1026.92 × 102
Rank531278141215131110946
F10Min3.44 × 1032.68 × 1031.52 × 1042.32 × 1041.59 × 1057.80 × 1047.56 × 1047.79 × 1042.99 × 1055.45 × 1041.40 × 1054.28 × 1041.02 × 1051.02 × 1045.77 × 104
Mean3.91 × 1043.22 × 1041.43 × 1057.08 × 1043.65 × 1071.44 × 1054.05 × 1062.39 × 1055.52 × 1051.59 × 1053.01 × 1056.86 × 1052.17 × 1055.36 × 1041.01 × 105
Std3.01 × 1041.85 × 1042.64 × 1053.38 × 1048.65 × 1074.76 × 1041.18 × 1072.27 × 1051.08 × 1047.37 × 1041.88 × 1051.62 × 1067.12 × 1043.19 × 1042.88 × 104
Rank216415714101481113935
F11Min7.72 × 1037.53 × 1034.13 × 1065.30 × 1041.24 × 1093.71 × 1061.60 × 1068.89 × 1062.82 × 1081.13 × 1078.85 × 1061.75 × 1051.77 × 1061.03 × 1061.93 × 106
Mean4.91 × 1041.22 × 1051.37 × 1071.57 × 1064.35 × 1092.75 × 1071.50 × 1091.89 × 1084.15 × 1083.39 × 1081.04 × 1083.84 × 1083.03 × 1075.55 × 1061.87 × 107
Std2.51 × 1042.01 × 1058.00 × 1061.36 × 1062.18 × 1091.67 × 1072.89 × 1091.31 × 1087.75 × 1075.00 × 1086.43 × 1078.47 × 1083.12 × 1073.77 × 1061.31 × 107
Rank125315714101311912846
F12Min1.31 × 1032.10 × 1031.38 × 1064.43 × 1044.26 × 1086.91 × 1051.08 × 1051.77 × 1075.83 × 1064.13 × 1071.58 × 1063.09 × 1045.92 × 1053.42 × 1056.02 × 105
Mean6.95 × 1035.75 × 1047.86 × 1061.71 × 1055.65 × 1091.00 × 1071.53 × 1095.36 × 1073.06 × 1078.05 × 1081.66 × 1073.42 × 1081.91 × 1062.73 × 1062.42 × 106
Std5.63 × 1031.10 × 1055.13 × 1061.31 × 1054.35 × 1091.94 × 1073.75 × 1093.06 × 1071.68 × 1078.45 × 1086.96 × 1067.43 × 1088.41 × 1051.74 × 1062.06 × 106
Rank127315814111013912465
F13Min3.12 × 1031.86 × 1041.03 × 1054.28 × 1047.00 × 1051.21 × 1057.05 × 1057.84 × 1055.52 × 1052.29 × 1053.04 × 1051.89 × 1051.82 × 1053.03 × 1042.17 × 105
Mean6.45 × 1052.22 × 1054.85 × 1065.00 × 1057.84 × 1066.24 × 1051.17 × 1074.02 × 1062.44 × 1061.92 × 1063.85 × 1061.19 × 1061.55 × 1068.26 × 1055.62 × 105
Std6.77 × 1051.53 × 1055.04 × 1065.39 × 1057.10 × 1064.76 × 1051.44 × 1072.34 × 1061.21 × 1062.03 × 1063.49 × 1061.15 × 1061.48 × 1066.90 × 1053.21 × 105
Rank511321441512109117863
F14Min4.88 × 1034.37 × 1035.99 × 1057.26 × 1041.27 × 1081.30 × 1051.01 × 1052.93 × 1061.75 × 1076.27 × 1051.53 × 1066.60 × 1042.27 × 1053.53 × 1041.09 × 105
Mean2.43 × 1042.98 × 1044.54 × 1061.39 × 1051.76 × 1091.05 × 1068.37 × 1081.63 × 1075.09 × 1071.11 × 1084.73 × 1064.15 × 1087.10 × 1056.66 × 1053.29 × 105
Std1.39 × 1041.48 × 1043.56 × 1065.12 × 1041.09 × 1092.33 × 1061.34 × 1091.07 × 1072.51 × 1072.31 × 1081.60 × 1061.65 × 1093.41 × 1051.07 × 1061.51 × 105
Rank128315714101112913654
F15Min1.51 × 1031.51 × 1035.77 × 1031.77 × 1034.32 × 1061.48 × 1044.65 × 1036.04 × 1043.61 × 1033.40 × 1045.46 × 1042.86 × 1035.18 × 1034.79 × 1039.21 × 103
Mean2.10 × 1036.59 × 1031.55 × 1069.60 × 1034.23 × 1088.52 × 1058.86 × 1061.48 × 1062.26 × 1042.90 × 1061.11 × 1067.30 × 1064.47 × 1057.32 × 1056.19 × 105
Std6.50 × 1022.16 × 1041.85 × 1066.58 × 1035.72 × 1089.47 × 1053.24 × 1071.94 × 1062.26 × 1044.33 × 1061.14 × 1062.71 × 1077.66 × 1051.00 × 1067.79 × 105
Rank121131581410412913576
F16Min1.65 × 1032.44 × 1032.54 × 1032.07 × 1034.74 × 1042.76 × 1043.34 × 1045.53 × 1042.07 × 1062.76 × 1043.54 × 1044.05 × 1032.29 × 1042.71 × 1036.07 × 103
Mean2.08 × 1034.29 × 1033.78 × 1033.38 × 1031.05 × 10126.03 × 1041.14 × 10147.39 × 1059.84 × 1075.47 × 1059.22 × 1042.69 × 1071.02 × 1054.48 × 1033.81 × 104
Std3.95 × 1023.35 × 1037.61 × 1024.42 × 1023.41 × 10122.10 × 1045.55 × 10142.89 × 1061.22 × 1081.62 × 1063.01 × 1041.42 × 1084.95 × 1041.17 × 1032.25 × 104
Rank143214715111310812956
F17Min3.50 × 1042.55 × 1048.12 × 1045.50 × 1049.10 × 1045.68 × 1044.59 × 1047.47 × 1049.36 × 1043.97 × 1047.11 × 1043.76 × 1046.74 × 1044.89 × 1047.04 × 104
Mean7.94 × 1046.06 × 1054.19 × 1063.32 × 1052.33 × 1062.31 × 1052.45 × 1061.89 × 1055.02 × 1051.22 × 1051.77 × 1052.96 × 1062.17 × 1051.06 × 1051.55 × 105
Std2.63 × 1041.18 × 1064.06 × 1066.84 × 1054.23 × 1063.33 × 1055.84 × 1062.22 × 1052.19 × 1057.59 × 1041.42 × 1051.51 × 1079.71 × 1044.89 × 1046.92 × 104
Rank111159128136103514724
F18Min1.86 × 1032.06 × 1032.29 × 1068.58 × 1032.68 × 10103.39 × 1046.40 × 1044.95 × 1052.81 × 1055.28 × 1051.13 × 1055.02 × 1042.58 × 1043.28 × 1045.00 × 104
Mean1.96 × 1041.88 × 1041.10 × 1086.27 × 1044.39 × 10134.88 × 1071.65 × 1081.32 × 10103.63 × 1064.25 × 1097.23 × 1064.98 × 10119.26 × 1047.41 × 1052.39 × 105
Std2.26 × 1041.91 × 1042.00 × 1082.80 × 1041.01 × 10141.44 × 1085.26 × 1081.82 × 10103.67 × 1068.43 × 1091.92 × 1071.71 × 10123.18 × 1048.62 × 1051.94 × 105
Rank211031591113712814465
F19Min1.93 × 1032.10 × 1032.00 × 1031.96 × 1034.10 × 1032.08 × 1035.81 × 1035.16 × 1033.85 × 1036.79 × 1033.17 × 1032.55 × 1033.09 × 1032.44 × 1032.10 × 103
Mean2.33 × 1033.47 × 1032.27 × 1032.36 × 1037.09 × 1032.64 × 1031.40 × 1048.60 × 1034.69 × 1031.09 × 1045.86 × 1033.97 × 1034.68 × 1033.20 × 1032.60 × 103
Std2.82 × 1021.03 × 1031.90 × 1022.62 × 1022.04 × 1033.64 × 1025.38 × 1031.95 × 1034.58 × 1023.17 × 1031.57 × 1032.20 × 1031.12 × 1035.83 × 1023.26 × 102
Rank271312515131014118964
F20Min2.10 × 1032.10 × 1032.27 × 1032.12 × 1035.32 × 1032.47 × 1033.91 × 1033.34 × 1034.98 × 1034.80 × 1032.27 × 1032.86 × 1032.10 × 1032.27 × 1032.31 × 103
Mean2.26 × 1033.08 × 1032.54 × 1032.27 × 1033.06 × 1043.41 × 1031.75 × 1046.88 × 1037.17 × 1032.06 × 1043.36 × 1034.87 × 1032.42 × 1032.65 × 1032.64 × 103
Std9.98 × 1016.18 × 1027.75 × 1011.56 × 1021.07 × 1048.96 × 1021.01 × 1042.09 × 1039.47 × 1021.06 × 1046.71 × 1022.11 × 1033.11 × 1022.68 × 1022.03 × 102
Rank174215913111214810365
F21Min2.20 × 1032.31 × 1032.26 × 1032.26 × 1032.55 × 1032.28 × 1032.67 × 1032.91 × 1032.35 × 1032.74 × 1032.38 × 1032.31 × 1032.29 × 1032.27 × 1032.28 × 103
Mean2.28 × 1032.38 × 1032.27 × 1032.27 × 1032.88 × 1032.30 × 1034.68 × 1034.10 × 1032.38 × 1034.27 × 1032.52 × 1032.42 × 1032.35 × 1032.29 × 1032.29 × 103
Std4.20 × 1016.07 × 1012.86 × 1004.29 × 1002.84 × 1021.43 × 1011.45 × 1037.21 × 1021.25 × 1011.16 × 1031.18 × 1029.06 × 1014.01 × 1011.24 × 1019.60 × 100
Rank391212615138141110754
F22Min2.40 × 1032.42 × 1032.87 × 1032.49 × 1032.16 × 1043.19 × 1036.78 × 1037.18 × 1039.13 × 1031.69 × 1042.94 × 1033.31 × 1032.31 × 1032.98 × 1032.91 × 103
Mean2.47 × 1034.17 × 1033.40 × 1032.59 × 1033.96 × 1046.14 × 1032.92 × 1041.61 × 1041.09 × 1042.94 × 1043.73 × 1031.38 × 1042.62 × 1033.36 × 1033.45 × 103
Std9.63 × 1011.25 × 1032.99 × 1021.11 × 1029.90 × 1031.99 × 1031.39 × 1046.52 × 1036.76 × 1028.22 × 1035.00 × 1027.20 × 1033.43 × 1022.17 × 1023.42 × 102
Rank185215913121014711346
F23Min2.40 × 1032.54 × 1032.93 × 1032.45 × 1031.30 × 1042.75 × 1034.69 × 1035.12 × 1037.17 × 1031.27 × 1042.88 × 1033.36 × 1032.40 × 1032.98 × 1032.87 × 103
Mean2.60 × 1033.17 × 1033.19 × 1032.58 × 1032.28 × 1044.45 × 1031.79 × 1047.88 × 1037.92 × 1032.41 × 1043.21 × 1036.41 × 1032.53 × 1033.13 × 1033.06 × 103
Std2.87 × 1027.32 × 1021.36 × 1022.83 × 1015.28 × 1031.66 × 1039.15 × 1031.89 × 1034.52 × 1025.59 × 1032.70 × 1022.93 × 1039.09 × 1011.06 × 1021.30 × 102
Rank367214913111215810154
F24Min2.82 × 1032.84 × 1032.84 × 1032.82 × 1033.69 × 1032.85 × 1033.04 × 1033.06 × 1033.09 × 1033.22 × 1032.91 × 1032.84 × 1032.84 × 1032.84 × 1032.83 × 103
Mean2.83 × 1032.88 × 1032.89 × 1032.84 × 1035.02 × 1032.92 × 1034.28 × 1033.30 × 1033.28 × 1033.72 × 1033.04 × 1033.00 × 1032.93 × 1032.88 × 1032.87 × 103
Std9.14 × 1003.85 × 1013.37 × 1012.87 × 1018.43 × 1027.88 × 1011.11 × 1031.43 × 1028.57 × 1013.46 × 1028.30 × 1011.86 × 1024.97 × 1012.15 × 1012.80 × 101
Rank156215714121113109843
F25Min3.33 × 1033.33 × 1033.34 × 1033.33 × 1033.94 × 1033.34 × 1035.52 × 1033.81 × 1033.34 × 1033.43 × 1033.34 × 1033.34 × 1033.34 × 1033.39 × 1033.34 × 103
Mean3.33 × 1033.56 × 1033.35 × 1033.37 × 1035.59 × 1033.40 × 1031.06 × 1045.06 × 1033.37 × 1034.31 × 1033.42 × 1033.74 × 1033.55 × 1033.40 × 1033.37 × 103
Std1.35 × 10−11.50 × 1021.28 × 1011.66 × 1011.01 × 1035.05 × 1013.81 × 1031.22 × 1031.10 × 1017.87 × 1024.78 × 1013.66 × 1022.53 × 1027.71 × 1001.76 × 101
Rank110251461513312811974
F26Min3.11 × 1033.18 × 1033.12 × 1033.11 × 1033.26 × 1033.11 × 1033.54 × 1033.24 × 1033.16 × 1033.24 × 1033.15 × 1033.15 × 1033.13 × 1033.11 × 1033.12 × 103
Mean3.17 × 1033.27 × 1033.15 × 1033.14 × 1033.60 × 1033.15 × 1034.18 × 1033.82 × 1033.19 × 1033.78 × 1033.32 × 1033.27 × 1033.28 × 1033.20 × 1033.14 × 103
Std3.70 × 1015.37 × 1011.53 × 1011.93 × 1012.68 × 1021.76 × 1014.07 × 1023.19 × 1021.40 × 1013.21 × 1029.85 × 1011.34 × 1029.92 × 1014.18 × 1011.60 × 101
Rank594112315146131181072
F27Min2.70 × 1032.84 × 1032.79 × 1032.74 × 1033.69 × 1033.10 × 1033.35 × 1033.31 × 1033.25 × 1033.31 × 1033.16 × 1033.25 × 1032.98 × 1032.80 × 1032.89 × 103
Mean2.71 × 1033.01 × 1032.96 × 1033.08 × 1034.79 × 1033.17 × 1036.24 × 1033.51 × 1033.33 × 1033.73 × 1033.22 × 1033.76 × 1033.18 × 1033.06 × 1033.15 × 103
Std9.01 × 1001.30 × 1021.56 × 1021.57 × 1028.50 × 1023.14 × 1011.68 × 1032.13 × 1023.25 × 1016.14 × 1023.59 × 1015.16 × 1024.27 × 1011.45 × 1026.22 × 101
Rank132514715111012913846
F28Min6.26 × 1031.35 × 1047.23 × 1041.07 × 1048.08 × 1083.59 × 1042.16 × 1096.46 × 1061.80 × 1086.11 × 1055.49 × 1046.06 × 1043.88 × 1041.49 × 1045.91 × 104
Mean3.45 × 1043.56 × 1071.77 × 1072.46 × 1051.07 × 10122.28 × 1071.36 × 10131.49 × 1092.14 × 1095.11 × 1081.78 × 1081.88 × 1083.26 × 1071.08 × 1072.10 × 106
Std3.41 × 1045.62 × 1072.64 × 1072.52 × 1054.32 × 10123.23 × 1075.98 × 10132.70 × 1091.14 × 1095.90 × 1085.20 × 1083.14 × 1087.70 × 1073.14 × 1074.56 × 106
Rank185214615121311910743
F29Min7.62 × 1031.99 × 1049.16 × 1056.14 × 1041.76 × 1091.99 × 1051.53 × 1061.85 × 1072.17 × 1079.81 × 1071.44 × 1062.10 × 1042.12 × 1051.01 × 1052.12 × 105
Mean3.62 × 1041.29 × 1078.60 × 1063.84 × 1063.56 × 10107.56 × 1073.59 × 10111.15 × 1091.44 × 1081.23 × 1092.51 × 1085.97 × 10103.24 × 1072.97 × 1068.31 × 106
Std3.22 × 1042.48 × 1071.86 × 1077.05 × 1066.26 × 10101.64 × 1081.91 × 10121.77 × 1097.59 × 1071.19 × 1093.08 × 1083.09 × 10114.17 × 1074.15 × 1061.24 × 107
Rank165313815119121014724
Total | 20 | 3 | 3 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0
Table 2. CEC2017 benchmark results (100 dimensions). Bold indicates optimal values.
Dim = 100
Func. | Index | ImWOA | PSO | BBO | SMA | DE | GWO | SSA | HHO | ABC | WOA | E-WOA | IWOA | IWOSSA | RAV-WOA | WOAAD
F1Min5.95 × 1021.49 × 10101.37 × 1094.97 × 1081.99 × 10112.39 × 10107.33 × 10108.70 × 10101.86 × 10111.53 × 10111.95 × 10104.59 × 10101.61 × 10107.53 × 1094.78 × 109
Mean1.37 × 1044.45 × 10101.92 × 1097.54 × 1082.95 × 10113.73 × 10101.02 × 10111.11 × 10112.21 × 10111.88 × 10112.82 × 10107.29 × 10102.68 × 10101.05 × 10101.22 × 1010
Std1.14 × 1041.67 × 10103.74 × 1081.70 × 1085.46 × 10108.97 × 1091.36 × 10101.16 × 10101.53 × 10101.88 × 10104.65 × 1092.11 × 10106.69 × 1092.45 × 1094.03 × 109
Rank193215811121413710645
F2Min2.75 × 1021.65 × 1055.87 × 1044.22 × 1033.02 × 1055.51 × 1042.87 × 1051.76 × 1055.22 × 1052.32 × 1053.37 × 1052.25 × 1051.24 × 1055.19 × 1041.54 × 105
Mean3.70 × 1032.45 × 1059.88 × 1046.93 × 1033.78 × 1058.05 × 1043.78 × 1052.19 × 1055.93 × 1053.27 × 1054.60 × 1053.40 × 1052.06 × 1057.68 × 1042.10 × 105
Std3.67 × 1034.22 × 1042.41 × 1041.65 × 1035.38 × 1041.59 × 1045.37 × 1041.85 × 1043.44 × 1043.92 × 1045.90 × 1049.27 × 1043.87 × 1041.16 × 1043.07 × 104
Rank195213412815101411637
F3Min4.35 × 1022.91 × 1039.10 × 1027.77 × 1024.46 × 1041.92 × 1031.38 × 1041.31 × 1043.92 × 1042.67 × 1043.33 × 1034.90 × 1032.15 × 1031.65 × 1031.70 × 103
Mean5.48 × 1028.36 × 1031.09 × 1039.20 × 1028.71 × 1044.56 × 1034.22 × 1042.57 × 1045.09 × 1044.51 × 1044.73 × 1031.37 × 1043.26 × 1032.49 × 1032.50 × 103
Std5.78 × 1014.01 × 1039.94 × 1019.03 × 1012.29 × 1041.38 × 1032.82 × 1044.37 × 1036.00 × 1031.27 × 1049.21 × 1025.72 × 1036.97 × 1024.70 × 1023.89 × 102
Rank193215712111413810645
F4Min7.16 × 1022.40 × 1043.07 × 1032.72 × 1031.80 × 1051.41 × 1041.53 × 1051.11 × 1051.82 × 1051.73 × 1052.70 × 1045.65 × 1041.65 × 1041.03 × 1048.70 × 103
Mean8.62 × 1024.49 × 1043.72 × 1033.33 × 1033.16 × 1053.37 × 1042.27 × 1051.39 × 1052.17 × 1052.43 × 1054.06 × 1049.28 × 1043.19 × 1041.50 × 1041.47 × 104
Std5.91 × 1011.62 × 1043.57 × 1023.77 × 1025.37 × 1049.55 × 1034.40 × 1041.36 × 1041.26 × 1042.69 × 1046.09 × 1032.54 × 1047.31 × 1033.00 × 1033.30 × 103
Rank193215713111214810654
F5Min5.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 102
Mean5.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 1025.00 × 102
Std1.10 × 10−31.58 × 10−21.60 × 10−37.66 × 10−38.27 × 10−31.56 × 10−21.66 × 10−26.06 × 10−39.15 × 10−39.28 × 10−31.18 × 10−27.48 × 10−35.24 × 10−31.51 × 10−38.59 × 10−3
Rank162141241081513119537
F6Min7.72 × 1023.94 × 1041.49 × 1042.15 × 1042.53 × 1044.85 × 1042.02 × 1042.15 × 1041.62 × 1052.24 × 1045.19 × 1042.35 × 1049.58 × 1031.45 × 1044.58 × 104
Mean1.88 × 1031.32 × 1052.96 × 1043.81 × 1046.78 × 1048.28 × 1043.47 × 1045.38 × 1041.79 × 1056.64 × 1048.53 × 1044.46 × 1042.12 × 1043.33 × 1049.39 × 104
Std9.55 × 1022.80 × 1048.38 × 1031.32 × 1042.37 × 1042.03 × 1041.13 × 1041.24 × 1049.28 × 1031.81 × 1042.46 × 1041.28 × 1049.01 × 1031.07 × 1042.84 × 104
Rank114361011581591272413
F7Min7.00 × 1027.01 × 1027.01 × 1027.01 × 1027.02 × 1027.00 × 1027.01 × 1027.01 × 1027.11 × 1027.02 × 1027.02 × 1027.01 × 1027.01 × 1027.01 × 1027.01 × 102
Mean7.00 × 1027.05 × 1027.01 × 1027.02 × 1027.03 × 1027.03 × 1027.02 × 1027.02 × 1027.13 × 1027.04 × 1027.04 × 1027.02 × 1027.01 × 1027.01 × 1027.03 × 102
Std1.73 × 10−12.76 × 1003.42 × 10−18.00 × 10−17.01 × 10−12.47 × 1001.13 × 1007.02 × 10−18.93 × 10−11.21 × 1001.19 × 1006.87 × 10−15.05 × 10−14.63 × 10−11.06 × 100
Rank114251097615131284311
F8Min8.72 × 1028.69 × 1028.22 × 1028.60 × 1029.61 × 1028.28 × 1029.17 × 1029.20 × 1021.09 × 1039.60 × 1029.24 × 1029.31 × 1028.83 × 1028.81 × 1028.43 × 102
Mean9.01 × 1029.07 × 1028.70 × 1029.05 × 1021.03 × 1038.60 × 1029.57 × 1029.53 × 1021.14 × 1031.04 × 1031.02 × 1039.84 × 1029.34 × 1028.97 × 1028.84 × 102
Std2.15 × 1012.38 × 1012.50 × 1011.91 × 1013.53 × 1011.21 × 1014.61 × 1012.26 × 1012.26 × 1014.71 × 1014.40 × 1013.41 × 1013.26 × 1011.12 × 1012.21 × 101
Rank572613110915141211843
F9Min1.51 × 1041.80 × 1041.58 × 1041.78 × 1042.13 × 1041.62 × 1042.68 × 1042.47 × 1043.21 × 1042.51 × 1042.21 × 1042.10 × 1041.92 × 1041.96 × 1042.41 × 104
Mean1.80 × 1042.87 × 1041.77 × 1042.02 × 1042.45 × 1042.79 × 1043.08 × 1042.79 × 1043.32 × 1043.07 × 1042.60 × 1042.31 × 1042.25 × 1042.38 × 1042.89 × 104
Std1.53 × 1034.55 × 1031.02 × 1031.36 × 1031.62 × 1036.81 × 1031.88 × 1031.92 × 1034.86 × 1022.27 × 1032.02 × 1031.39 × 1031.98 × 1031.95 × 1031.78 × 103
Rank211137914101513854612
F10Min2.56 × 1031.90 × 1059.63 × 1043.38 × 1059.79 × 1084.83 × 1057.83 × 1057.21 × 1065.08 × 1079.43 × 1065.26 × 1056.77 × 1053.62 × 1052.31 × 1054.72 × 105
Mean2.54 × 1044.19 × 1056.64 × 1054.35 × 1054.51 × 1094.02 × 1068.11 × 1087.83 × 1071.32 × 1087.40 × 1081.24 × 1064.50 × 1087.13 × 1051.38 × 1067.26 × 105
Std3.75 × 1041.89 × 1051.46 × 1066.89 × 1042.67 × 1095.01 × 1061.87 × 1096.32 × 1073.66 × 1078.36 × 1089.33 × 1051.40 × 1094.71 × 1051.82 × 1062.22 × 105
Rank124315914101113712586
F11Min6.43 × 1051.66 × 1092.93 × 1081.46 × 1086.73 × 10101.08 × 1091.03 × 10102.58 × 10103.62 × 10106.28 × 10102.33 × 1092.68 × 1097.27 × 1081.07 × 1099.71 × 108
Mean2.15 × 1064.89 × 1095.50 × 1083.46 × 1081.21 × 10116.42 × 1095.45 × 10104.24 × 10104.58 × 10108.97 × 10104.63 × 1091.76 × 10101.73 × 1092.00 × 1091.90 × 109
Std1.06 × 1064.29 × 1092.01 × 1081.41 × 1083.35 × 10103.56 × 1094.34 × 10109.78 × 1093.99 × 1091.85 × 10101.60 × 1091.58 × 10106.31 × 1086.96 × 1086.23 × 108
Rank183215913111214710465
F12Min2.28 × 1032.21 × 1089.52 × 1071.98 × 1071.06 × 10111.34 × 1091.05 × 10103.09 × 10106.82 × 10108.77 × 10101.44 × 1093.09 × 1092.19 × 1086.82 × 1084.67 × 108
Mean2.17 × 1044.09 × 1092.92 × 1085.67 × 1072.11 × 10117.57 × 1094.28 × 10107.68 × 10108.05 × 10101.48 × 10112.85 × 1093.12 × 10106.69 × 1081.70 × 1091.15 × 109
Std2.48 × 1044.23 × 1091.01 × 1083.93 × 1076.12 × 10104.22 × 1094.57 × 10102.53 × 10107.74 × 1093.55 × 10109.45 × 1084.77 × 10102.87 × 1089.45 × 1085.15 × 108
Rank183215911121314710465
F13Min5.85 × 1054.95 × 1053.14 × 1068.99 × 1052.73 × 1071.17 × 1064.68 × 1061.43 × 1075.45 × 1076.13 × 1062.80 × 1062.57 × 1061.25 × 1064.90 × 1061.49 × 106
Mean3.09 × 1067.65 × 1062.24 × 1076.77 × 1067.48 × 1074.75 × 1062.59 × 1084.87 × 1071.04 × 1083.28 × 1071.84 × 1071.42 × 1079.83 × 1061.63 × 1079.58 × 106
Std2.63 × 1065.03 × 1069.62 × 1063.17 × 1063.82 × 1072.43 × 1061.95 × 1082.51 × 1072.36 × 1072.23 × 1079.59 × 1068.47 × 1065.46 × 1067.61 × 1064.19 × 106
Rank141031321512141197685
F14Min7.14 × 1032.89 × 1062.01 × 1078.75 × 1052.52 × 10103.71 × 1076.33 × 1082.40 × 1091.61 × 10107.37 × 1091.80 × 1087.04 × 1073.44 × 1072.89 × 1072.30 × 107
Mean1.53 × 1051.69 × 1084.39 × 1075.41 × 1063.92 × 10107.92 × 1081.04 × 10106.87 × 1092.06 × 10102.08 × 10103.62 × 1081.97 × 1097.38 × 1071.61 × 1089.70 × 107
Std1.51 × 1053.20 × 1081.52 × 1075.49 × 1069.55 × 1099.27 × 1081.14 × 10102.09 × 1092.55 × 1097.78 × 1091.04 × 1083.33 × 1092.93 × 1078.56 × 1074.06 × 107
Rank173215912111314810465
F15Min1.54 × 1037.54 × 1032.94 × 1045.46 × 1041.69 × 1099.96 × 1046.41 × 1051.72 × 1071.35 × 1083.64 × 1083.00 × 1051.35 × 1057.63 × 1049.25 × 1047.47 × 104
Mean6.80 × 1035.68 × 1062.36 × 1051.89 × 1056.43 × 1091.12 × 1071.34 × 1072.07 × 1082.89 × 1082.18 × 1091.21 × 1061.79 × 1081.29 × 1058.99 × 1052.98 × 105
Std5.14 × 1031.58 × 1072.16 × 1051.64 × 1053.95 × 1092.12 × 1071.73 × 1071.74 × 1087.90 × 1071.41 × 1091.18 × 1066.25 × 1085.89 × 1045.96 × 1052.41 × 105
Rank184315910121314711265
F16Min2.66 × 1031.52 × 1046.91 × 1039.66 × 1044.39 × 10118.64 × 1043.42 × 10115.70 × 10104.76 × 10123.98 × 10111.93 × 1081.79 × 1051.07 × 1052.31 × 1051.26 × 105
Mean3.94 × 1037.23 × 1097.29 × 1061.47 × 1058.93 × 10143.26 × 1074.14 × 10151.05 × 10135.74 × 10133.45 × 10136.48 × 10101.01 × 1093.61 × 1063.59 × 1083.61 × 105
Std7.76 × 1023.38 × 10102.81 × 1073.24 × 1042.82 × 10151.63 × 1089.79 × 10151.72 × 10133.79 × 10133.47 × 10131.14 × 10113.66 × 1091.71 × 1078.33 × 1085.74 × 105
Rank195214615111312108473
F17Min6.85 × 1051.91 × 1066.19 × 1062.55 × 1062.49 × 1069.50 × 1053.23 × 1063.84 × 1061.30 × 1075.01 × 1067.46 × 1061.42 × 1062.53 × 1062.24 × 1062.84 × 106
Mean4.82 × 1061.41 × 1073.57 × 1079.07 × 1067.30 × 1074.98 × 1063.89 × 1086.16 × 1075.53 × 1074.24 × 1073.48 × 1073.43 × 1071.11 × 1071.89 × 1071.04 × 107
Std3.63 × 1069.78 × 1062.52 × 1076.16 × 1068.13 × 1073.69 × 1064.48 × 1084.92 × 1071.98 × 1078.44 × 1072.16 × 1071.08 × 1088.92 × 1061.56 × 1076.33 × 106
Rank161031421513121198574
F18Min2.10 × 1031.55 × 1052.11 × 1073.73 × 1051.25 × 10137.18 × 1093.01 × 10105.90 × 10105.55 × 10103.15 × 10113.08 × 10109.37 × 1098.97 × 1094.61 × 1083.76 × 108
Mean8.39 × 1031.03 × 1091.25 × 1087.47 × 1051.02 × 10154.26 × 10101.06 × 10143.72 × 10129.66 × 10101.33 × 10149.38 × 10103.42 × 10123.17 × 10101.61 × 10103.03 × 109
Std6.64 × 1031.28 × 1091.41 × 1085.87 × 1051.50 × 10157.58 × 10105.43 × 10149.98 × 10122.63 × 10102.70 × 10146.03 × 10101.60 × 10131.34 × 10102.56 × 10102.14 × 109
Rank143215813121014911765
F19Min2.04 × 1033.35 × 1032.26 × 1034.02 × 1031.06 × 1043.77 × 1032.71 × 1042.14 × 1041.24 × 1042.61 × 1041.30 × 1046.67 × 1037.99 × 1037.02 × 1035.00 × 103
Mean2.48 × 1031.14 × 1042.87 × 1036.29 × 1032.22 × 1045.47 × 1034.42 × 1042.82 × 1041.45 × 1043.66 × 1042.19 × 1041.25 × 1041.27 × 1049.45 × 1036.27 × 103
Std3.81 × 1024.28 × 1033.43 × 1021.12 × 1036.91 × 1031.11 × 1035.65 × 1034.27 × 1031.00 × 1035.15 × 1036.27 × 1033.56 × 1032.64 × 1031.68 × 1039.96 × 102
Rank172512315131014118964
F20Min2.10 × 1031.93 × 1044.64 × 1034.18 × 1031.78 × 1052.44 × 1041.36 × 1059.25 × 1041.84 × 1051.87 × 1051.04 × 1046.43 × 1042.14 × 1049.06 × 1031.03 × 104
Mean2.66 × 1034.77 × 1045.33 × 1034.95 × 1032.97 × 1053.66 × 1041.85 × 1051.14 × 1052.28 × 1052.13 × 1053.32 × 1041.04 × 1053.15 × 1041.65 × 1041.50 × 104
Func. | Metric | ImWOA | PSO | BBO | SMA | DE | GWO | SSA | HHO | ABC | WOA | E-WOA | IWOA | IWOSSA | RAV-WOA | WOAAD
F20 | Std | 1.25 × 10^2 | 1.47 × 10^4 | 3.28 × 10^2 | 3.62 × 10^2 | 4.83 × 10^4 | 7.91 × 10^3 | 2.39 × 10^4 | 1.11 × 10^4 | 1.49 × 10^4 | 1.44 × 10^4 | 9.59 × 10^3 | 3.46 × 10^4 | 4.57 × 10^3 | 3.30 × 10^3 | 2.76 × 10^3
F20 | Rank | 1 | 9 | 3 | 2 | 15 | 8 | 12 | 11 | 14 | 13 | 7 | 10 | 6 | 5 | 4
F21 | Min | 2.20 × 10^3 | 2.89 × 10^3 | 2.33 × 10^3 | 2.34 × 10^3 | 7.39 × 10^3 | 2.54 × 10^3 | 2.09 × 10^4 | 1.69 × 10^4 | 3.64 × 10^3 | 2.05 × 10^4 | 4.39 × 10^3 | 3.24 × 10^3 | 2.91 × 10^3 | 2.67 × 10^3 | 2.51 × 10^3
F21 | Mean | 2.79 × 10^3 | 4.69 × 10^3 | 2.36 × 10^3 | 4.34 × 10^3 | 1.46 × 10^4 | 2.71 × 10^3 | 2.71 × 10^4 | 2.09 × 10^4 | 4.40 × 10^3 | 2.81 × 10^4 | 1.89 × 10^4 | 1.44 × 10^4 | 1.21 × 10^4 | 6.67 × 10^3 | 2.59 × 10^3
F21 | Std | 2.57 × 10^3 | 2.08 × 10^3 | 7.91 × 10^0 | 4.98 × 10^3 | 3.18 × 10^3 | 1.04 × 10^2 | 3.36 × 10^3 | 1.69 × 10^3 | 1.03 × 10^3 | 3.31 × 10^3 | 6.88 × 10^3 | 8.45 × 10^3 | 6.98 × 10^3 | 5.24 × 10^3 | 4.82 × 10^2
F21 | Rank | 4 | 7 | 1 | 5 | 11 | 3 | 14 | 13 | 6 | 15 | 12 | 10 | 9 | 8 | 2
F22 | Min | 2.30 × 10^3 | 2.25 × 10^4 | 4.58 × 10^3 | 4.38 × 10^3 | 1.25 × 10^5 | 1.97 × 10^4 | 1.07 × 10^5 | 6.67 × 10^4 | 6.55 × 10^4 | 1.16 × 10^5 | 1.99 × 10^4 | 3.22 × 10^4 | 2.82 × 10^4 | 1.22 × 10^4 | 1.08 × 10^4
F22 | Mean | 2.49 × 10^3 | 3.85 × 10^4 | 7.59 × 10^3 | 5.45 × 10^3 | 1.73 × 10^5 | 3.21 × 10^4 | 1.18 × 10^5 | 9.11 × 10^4 | 8.35 × 10^4 | 1.21 × 10^5 | 3.13 × 10^4 | 4.73 × 10^4 | 4.63 × 10^4 | 2.10 × 10^4 | 1.68 × 10^4
F22 | Std | 1.81 × 10^2 | 1.66 × 10^4 | 1.30 × 10^3 | 5.73 × 10^2 | 2.31 × 10^4 | 6.98 × 10^3 | 4.41 × 10^3 | 7.49 × 10^3 | 7.49 × 10^3 | 3.38 × 10^3 | 1.61 × 10^4 | 9.81 × 10^3 | 1.43 × 10^4 | 1.33 × 10^4 | 5.23 × 10^3
F22 | Rank | 1 | 8 | 3 | 2 | 15 | 7 | 13 | 12 | 11 | 14 | 6 | 10 | 9 | 5 | 4
F23 | Min | 2.50 × 10^3 | 2.98 × 10^4 | 5.76 × 10^3 | 5.28 × 10^3 | 1.32 × 10^5 | 3.01 × 10^4 | 1.04 × 10^5 | 9.39 × 10^4 | 9.15 × 10^4 | 1.44 × 10^5 | 3.38 × 10^4 | 3.77 × 10^4 | 3.39 × 10^4 | 2.10 × 10^4 | 1.83 × 10^4
F23 | Mean | 2.59 × 10^3 | 5.87 × 10^4 | 1.14 × 10^4 | 7.58 × 10^3 | 2.35 × 10^5 | 4.56 × 10^4 | 1.49 × 10^5 | 1.15 × 10^5 | 1.06 × 10^5 | 1.60 × 10^5 | 5.66 × 10^4 | 6.92 × 10^4 | 5.94 × 10^4 | 3.40 × 10^4 | 2.46 × 10^4
F23 | Std | 2.13 × 10^2 | 1.67 × 10^4 | 1.78 × 10^3 | 1.08 × 10^3 | 3.57 × 10^4 | 7.39 × 10^3 | 1.49 × 10^4 | 7.57 × 10^3 | 7.61 × 10^3 | 7.79 × 10^3 | 1.92 × 10^4 | 2.75 × 10^4 | 1.60 × 10^4 | 1.06 × 10^4 | 3.80 × 10^3
F23 | Rank | 1 | 8 | 3 | 2 | 15 | 6 | 13 | 12 | 11 | 14 | 7 | 10 | 9 | 5 | 4
F24 | Min | 3.28 × 10^3 | 5.15 × 10^3 | 3.65 × 10^3 | 3.47 × 10^3 | 2.37 × 10^4 | 4.41 × 10^3 | 8.79 × 10^3 | 9.45 × 10^3 | 2.08 × 10^4 | 1.23 × 10^4 | 5.17 × 10^3 | 6.19 × 10^3 | 4.69 × 10^3 | 4.61 × 10^3 | 4.33 × 10^3
F24 | Mean | 3.31 × 10^3 | 7.32 × 10^3 | 3.93 × 10^3 | 3.65 × 10^3 | 3.65 × 10^4 | 5.60 × 10^3 | 1.67 × 10^4 | 1.22 × 10^4 | 2.78 × 10^4 | 1.90 × 10^4 | 6.22 × 10^3 | 9.22 × 10^3 | 5.73 × 10^3 | 5.13 × 10^3 | 4.83 × 10^3
F24 | Std | 2.29 × 10^1 | 1.68 × 10^3 | 1.77 × 10^2 | 9.45 × 10^1 | 6.95 × 10^3 | 5.10 × 10^2 | 7.92 × 10^3 | 1.50 × 10^3 | 2.79 × 10^3 | 2.78 × 10^3 | 5.61 × 10^2 | 2.03 × 10^3 | 7.05 × 10^2 | 3.17 × 10^2 | 3.10 × 10^2
F24 | Rank | 1 | 9 | 3 | 2 | 15 | 6 | 12 | 11 | 14 | 13 | 8 | 10 | 7 | 5 | 4
F25 | Min | 5.83 × 10^3 | 1.93 × 10^4 | 6.10 × 10^3 | 6.17 × 10^3 | 4.77 × 10^4 | 7.86 × 10^3 | 1.02 × 10^5 | 3.94 × 10^4 | 8.39 × 10^3 | 3.24 × 10^4 | 1.04 × 10^4 | 9.76 × 10^3 | 1.50 × 10^4 | 6.53 × 10^3 | 6.83 × 10^3
F25 | Mean | 5.97 × 10^3 | 3.45 × 10^4 | 6.30 × 10^3 | 6.52 × 10^3 | 1.02 × 10^5 | 9.70 × 10^3 | 2.52 × 10^5 | 1.08 × 10^5 | 9.13 × 10^3 | 8.44 × 10^4 | 2.13 × 10^4 | 3.07 × 10^4 | 2.74 × 10^4 | 7.15 × 10^3 | 7.47 × 10^3
F25 | Std | 5.21 × 10^1 | 1.21 × 10^4 | 9.86 × 10^1 | 1.51 × 10^2 | 3.16 × 10^4 | 1.05 × 10^3 | 6.74 × 10^4 | 4.37 × 10^4 | 4.05 × 10^2 | 5.01 × 10^4 | 7.03 × 10^3 | 1.71 × 10^4 | 9.09 × 10^3 | 4.06 × 10^2 | 3.51 × 10^2
F25 | Rank | 1 | 11 | 2 | 3 | 13 | 7 | 15 | 14 | 6 | 12 | 8 | 10 | 9 | 4 | 5
F26 | Min | 3.31 × 10^3 | 4.36 × 10^3 | 3.38 × 10^3 | 3.38 × 10^3 | 5.54 × 10^3 | 3.57 × 10^3 | 7.67 × 10^3 | 5.70 × 10^3 | 5.01 × 10^3 | 6.52 × 10^3 | 4.36 × 10^3 | 3.59 × 10^3 | 3.73 × 10^3 | 3.77 × 10^3 | 3.56 × 10^3
F26 | Mean | 3.44 × 10^3 | 5.30 × 10^3 | 3.55 × 10^3 | 3.59 × 10^3 | 6.94 × 10^3 | 3.84 × 10^3 | 1.01 × 10^4 | 8.74 × 10^3 | 5.30 × 10^3 | 9.38 × 10^3 | 5.43 × 10^3 | 4.14 × 10^3 | 4.31 × 10^3 | 4.06 × 10^3 | 3.88 × 10^3
F26 | Std | 9.82 × 10^1 | 5.01 × 10^2 | 1.05 × 10^2 | 1.04 × 10^2 | 1.06 × 10^3 | 1.41 × 10^2 | 1.11 × 10^3 | 1.52 × 10^3 | 1.78 × 10^2 | 1.23 × 10^3 | 5.47 × 10^2 | 3.57 × 10^2 | 3.74 × 10^2 | 3.17 × 10^2 | 1.41 × 10^2
F26 | Rank | 1 | 10 | 2 | 3 | 12 | 4 | 15 | 13 | 9 | 14 | 11 | 7 | 8 | 6 | 5
F27 | Min | 2.70 × 10^3 | 4.19 × 10^3 | 3.28 × 10^3 | 3.21 × 10^3 | 8.43 × 10^3 | 3.68 × 10^3 | 7.18 × 10^3 | 7.62 × 10^3 | 8.15 × 10^3 | 9.09 × 10^3 | 3.89 × 10^3 | 4.37 × 10^3 | 3.57 × 10^3 | 3.50 × 10^3 | 3.43 × 10^3
F27 | Mean | 2.72 × 10^3 | 6.15 × 10^3 | 3.33 × 10^3 | 3.28 × 10^3 | 1.64 × 10^4 | 4.09 × 10^3 | 1.86 × 10^4 | 9.50 × 10^3 | 9.12 × 10^3 | 1.32 × 10^4 | 4.69 × 10^3 | 6.49 × 10^3 | 4.04 × 10^3 | 3.66 × 10^3 | 3.73 × 10^3
F27 | Std | 9.62 × 10^0 | 1.40 × 10^3 | 2.43 × 10^1 | 3.10 × 10^1 | 4.31 × 10^3 | 2.15 × 10^2 | 6.90 × 10^3 | 9.31 × 10^2 | 5.97 × 10^2 | 2.11 × 10^3 | 4.12 × 10^2 | 1.44 × 10^3 | 2.97 × 10^2 | 9.54 × 10^1 | 1.59 × 10^2
F27 | Rank | 1 | 9 | 3 | 2 | 14 | 7 | 15 | 12 | 11 | 13 | 8 | 10 | 6 | 4 | 5
F28 | Min | 1.06 × 10^4 | 1.63 × 10^8 | 2.45 × 10^6 | 3.31 × 10^6 | 2.65 × 10^13 | 3.73 × 10^9 | 3.63 × 10^10 | 2.58 × 10^10 | 7.58 × 10^11 | 8.01 × 10^11 | 1.53 × 10^10 | 6.95 × 10^8 | 3.45 × 10^9 | 1.91 × 10^8 | 1.03 × 10^9
F28 | Mean | 1.98 × 10^5 | 4.47 × 10^9 | 1.81 × 10^10 | 7.54 × 10^7 | 2.11 × 10^15 | 1.16 × 10^10 | 7.82 × 10^13 | 3.55 × 10^12 | 8.75 × 10^12 | 4.30 × 10^14 | 7.78 × 10^10 | 2.98 × 10^13 | 1.61 × 10^10 | 4.48 × 10^10 | 3.21 × 10^9
F28 | Std | 2.06 × 10^5 | 5.13 × 10^9 | 5.95 × 10^10 | 1.25 × 10^8 | 2.73 × 10^15 | 6.13 × 10^9 | 2.70 × 10^14 | 4.89 × 10^12 | 7.20 × 10^12 | 8.98 × 10^14 | 6.56 × 10^10 | 1.35 × 10^14 | 7.38 × 10^9 | 1.73 × 10^11 | 2.41 × 10^9
F28 | Rank | 1 | 4 | 7 | 2 | 15 | 5 | 13 | 10 | 11 | 14 | 9 | 12 | 6 | 8 | 3
F29 | Min | 4.44 × 10^5 | 2.10 × 10^8 | 9.36 × 10^7 | 5.01 × 10^6 | 4.73 × 10^13 | 7.80 × 10^9 | 2.40 × 10^10 | 6.44 × 10^10 | 1.19 × 10^12 | 2.24 × 10^12 | 3.93 × 10^10 | 2.05 × 10^10 | 1.44 × 10^10 | 1.11 × 10^9 | 2.16 × 10^9
F29 | Mean | 9.96 × 10^5 | 5.61 × 10^9 | 3.55 × 10^8 | 5.60 × 10^7 | 1.47 × 10^15 | 2.39 × 10^10 | 4.05 × 10^13 | 1.73 × 10^12 | 4.20 × 10^12 | 1.52 × 10^14 | 1.37 × 10^11 | 3.73 × 10^14 | 3.51 × 10^10 | 1.17 × 10^10 | 7.37 × 10^9
F29 | Std | 4.32 × 10^5 | 6.72 × 10^9 | 2.37 × 10^8 | 4.76 × 10^7 | 1.83 × 10^15 | 1.25 × 10^10 | 1.70 × 10^14 | 4.64 × 10^12 | 2.10 × 10^12 | 1.85 × 10^14 | 9.89 × 10^10 | 1.99 × 10^15 | 1.26 × 10^10 | 7.04 × 10^9 | 4.84 × 10^9
F29 | Rank | 1 | 4 | 3 | 2 | 15 | 7 | 12 | 10 | 11 | 13 | 9 | 14 | 8 | 6 | 5
Total (rank-1 count) | | 26 | 0 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
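The per-function ranks and the final "Total" tally in the table above follow directly from the mean errors: on each function, algorithms are ranked by mean value (1 = best), and rank-1 finishes are counted across all functions. A minimal sketch of that bookkeeping is shown below; it is illustrative only, using a hypothetical two-function, four-algorithm excerpt rather than the full 15-algorithm data.

```python
from collections import Counter

def rank_algorithms(means):
    """Rank algorithms on one function by mean error: 1 = smallest mean."""
    ordered = sorted(means, key=means.get)
    return {alg: pos + 1 for pos, alg in enumerate(ordered)}

# Hypothetical excerpt of per-function mean errors (values echo the
# F21 and F22 rows above, restricted to four algorithms).
per_function_means = [
    {"ImWOA": 2.79e3, "BBO": 2.36e3, "GWO": 2.71e3, "WOAAD": 2.59e3},
    {"ImWOA": 2.49e3, "BBO": 7.59e3, "GWO": 3.21e4, "WOAAD": 1.68e4},
]

# Count how often each algorithm places first, as in the "Total" row.
rank1_tally = Counter()
for means in per_function_means:
    ranks = rank_algorithms(means)
    rank1_tally[min(ranks, key=ranks.get)] += 1
```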
Table 3. Wilcoxon rank-sum test (Dim = 30). Bold p-values indicate statistically significant differences (p < 0.05).
Dim = 30
Func. | PSO | BBO | SMA | DE | GWO | SSA | HHO | ABC | WOA | E-WOA | IWOA | IWOSSA | RAV-WOA | WOAAD
F1 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.41 × 10^−9 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F2 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F3 | 4.31 × 10^−8 | 2.23 × 10^−9 | 3.67 × 10^−3 | 3.02 × 10^−11 | 6.72 × 10^−10 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 6.69 × 10^−11 | 1.20 × 10^−10 | 1.19 × 10^−6 | 1.96 × 10^−10 | 1.41 × 10^−9
++++++++++++++
F4 | 3.02 × 10^−11 | 3.02 × 10^−11 | 6.70 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.50 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F5 | 3.56 × 10^−4 | 5.26 × 10^−4 | 3.02 × 10^−11 | 1.21 × 10^−10 | 5.01 × 10^−1 | 2.44 × 10^−9 | 3.47 × 10^−10 | 3.02 × 10^−11 | 6.70 × 10^−11 | 8.89 × 10^−10 | 7.60 × 10^−7 | 9.21 × 10^−5 | 1.02 × 10^−5 | 1.89 × 10^−4
++++=+++++++++
F6 | 6.70 × 10^−11 | 1.41 × 10^−9 | 3.08 × 10^−8 | 1.31 × 10^−8 | 3.02 × 10^−11 | 4.57 × 10^−9 | 3.02 × 10^−11 | 3.02 × 10^−11 | 6.70 × 10^−11 | 4.98 × 10^−11 | 1.78 × 10^−10 | 1.87 × 10^−7 | 2.87 × 10^−10 | 8.15 × 10^−11
++++++++++++++
F7 | 6.91 × 10^−4 | 1.68 × 10^−3 | 2.75 × 10^−3 | 4.74 × 10^−6 | 1.68 × 10^−4 | 6.74 × 10^−6 | 1.07 × 10^−7 | 3.34 × 10^−11 | 3.35 × 10^−8 | 1.43 × 10^−8 | 9.51 × 10^−6 | 2.05 × 10^−3 | 2.42 × 10^−2 | 8.15 × 10^−5
++++++++++++++
F8 | 1.09 × 10^−5 | 2.87 × 10^−10 | 2.96 × 10^−5 | 5.94 × 10^−1 | 1.78 × 10^−10 | 8.31 × 10^−3 | 1.76 × 10^−1 | 4.64 × 10^−3 | 3.85 × 10^−3 | 1.45 × 10^−1 | 2.01 × 10^−1 | 5.08 × 10^−3 | 5.19 × 10^−7 | 5.07 × 10^−10
==++==
F9 | 2.05 × 10^−3 | 6.53 × 10^−8 | 1.44 × 10^−3 | 3.11 × 10^−1 | 5.30 × 10^−1 | 1.25 × 10^−7 | 4.64 × 10^−5 | 3.02 × 10^−11 | 1.29 × 10^−6 | 5.87 × 10^−4 | 1.02 × 10^−1 | 1.05 × 10^−1 | 9.33 × 10^−2 | 5.49 × 10^−1
==+++++====
F10 | 6.41 × 10^−1 | 8.31 × 10^−3 | 2.68 × 10^−4 | 3.02 × 10^−11 | 3.16 × 10^−10 | 1.78 × 10^−10 | 6.70 × 10^−11 | 3.02 × 10^−11 | 9.76 × 10^−10 | 3.02 × 10^−11 | 1.86 × 10^−9 | 4.08 × 10^−11 | 6.35 × 10^−2 | 9.26 × 10^−9
=+++++++++++=+
F11 | 9.82 × 10^−1 | 3.02 × 10^−11 | 9.92 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
=+++++++++++++
F12 | 3.09 × 10^−6 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F13 | 2.42 × 10^−2 | 1.53 × 10^−5 | 7.96 × 10^−1 | 2.23 × 10^−9 | 4.92 × 10^−1 | 9.76 × 10^−10 | 1.55 × 10^−9 | 2.19 × 10^−8 | 1.24 × 10^−3 | 7.04 × 10^−7 | 7.96 × 10^−3 | 1.30 × 10^−3 | 1.96 × 10^−1 | 3.87 × 10^−1
+=+=+++++++==
F14 | 1.22 × 10^−1 | 3.02 × 10^−11 | 3.34 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.69 × 10^−11 | 3.02 × 10^−11 | 5.49 × 10^−11 | 3.02 × 10^−11
=+++++++++++++
F15 | 2.12 × 10^−1 | 3.02 × 10^−11 | 1.29 × 10^−9 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.34 × 10^−11 | 3.02 × 10^−11 | 4.50 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.69 × 10^−11 | 3.02 × 10^−11 | 3.34 × 10^−11 | 3.02 × 10^−11
=+++++++++++++
F16 | 2.87 × 10^−10 | 8.99 × 10^−11 | 3.16 × 10^−10 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.69 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F17 | 2.75 × 10^−3 | 7.38 × 10^−10 | 3.32 × 10^−6 | 6.52 × 10^−9 | 1.75 × 10^−5 | 1.37 × 10^−3 | 3.65 × 10^−8 | 8.15 × 10^−11 | 9.03 × 10^−4 | 4.18 × 10^−9 | 5.61 × 10^−5 | 5.07 × 10^−10 | 3.27 × 10^−2 | 1.16 × 10^−7
++++++++++++++
F18 | 6.00 × 10^−1 | 3.02 × 10^−11 | 8.84 × 10^−7 | 3.02 × 10^−11 | 9.76 × 10^−10 | 4.50 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 5.49 × 10^−11 | 8.89 × 10^−10 | 1.46 × 10^−10 | 5.49 × 10^−11
=+++++++++++++
F19 | 6.53 × 10^−8 | 4.55 × 10^−1 | 4.46 × 10^−1 | 3.02 × 10^−11 | 1.00 × 10^−3 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 5.07 × 10^−10 | 3.02 × 10^−11 | 4.57 × 10^−9 | 2.38 × 10^−3
+==+++++++++++
F20 | 4.11 × 10^−7 | 2.37 × 10^−10 | 2.97 × 10^−1 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 2.61 × 10^−10 | 3.02 × 10^−11 | 1.56 × 10^−2 | 2.15 × 10^−6 | 7.38 × 10^−10
++=+++++++++++
F21 | 1.70 × 10^−8 | 1.71 × 10^−1 | 5.01 × 10^−2 | 3.02 × 10^−11 | 2.89 × 10^−3 | 3.02 × 10^−11 | 3.02 × 10^−11 | 9.76 × 10^−10 | 3.02 × 10^−11 | 3.69 × 10^−11 | 3.20 × 10^−9 | 9.53 × 10^−7 | 4.03 × 10^−3 | 5.32 × 10^−3
+==+++++++++++
F22 | 1.09 × 10^−10 | 3.02 × 10^−11 | 1.24 × 10^−3 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.18 × 10^−3 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F23 | 3.96 × 10^−8 | 1.31 × 10^−8 | 6.74 × 10^−6 | 3.02 × 10^−11 | 8.10 × 10^−10 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 7.12 × 10^−9 | 5.49 × 10^−11 | 6.28 × 10^−6 | 1.56 × 10^−8 | 2.60 × 10^−8
++++++++++++++
F24 | 2.15 × 10^−10 | 1.33 × 10^−10 | 4.55 × 10^−1 | 3.02 × 10^−11 | 3.34 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.08 × 10^−11 | 4.50 × 10^−11 | 2.61 × 10^−10 | 4.20 × 10^−10
++=+++++++++++
F25 | 5.57 × 10^−10 | 3.02 × 10^−11 | 3.69 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F26 | 7.12 × 10^−9 | 2.51 × 10^−2 | 3.18 × 10^−3 | 3.02 × 10^−11 | 1.56 × 10^−2 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.06 × 10^−2 | 3.02 × 10^−11 | 8.48 × 10^−9 | 2.53 × 10^−4 | 1.43 × 10^−5 | 1.91 × 10^−2 | 7.96 × 10^−3
++++++++++
F27 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F28 | 1.20 × 10^−8 | 5.49 × 10^−11 | 1.86 × 10^−6 | 3.02 × 10^−11 | 1.17 × 10^−9 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 9.92 × 10^−11 | 8.15 × 10^−11 | 2.61 × 10^−10 | 2.44 × 10^−9 | 3.82 × 10^−9
++++++++++++++
F29 | 6.05 × 10^−7 | 3.02 × 10^−11 | 2.37 × 10^−10 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 2.87 × 10^−10 | 3.02 × 10^−11 | 3.69 × 10^−11 | 3.02 × 10^−11
++++++++++++++
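The +/−/= rows above summarize pairwise Wilcoxon rank-sum tests at α = 0.05 between ImWOA and each competitor. A minimal stdlib sketch of that decision rule is given below; it is not the authors' code, and it assumes the usual conventions: the large-sample normal approximation of the rank-sum statistic, no tied values, and '+' marking a statistically significant win for ImWOA (lower mean error). In practice a statistics package would be used instead.

```python
from statistics import NormalDist, mean

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation.
    Assumes no tied values across the two samples (simplifying assumption)."""
    n1, n2 = len(a), len(b)
    pooled = sorted(a + b)
    rank_of = {v: i + 1 for i, v in enumerate(pooled)}
    w = sum(rank_of[v] for v in a)            # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2               # mean of w under H0
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (w - mu) / sigma
    return 2 * (1 - NormalDist().cdf(abs(z)))

def mark(imwoa_runs, rival_runs, alpha=0.05):
    """Return '+', '-' or '=': '+' when the first sample is significantly
    better (lower mean error), '-' when significantly worse, '=' otherwise."""
    p = ranksum_p(imwoa_runs, rival_runs)
    if p >= alpha:
        return "="
    return "+" if mean(imwoa_runs) < mean(rival_runs) else "-"
```

With 30 independent runs per algorithm, as is typical for CEC benchmarking, a clear separation of the two samples drives the p-value down to the ~10^−11 magnitudes seen throughout the tables.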
Table 4. Wilcoxon rank-sum test (Dim = 100). Bold p-values indicate statistically significant differences (p < 0.05).
Dim = 100
Func. | PSO | BBO | SMA | DE | GWO | SSA | HHO | ABC | WOA | E-WOA | IWOA | IWOSSA | RAV-WOA | WOAAD
F1 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F2 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.87 × 10^−5 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F3 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F4 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F5 | 2.37 × 10^−10 | 2.03 × 10^−9 | 3.02 × 10^−11 | 3.02 × 10^−11 | 2.20 × 10^−7 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 6.70 × 10^−11 | 3.82 × 10^−10 | 3.69 × 10^−11
++++++++++++++
F6 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F7 | 3.34 × 10^−11 | 1.61 × 10^−10 | 3.69 × 10^−11 | 3.02 × 10^−11 | 1.96 × 10^−10 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.09 × 10^−10 | 1.46 × 10^−10 | 3.02 × 10^−11
++++++++++++++
F8 | 3.87 × 10^−1 | 2.49 × 10^−6 | 3.63 × 10^−1 | 3.02 × 10^−11 | 2.15 × 10^−10 | 1.86 × 10^−9 | 1.55 × 10^−9 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.98 × 10^−11 | 1.61 × 10^−10 | 9.21 × 10^−5 | 6.41 × 10^−1 | 6.97 × 10^−3
==++++++++=
F9 | 1.78 × 10^−10 | 3.79 × 10^−1 | 3.83 × 10^−6 | 3.02 × 10^−11 | 4.44 × 10^−7 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 2.87 × 10^−10 | 6.07 × 10^−11 | 3.02 × 10^−11
+=++++++++++++
F10 | 3.02 × 10^−11 | 8.99 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F12 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F13 | 2.60 × 10^−5 | 1.46 × 10^−10 | 1.02 × 10^−5 | 3.02 × 10^−11 | 2.05 × 10^−3 | 7.39 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 6.07 × 10^−11 | 5.57 × 10^−10 | 2.44 × 10^−9 | 1.47 × 10^−7 | 3.82 × 10^−10 | 3.96 × 10^−8
++++++++++++++
F14 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F15 | 3.16 × 10^−10 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F16 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F17 | 3.02 × 10^−11 | 3.82 × 10^−10 | 5.87 × 10^−4 | 5.97 × 10^−9 | 7.51 × 10^−1 | 1.69 × 10^−9 | 1.46 × 10^−10 | 3.34 × 10^−11 | 1.20 × 10^−8 | 2.37 × 10^−10 | 1.78 × 10^−4 | 2.84 × 10^−4 | 1.39 × 10^−6 | 4.94 × 10^−5
++++=+++++++++
F18 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F19 | 3.02 × 10^−11 | 5.87 × 10^−4 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F20 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F21 | 6.72 × 10^−10 | 3.99 × 10^−4 | 7.66 × 10^−5 | 2.15 × 10^−10 | 9.06 × 10^−8 | 3.02 × 10^−11 | 3.02 × 10^−11 | 5.57 × 10^−10 | 3.02 × 10^−11 | 8.99 × 10^−11 | 9.92 × 10^−11 | 1.96 × 10^−10 | 7.12 × 10^−9 | 1.07 × 10^−7
++++++++++++++
F22 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F23 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F24 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F25 | 3.02 × 10^−11 | 3.34 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F26 | 3.02 × 10^−11 | 2.43 × 10^−5 | 1.39 × 10^−6 | 3.02 × 10^−11 | 9.92 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 5.49 × 10^−11 | 3.34 × 10^−11 | 3.02 × 10^−11 | 6.07 × 10^−11
++++++++++++++
F27 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F28 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
F29 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
++++++++++++++
Table 5. ImWOA’s sensitivity to parameter changes.
Dim = 100
Func. | Index | w1:w2 = 1:1 | w1:w2 = 2:1 | w1:w2 = 3:1
F1 | Mean | 4.18 × 10^5 | 1.37 × 10^4 | 7.88 × 10^6
F1 | Rank | 2 | 1 | 3
F2 | Mean | 9.02 × 10^5 | 3.70 × 10^3 | 6.12 × 10^4
F2 | Rank | 3 | 1 | 2
F3 | Mean | 3.33 × 10^4 | 5.48 × 10^2 | 1.25 × 10^3
F3 | Rank | 3 | 1 | 2
F4 | Mean | 3.24 × 10^3 | 8.62 × 10^2 | 5.14 × 10^4
F4 | Rank | 2 | 1 | 3
F5 | Mean | 2.13 × 10^2 | 5.00 × 10^2 | 9.45 × 10^3
F5 | Rank | 1 | 2 | 3
F6 | Mean | 3.22 × 10^4 | 1.88 × 10^3 | 5.64 × 10^5
F6 | Rank | 2 | 1 | 3
F7 | Mean | 3.25 × 10^4 | 7.00 × 10^2 | 9.52 × 10^3
F7 | Rank | 3 | 1 | 2
F8 | Mean | 2.56 × 10^3 | 9.01 × 10^2 | 7.33 × 10^4
F8 | Rank | 2 | 1 | 3
F9 | Mean | 3.25 × 10^5 | 1.80 × 10^4 | 9.22 × 10^7
F9 | Rank | 2 | 1 | 3
F10 | Mean | 6.66 × 10^6 | 2.54 × 10^4 | 1.22 × 10^3
F10 | Rank | 3 | 2 | 1
F11 | Mean | 3.56 × 10^7 | 2.15 × 10^6 | 4.98 × 10^8
F11 | Rank | 2 | 1 | 3
F12 | Mean | 3.45 × 10^6 | 2.17 × 10^4 | 6.78 × 10^5
F12 | Rank | 3 | 1 | 2
F13 | Mean | 4.65 × 10^7 | 3.09 × 10^6 | 7.42 × 10^8
F13 | Rank | 2 | 1 | 3
F14 | Mean | 6.82 × 10^7 | 1.53 × 10^5 | 2.81 × 10^6
F14 | Rank | 3 | 1 | 2
F15 | Mean | 2.56 × 10^5 | 6.80 × 10^3 | 8.61 × 10^4
F15 | Rank | 3 | 1 | 2
F16 | Mean | 2.56 × 10^4 | 3.94 × 10^3 | 7.42 × 10^5
F16 | Rank | 2 | 1 | 3
F17 | Mean | 1.25 × 10^7 | 4.82 × 10^6 | 6.42 × 10^8
F17 | Rank | 2 | 1 | 3
F18 | Mean | 9.42 × 10^5 | 8.39 × 10^3 | 2.16 × 10^4
F18 | Rank | 3 | 1 | 2
F19 | Mean | 1.23 × 10^3 | 2.48 × 10^3 | 3.78 × 10^5
F19 | Rank | 1 | 2 | 3
F20 | Mean | 5.32 × 10^4 | 2.66 × 10^3 | 6.48 × 10^5
F20 | Rank | 2 | 1 | 3
F21 | Mean | 4.25 × 10^5 | 2.79 × 10^3 | 6.94 × 10^4
F21 | Rank | 3 | 1 | 2
F22 | Mean | 8.53 × 10^5 | 2.49 × 10^3 | 5.32 × 10^4
F22 | Rank | 3 | 1 | 2
F23 | Mean | 6.32 × 10^4 | 2.59 × 10^3 | 6.31 × 10^5
F23 | Rank | 2 | 1 | 3
F24 | Mean | 2.68 × 10^2 | 3.31 × 10^3 | 7.42 × 10^4
F24 | Rank | 1 | 2 | 3
F25 | Mean | 9.72 × 10^4 | 5.97 × 10^3 | 9.75 × 10^2
F25 | Rank | 3 | 2 | 1
F26 | Mean | 6.32 × 10^4 | 3.44 × 10^3 | 5.31 × 10^5
F26 | Rank | 2 | 1 | 3
F27 | Mean | 5.32 × 10^4 | 2.72 × 10^3 | 8.72 × 10^5
F27 | Rank | 2 | 1 | 3
F28 | Mean | 5.33 × 10^4 | 1.98 × 10^5 | 8.62 × 10^6
F28 | Rank | 1 | 2 | 3
F29 | Mean | 9.96 × 10^6 | 9.96 × 10^5 | 9.96 × 10^4
F29 | Rank | 3 | 2 | 1
Average Rank | | 2.276 | 1.241 | 2.483
Combined Rank | | 2 | 1 | 3
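The "Average Rank" row is simply each weight ratio's rank averaged over the 29 functions, and the "Combined Rank" orders the ratios by that average (lower is better). The aggregation can be sketched as follows; the three-function excerpt is a hypothetical stand-in for the full table.

```python
def average_ranks(rank_rows):
    """Average each setting's rank over all functions; lower is better."""
    n = len(rank_rows)
    return {setting: sum(row[setting] for row in rank_rows) / n
            for setting in rank_rows[0]}

# Hypothetical three-function excerpt of the per-function ranks above.
rows = [
    {"1:1": 2, "2:1": 1, "3:1": 3},
    {"1:1": 3, "2:1": 1, "3:1": 2},
    {"1:1": 1, "2:1": 2, "3:1": 3},
]
avg = average_ranks(rows)
best = min(avg, key=avg.get)   # the ratio with the lowest average rank
```

Applied to all 29 rows of Table 5, this yields the reported averages (2.276, 1.241, 2.483), which is why w1:w2 = 2:1 is ranked first overall.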
Table 6. Comparative performance statistics of multi-run independent experiments.
Algorithm | ImWOA | WOA | E-WOA | IWOA | IWOSSA | RAV-WOA | WOAAD
Mean | 147,041.24 | 241,639.96 | 249,941.32 | 239,785.64 | 237,602.55 | 167,897.84 | 251,044.01
Rank | 1 | 5 | 6 | 4 | 3 | 2 | 7
Zhou, Y.; Hao, Z. Enhanced Whale Optimization Algorithm with Novel Strategies for 3D TSP Problem. Biomimetics 2025, 10, 560. https://doi.org/10.3390/biomimetics10090560
