Article

An Enhanced Projection-Iterative-Methods-Based Optimizer for Complex Constrained Engineering Design Problems

1 Experimental Training Teaching Management Department, West Anhui University, Yu'an District, Lu'an 237012, China
2 School of Electrical and Optoelectronic Engineering, West Anhui University, Yu'an District, Lu'an 237012, China
3 School of Electronic Information and Artificial Intelligence, West Anhui University, Yu'an District, Lu'an 237012, China
4 School of Automotive Technology, Anhui Vocational College of Defense Technology, Yu'an District, Lu'an 237011, China
* Author to whom correspondence should be addressed.
Computation 2026, 14(2), 45; https://doi.org/10.3390/computation14020045
Submission received: 11 January 2026 / Revised: 28 January 2026 / Accepted: 5 February 2026 / Published: 6 February 2026
(This article belongs to the Section Computational Engineering)

Abstract

This paper proposes an Enhanced Projection-Iterative-Methods-based Optimizer (EPIMO) to overcome the limitations of its predecessor, the Projection-Iterative-Methods-based Optimizer (PIMO), including deterministic parameter decay, insufficient diversity maintenance, and static exploration–exploitation balance. The enhancements incorporate three core strategies: (1) an adaptive decay strategy that introduces stochastic perturbations into the step-size evolution; (2) a mirror opposition-based learning strategy to actively inject structured population diversity; and (3) an adaptive adjustment mechanism for the Lévy flight parameter β to enable phase-sensitive optimization behavior. The effectiveness of EPIMO is validated through a multi-stage experimental framework. Systematic evaluations on the CEC 2017 and CEC 2022 benchmark suites, alongside four classical engineering optimization problems (Himmelblau function, step-cone pulley design, hydrostatic thrust bearing design, and three-bar truss design), demonstrate its comprehensive superiority. The Wilcoxon rank-sum test confirms statistically significant performance improvements over its predecessor (PIMO) and a range of state-of-the-art and classical algorithms. EPIMO exhibits exceptional performance in convergence accuracy, stability, robustness, and constraint-handling capability, establishing it as a highly reliable and efficient metaheuristic optimizer. This research contributes a systematic, adaptive enhancement framework for projection-based metaheuristics, which can be generalized to improve other swarm intelligence systems when facing complex, constrained, and high-dimensional engineering optimization tasks.

1. Introduction

Background and Motivations

The growing complexity of contemporary societal challenges has exposed fundamental limitations in traditional optimization methods that rely on linear assumptions and gradient information, particularly when addressing high-dimensional, non-convex, discontinuous, and large-scale constrained problems [1]. These shortcomings become especially pronounced in solving NP-hard, multi-objective, or computationally expensive “black-box” optimization problems where gradient information is unavailable [2].
In response, researchers and practitioners have turned to metaheuristic algorithms [3]. By emulating abstract principles derived from natural or social intelligent behaviors—such as evolution, swarm cooperation, and physical processes—these methods construct flexible high-level search strategies [4]. Their primary aim is to efficiently approximate solutions to complex, large-scale real-world optimization problems that are difficult to model precisely [5]. Rather than guaranteeing global optimality, metaheuristics strive to find high-quality feasible solutions within acceptable computational time by balancing global exploration and local exploitation across vast and intricate search spaces [6]. They effectively mitigate issues inherent in traditional approaches, including high computational complexity, model dependence, and susceptibility to local optima [7]. Demonstrating strong adaptability in handling high-dimensional, multi-objective, nonlinear, tightly constrained, and uncertain real-world problems [8], metaheuristics have become indispensable tools in fields such as engineering design and manufacturing, AI hyperparameter tuning, operations and logistics, financial decision-making, and bioinformatics, serving as key enablers for optimizing complex systems [9,10].
Over the past two decades, bio-inspired, self-organizing, and physics-based intelligent optimization algorithms have proliferated [11]. Early research centered on evolutionary frameworks, including Genetic Algorithms (GA) [12], Differential Evolution (DE) [13], and early Particle Swarm Optimization (PSO) models [14], which established foundational theories in population encoding, fitness-driven iterative evolution, and neighborhood search construction. Subsequent advances led to novel methods that simulate collective decision-making mechanisms, such as the Artificial Bee Colony (ABC) algorithm [15], Butterfly Optimization Algorithm [16], Firefly Algorithm [17], and Artificial Fish Swarm Algorithm [18], achieving progressive results in multi-objective optimization, feature selection, and complex system control. Concurrently, optimization techniques inspired by natural physical phenomena—including Gravitational Search Algorithm (GSA) [19], and Quantum-behaved Particle Swarm Optimization (QPSO) [20]—were pursued as a distinct research direction. These typically employ dynamic models to describe the evolution of search agents on energy landscapes or potential surfaces, offering specific advantages in large-scale global optimization [21]. Nevertheless, challenges such as degraded search capability and declining convergence rates remain prevalent in high-dimensional and complex search spaces [22].
A notable trend in recent research is the shift toward hybrid paradigms and adaptive control mechanisms. This shift is driven by the need to overcome the limitations of single-method algorithms in balancing exploration and exploitation, to adapt to increasingly complex real-world problems with high dimensionality and dynamic constraints, and to improve cross-domain generalization through nature-inspired fusion and self-adjusting strategies; the superior empirical performance of such hybrids in benchmarks and practical applications supports this direction [23]. Examples include hybridizing swarm intelligence with differential evolution, embedding simulated annealing or Markov chain strategies within PSO structures, introducing migration operators based on mixed-probability distributions, and applying variational inference or Bayesian methods for dynamic parameter adjustment [24,25]. These integrated approaches have partially alleviated the parameter sensitivity of conventional algorithms, enhancing the flexibility and generalization potential of the search process [26]. Furthermore, parameter-free design and self-adaptive strategies are gaining prominence, aiming to reduce reliance on manual tuning and better address the implications of the “No Free Lunch” theorem [27].
The Projection-Iterative-Methods-based Optimizer (PIMO) [28] is a novel optimization algorithm that integrates mathematical projection theory with a metaheuristic framework. Its search mechanism is restructured around four core operators: Residual-Guided Projection (RGP), Dual Random Projection (DRP), Weighted Random Projection Update (WRPU), and Lévy Flight-Guided Projection (LFGP). These operators collectively enable an effective balance between global exploration and local exploitation. In practical tasks such as engineering optimization and feature selection, PIMO demonstrates excellent performance and exhibits favorable scalability. However, despite its competitive performance, PIMO suffers from fundamental limitations: rigid parameter update mechanisms, passive diversity maintenance, a static balance between exploration and exploitation, excessive reliance on random factors, reduced computational efficiency in high dimensions, limited capability in handling complex constraints, and dependence on empirical parameter tuning [29]. These shortcomings collectively restrict its stability and robustness when tackling complex, high-dimensional, or tightly constrained optimization problems. Consequently, further improvements are necessary to enhance its stability and adaptability in dynamic, high-dimensional, and tightly constrained environments.
While the original PIMO algorithm has demonstrated competitive performance on various benchmarks, its limitations—particularly in parameter rigidity, passive diversity maintenance, and static phase management—constrain its adaptability in complex real-world optimization scenarios. To systematically overcome these shortcomings, this paper proposes the Enhanced Projection-Iterative-Methods-based Optimizer (EPIMO). EPIMO integrates three synergistic improvement strategies:
1.
An adaptive stochastic decay mechanism that replaces PIMO’s deterministic step-size control, introducing controlled randomness to enhance robustness and escape local optima.
2.
A structured mirror opposition-based learning strategy that actively preserves population diversity and mitigates premature convergence.
3.
A phase-sensitive adjustment of the Lévy flight exponent β that dynamically balances exploration and exploitation across different optimization stages.
These improvements collectively transform PIMO into a more adaptive, robust, and context-aware optimizer. In the remainder of this paper, we detail the design of EPIMO (Section 3), validate its performance through comprehensive experiments on CEC 2017/2022 benchmarks and four classical engineering problems (Section 4), and discuss its implications and future directions (Section 5). By doing so, this work not only advances the family of projection-iterative optimizers but also offers a reference framework for adaptively enhancing metaheuristic algorithms in complex constrained optimization settings.

2. PIMO

The Projection-Iterative-Methods-based Optimizer (PIMO) is a novel optimization algorithm that integrates mathematical projection theory with a metaheuristic framework. Its core concept lies in employing four iterative operators, namely Residual-Guided Projection (RGP), Dual Random Projection (DRP), Weighted Random Projection Update (WRPU), and Lévy Flight-Guided Projection (LFGP), to conduct multimodal search within the solution space, effectively balancing global exploration and local exploitation. Based on a population-based iterative framework, the main steps of the algorithm are as follows:

2.1. Initialization Phase

An initial population of N individuals is randomly generated within the search space. For the j-th dimension ($j = 1, 2, \ldots, D$) of the i-th individual ($i = 1, 2, \ldots, N$), its position is generated as:
$$x_{i,j} = r_0 \times (UB_j - LB_j) + LB_j$$
where $r_0$ is a uniform random number in $[0, 1]$, and $LB_j$ and $UB_j$ represent the fixed lower and upper bounds for variable j. These bounds are determined by physical constraints, engineering specifications, or operational limits in the actual problem. For example, in the three-bar truss design, the cross-sectional areas are bounded as $0 \le A_1, A_2 \le 1\ \mathrm{cm}^2$ based on material and manufacturing constraints.
The fitness value $f_i = f(x_i)$ is computed for each individual, and the initial global best solution $x_{\mathrm{best}}$ along with its corresponding fitness $f_{\mathrm{best}} = \min(f_1, f_2, \ldots, f_N)$ is identified.
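As an illustration, the initialization and fitness-evaluation step above can be sketched in Python. This is a minimal sketch under the assumption that individuals are NumPy vectors; the helper names (`initialize_population`, `evaluate`) are my own, not from the paper:

```python
import numpy as np

def initialize_population(n, dim, lb, ub, rng=None):
    """Randomly generate n individuals inside [lb, ub], per Equation (1)."""
    rng = np.random.default_rng(rng)
    r0 = rng.random((n, dim))          # r0 uniform in [0, 1)
    return r0 * (ub - lb) + lb         # x_ij = r0 * (UB_j - LB_j) + LB_j

def evaluate(pop, fobj):
    """Fitness of each individual plus the initial global best and its score."""
    fit = np.array([fobj(x) for x in pop])
    best = int(np.argmin(fit))
    return fit, pop[best].copy(), fit[best]
```

For a sphere objective, `evaluate` returns the individual closest to the origin as the initial best.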

2.2. Residual-Guided Projection (RGP)

The RGP operator aims to guide the search direction by utilizing the “residual” information between the current population and the optimal solution, thereby accelerating convergence and avoiding local optima.
  • Guiding Individual Selection: The algorithm first generates two random numbers $r_1$ and $r_2$. If $r_1 \ge r_2$, two guiding individuals $x_a$ and $x_b$ are selected probabilistically based on the proportion of each individual's squared fitness value (squared residual) to the total sum. This probability is calculated as:
    $$P_i = \frac{f_i^2}{\sum_{k=1}^{N} f_k^2}$$
    Individuals with larger residuals have a higher probability of being selected, which helps focus the search on areas requiring more improvement. If $r_1 < r_2$, the current global best solution $x_{\mathrm{best}}$ is directly selected as the guide for local refinement.
  • Gradient Direction Calculation: Based on the selected guiding individuals, the gradient-guided direction $d_1$ is computed:
    $$d_1 = \begin{cases} \dfrac{\alpha (x_a - x_{\mathrm{best}}) + (1 - \alpha)(x_b - x_{\mathrm{best}})}{2}, & \text{if } 3 r_3 \ge 2 r_4 \\ \dfrac{\alpha (x_b - x_{\mathrm{best}}) + (1 - \alpha)(x_a - x_{\mathrm{best}})}{2}, & \text{otherwise} \end{cases}$$
    Here, $\alpha, r_3, r_4 \in [0, 1]$ are random numbers uniformly distributed within the interval $[0, 1]$. This formula constructs a search direction with both explorative and exploitative properties by randomly weighting the differential vectors between the guiding individuals and the best solution.
  • Local Information Acquisition (Jacobian Matrix): To more precisely describe the local variation of the objective function at the current point, RGP employs a first-order forward finite difference to approximate the Jacobian matrix $J \in \mathbb{R}^{D \times D}$ of the objective function (here, the subscript i in $e_i$ refers to the coordinate direction, not to the index of an individual in the population):
    $$J_{ij} \approx \frac{f_j(x + \epsilon \cdot e_i) - f_j(x)}{\epsilon}, \qquad e_i = [0, \ldots, 0, \underbrace{1}_{i\text{-th entry}}, 0, \ldots, 0] \in \mathbb{R}^{D}$$
    where $e_i$ is the i-th unit vector, and $\epsilon$ is a small perturbation value (e.g., $10^{-8}$). The element $J_{ij}$ in the i-th row and j-th column of the Jacobian matrix $J$ approximates the partial derivative of the j-th component of the objective function with respect to the i-th variable.
  • Adaptive Step-Size Control: To enable broad exploration in the early iterations and fine convergence in the later stages, the algorithm employs a dynamically decaying step-size coefficient $\eta$:
    $$\eta(t) = \sin\!\left( \frac{\pi}{2} \left[ 1 - \left( \frac{2t}{T_{\max}} \right)^{5} \right] \right)$$
    where t is the current iteration number, and $T_{\max}$ is the maximum number of iterations. The term $(2t/T_{\max})^5$ applies a nonlinear time scaling to accelerate decay in later iterations. The sine function smoothly maps the scaled time into $[0, 1]$, ensuring $\eta(t) \approx 1$ at the start and $\eta(t) \approx 0$ toward the end. The exponent 5 balances early-stage exploration (slow decay) and late-stage exploitation (fast decay), supporting the algorithm's phased search behavior.
  • Position Update: Finally, the individual's position is updated by integrating gradient direction and Jacobian matrix information:
    $$x_{n1}(t+1) = \begin{cases} x_i - \eta(t)\, d_1, & \text{if } 3 r_3 \ge 2 r_4 \\ x_i + J \cdot \eta(t)\, d_1, & \text{otherwise} \end{cases}$$
    Here, $x_{n1}(t+1)$ denotes the new candidate position generated by the RGP operator. The first update is a direct gradient descent, while the second transforms the gradient direction using the Jacobian matrix, which may assist in escaping certain local optima or saddle points.
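The RGP ingredients above (residual-proportional selection, the decaying step size, and the finite-difference Jacobian) can be sketched as follows. These are hypothetical helpers of my own; the η(t) expression follows the sinusoidal schedule as quoted in the text:

```python
import numpy as np

def residual_selection_probs(fitness):
    # P_i = f_i^2 / sum_k f_k^2: larger residuals -> higher selection probability
    sq = np.asarray(fitness, dtype=float) ** 2
    return sq / sq.sum()

def eta(t, t_max):
    # Decaying step size, as quoted: eta(t) = sin((pi/2) * (1 - (2t/T)^5))
    return np.sin(np.pi / 2 * (1 - (2 * t / t_max) ** 5))

def numerical_jacobian(f, x, eps=1e-8):
    # First-order forward differences: J[i, j] ~ (f_j(x + eps*e_i) - f_j(x)) / eps
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(f(x))
    J = np.zeros((x.size, f0.size))
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps                      # perturb only the i-th coordinate
        J[i] = (np.atleast_1d(f(x + e)) - f0) / eps
    return J
```

The guiding individuals $x_a$, $x_b$ would then be drawn with `np.random.default_rng().choice(N, p=residual_selection_probs(fitness))`.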

2.3. Dual Random Projection (DRP)

The DRP operator enhances the algorithm’s global exploration capability by introducing dual randomness:
  • Random Reference Individual Selection: First, two distinct individual indices c and d are randomly selected (without replacement) from the population, corresponding to positions $x_c$ and $x_d$.
  • Direction Calculation and Position Update: Based on the randomly selected individuals, the algorithm again chooses one of two update methods via a random decision (comparing $3 r_5$ and $2 r_6$):
    - Gradient-Type Update: When $3 r_5 \ge 2 r_6$, compute direction $d_2$ and update:
      $$d_2 = \frac{\alpha (x_c - x_{\mathrm{best}}) + (1 - \alpha)(x_d - x_{\mathrm{best}})}{2}$$
      $$x_{n2}(t+1) = x_i - \eta(t)\, d_2$$
    - Jacobian-Type Update: Otherwise, compute direction $d_3$ and update using the Jacobian matrix:
      $$d_3 = \frac{\alpha (x_d - x_{\mathrm{best}}) + (1 - \alpha)(x_c - x_{\mathrm{best}})}{2}$$
      $$x_{n2}(t+1) = x_i + J \cdot \eta(t)\, d_3$$
    Here, $x_{n2}(t+1)$ is the new candidate position generated by the DRP operator. DRP injects diversity into the search process through completely random reference points and update mechanisms.
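The DRP branch logic can be sketched compactly. This is a sketch under my own naming, assuming the population is a NumPy array and `eta_t` is the current step size; passing a Jacobian is optional here purely for illustration:

```python
import numpy as np

def drp_candidate(pop, i, x_best, eta_t, jac=None, rng=None):
    """One DRP candidate for individual i: two distinct random references,
    then either a gradient-type or a Jacobian-type projection update."""
    rng = np.random.default_rng(rng)
    n = len(pop)
    # two distinct reference indices, excluding the current individual
    c, d = rng.choice([k for k in range(n) if k != i], size=2, replace=False)
    alpha = rng.random()
    if 3 * rng.random() >= 2 * rng.random():            # gradient-type branch
        d2 = (alpha * (pop[c] - x_best) + (1 - alpha) * (pop[d] - x_best)) / 2
        return pop[i] - eta_t * d2
    # Jacobian-type branch
    d3 = (alpha * (pop[d] - x_best) + (1 - alpha) * (pop[c] - x_best)) / 2
    step = eta_t * d3 if jac is None else jac @ (eta_t * d3)
    return pop[i] + step
```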

2.4. Weighted Random Projection Update (WRPU)

The WRPU operator aims to balance the exploration (diversification) and exploitation (intensification) behaviors of the algorithm by introducing a weighting strategy and a dual-path mechanism:
  • Random Weight Generation: Two random weights greater than 1, $w_1 = 1 + \mathrm{rand}$ and $w_2 = 1 + \mathrm{rand}$, are generated to increase the magnitude of random perturbations in the update.
  • Dual-Path Update Strategy: For each dimension of each individual, the algorithm selects an update path based on a random condition related to the dimension index j (e.g., comparing $\mathrm{rand}/j$ to another random number). Higher-dimensional variables tend to choose more stable paths:
    - Path 1: Random Weighted Projection: This path generates a new position through a weighted combination of the current solution, the global best solution, and their difference, aiming to increase diversity.
      $$x_{n3}(t+1) = w_1 x_i + (1 - \alpha) x_{\mathrm{best}} + w_2 (x_i - x_{\mathrm{best}})$$
    - Path 2: Adaptive Corrective Projection: This path utilizes the information from the new position $x_{n1}(t+1)$ generated by RGP for correction. The correction step size $\lambda$ adaptively decreases over the iteration process, facilitating a smooth transition from exploration to exploitation.
      $$\lambda(t) = \left( 1 - \frac{t}{T_{\max}} \right) \times \mathrm{rand}$$
      $$x_{n3}(t+1) = x_i + \lambda(t) \left( x_{n1}(t+1) - x_{\mathrm{best}} \right)$$
    Here, $x_{n3}(t+1)$ is the new candidate position generated by the WRPU operator.
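The dimension-wise dual-path mechanism can be sketched as below. The function name is hypothetical, and the 1-based dimension index of the paper is mapped to Python's 0-based loop via `j + 1`:

```python
import numpy as np

def wrpu_candidate(x_i, x_best, x_n1, t, t_max, rng=None):
    """WRPU candidate: per dimension, pick Path 1 (random weighted projection)
    or Path 2 (adaptive corrective projection using the RGP candidate x_n1)."""
    rng = np.random.default_rng(rng)
    w1, w2 = 1 + rng.random(), 1 + rng.random()     # random weights > 1
    alpha = rng.random()
    out = np.empty_like(x_i)
    for j in range(x_i.size):
        # rand/j condition; higher dimensions favor the more stable Path 2
        if rng.random() / (j + 1) > rng.random():
            # Path 1: weighted combination of current, best, and their difference
            out[j] = w1 * x_i[j] + (1 - alpha) * x_best[j] + w2 * (x_i[j] - x_best[j])
        else:
            # Path 2: corrective step toward x_n1, with lambda(t) decaying over time
            lam = (1 - t / t_max) * rng.random()
            out[j] = x_i[j] + lam * (x_n1[j] - x_best[j])
    return out
```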

2.5. Lévy Flight-Guided Projection (LFGP)

The LFGP operator simulates the long-range jumping characteristic of Lévy Flight, aiming to help the algorithm escape local optimum regions and enhance global exploration capability.
  • Dynamic Triggering Mechanism: Lévy Flight is not executed in every iteration. Its triggering probability $p_{\mathrm{trigger}}$ increases dynamically over the iteration process. Early stages focus more on conventional search, while later stages increase the probability of long-range jumps for a final global scan.
    $$p_{\mathrm{trigger}}(t) = \frac{1}{2} \left[ \tanh\!\left( \frac{9t}{T_{\max}} - 5 \right) + 1 \right]$$
    The LFGP update is executed when a generated random number is less than or equal to $p_{\mathrm{trigger}}(t)$.
  • Lévy Step-Size Generation: The Lévy step size L is generated using the Mantegna algorithm, whose distribution features a heavy tail, allowing occasional very large step sizes.
    $$L = \frac{u}{|v|^{1/\beta}}, \qquad u \sim N(0, \sigma_u^2), \quad v \sim N(0, 1)$$
    $$\sigma_u = \left[ \frac{\Gamma(1+\beta) \sin(\pi \beta / 2)}{\Gamma\!\left( \frac{1+\beta}{2} \right) \beta\, 2^{(\beta - 1)/2}} \right]^{1/\beta}$$
    where $\beta \in (0, 2]$ is an exponent parameter controlling the distribution characteristics, typically set to 1.5, and $\Gamma(\cdot)$ is the gamma function. The Mantegna algorithm draws two independent Gaussian random variables $u \sim N(0, \sigma_u^2)$ and $v \sim N(0, 1)$; the resulting step $L = u / |v|^{1/\beta}$ approximately follows a symmetric Lévy stable law with tail exponent $\beta$, allowing occasional long jumps during the search.
  • Elite-Guided Update: Lévy Flight is performed with guidance from the current elite solution $x_{\mathrm{elite}}$ (which could be $x_{\mathrm{best}}$ or another high-quality individual).
    $$x_{n4}(t+1) = \rho\, x_{\mathrm{elite}} + (1 - \rho) \cdot L \cdot \xi(t) \left( x_{\mathrm{elite}} - x_i \right) \cdot \frac{2t}{T_{\max}}$$
    Here, $\rho \in [0, 1]$ is a random weight controlling the degree of attraction towards the elite solution; $\xi(t) = \mathrm{rand} \times (1 - t/T_{\max})^2$ is a random scaling factor that decays over time, ensuring reduced jump magnitudes in later stages.
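The Mantegna step generator, which is the computational core of LFGP, can be sketched directly from the formulas above (the function name and NumPy-based interface are my own):

```python
import math
import numpy as np

def levy_step(beta=1.5, size=1, rng=None):
    """Mantegna's algorithm: L = u / |v|^(1/beta), heavy-tailed for beta < 2."""
    rng = np.random.default_rng(rng)
    # sigma_u from the gamma-function expression; sigma_v = 1
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)   # u ~ N(0, sigma_u^2)
    v = rng.normal(0.0, 1.0, size)       # v ~ N(0, 1)
    return u / np.abs(v) ** (1 / beta)
```

A batch of these steps is mostly moderate in magnitude with occasional very large outliers, which is exactly the long-jump behavior LFGP relies on.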

2.6. Iterative Update and Termination

In each iteration cycle, for each individual, the algorithm executes the four operators above in parallel or sequentially, generating multiple candidate new positions ( x n 1 , x n 2 , x n 3 , x n 4 ). Typically, a selection strategy (e.g., greedy selection) is applied to determine the next generation’s position from these candidates and the original position. The fitness of all individuals is then recalculated, and the global best solution x best and f best are updated. This process iterates until a preset maximum iteration count T max or other termination criteria are met, finally outputting the best-found solution.
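The greedy-selection step described above can be sketched as follows. For brevity this sketch compares one candidate per individual, whereas PIMO actually compares several candidates ($x_{n1}$ through $x_{n4}$); the function name is my own:

```python
import numpy as np

def greedy_select(pop, fit, candidates, fobj):
    """Keep each candidate only if it improves on its parent, then refresh
    the global best solution and its fitness."""
    for i, x_new in enumerate(candidates):
        f_new = fobj(x_new)
        if f_new < fit[i]:               # greedy replacement
            pop[i], fit[i] = x_new, f_new
    best = int(np.argmin(fit))
    return pop, fit, pop[best].copy(), fit[best]
```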
The core strength of the PIMO lies in its multi-operator collaborative mechanism. RGP provides directional guidance based on residuals, DRP injects dual randomness to maintain population diversity, WRPU achieves a dynamic balance between exploration and exploitation through weighting and dual-path strategies, and LFGP endows the algorithm with strong global escape capability. Empirical studies indicate that PIMO demonstrates competitive performance in terms of convergence speed, solution accuracy, and robustness when handling high-dimensional, multimodal, non-convex, and complexly constrained optimization problems, validating the effectiveness of its hybrid framework.

3. Proposed Enhanced PIMO

The performance of metaheuristic algorithms in tackling complex optimization problems is largely determined by the design of mechanisms that balance exploration and exploitation capabilities. Although the original Projection-Iterative-Methods-based Optimizer (PIMO) has demonstrated excellent performance on numerous benchmark problems, both practical applications and theoretical analyses have revealed a series of limitations, primarily manifested in the rigidity of its parameter settings, the weakness of its diversity maintenance mechanisms, and the static nature of its exploration–exploitation balance. To address these shortcomings systematically, this paper proposes the Enhanced Projection-Iterative-Methods-based Optimizer (EPIMO), which incorporates three core improvement strategies, thereby significantly enhancing the algorithm’s overall performance and adaptability. The pseudo-code is shown in Algorithm 1.
Algorithm 1 EPIMO: Enhanced Projection-Iterative-Methods-based Optimizer
Require: N: population size; D: dimension of the problem; lb, ub: lower and upper bounds for each dimension (D-dimensional vectors); T_max: maximum number of iterations; f_obj: objective function to be minimized
Ensure: Best_Pos: best found solution (position); Best_Score: fitness value of Best_Pos; CG_curve: convergence curve recording Best_Score over iterations
 1. Initialization Phase:
 2. Generate initial population X with N individuals using Equation (1)
 3. for t = 1 to T_max do
 4.     Compute adaptive step size δ(t) using Equation (18)
 5.     for each individual i in population do
 6.         Phase 1: Residual-Guided Projection (RGP)
 7.         Select guidance individuals via Equation (2)
 8.         Compute gradient direction d_1 via Equation (3)
 9.         Approximate Jacobian matrix J via Equation (4)
10.         Update candidate position x_n1(t+1) via Equation (6)
11.         Evaluate and greedily update if improved
12.         Phase 2: Dual Random Projection (DRP)
13.         if rand > rand then
14.             Randomly select two reference individuals c and d
15.             if 3 × rand ≥ 2 × rand then
16.                 Compute d_2 via Equation (7) and update via Equation (22)
17.             else
18.                 Compute d_3 via Equation (9) and update via Equation (23)
19.             end if
20.         end if
21.         Phase 3: Weighted Random Projection Update (WRPU)
22.         Generate random weights w_1, w_2
23.         for j = 1 to D do
24.             if rand / j > rand then
25.                 Update via Path 1: Equation (11)
26.             else
27.                 Compute λ(t) via Equation (12) and update via Path 2: Equation (13)
28.             end if
29.         end for
30.         Phase 4: Lévy Flight-Guided Projection (LFGP)
31.         Compute trigger probability p_trigger(t) via Equation (14)
32.         if rand ≤ p_trigger(t) then
33.             Compute adaptive β(t) via Equation (21)
34.             Generate Lévy step matrix L via Equations (15) and (16)
35.             Compute d(t) and update x_n4(t+1) via Equation (17)
36.         end if
37.     end for
38.     Global Update Phase:
39.     Identify current best individual and update Best_Pos, Best_Score
40.     Phase 5: Mirror Opposition-Based Learning (Conditional)
41.     Compute p_obl(t) using the opposition probability formula in Section 3.2
42.     if rand < p_obl(t) then
43.         Generate k ~ U(0, 1)
44.         for each individual i do
45.             Compute mirrored position via Equation (19)
46.         end for
47.         Update Best_Pos, Best_Score if necessary
48.     end if
49.     Record CG_curve(t) = Best_Score
50. end for
51. return Best_Pos, Best_Score, CG_curve

3.1. Adaptive Decay Strategy: Evolution from Deterministic to Stochastic Perturbation

The original PIMO algorithm employs a fixed sinusoidal decay function to control the step size, expressed as $\delta(t) = \sin\!\left( \frac{\pi}{2} \left[ 1 - (2t/T)^5 \right] \right)$. While this design exhibits clear decay characteristics in theoretical analysis, its completely deterministic decay pattern in practical high-dimensional and non-convex problems often results in inflexible search behavior, making the algorithm prone to becoming trapped in local optima or exhibiting oscillatory convergence. To overcome this limitation, EPIMO proposes an adaptive decay mechanism that combines a cosine function with stochastic perturbation:
$$\delta(t) = \frac{1}{2} \left[ 1 + \cos\!\left( \pi \left( \frac{t}{T} \right)^{0.8} \right) \right] + 0.1\, \xi,$$
where ξ U ( 0 , 1 ) is a uniformly distributed random variable. This improvement offers three key advantages. First, the cosine function provides a smoother transition within its domain, which helps maintain the stability of the search process and avoids performance degradation caused by abrupt changes in step size. Second, adjusting the exponent from 5 to 0.8 slows the decay rate in the early stages, allowing the algorithm to retain stronger global exploration capability during initial iterations. Third, introducing a random perturbation with an amplitude of 0.1 effectively breaks the deterministic nature of the original decay pattern, enhancing the algorithm’s robustness across different problems and multiple runs. Mathematically, this strategy ensures that the step size possesses a clear decay trend while retaining an appropriate degree of randomness, thereby achieving a more dynamic and adaptive balance between exploration and exploitation.
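The two schedules can be compared side by side in a short sketch (function names are my own; the EPIMO schedule adds the bounded uniform perturbation $0.1\xi$ on top of the smooth cosine trend):

```python
import numpy as np

def delta_pimo(t, t_max):
    # Original deterministic schedule: sin((pi/2) * (1 - (2t/T)^5))
    return np.sin(np.pi / 2 * (1 - (2 * t / t_max) ** 5))

def delta_epimo(t, t_max, rng=None):
    # Enhanced schedule: 0.5 * (1 + cos(pi * (t/T)^0.8)) + 0.1 * xi, xi ~ U(0, 1)
    rng = np.random.default_rng(rng)
    return 0.5 * (1 + np.cos(np.pi * (t / t_max) ** 0.8)) + 0.1 * rng.random()
```

At t = 0 the enhanced step size lies in [1.0, 1.1], and at t = T it lies in [0.0, 0.1]: the decay trend is preserved while the perturbation keeps consecutive runs from following identical trajectories.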

3.2. Mirror Opposition-Based Learning Strategy: A Structured Mechanism for Enhancing Population Diversity

Maintaining population diversity is crucial for avoiding premature convergence in complex multimodal optimization problems. The original PIMO algorithm primarily relies on natural variation through random selection and hybrid operators to sustain diversity, lacking an active and systematic mechanism for injecting diversity. This often leads to a rapid decay of the population distribution entropy in later iterations, making it difficult to escape local optima effectively. To address this, EPIMO introduces an opposition-based learning strategy based on dynamic mirror mapping, with its core formula given by Equation (19):
$$x_{\mathrm{obl}}(i) = (0.5k + 0.5)(\mathrm{UB} + \mathrm{LB}) - k\, x(i),$$
where k U ( 0 , 1 ) is a random scaling factor, and UB and LB are the upper and lower bound vectors of the variables, respectively. Mathematically, this strategy constitutes a linear transformation. Its mapping center, α ( UB + LB ) (where α = 0.5 k + 0.5 ), adjusts dynamically with k, thereby systematically generating new solution regions in the search space that are “mirrored” relative to the current population. To further coordinate exploration and convergence, the triggering probability for opposition-based learning, p obl ( t ) , is designed to decay with iteration:
$$p_{\mathrm{obl}}(t) = 0.5 \left( 1 - \frac{t}{T} \right) + 0.1.$$
This design allows the algorithm to perform diversity-enhancing operations with a higher probability (approximately 0.6) in the early stages, while gradually shifting focus towards local exploitation in later stages. Crucially, a minimum triggering probability of 0.1 is always maintained to prevent the search from stagnating completely. This mechanism not only effectively slows the decay rate of population diversity but also guides the search towards promising, unexplored regions by leveraging the structural information of current solutions, thereby theoretically increasing the probability of the algorithm discovering the global optimum.
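The mirror mapping and its trigger probability can be sketched as follows. The function names are my own, and clipping the mirrored solutions back into the box is an assumption of this sketch (boundary handling is not specified in the text); note that k = 1 recovers classic opposition-based learning, $x_{\mathrm{obl}} = \mathrm{UB} + \mathrm{LB} - x$:

```python
import numpy as np

def mirror_opposition(pop, lb, ub, rng=None):
    """Mirror OBL, Equation (19): x_obl = (0.5k + 0.5)(UB + LB) - k * x."""
    rng = np.random.default_rng(rng)
    k = rng.random()                         # one k ~ U(0, 1) shared by the population
    mirrored = (0.5 * k + 0.5) * (ub + lb) - k * pop
    # ASSUMPTION: clip to bounds; the paper does not state its boundary handling
    return np.clip(mirrored, lb, ub)

def p_obl(t, t_max):
    # Trigger probability: starts near 0.6, decays linearly to a floor of 0.1
    return 0.5 * (1 - t / t_max) + 0.1
```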

3.3. Adaptive Adjustment of Parameter β : Phase-Sensitive Optimization of Exploration Behavior

Lévy flight, as a random walk strategy with heavy-tailed characteristics, has its step-size distribution shape controlled by the exponent parameter β . The original PIMO algorithm fixes β at 1.5, meaning the exploration characteristics of Lévy flight remain unchanged throughout the optimization process. However, different stages of the optimization process often require different exploration behaviors: the initial iterations demand broad, long-jump global exploration, while later stages require fine-tuned local exploitation with smaller steps. A fixed β value cannot adapt to these dynamic needs, potentially leading to insufficient exploration in the early phases or inefficient exploitation in the later phases.
To address this, EPIMO proposes an adaptive strategy for the β parameter based on the iteration progress by Equation (21):
$$\beta(t) = 1.2 + 0.6 \cdot \frac{t}{T}.$$
Consistent with the adaptive decay strategy of Section 3.1, the DRP updates now use $\delta(t)$ in place of $\eta(t)$. Equation (8) is transformed into
$$x_{n2}(t+1) = x_i - \delta(t)\, d_2$$
and Equation (10) is transformed into
$$x_{n2}(t+1) = x_i + J\, \delta(t)\, d_3.$$
This design causes the β value to increase linearly from 1.2 at the beginning of the iterations to 1.8 at the end. Analyzing the theoretical properties of the Lévy distribution, a smaller β value (e.g., 1.2) corresponds to a heavier distribution tail, meaning a higher probability of generating long-distance jumps, which is beneficial for rapidly covering the solution space in the early search stages. As the iterations progress, the β value gradually increases, the distribution tail becomes lighter, and the step-size distribution gradually approaches a Gaussian distribution. This causes the algorithm to favor short-distance, fine-grained searches in later stages, thereby improving the efficiency of local exploitation. This improvement synergizes effectively with the aforementioned adaptive decay strategy δ ( t ) along the time dimension: strong exploration is achieved in the early stages via a smaller β combined with a larger δ , while strong exploitation is achieved in later stages via a larger β combined with a smaller δ . Together, they form an intelligent, multi-scale, phase-sensitive search framework.
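The phase-sensitive exponent can be plugged directly into the Mantegna generator; a minimal sketch with my own function names:

```python
import math
import numpy as np

def beta_schedule(t, t_max):
    # beta(t) = 1.2 + 0.6 * t / T: heavy-tailed early, near-Gaussian late
    return 1.2 + 0.6 * t / t_max

def adaptive_levy_step(t, t_max, size=1, rng=None):
    """Mantegna step drawn with the phase-dependent exponent beta(t)."""
    beta = beta_schedule(t, t_max)
    rng = np.random.default_rng(rng)
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)
```

Early calls (small t) draw from a heavier-tailed distribution, producing occasional long exploratory jumps; late calls concentrate the step mass near zero for fine-grained exploitation.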

3.4. Comprehensive Effects and Theoretical Expectations of the Improvement Strategies

The three aforementioned improvements are not isolated; rather, they form a tightly coupled synergistic mechanism across both time and functional dimensions. Here, “functional dimensions” refer to the distinct roles that each strategy plays in the search process: (1) the adaptive decay strategy governs the step-size magnitude; (2) the mirror opposition-based learning actively maintains population diversity; and (3) the adaptive Lévy- β modulates the tail characteristics of the step-size distribution. These functions collectively address different aspects of the optimization—step control, diversity injection, and search-distribution shaping—while being temporally coordinated. Along the time dimension, the algorithm emphasizes global exploration in the early stages through high-probability opposition-based learning, a small β value for Lévy flight, and a slow-decay step size. In the middle stages, all strategies reach a balanced state, facilitating a smooth transition between exploration and exploitation. In the final stages, the algorithm focuses on local convergence and fine-tuning via low-probability opposition-based learning, a large β value for Lévy flight, and a fast-decay step size.
From the perspective of addressing PIMO’s core deficiencies, the adaptive decay strategy mitigates issues of parameter rigidity and determinism, enhancing algorithmic robustness. The mirror opposition-based learning strategy actively injects diversity, effectively countering the tendency for premature convergence. The adaptive adjustment of the β parameter enables the phasing and intelligent regulation of exploration behavior, optimizing the dynamic balance between exploration and exploitation. Unlike recent variants of PIMO that focus on isolated operator modifications or external hybridization, EPIMO embeds these three adaptive mechanisms within the original projection-iterative framework, resulting in a self-tuning, phase-aware optimizer that does not require manual staging or problem-specific adjustments. Theoretical analysis suggests that while preserving the original algorithm’s convergence properties, EPIMO, by introducing bounded diversity perturbations and more rational phased strategies, is expected to achieve significant improvements in both convergence speed and global optimization success rate. This performance advantage is anticipated to be particularly pronounced in complex optimization scenarios such as high-dimensional, multimodal, and non-convex problems.
In summary, through systematic strategic improvements, the EPIMO algorithm not only specifically compensates for the known deficiencies of PIMO but also makes valuable explorations in areas such as adaptive mechanisms, diversity maintenance, and intelligent balancing. It provides a new, referential paradigm for the design and enhancement of metaheuristic algorithms. Subsequent research will further validate the effectiveness and superiority of these improvement strategies through systematic benchmarking and comparative experiments.

4. Experimental Analysis

To comprehensively validate the performance of EPIMO, this study adopts a multi-stage experimental framework: systematic evaluation is first conducted on the CEC 2017 and CEC 2022 benchmark test suites, and the algorithm is then applied to four classical constrained engineering design problems. This dual-test-platform strategy ensures the robustness of the algorithm across different problem characteristics (such as multimodality, high dimensionality, and constraint conditions). All compared algorithms are executed under unified hardware configurations and parameter settings, strictly eliminating external interference factors to guarantee the fairness and reproducibility of the results. The experimental findings demonstrate that EPIMO exhibits significant advantages in convergence accuracy, stability, and robustness.

4.1. Experimental Setting

A rigorous multi-stage experimental framework was adopted in this study to comprehensively evaluate the performance of the EPIMO algorithm. Systematic evaluation was conducted using the CEC2017 and CEC2022 benchmark test suites, ensuring robustness verification of the algorithm across diverse problem characteristics such as multimodality, high dimensionality, and complex constraints.
Ten mature and emerging metaheuristic algorithms were selected for comparative analysis. These include the proposed Enhanced Projection-Iterative-Methods-based Optimizer (EPIMO) and its predecessor PIMO, along with state-of-the-art algorithms such as the Artificial Lemming Algorithm (ALA), Fata Morgana Algorithm (FATA), Black-winged Kite Algorithm (BKA), Water Uptake and Transport in Plants algorithm (WUTP), Kangaroo Escape Optimizer (KEO), and Dream Optimization Algorithm (DOA) [30]. Additionally, the well-established classical algorithms Particle Swarm Optimization (PSO) and Differential Evolution (DE) were included as baseline references.
All comparative algorithms were executed under strictly controlled and uniform experimental conditions. All implementations were based on the MATLAB 2024a platform and run on a workstation equipped with an Intel® Core™ i7-13700K processor, 32 GB of RAM, and the Windows 11 operating system. Through systematic analysis, the experimental parameters were set as follows: a population size of 30 individuals and a maximum iteration count of 500. This configuration achieves a balanced trade-off between computational efficiency and optimization performance. Each experimental condition was independently repeated 30 times to ensure the reliability of the statistical results and to provide sufficient data for subsequent non-parametric hypothesis testing.

4.2. Results and Analysis on CEC 2017

In Table 1, Table 2 and Table 3, “Min” denotes the minimum value, “Avg” the average value, and “Std” the standard deviation. The performance of the EPIMO algorithm is noteworthy, demonstrating unique and comprehensive competitive advantages in the CEC 2017 benchmark tests [31].
From the perspective of convergence accuracy, EPIMO achieves leading optimization performance on several key test functions. On the F1 function, EPIMO’s average value reaches 3.22 × 10^3, significantly outperforming PIMO (6.42 × 10^4) and ALA (6.38 × 10^8), and surpassing the worst-performing FATA (4.21 × 10^10) by seven orders of magnitude. On the F4 function, EPIMO’s average value of 4.67 × 10^2 is also excellent, ahead of DOA’s 5.94 × 10^2 and all other comparative algorithms. This trend continues on moderately complex functions such as F15 and F19, indicating that EPIMO possesses superior local exploitation capabilities on unimodal and basic multimodal problems.
More importantly, EPIMO demonstrates unique robustness when handling high-dimensional complex functions. On the most challenging F13 function, EPIMO’s average value is 5.26 × 10^3, comparable to DOA’s 1.35 × 10^4 and significantly better than traditional algorithms such as PSO (5.81 × 10^8) and DE (2.85 × 10^7). On the F18 function, EPIMO’s average value of 1.01 × 10^4 likewise significantly outperforms ALA (1.28 × 10^6) and FATA (2.41 × 10^7), demonstrating its effective navigation capability in complex search spaces.
From the perspective of algorithm stability, EPIMO’s standard deviation indicators show relatively balanced characteristics. On the F6 function, its standard deviation is 7.96 × 10^−1; although higher than DOA’s 4.25 × 10^−1, it is much lower than ALA’s 6.38 × 10^0 and BKA’s 7.13 × 10^0. On the F21 function, EPIMO’s standard deviation is 5.07 × 10^1, comparable to PIMO’s 4.70 × 10^1, indicating that EPIMO’s performance fluctuations remain within a controllable range, without the extreme instability observed in FATA or BKA.
Notably, EPIMO exhibits different optimization characteristics from DOA on certain functions. On the F1 function, EPIMO’s minimum value of 4.40 × 10^2 is significantly better than DOA’s 2.64 × 10^6, suggesting that EPIMO can find solutions closer to the theoretical optimum in some runs. On the F9 function, EPIMO’s average value of 5.83 × 10^3 outperforms DOA’s 6.74 × 10^3, indicating that EPIMO may have unique advantages on specific types of multimodal problems.
Compared with traditional algorithms, EPIMO’s advantages are even more pronounced. In the F10 function, EPIMO’s average value of 5.78 × 10^3 not only outperforms PSO’s 8.04 × 10^3 and DE’s 1.40 × 10^4, but even exceeds ALA’s 1.18 × 10^4. In the F22 function, EPIMO’s 7.55 × 10^3 similarly outperforms PSO’s 1.02 × 10^4 and DE’s 1.61 × 10^4. These data indicate that the optimization strategy adopted by EPIMO shows significant advancement compared to traditional algorithms when addressing the diverse challenges of modern benchmark tests.
Based on the Wilcoxon rank-sum test results in Table 4, EPIMO demonstrates statistically significant superiority across the CEC 2017 benchmark. With EPIMO as the reference, p-values below 0.05 indicate significant performance differences. On 29 of the 30 test functions, EPIMO achieves p-values of 1.83 × 10^−4 against most competing algorithms, confirming its consistent advantage. The sole exception is function F26, where EPIMO and DOA show equivalent performance (p = 1.00 × 10^0), suggesting DOA’s competitiveness on specific problems. Traditional algorithms such as PSO and DE exhibit p-values of 1.83 × 10^−4 in nearly all comparisons, highlighting their substantial lag behind modern approaches such as EPIMO. Similarly, FATA and BKA show consistently inferior results across all functions, with p-values of 1.83 × 10^−4. DOA displays notable performance on selected functions, with p-values of 3.85 × 10^−1 (F9), 8.90 × 10^−2 (F10), 1.73 × 10^−2 (F22), 1.00 × 10^0 (F26), and 9.70 × 10^−1 (F27), indicating its capability to approach EPIMO’s effectiveness in certain complex multimodal scenarios. PIMO also performs close to EPIMO on functions F16, F17, F20, F27, and F29, with p-values ranging from 0.521 to 0.910, though it remains significantly inferior overall.
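For reference, the Wilcoxon rank-sum decision used in Table 4 can be reproduced with a few lines of code. The sketch below implements the two-sided test via the normal approximation, which is adequate for samples of 30 independent runs; tie correction is omitted for brevity, so it is an illustrative sketch rather than a full library-grade implementation.

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum (Mann-Whitney) p-value via the normal
    approximation; suitable for samples of ~30 runs (tie correction omitted)."""
    n1, n2 = len(a), len(b)
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):                      # assign average ranks to ties
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        for k in range(i, j + 1):
            ranks[k] = (i + j) / 2 + 1
        i = j + 1
    r1 = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)
    mu = n1 * (n1 + n2 + 1) / 2                   # mean rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r1 - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

A p-value below 0.05 rejects the hypothesis that the two sets of 30 final fitness values come from the same distribution, which is the criterion applied throughout the tables.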
In summary, the EPIMO algorithm demonstrates three core strengths in the CEC 2017 benchmark evaluations: exceptional convergence precision on moderately complex functions, consistent robustness in high-dimensional and intricate search landscapes, and a performance profile fundamentally distinct from conventional optimization approaches. These attributes not only establish EPIMO as a highly effective tool for practical engineering and scientific applications but also advance the theoretical discourse on metaheuristic design—particularly in balancing structure-aware local refinement with sustained global exploration.
Figure 1 and Figure 2 together constitute the core experimental results of the algorithm comparison study. They systematically demonstrate the convergence performance of multiple optimization algorithms on a series of standard test functions (F1, F5, F8, F12, F14, F15, F18, F26). These test functions are carefully designed to cover a wide range of problem characteristics, from simple unimodal to complex high-dimensional multimodal, in order to comprehensively evaluate the algorithms’ robustness, convergence speed, and solution accuracy. The horizontal axis of each figure represents the number of iterations, while the vertical axis shows the average fitness value, with a linear or logarithmic scale chosen depending on the function’s properties to clearly illustrate performance differences across problems of varying scales. Circles in Figure 2 denote outliers, i.e., runs in which an algorithm performed significantly worse than in the majority of runs; outlier frequency is a key indicator of stability in optimization algorithm comparisons.
From the overall trend, a prominent conclusion is that EPIMO exhibits significant and consistent superiority on the majority of test functions, especially on highly complex functions with fitness values in the range of 10^6 to 10^12 (e.g., F12, F14, F15, F18). Its convergence curve not only descends more rapidly but also generally reaches a lower final fitness value, indicating that EPIMO achieves a better balance between global exploration and local exploitation when dealing with high-dimensional, multimodal, and non-smooth search spaces. EPIMO exhibits not only faster initial convergence but also reaches a stable solution region earlier (within 200 iterations), as observed from the plateau in its convergence curve, indicating effective exploitation under the defined convergence threshold. In contrast, other algorithms (e.g., WUTP) show unstable performance, with their effectiveness varying significantly across functions.
These results visually confirm the performance advantage of the new algorithm (especially EPIMO) over existing techniques, while also revealing the applicability and inherent limitations of each algorithm through their differential performance across various function categories. Overall, the experiment effectively demonstrates that the improvements in EPIMO’s algorithmic mechanisms are substantial, enabling it to address complex optimization challenges more stably and efficiently.
Figure 3 uses the comprehensive ranking mean as the vertical axis, clearly illustrating the overall performance ranking of ten algorithms across the complete set of test functions. EPIMO holds the first position with a significant advantage (ranking mean approximately 1.56), followed by PIMO (approximately 2.69) and ALA (approximately 2.76). Among traditional algorithms, PSO and DE rank sixth and seventh, respectively, showing moderate performance, while WUTP, KEO, and DOA are positioned lower (8th–10th), indicating weaker overall competitiveness on the current test set. Based on the analysis of the comprehensive ranking bar chart and the radar performance profile, the EPIMO algorithm demonstrates comprehensive and stable superiority in the overall evaluation across all 30 test functions. Its ranking mean significantly outperforms other algorithms, while in the radar chart, it exhibits a broad coverage area, a well-rounded contour, and proximity to the outer edge, particularly excelling in high-complexity functions. This indicates that EPIMO not only possesses superior accuracy in solving individual problems but also demonstrates exceptional robustness and generalization capability across diverse and multi-featured problem sets, further validating the comprehensive effectiveness of its algorithmic mechanism in balancing global exploration and local exploitation.
The experimental results substantiate that the EPIMO algorithm consistently achieves superior performance across a diverse spectrum of benchmark functions, outperforming both established methods (e.g., PSO, DE) and more recent alternatives (e.g., WUTP, KEO, DOA). Its strength is particularly evident in convergence speed, final solution quality, and robustness when applied to high-dimensional and multimodal optimization problems. The core effectiveness of EPIMO stems from its well-calibrated balance between global exploration and local exploitation, which mitigates premature convergence and ensures stable and competitive results.
Beyond benchmark validation, EPIMO demonstrates notable generalization capability across varied and complex problem landscapes, highlighting its strong practical potential. The proposed algorithmic framework thus offers both theoretical innovation and practical applicability, providing a reliable foundation for tackling challenging real-world optimization tasks. Future research may focus on extending its application to larger-scale and engineering-oriented problems, as well as conducting further component-wise analysis to refine its performance and adaptability.

4.3. Results and Analysis on CEC 2022

To further validate the robustness and scalability of the proposed EPIMO algorithm, this section presents a comprehensive performance evaluation on the CEC 2022 benchmark suite [32], which consists of diverse and challenging optimization functions. Two tables systematically present the comprehensive performance of the EPIMO algorithm on this benchmark set, with statistical tests reinforcing the credibility of the conclusions.

Table 5 and Table 6 compare EPIMO with nine other algorithms across 12 test functions using three metrics: minimum value (Min), standard deviation (Std), and average value (Avg). Based on the numerical performance data and statistical test results, the EPIMO algorithm demonstrates comprehensive superiority in solving the CEC 2022 benchmark problems. In terms of solution quality, EPIMO achieves the best or near-optimal values across most test functions, exhibiting particularly outstanding precision on complex multimodal problems. For instance, on function F1 it attains near-perfect accuracy, with both minimum and average values of 3.00 × 10^2 and an exceptionally low standard deviation of 7.71 × 10^−5, while on the challenging function F6 its average value of 1.81 × 10^3 significantly outperforms competitors such as FATA (5.59 × 10^5). These results confirm EPIMO’s exceptional capability to converge to high-quality solutions while maintaining remarkable stability throughout the search process.
From a statistical perspective, the Wilcoxon rank-sum test provides rigorous validation of EPIMO’s performance advantages. The algorithm shows statistically significant superiority over most competitors, with p-values as low as 1.83 × 10^−4 on the majority of the 12 test functions. This statistical evidence strongly supports that EPIMO’s superior performance is systematic and reliable rather than coincidental. While the algorithm maintains this dominant position overall, the statistical results also objectively identify specific scenarios where competition exists, such as function F4, where EPIMO shows comparable performance with PIMO and PSO (p-values > 0.05).
In conclusion, EPIMO establishes itself as a highly effective and reliable optimizer through its dual strengths in numerical precision and statistical robustness. The algorithm consistently delivers high-accuracy, low-variance solutions across diverse problem types, while its performance advantages are rigorously validated through comprehensive statistical testing. These findings not only highlight EPIMO’s practical value for complex optimization challenges but also contribute significantly to the advancement of projection-iterative methods in the field of global optimization, offering a robust framework for both theoretical research and engineering applications.
The EPIMO algorithm demonstrates comprehensive and significant performance advantages on the CEC 2022 benchmark test set. These charts consistently validate the effectiveness of the algorithm through multiple perspectives, including convergence curves, final solution comparisons, comprehensive rankings, and radar performance profiles.
Across the four test functions F1, F3, F8, and F11, the EPIMO algorithm demonstrates comprehensive and consistent performance superiority. In terms of numerical results, EPIMO achieves the best or near-best average fitness values on all four functions (F1: 3.00 × 10^2, F3: 6.00 × 10^2, F8: 2.21 × 10^3, F11: 2.60 × 10^3), while maintaining extremely low standard deviations (F1: 7.71 × 10^−5, F3: 3.59 × 10^−1, F8: 7.34 × 10^0, F11: 4.03 × 10^−5). This combination of low mean and low variance appears in the charts as the shortest bar heights and the narrowest error bars, clearly indicating that EPIMO not only converges stably to high-quality solutions but also exhibits excellent repeatability and robustness in the solution process.
In terms of dynamic convergence processes (as shown in Figure 4 for functions such as F1, F5, F6, F9, F10, F11, and F12), EPIMO exhibits the fastest convergence speed and the lowest final fitness values on the majority of test functions, with particularly pronounced advantages on high-dimensional complex functions (e.g., F6, F10). This indicates that its improved search mechanism effectively balances global exploration and local exploitation, avoiding premature convergence.
Regarding static solution quality (as illustrated by the bar charts in Figure 5), EPIMO achieves the best or near-best final fitness values on almost all test functions, with high result stability, further confirming its solution accuracy and robustness.
The comprehensive ranking in Figure 6a shows that EPIMO ranks first with a significant advantage, its average ranking far lower than those of the other algorithms, indicating superior overall performance across the entire test set. The radar chart in Figure 6b further reveals that EPIMO’s performance profile is closest to the outer edge across multiple function directions, with broad coverage and no apparent weaknesses, demonstrating excellent generalization capability and problem adaptability.
It is noteworthy that on some moderately complex functions, while EPIMO still maintains a lead, the gap with traditional algorithms (such as PSO and DE) is relatively smaller. In contrast, its advantages are more pronounced on extremely complex functions. This suggests that EPIMO possesses stronger competitiveness when handling high-difficulty, non-smooth, and multimodal optimization problems.
Overall, this set of charts systematically and consistently indicates that EPIMO not only excels in individual metrics but also achieves comprehensive improvements across multiple dimensions, including solution accuracy, convergence speed, stability, and generalization capability. It represents an efficient optimization algorithm with practical application potential. These results provide robust experimental support for the effectiveness of the algorithm’s improvement mechanisms and lay a foundation for subsequent research.

4.4. Application of Practical Engineering Optimization Problems

Building upon the foundational principles and common methodologies of optimization theory, this section shifts focus to practical engineering optimization problems. It presents four classic and representative case studies to demonstrate the integration of mathematical models and optimization algorithms for solving complex real-world design challenges.
The discussion begins with the Himmelblau function optimization, a standard multimodal test function used to evaluate the performance and robustness of optimization algorithms in identifying global optima. Attention then turns to more applied mechanical design problems: the step-cone pulley problem addresses the minimization of weight while adhering to constraints on speed ratios and structural strength; the hydrostatic thrust bearing design problem seeks an optimal trade-off between lubrication performance and power loss to minimize total power consumption. Concluding the set, the three-bar truss design problem introduces a structural engineering challenge, aiming for the most economical material distribution under strict stress and deflection constraints.
These four problems progress in complexity from theoretical benchmarking to multi-constrained engineering design, collectively illustrating the critical role of optimization in enhancing performance, reducing cost, and ensuring system reliability. The analysis starts with the Himmelblau function.

4.4.1. Himmelblau Function Optimization

The Himmelblau function adopted in this study is a classical benchmark for nonlinear constrained optimization; its structure simulates typical engineering scenarios characterized by multi-variable nonlinear coupling and narrow feasible regions, commonly encountered in chemical process systems and mechanical design [33]. This test case enables systematic validation of EPIMO’s performance in handling complex constraints, navigating feasible search spaces, and ensuring convergence stability, thereby providing empirical justification for the algorithm’s applicability to multi-constrained engineering problems such as process industry optimization and structural design. The problem involves five variables and six nonlinear constraints, with its detailed description provided in [34].
Minimize
$$
f(\bar{x}) = 5.3578547x_3^2 + 0.8356891x_1x_5 + 37.293239x_1 - 40792.141
$$
subject to
$$
g_1(\bar{x}) = -G_1 \le 0, \qquad g_2(\bar{x}) = G_1 - 92 \le 0,
$$
$$
g_3(\bar{x}) = 90 - G_2 \le 0, \qquad g_4(\bar{x}) = G_2 - 110 \le 0,
$$
$$
g_5(\bar{x}) = 20 - G_3 \le 0, \qquad g_6(\bar{x}) = G_3 - 25 \le 0,
$$
where
$$
G_1 = 85.334407 + 0.0056858x_2x_5 + 0.0006262x_1x_4 - 0.0022053x_3x_5,
$$
$$
G_2 = 80.51249 + 0.0071317x_2x_5 + 0.0029955x_1x_2 + 0.0021813x_3^2,
$$
$$
G_3 = 9.300961 + 0.0047026x_3x_5 + 0.00125447x_1x_3 + 0.0019085x_3x_4,
$$
with the bounds
$$
78 \le x_1 \le 102, \quad 33 \le x_2 \le 45, \quad 27 \le x_i \le 45 \;\; (i = 3, 4, 5).
$$
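The formulation above translates directly into code. The sketch below evaluates the objective and the six constraints and attaches a simple static penalty; the penalty form is one common constraint-handling choice shown for illustration, not necessarily the scheme used in the EPIMO experiments.

```python
def himmelblau(x):
    """Objective and constraint values for the constrained Himmelblau
    problem defined above. Returns (f, [g1..g6]); feasible iff all g <= 0."""
    x1, x2, x3, x4, x5 = x
    G1 = 85.334407 + 0.0056858 * x2 * x5 + 0.0006262 * x1 * x4 - 0.0022053 * x3 * x5
    G2 = 80.51249 + 0.0071317 * x2 * x5 + 0.0029955 * x1 * x2 + 0.0021813 * x3 ** 2
    G3 = 9.300961 + 0.0047026 * x3 * x5 + 0.00125447 * x1 * x3 + 0.0019085 * x3 * x4
    f = 5.3578547 * x3 ** 2 + 0.8356891 * x1 * x5 + 37.293239 * x1 - 40792.141
    g = [-G1, G1 - 92, 90 - G2, G2 - 110, 20 - G3, G3 - 25]
    return f, g

def penalized(x, rho=1e6):
    """Static-penalty fitness: objective plus rho * sum of squared violations
    (one common constraint-handling choice, shown for illustration)."""
    f, g = himmelblau(x)
    return f + rho * sum(max(0.0, gi) ** 2 for gi in g)
```

Evaluating the best-known solution reported in the literature, x ≈ (78, 33, 29.995, 45, 36.776), gives f ≈ −30665.5 with the constraints on G1 and G3 essentially active.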
Based on the data in Table 7 and Table 8, the convergence curves, and the box plots (Figure 7), a comprehensive performance analysis of the optimization algorithms is presented below.
All algorithms achieve near-optimal solutions for the Himmelblau function, with cost values close to −30,665, indicating that the problem’s global optimum is relatively accessible. However, stability differs significantly across algorithms, which is critical in engineering practice. The statistics in Table 8 show that EPIMO, WUTP, and KEO not only reach the best performance but also exhibit very low standard deviations (0.00–0.03), reflecting high reliability. In contrast, FATA, BKA, and DOA display large standard deviations (150–227), considerable performance fluctuations, and notably worse worst-case results.
Convergence curves further reveal temporal dynamics. EPIMO, WUTP, and KEO converge quickly and steadily, approaching the optimum within about 50–100 iterations with minimal oscillation. PSO and DE show moderate convergence speed but tend to stagnate in later stages, with slight curve oscillations. FATA, BKA, and DOA perform poorly: they either converge slowly, require more iterations, or stagnate prematurely at suboptimal points, accompanied by noticeable curve instability.
Box plots in Figure 7 provide a clear visual comparison of robustness. EPIMO, WUTP, and KEO show extremely narrow boxes—almost lines—indicating highly consistent results across multiple runs with few outliers. Conversely, algorithms such as DOA exhibit wide boxes with lower-bound outliers, implying a higher risk of poor solutions in practice. Algorithmic robustness is thus a key determinant of reliability in real-world applications.
Integrating these analyses, EPIMO, WUTP, and KEO stand out as the top performers, excelling in convergence speed, solution quality, and run-to-run stability. ALA and PIMO may serve as viable alternatives, while FATA, BKA, and DOA are less suitable for reliability-sensitive tasks due to their instability.
In summary, a sound evaluation should combine convergence behavior (process insight), box plots (result distribution), and statistical metrics (quantitative assessment). This multi-angle framework supports informed algorithm selection for complex engineering optimization problems.

4.4.2. Step-Cone Pulley Problem

The step-cone pulley design represents a classical constrained optimization problem that aims to minimize the weight of a four-stage stepped pulley assembly. As illustrated in Figure 8, the system comprises multiple pulley stages with distinct diameters ( l 1 , l 2 , l 3 , l 4 ) and a common width ω . The optimization is subject to 11 nonlinear constraints, which ensure the required power transmission of 0.75 horsepower per stage [35].
Given the strong nonlinear couplings among design variables and the presence of multiple constraints, this problem poses significant challenges in maintaining feasibility while converging to a globally optimal design. It therefore serves as a rigorous benchmark for evaluating the constraint-handling ability and convergence performance of modern metaheuristic optimizers such as EPIMO and related algorithms.
Minimize:
$$
f(\bar{x}) = \rho\,\omega\left[\,l_1^2\left\{1+\left(\frac{N_1}{N}\right)^{2}\right\} + l_2^2\left\{1+\left(\frac{N_2}{N}\right)^{2}\right\} + l_3^2\left\{1+\left(\frac{N_3}{N}\right)^{2}\right\} + l_4^2\left\{1+\left(\frac{N_4}{N}\right)^{2}\right\}\right]
$$
where $l_i$ denotes the length associated with each stage, $N_i$ the output rotational speeds, $N$ the input speed, and $\rho$ the material density. Subject to:
$$
h_1(\bar{x}) = C_1 - C_2 = 0, \quad h_2(\bar{x}) = C_1 - C_3 = 0, \quad h_3(\bar{x}) = C_1 - C_4 = 0,
$$
$$
g_i(\bar{x}) = 2 - R_i \le 0, \quad i = 1, \dots, 4,
$$
$$
g_{i+4}(\bar{x}) = 0.75 \times 745.6998 - P_i \le 0, \quad i = 1, \dots, 4,
$$
where:
$$
C_i = \frac{\pi l_i}{2}\left(1+\frac{N_i}{N}\right) + \frac{\left(\frac{N_i}{N}-1\right)^{2}}{4a}\,l_i^2 + 2a, \quad i = 1, \dots, 4,
$$
$$
R_i = \exp\left(\mu\left[\pi - 2\sin^{-1}\left\{\left(\frac{N_i}{N}-1\right)\frac{l_i}{2a}\right\}\right]\right), \quad i = 1, \dots, 4,
$$
$$
P_i = s\,t\,\omega\left(1-\frac{1}{R_i}\right)\frac{\pi l_i N_i}{60}, \quad i = 1, \dots, 4,
$$
With parameters:
$$
t = 8~\text{mm}, \quad s = 1.75~\text{MPa}, \quad \mu = 0.35, \quad \rho = 7200~\text{kg/m}^3, \quad a = 3~\text{mm}.
$$
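As a quick illustration of the weight objective above, the sketch below evaluates f for a candidate design. The speed values are an assumption: the excerpt does not list them, so the input speed N = 350 rpm and output speeds (750, 450, 250, 150) rpm commonly used for this problem in the literature are adopted here.

```python
RHO = 7200.0                          # material density, kg/m^3 (problem parameter)
N_IN = 350.0                          # assumed input shaft speed, rpm
N_OUT = [750.0, 450.0, 250.0, 150.0]  # assumed output speeds per stage, rpm

def pulley_weight(lengths, omega, rho=RHO):
    """Weight objective f = rho * omega * sum_i l_i^2 * (1 + (N_i / N)^2)."""
    return rho * omega * sum(
        l ** 2 * (1.0 + (n / N_IN) ** 2) for l, n in zip(lengths, N_OUT)
    )
```

The objective is linear in the common width ω and quadratic in each stage length, which is why the constraint set, rather than the objective itself, dominates the difficulty of this problem.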
Analysis of the optimization results for the stepped-cone pulley problem reveals significant performance disparities among different algorithms when addressing this type of strongly nonlinear constrained engineering problem. Integrating the variable optimization outcomes (Table 9) with the performance statistics (Table 10), the EPIMO and KEO algorithms demonstrate the most outstanding performance. Their obtained optimal objective function values (weight) are as low as 16.16 kg and 16.10 kg, respectively, and the corresponding design variables (pulley lengths l 1 to l 4 and width ω ) exhibit a reasonable stepwise increasing distribution, consistent with physical expectations. More importantly, the standard deviations (std) of these two algorithms are extremely small (0.30 and 0.64, respectively), and their mean (avg) and median values are both close to the optimum. This confirms that they not only achieve high optimization accuracy but also possess exceptional stability and reliability, enabling them to consistently converge to high-quality feasible solutions across repeated runs.
Figure 9 provides a clear visualization of EPIMO’s significant advantages in convergence speed and solution accuracy. The convergence plot shows that EPIMO rapidly descends to and stabilizes near the optimal weight, consistently maintaining the lowest curve position throughout the iterations, whereas algorithms such as FATA remain at substantially higher fitness levels (around 10^20) and converge markedly more slowly. Correspondingly, the box plot of fitness values reveals that EPIMO achieves the lowest median fitness along with the most compact distribution, further confirming its superior solution quality and robustness. Collectively, these visual results consistently indicate that EPIMO outperforms the comparative algorithms in both convergence performance and solution stability.
Thus, this case study clearly demonstrates that, for complex constrained engineering optimization problems, an algorithm’s constraint-handling capability and search robustness are at least as critical as its raw optimization capability. The comprehensive performance exhibited by EPIMO and KEO on this problem establishes them as reliable choices for solving similar mechanical design optimization tasks.

4.4.3. Hydrostatic Thrust Bearing Design Problem

The hydrostatic thrust bearing design optimization aims to minimize the total power loss while satisfying seven nonlinear constraints related to load capacity, pressure, temperature rise, and geometric feasibility. As shown in Figure 10, the design involves four key variables: recess radius R 0 , bearing step radius R, lubricant viscosity μ , and flow rate Q. The problem is recognized as a challenging benchmark due to the strong coupling among variables and the presence of multiple active constraints [36].
Objective Function: Minimize the total power loss:
$$
f(x) = \frac{Q P_0}{0.7} + E_f = 1.42857\,Q P_0 + 143.2776\,Q\,\Delta T
$$
Design variables:
$$
x = [R, R_0, \mu, Q]^{T}
$$
With bounds:
$$
1.000 \le R \le 16.000~(\text{in}), \quad 1.000 \le R_0 \le 16.000~(\text{in}),
$$
$$
1.0\times 10^{-6} \le \mu \le 16.0\times 10^{-6}~(\text{lb}\cdot\text{s/in}^2), \quad 1.000 \le Q \le 16.000~(\text{in}^3/\text{s})
$$
Constraints:
g1(x) = 101000 − W ≤ 0
g2(x) = P0 − 1000 ≤ 0
g3(x) = ΔT − 50 ≤ 0
g4(x) = 0.001 − h ≤ 0
g5(x) = R0 − R ≤ 0
g6(x) = 0.001 − (0.0307 / (386.4 P0)) · (Q / (2π R h)) ≤ 0
g7(x) = 5000 − W / (π (R^2 − R0^2)) ≤ 0
Intermediate Parameters:
W = (π P0 / 2) · (R^2 − R0^2) / ln(R/R0)
P0 = (6 μ Q / (π h^3)) · ln(R/R0)
E_f = 9336 Q γ C ΔT = 9336 × 0.0307 × 0.5 × Q ΔT = 143.2776 Q ΔT
ΔT = 2 (10^P − 560)
P = (10.04 − log10(log10(8.122 × 10^6 μ + 0.8))) / 3.55
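To make the formulation above concrete, the following sketch evaluates the objective and the seven constraints for a candidate design. It is an illustrative implementation, not the paper's code: the film thickness h is supplied as an input, since its defining energy-balance relation appears in [36] rather than in this section, and the constant 8.122 × 10^6 in the viscosity–temperature fit follows the standard formulation of this benchmark.

```python
import math

# Sketch of the hydrostatic thrust bearing model defined above.
# Units follow the problem statement (in, lb·s/in^2, in^3/s).
def temperature_rise(mu):
    # P = (10.04 - log10(log10(8.122e6*mu + 0.8))) / 3.55, dT = 2(10^P - 560)
    P = (10.04 - math.log10(math.log10(8.122e6 * mu + 0.8))) / 3.55
    return 2.0 * (10.0 ** P - 560.0)

def evaluate(R, R0, mu, Q, h):
    dT = temperature_rise(mu)
    P0 = 6.0 * mu * Q / (math.pi * h ** 3) * math.log(R / R0)   # inlet pressure
    W = math.pi * P0 / 2.0 * (R ** 2 - R0 ** 2) / math.log(R / R0)  # load capacity
    Ef = 143.2776 * Q * dT                    # friction loss, 9336 * Q * gamma * C * dT
    f = Q * P0 / 0.7 + Ef                     # total power loss (objective)
    g = [
        101000.0 - W,                         # g1: load capacity
        P0 - 1000.0,                          # g2: inlet pressure limit
        dT - 50.0,                            # g3: temperature rise limit
        0.001 - h,                            # g4: minimum film thickness
        R0 - R,                               # g5: geometric feasibility
        0.001 - 0.0307 / (386.4 * P0) * Q / (2.0 * math.pi * R * h),  # g6
        5000.0 - W / (math.pi * (R ** 2 - R0 ** 2)),                  # g7
    ]
    return f, g
```

Evaluating near the parameter region discussed in this section (R ≈ 5.96, R0 ≈ 5.39, μ ≈ 5.36 × 10^−6, Q ≈ 2.27) yields a temperature rise just at the 50 °F limit, matching the active constraint g3; the film thickness of 0.005 in used below is purely illustrative.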
Based on an integrated analysis of convergence curves, fitness distribution plots in Figure 11, performance statistics in Table 11, and variable optimization results in Table 12, this paper provides a comprehensive evaluation of the performance of different algorithms on the hydrostatic thrust bearing design optimization problem. The algorithms exhibit significant differences in capability when solving this strongly constrained, nonlinear engineering problem, which are systematically revealed across multiple dimensions: the dynamic convergence process, quantitative statistical metrics, and the quality of the final solutions.
High-performing algorithms, such as EPIMO and PSO, demonstrate rapid and stable characteristics during the convergence process, with their fitness values descending swiftly on a logarithmic scale and stabilizing at low levels. This dynamic behavior aligns closely with their statistical profiles: EPIMO achieves an objective function value (19,455.12) closest to the globally optimal solution documented in the literature, while its exceptionally low standard deviation (125.67) reflects outstanding robustness and result reproducibility. PSO also shows excellent performance, albeit with slightly inferior stability. Observing the fitness distribution plot, the boxes for both algorithms are compact and positioned at low levels, visually confirming the concentration and superiority of their output. In terms of solution space distribution, the design variable values converged upon by these algorithms are highly concentrated within the parameter region widely recognized in the literature as optimal ( R 5.96 , R 0 5.39 , μ 5.36 × 10 6 , Q 2.27 ). Moreover, they strictly satisfy all engineering constraints, thereby ensuring the feasibility and practical value of the results.
In contrast, certain algorithms such as PIMO, ALA, and FATA exhibit various degrees of limitation. Their convergence trajectories show considerable fluctuations, descend slowly only in later iterations, or even diverge completely in some runs. Statistical data indicate that the worst values for these algorithms can reach magnitudes of 10 7 to 10 8 , differing from their respective min values by several orders of magnitude, accompanied by large standard deviations. The long whiskers or discrete outliers corresponding to these algorithms in the fitness distribution plot provide an intuitive representation of this extreme volatility and instability. A deeper analysis of their variable optimization results reveals that the solutions generated often deviate from the optimal parameter space. For instance, the R and Q values for the DOA algorithm are significantly higher, leading to severe violations of key constraints such as inlet pressure or load capacity, rendering its objective function value practically meaningless from an engineering perspective.
In summary, the convergence behavior reveals the search efficiency and trajectory stability of an algorithm, the performance statistics quantify the reliability boundaries and anomaly risks of its results, and the variable optimization outcomes provide the final judgment based on solution quality and feasibility. The fitness distribution plot, serving as a visual extension of the statistical data, makes performance comparisons more intuitive. All evidence converges to a consistent conclusion: when addressing complex constrained optimization problems like the hydrostatic thrust bearing design, a successful algorithm must deeply integrate powerful global exploration capabilities, refined local exploitation strategies, and efficient constraint-handling mechanisms. The comprehensive superiority demonstrated by the EPIMO algorithm in this study exemplifies this balance and integrative capability, providing strong support for its application in the field of complex engineering optimization.

4.4.4. Three-Bar Truss Design Problem

The three-bar truss design problem is a classic benchmark in structural optimization, aiming to minimize the volume of a truss structure under specified loading conditions. A schematic of the three-bar truss design problem is shown in Figure 12.
The problem involves two design variables, namely the cross-sectional areas A 1 and A 2 of the bars (denoted as x 1 and x 2 ) [37]. Its mathematical model is given as follows:
Design Variables:
x = [x1, x2] = [A1, A2], 0 ≤ x1, x2 ≤ 1
Objective Function (minimize volume):
min f(x) = (2√2 x1 + x2) · l
Constraints (stress constraints):
g1(x) = ((√2 x1 + x2) / (√2 x1^2 + 2 x1 x2)) P − σ ≤ 0,
g2(x) = (x2 / (√2 x1^2 + 2 x1 x2)) P − σ ≤ 0,
g3(x) = (1 / (√2 x2 + x1)) P − σ ≤ 0,
where the constants are: bar length l = 100 cm , external load P = 2 kN / cm 2 , and allowable stress σ = 2 kN / cm 2 .
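As a concrete check of the model above, the following sketch evaluates the truss volume and the three stress constraints with the stated constants (l = 100 cm, P = σ = 2 kN/cm^2).

```python
import math

# Three-bar truss model as formulated above: volume objective plus
# three stress constraints (all must be <= 0 for feasibility).
def truss(x1, x2, l=100.0, P=2.0, sigma=2.0):
    f = (2.0 * math.sqrt(2.0) * x1 + x2) * l            # structural volume
    d = math.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2        # shared denominator
    g1 = (math.sqrt(2.0) * x1 + x2) / d * P - sigma     # stress in bar 1
    g2 = x2 / d * P - sigma                             # stress in bar 2
    g3 = 1.0 / (math.sqrt(2.0) * x2 + x1) * P - sigma   # stress in bar 3
    return f, (g1, g2, g3)
```

At the frequently cited near-optimal point x1 ≈ 0.7887, x2 ≈ 0.4082, the volume evaluates to about 263.9 with all three constraints satisfied, consistent with the optimum region referenced in the convergence analysis below.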
Due to its highly nonlinear constraints, narrow feasible region, and strong coupling among variables, this problem serves as a standard test case for evaluating the robustness, precision, and constraint-handling capability of optimization algorithms. It is widely used in lightweight structural design in civil and mechanical engineering.
From the convergence curve in Figure 13, it is observed that most algorithms rapidly converge to a narrow region near the optimal value (approximately 263.9) within the early iterations (around 50 iterations) and maintain stability thereafter. This behavior strongly aligns with the extremely low standard deviation (std) values reported in the performance statistics table—particularly for algorithms such as EPIMO (6.80 × 10^−8) and ALA (1.28 × 10^−5). Their curves appear almost horizontal, visually demonstrating exceptional convergence stability and result reproducibility. In contrast, the convergence curves of algorithms like FATA and DOA exhibit slight fluctuations or slower convergence in later stages. This corresponds directly to their significantly higher standard deviations (1.90 × 10^−1 and 6.25 × 10^−1, respectively) and relatively larger worst-case values (264.62 and 265.75, respectively) in the statistical table, indicating that these algorithms fail to consistently lock onto the optimal solution in certain runs, reflecting a degree of search uncertainty.
The fitness value distribution plot in Figure 13 offers an intuitive statistical complement to the above analysis. The box plots of high-performing algorithms such as EPIMO, PIMO, and ALA show extremely compact shapes with the lowest box positions. This visually confirms the concentration and reliability of their outputs, consistent with the highly concentrated and near-optimal values of their minimum (min) and median statistics. In contrast, the box plots of algorithms like FATA and DOA display longer whiskers and noticeable outliers, corresponding to their larger standard deviations and worst-case values and reflecting dispersion and instability in their result distributions.
In summary, the optimal solution space for the three-bar truss design problem is relatively well-defined and accessible, with most algorithms capable of locating high-quality solutions as shown in Table 13 and Table 14. However, the key differences among algorithms lie in convergence stability and result robustness. The EPIMO algorithm demonstrates near-perfect performance in this problem—its convergence curve is smooth, statistical indicators are highly consistent, and the distribution is concentrated, reflecting an excellent balance between exploration and exploitation as well as numerical stability. This conclusion reaffirms that even for relatively straightforward engineering optimization problems, an effective algorithm must ensure not only optimization accuracy but also reliability in the solution process and consistency in results, which are critical for practical engineering applications.

5. Conclusions and Prospects

This study introduces the Enhanced Projection-Iterative-Methods-based Optimizer (EPIMO), a significantly improved version of the PIMO algorithm. By systematically integrating three core strategies—adaptive stochastic decay, structured mirror opposition-based learning, and phase-sensitive Lévy flight parameter adjustment—EPIMO effectively addresses the primary limitations of its predecessor. The adaptive decay strategy breaks the deterministic nature of parameter evolution, enhancing robustness. The opposition-based learning mechanism actively preserves population diversity, mitigating premature convergence. The dynamic adjustment of the Lévy flight parameter β allows for intelligent, phase-dependent balancing of exploration and exploitation.
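As an illustration of the third strategy, a Mantegna-style Lévy step with an iteration-dependent stability index β can be sketched as follows. The linear schedule and its endpoints (1.8 down to 1.2) are illustrative assumptions, not the paper's exact update rule; the point of the sketch is that smaller β produces heavier-tailed step distributions, i.e. more frequent long exploratory jumps.

```python
import math
import random

# Mantegna's algorithm for generating a Levy-stable step of index beta
# (0 < beta <= 2); a standard building block of Levy-flight search.
def levy_step(beta):
    num = math.gamma(1.0 + beta) * math.sin(math.pi * beta / 2.0)
    den = math.gamma((1.0 + beta) / 2.0) * beta * 2.0 ** ((beta - 1.0) / 2.0)
    sigma_u = (num / den) ** (1.0 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1.0 / beta)

# Illustrative phase-dependent schedule: beta varies linearly with the
# iteration counter t, shifting the step-length distribution between a
# lighter-tailed regime (local refinement) and a heavier-tailed regime
# (long jumps). The direction and endpoints are assumptions.
def beta_schedule(t, t_max, beta_start=1.8, beta_end=1.2):
    return beta_start + (beta_end - beta_start) * t / t_max
```

A search update would then scale a candidate's displacement by levy_step(beta_schedule(t, t_max)), so the same movement rule behaves differently in the early and late phases of a run.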
Rigorous validation was conducted across multiple dimensions. Systematic experiments on the challenging CEC 2017 and CEC 2022 benchmark suites demonstrate that EPIMO consistently outperforms a wide range of contemporary and classical metaheuristic algorithms in terms of solution accuracy, convergence speed, and statistical robustness. More importantly, its practical efficacy was confirmed through successful applications to four classical, nonlinear, and tightly constrained engineering optimization problems: the Himmelblau function, step-cone pulley design, hydrostatic thrust bearing design, and three-bar truss design. In these real-world scenarios, EPIMO not only achieved solutions closest to known global optima but also exhibited remarkable stability and reliability—key attributes for engineering deployment.
The successful development and validation of EPIMO provide valuable insights for the metaheuristic research community. It demonstrates that carefully designed, synergistic enhancement strategies can substantially elevate algorithm performance without compromising conceptual elegance. The paradigm of combining mathematical projection theory with adaptive, diversity-aware mechanisms proves particularly effective for navigating complex, constrained search spaces. It should be noted that EPIMO’s performance may degrade in scenarios involving extremely high-dimensional discrete variables, severely non-stationary landscapes, or problems with disconnected feasible regions, suggesting opportunities for future hybridization and methodological extension.
Future Work: Several promising directions for future research emerge from this work. Firstly, the component-wise contribution of each improvement strategy warrants deeper theoretical analysis to guide further refinements. Secondly, extending EPIMO’s framework to multi-objective, dynamic, and large-scale optimization problems represents a natural and challenging next step. Thirdly, exploring its integration with machine learning techniques for online parameter adaptation or surrogate model assistance could unlock new levels of efficiency. Finally, applying EPIMO to more complex, real-world engineering systems—such as aerospace structural design, energy system scheduling, or biomedical device optimization—will be the ultimate test of its versatility and impact. The EPIMO algorithm, with its demonstrated balance of innovation and practicality, provides a solid foundation for these future explorations.

Author Contributions

Methodology and overall research plan, X.Z.; investigation, H.P. and H.C.; data and figure curation, Y.L., S.L. and W.P.; original draft writing preparation, X.Z. and W.P.; proofreading and editing, X.Z. and W.P.; revision and funding acquisition, X.Z., W.P. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Anhui Provincial Department of Education (Grant No. 2024AH051994), the High-level Talent Start-up Project of West Anhui University (Grant Nos. WGKQ2024011 and WGKQ2023003), and Wanxi University's horizontal research project "Research on a Path Planning System for Intelligent Unmanned Aerial Vehicles" (Grant Nos. 0045025103 and 0045025141).

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon request. There are no restrictions on data availability.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Bian, K.; Priyadarshi, R. Machine learning optimization techniques: A survey, classification, challenges, and future research issues. Arch. Comput. Methods Eng. 2024, 31, 4209–4233. [Google Scholar] [CrossRef]
  2. Cheng, G.H.; Wang, G.G.; Hwang, Y.M. Multi-objective optimization for high-dimensional expensively constrained black-box problems. J. Mech. Des. 2021, 143, 111704. [Google Scholar] [CrossRef]
  3. Gogna, A.; Tayal, A. Metaheuristics: Review and application. J. Exp. Theor. Artif. Intell. 2013, 25, 503–526. [Google Scholar] [CrossRef]
  4. Vinod Chandra, S.S.; Anand, H.S. Nature inspired meta heuristic algorithms for optimization problems. Computing 2022, 104, 251–269. [Google Scholar]
  5. Tian, Y.; Si, L.; Zhang, X.; Cheng, R.; He, C.; Tan, K.C.; Jin, Y. Evolutionary large-scale multi-objective optimization: A survey. ACM Comput. Surv. 2021, 54, 174. [Google Scholar] [CrossRef]
  6. Blum, C.; Roli, A. Metaheuristics in combinatorial optimization: Overview and conceptual comparison. ACM Comput. Surv. 2003, 35, 268–308. [Google Scholar] [CrossRef]
  7. Chang, Y.C.; Yeh, L.J.; Chiu, M.C.; Lai, G.J. Shape optimization on constrained single-layer sound absorber by using GA method and mathematical gradient methods. J. Sound Vib. 2005, 286, 941–961. [Google Scholar] [CrossRef]
  8. Meng, Z.; Li, G.; Wang, X.; Sait, S.; Yıldız, A. A comparative study of metaheuristic algorithms for reliability-based design optimization problems. Arch. Comput. Methods Eng. 2020, 28, 1853–1869. [Google Scholar] [CrossRef]
  9. Yang, L.; Shami, A. On hyperparameter optimization of machine learning algorithms: Theory and practice. Neurocomputing 2020, 415, 295–316. [Google Scholar] [CrossRef]
  10. Sun, J.; Zhang, R.; Wang, M.; Zhang, J.; Qiu, S.; Tian, W.; Su, G.H. Multi-objective optimization of helical coil steam generator in high temperature gas reactors with genetic algorithm and response surface method. Energy 2022, 259, 124976. [Google Scholar] [CrossRef]
  11. Nanda, S.J.; Panda, G. A survey on nature inspired metaheuristic algorithms for partitional clustering. Swarm Evol. Comput. 2014, 16, 1–18. [Google Scholar] [CrossRef]
  12. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  13. Bilal; Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479. [Google Scholar] [CrossRef]
  14. Gad, A.G. Particle swarm optimization algorithm and its applications: A systematic review. Arch. Comput. Methods Eng. 2022, 29, 2531–2561. [Google Scholar] [CrossRef]
  15. Karaboga, D. Artificial bee colony algorithm. Scholarpedia 2010, 5, 6915. [Google Scholar] [CrossRef]
  16. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  17. Yang, X.S.; He, X. Firefly algorithm: Recent advances and applications. Int. J. Swarm Intell. 2013, 1, 36–50. [Google Scholar] [CrossRef]
  18. Pourpanah, F.; Wang, R.; Lim, C.P.; Wang, X.Z.; Yazdani, D. A review of artificial fish swarm algorithms: Recent advances and applications. Artif. Intell. Rev. 2023, 56, 1867–1903. [Google Scholar] [CrossRef]
  19. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  20. Coelho, L.d.S. Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems. Expert Syst. Appl. 2010, 37, 1676–1683. [Google Scholar] [CrossRef]
  21. Abdel-Basset, M.; Mohamed, R.; Sallam, K.M.; Chakrabortty, R.K. Light spectrum optimizer: A novel physics-inspired metaheuristic optimization algorithm. Mathematics 2022, 10, 3466. [Google Scholar] [CrossRef]
  22. Gharehchopogh, F.S.; Maleki, I.; Dizaji, Z.A. Chaotic vortex search algorithm: Metaheuristic algorithm for feature selection. Evol. Intell. 2022, 15, 1777–1808. [Google Scholar] [CrossRef]
  23. Bouaouda, A.; Sayouti, Y. Hybrid meta-heuristic algorithms for optimal sizing of hybrid renewable energy system: A review of the state-of-the-art. Arch. Comput. Methods Eng. 2022, 29, 4049–4083. [Google Scholar] [CrossRef] [PubMed]
  24. Guo, E.; Gao, Y.; Hu, C.; Zhang, J. A hybrid PSO-DE intelligent algorithm for solving constrained optimization problems based on feasibility rules. Mathematics 2023, 11, 522. [Google Scholar] [CrossRef]
  25. Luo, J.; Huang, M.; Xiang, C.; Lei, Y. Bayesian damage identification based on autoregressive model and MH-PSO hybrid MCMC sampling method. J. Civ. Struct. Health Monit. 2022, 12, 361–390. [Google Scholar] [CrossRef]
  26. Cui, J.; Wu, L.; Huang, X.; Xu, D.; Liu, C.; Xiao, W. Multi-strategy adaptable ant colony optimization algorithm and its application in robot path planning. Knowl.-Based Syst. 2024, 288, 20. [Google Scholar] [CrossRef]
  27. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  28. Yu, D.; Ji, Y.; Xia, Y. Projection-iterative-methods-based optimizer: A novel metaheuristic algorithm for continuous optimization problems and feature selection. Knowl.-Based Syst. 2025, 326, 113978. [Google Scholar] [CrossRef]
  29. Chen, D.; Xian, Q.; Yao, C.; Deng, R.; Yuan, T. Dual-Domain Impulse Complexity Index-Guided Projection Iterative-Methods-Based Optimizer-Feature Mode Decomposition (DICI-Guided PIMO-FMD): A Robust Approach for Bearing Fault Diagnosis Under Strong Noise Conditions. Sensors 2025, 25, 6174. [Google Scholar] [CrossRef]
  30. Lang, Y.; Gao, Y. Dream Optimization Algorithm (DOA): A novel metaheuristic optimization algorithm inspired by human dreams and its applications to real-world engineering problems. Comput. Methods Appl. Mech. Eng. 2025, 436, 117718. [Google Scholar] [CrossRef]
  31. Baş, E. BinDMO: A new binary dwarf mongoose optimization algorithm on based Z-shaped, U-shaped, and taper-shaped transfer functions for CEC-2017 benchmarks. Neural Comput. Appl. 2024, 36, 6903–6935. [Google Scholar] [CrossRef]
  32. Salgotra, R.; Sharma, P.; Kundu, K.; Raju, S.; Gandomi, A.H. Enhancing differential evolution algorithm for CEC 2014, CEC 2017, CEC 2021, and CEC 2022 test suites. Neural Comput. Appl. 2025, 37, 27593–27630. [Google Scholar] [CrossRef]
  33. Himmelblau, D.M. Applied Nonlinear Programming; McGraw-Hill: New York, NY, USA, 2018. [Google Scholar]
  34. He, F.; Fu, C.; He, Y.; Huo, S.; Tang, J.; Long, X. Improved dwarf mongoose optimization algorithm based on hybrid strategy for global optimization and engineering problems. J. Supercomput. 2025, 81, 483. [Google Scholar] [CrossRef]
  35. Liu, J.; Zhao, J.; Li, Y.; Zhou, H. HSMAOA: An enhanced arithmetic optimization algorithm with an adaptive hierarchical structure for its solution analysis and application in optimization problems. Thin-Walled Struct. 2025, 206, 112631. [Google Scholar] [CrossRef]
  36. Sahin, I. A comparative performance investigation of swarm optimisers on the design of hydrostatic thrust bearing. Sci. Program. 2020, 2020, 8856770. [Google Scholar] [CrossRef]
  37. Che, Y.; He, D. An enhanced seagull optimization algorithm for solving engineering optimization problems. Appl. Intell. 2022, 52, 13043–13081. [Google Scholar] [CrossRef]
Figure 1. Convergence analysis of partial functions on CEC 2017.
Figure 2. Box plot of partial functions on CEC 2017.
Figure 3. Ranking charts of optimization results on the CEC2017 benchmark. (a) The radar chart. (b) The average rank chart.
Figure 4. Convergence analysis of partial functions on CEC2022.
Figure 5. Box plots of partial functions on CEC2022.
Figure 6. Ranking charts of optimization results on the CEC2022 benchmark. (a) The radar chart. (b) The average rank chart.
Figure 7. The average convergence curve and box plot of the Himmelblau function optimization.
Figure 8. Step-cone pulley design mechanical structure diagram.
Figure 9. The average convergence curve and box plot of step-cone pulley design.
Figure 10. Schematic of a hydrostatic thrust bearing.
Figure 11. The average convergence curve and box plot of the hydrostatic thrust bearing design problem.
Figure 12. Schematic of a three-bar truss design problem.
Figure 13. The average convergence curve and box plot of the three-bar truss design problem.
Table 1. Performance comparison of optimization algorithms on CEC 2017 benchmark functions, part 1.

Func. | Type | EPIMO | PIMO | ALA | FATA | BKA | PSO | DE | WUTP | KEO | DOA
F1 | min | 4.40 × 10^2 | 2.14 × 10^4 | 2.64 × 10^8 | 3.39 × 10^10 | 2.10 × 10^10 | 1.45 × 10^9 | 2.16 × 10^8 | 1.14 × 10^8 | 1.22 × 10^7 | 2.64 × 10^6
F1 | std | 4.79 × 10^3 | 2.52 × 10^4 | 4.12 × 10^8 | 4.38 × 10^9 | 1.85 × 10^10 | 5.03 × 10^9 | 3.83 × 10^8 | 4.81 × 10^7 | 5.25 × 10^7 | 1.05 × 10^6
F1 | avg | 3.22 × 10^3 | 6.42 × 10^4 | 6.38 × 10^8 | 4.21 × 10^10 | 3.84 × 10^10 | 8.34 × 10^9 | 6.29 × 10^8 | 1.80 × 10^8 | 8.83 × 10^7 | 4.02 × 10^6
F3 | min | 3.37 × 10^4 | 7.62 × 10^4 | 7.19 × 10^4 | 1.21 × 10^5 | 7.43 × 10^4 | 1.44 × 10^5 | 3.08 × 10^5 | 3.26 × 10^5 | 7.93 × 10^4 | 1.90 × 10^5
F3 | std | 1.05 × 10^4 | 2.29 × 10^4 | 2.22 × 10^4 | 1.41 × 10^4 | 2.14 × 10^4 | 8.48 × 10^4 | 3.50 × 10^4 | 4.80 × 10^4 | 3.55 × 10^4 | 4.80 × 10^4
F3 | avg | 5.13 × 10^4 | 1.10 × 10^5 | 1.10 × 10^5 | 1.46 × 10^5 | 1.01 × 10^5 | 2.50 × 10^5 | 3.74 × 10^5 | 4.15 × 10^5 | 1.43 × 10^5 | 2.75 × 10^5
F4 | min | 4.25 × 10^2 | 4.74 × 10^2 | 6.15 × 10^2 | 4.08 × 10^3 | 2.46 × 10^3 | 8.76 × 10^2 | 7.76 × 10^2 | 6.40 × 10^2 | 5.92 × 10^2 | 5.60 × 10^2
F4 | std | 4.01 × 10^1 | 4.12 × 10^1 | 6.30 × 10^1 | 1.28 × 10^3 | 7.45 × 10^3 | 3.39 × 10^2 | 5.09 × 10^1 | 6.47 × 10^1 | 7.73 × 10^1 | 2.22 × 10^1
F4 | avg | 4.67 × 10^2 | 5.59 × 10^2 | 7.38 × 10^2 | 5.33 × 10^3 | 7.60 × 10^3 | 1.34 × 10^3 | 8.62 × 10^2 | 7.25 × 10^2 | 6.90 × 10^2 | 5.94 × 10^2
F5 | min | 6.54 × 10^2 | 6.54 × 10^2 | 7.03 × 10^2 | 1.06 × 10^3 | 8.30 × 10^2 | 7.60 × 10^2 | 9.12 × 10^2 | 9.07 × 10^2 | 7.32 × 10^2 | 5.85 × 10^2
F5 | std | 3.20 × 10^1 | 6.02 × 10^1 | 6.98 × 10^1 | 2.70 × 10^1 | 4.83 × 10^1 | 5.50 × 10^1 | 2.06 × 10^1 | 2.58 × 10^1 | 3.66 × 10^1 | 3.74 × 10^1
F5 | avg | 7.15 × 10^2 | 7.53 × 10^2 | 8.02 × 10^2 | 1.10 × 10^3 | 9.06 × 10^2 | 8.39 × 10^2 | 9.56 × 10^2 | 9.38 × 10^2 | 7.93 × 10^2 | 6.54 × 10^2
F6 | min | 6.00 × 10^2 | 6.00 × 10^2 | 6.17 × 10^2 | 6.87 × 10^2 | 6.58 × 10^2 | 6.42 × 10^2 | 6.06 × 10^2 | 6.04 × 10^2 | 6.38 × 10^2 | 6.01 × 10^2
F6 | std | 7.96 × 10^−1 | 5.31 | 6.38 | 5.14 | 7.13 | 7.54 | 1.02 | 1.46 | 5.74 | 4.25 × 10^−1
F6 | avg | 6.01 × 10^2 | 6.09 × 10^2 | 6.28 × 10^2 | 6.94 × 10^2 | 6.69 × 10^2 | 6.53 × 10^2 | 6.08 × 10^2 | 6.06 × 10^2 | 6.47 × 10^2 | 6.02 × 10^2
F7 | min | 8.96 × 10^2 | 9.00 × 10^2 | 1.09 × 10^3 | 1.71 × 10^3 | 1.57 × 10^3 | 1.30 × 10^3 | 1.21 × 10^3 | 1.17 × 10^3 | 1.08 × 10^3 | 9.28 × 10^2
F7 | std | 2.06 × 10^1 | 2.88 × 10^1 | 7.24 × 10^1 | 5.38 × 10^1 | 1.09 × 10^2 | 9.58 × 10^1 | 2.42 × 10^1 | 2.48 × 10^1 | 1.52 × 10^2 | 1.60 × 10^1
F7 | avg | 9.31 × 10^2 | 9.62 × 10^2 | 1.19 × 10^3 | 1.79 × 10^3 | 1.74 × 10^3 | 1.38 × 10^3 | 1.25 × 10^3 | 1.21 × 10^3 | 1.40 × 10^3 | 9.52 × 10^2
F8 | min | 9.94 × 10^2 | 1.00 × 10^3 | 9.94 × 10^2 | 1.38 × 10^3 | 1.21 × 10^3 | 1.08 × 10^3 | 1.24 × 10^3 | 1.19 × 10^3 | 1.03 × 10^3 | 9.35 × 10^2
F8 | std | 1.49 × 10^1 | 3.90 × 10^1 | 5.21 × 10^1 | 1.94 × 10^1 | 1.22 × 10^2 | 3.31 × 10^1 | 1.22 × 10^1 | 2.05 × 10^1 | 4.80 × 10^1 | 1.57 × 10^1
F8 | avg | 1.02 × 10^3 | 1.06 × 10^3 | 1.09 × 10^3 | 1.42 × 10^3 | 1.32 × 10^3 | 1.13 × 10^3 | 1.26 × 10^3 | 1.23 × 10^3 | 1.10 × 10^3 | 9.56 × 10^2
F9 | min | 4.25 × 10^3 | 4.96 × 10^3 | 6.38 × 10^3 | 3.03 × 10^4 | 1.37 × 10^4 | 7.67 × 10^3 | 1.20 × 10^4 | 1.37 × 10^3 | 8.44 × 10^3 | 4.00 × 10^3
F9 | std | 1.35 × 10^3 | 1.63 × 10^3 | 2.64 × 10^3 | 2.66 × 10^3 | 1.97 × 10^3 | 3.42 × 10^3 | 1.87 × 10^3 | 7.76 × 10^2 | 1.64 × 10^3 | 2.37 × 10^3
F9 | avg | 5.83 × 10^3 | 7.46 × 10^3 | 9.67 × 10^3 | 3.41 × 10^4 | 1.62 × 10^4 | 1.16 × 10^4 | 1.48 × 10^4 | 2.32 × 10^3 | 1.06 × 10^4 | 6.74 × 10^3
F10 | min | 4.85 × 10^3 | 4.69 × 10^3 | 8.92 × 10^3 | 1.43 × 10^4 | 8.84 × 10^3 | 6.35 × 10^3 | 1.26 × 10^4 | 1.47 × 10^4 | 7.00 × 10^3 | 4.87 × 10^3
F10 | std | 4.81 × 10^2 | 4.57 × 10^2 | 1.67 × 10^3 | 4.17 × 10^2 | 1.88 × 10^3 | 1.02 × 10^3 | 6.51 × 10^2 | 4.91 × 10^2 | 7.85 × 10^2 | 8.46 × 10^2
F10 | avg | 5.78 × 10^3 | 5.56 × 10^3 | 1.18 × 10^4 | 1.49 × 10^4 | 1.00 × 10^4 | 8.04 × 10^3 | 1.40 × 10^4 | 1.56 × 10^4 | 8.52 × 10^3 | 6.30 × 10^3
Table 2. Performance comparison of optimization algorithms on CEC 2017 benchmark functions, part 2.

Func. | Type | EPIMO | PIMO | ALA | FATA | BKA | PSO | DE | WUTP | KEO | DOA
F11 | min | 1.22 × 10^3 | 1.27 × 10^3 | 1.56 × 10^3 | 6.74 × 10^3 | 2.43 × 10^3 | 1.43 × 10^3 | 4.98 × 10^3 | 7.65 × 10^3 | 1.51 × 10^3 | 1.31 × 10^3
F11 | std | 3.57 × 10^1 | 5.29 × 10^1 | 4.52 × 10^2 | 2.49 × 10^3 | 5.04 × 10^3 | 3.03 × 10^2 | 2.59 × 10^3 | 7.13 × 10^3 | 1.23 × 10^2 | 1.52 × 10^3
F11 | avg | 1.27 × 10^3 | 1.35 × 10^3 | 1.97 × 10^3 | 1.14 × 10^4 | 7.22 × 10^3 | 1.76 × 10^3 | 8.74 × 10^3 | 1.39 × 10^4 | 1.71 × 10^3 | 2.82 × 10^3
F12 | min | 3.09 × 10^4 | 2.85 × 10^5 | 1.80 × 10^7 | 9.51 × 10^9 | 3.79 × 10^8 | 3.48 × 10^7 | 5.83 × 10^8 | 2.27 × 10^7 | 2.06 × 10^7 | 2.61 × 10^6
F12 | std | 3.73 × 10^4 | 1.95 × 10^6 | 3.07 × 10^7 | 3.20 × 10^9 | 2.19 × 10^10 | 7.84 × 10^8 | 1.94 × 10^8 | 1.06 × 10^8 | 2.78 × 10^7 | 4.49 × 10^6
F12 | avg | 6.42 × 10^4 | 2.84 × 10^6 | 4.75 × 10^7 | 1.43 × 10^10 | 1.51 × 10^10 | 6.99 × 10^8 | 8.90 × 10^8 | 1.15 × 10^8 | 4.58 × 10^7 | 1.08 × 10^7
F13 | min | 1.47 × 10^3 | 1.72 × 10^3 | 2.77 × 10^4 | 1.78 × 10^9 | 7.95 × 10^5 | 4.48 × 10^4 | 6.56 × 10^6 | 2.28 × 10^4 | 3.30 × 10^4 | 7.08 × 10^3
F13 | std | 7.47 × 10^3 | 2.46 × 10^3 | 2.70 × 10^4 | 7.61 × 10^8 | 7.74 × 10^8 | 1.08 × 10^9 | 2.27 × 10^7 | 1.53 × 10^4 | 7.17 × 10^4 | 3.67 × 10^3
F13 | avg | 5.26 × 10^3 | 4.63 × 10^3 | 7.23 × 10^4 | 3.08 × 10^9 | 2.88 × 10^8 | 5.81 × 10^8 | 2.85 × 10^7 | 5.06 × 10^4 | 1.07 × 10^5 | 1.35 × 10^4
F14 | min | 1.57 × 10^3 | 2.89 × 10^3 | 8.36 × 10^3 | 8.85 × 10^5 | 1.70 × 10^4 | 4.80 × 10^4 | 7.14 × 10^5 | 5.53 × 10^5 | 3.16 × 10^4 | 1.23 × 10^5
F14 | std | 6.60 × 10^1 | 1.19 × 10^4 | 1.87 × 10^4 | 1.84 × 10^6 | 8.52 × 10^5 | 2.14 × 10^5 | 1.78 × 10^6 | 1.32 × 10^6 | 1.63 × 10^5 | 1.06 × 10^6
F14 | avg | 1.67 × 10^3 | 1.61 × 10^4 | 3.58 × 10^4 | 3.63 × 10^6 | 4.50 × 10^5 | 3.16 × 10^5 | 4.00 × 10^6 | 1.94 × 10^6 | 1.96 × 10^5 | 1.07 × 10^6
F15 | min | 1.57 × 10^3 | 1.59 × 10^3 | 4.37 × 10^3 | 3.77 × 10^7 | 5.08 × 10^4 | 7.55 × 10^3 | 1.99 × 10^4 | 4.95 × 10^3 | 9.44 × 10^3 | 2.09 × 10^3
F15 | std | 6.43 × 10^1 | 8.45 × 10^2 | 1.22 × 10^4 | 6.10 × 10^7 | 1.53 × 10^9 | 1.73 × 10^4 | 1.31 × 10^6 | 1.41 × 10^4 | 2.38 × 10^4 | 7.48 × 10^3
F15 | avg | 1.64 × 10^3 | 2.36 × 10^3 | 2.70 × 10^4 | 1.11 × 10^8 | 4.85 × 10^8 | 2.70 × 10^4 | 1.54 × 10^6 | 3.43 × 10^4 | 3.12 × 10^4 | 8.55 × 10^3
F16 | min | 2.30 × 10^3 | 2.54 × 10^3 | 3.14 × 10^3 | 3.91 × 10^3 | 3.17 × 10^3 | 2.91 × 10^3 | 4.51 × 10^3 | 5.09 × 10^3 | 2.91 × 10^3 | 2.71 × 10^3
F16 | std | 3.24 × 10^2 | 2.79 × 10^2 | 2.84 × 10^2 | 7.66 × 10^2 | 5.08 × 10^2 | 5.33 × 10^2 | 2.34 × 10^2 | 9.36 × 10^1 | 4.00 × 10^2 | 2.21 × 10^2
F16 | avg | 2.99 × 10^3 | 3.07 × 10^3 | 3.60 × 10^3 | 5.52 × 10^3 | 4.16 × 10^3 | 3.63 × 10^3 | 5.06 × 10^3 | 5.28 × 10^3 | 3.43 × 10^3 | 3.08 × 10^3
F17 | min | 2.40 × 10^3 | 2.56 × 10^3 | 2.81 × 10^3 | 3.79 × 10^3 | 3.02 × 10^3 | 2.72 × 10^3 | 3.61 × 10^3 | 3.87 × 10^3 | 2.68 × 10^3 | 2.37 × 10^3
F17 | std | 2.22 × 10^2 | 1.36 × 10^2 | 3.67 × 10^2 | 6.23 × 10^2 | 1.16 × 10^3 | 5.56 × 10^2 | 1.59 × 10^2 | 1.62 × 10^2 | 3.74 × 10^2 | 2.83 × 10^2
F17 | avg | 2.75 × 10^3 | 2.80 × 10^3 | 3.35 × 10^3 | 4.66 × 10^3 | 4.03 × 10^3 | 3.52 × 10^3 | 3.86 × 10^3 | 4.21 × 10^3 | 3.34 × 10^3 | 2.72 × 10^3
F18 | min | 2.50 × 10^3 | 7.42 × 10^4 | 7.92 × 10^4 | 7.71 × 10^6 | 2.95 × 10^5 | 2.05 × 10^5 | 6.15 × 10^6 | 7.53 × 10^6 | 1.70 × 10^5 | 4.33 × 10^5
F18 | std | 1.17 × 10^4 | 1.04 × 10^5 | 1.98 × 10^6 | 1.58 × 10^7 | 2.24 × 10^7 | 1.16 × 10^6 | 7.76 × 10^6 | 1.50 × 10^7 | 1.24 × 10^6 | 1.14 × 10^6
F18 | avg | 1.01 × 10^4 | 1.64 × 10^5 | 1.28 × 10^6 | 2.41 × 10^7 | 7.93 × 10^6 | 1.37 × 10^6 | 2.04 × 10^7 | 2.88 × 10^7 | 2.06 × 10^6 | 1.60 × 10^6
F19 | min | 1.93 × 10^3 | 1.95 × 10^3 | 2.73 × 10^3 | 2.70 × 10^7 | 5.28 × 10^5 | 1.76 × 10^4 | 5.27 × 10^5 | 3.56 × 10^3 | 1.93 × 10^4 | 2.19 × 10^3
F19 | std | 3.33 × 10^1 | 2.69 × 10^3 | 1.44 × 10^4 | 1.77 × 10^7 | 4.58 × 10^8 | 6.16 × 10^4 | 6.71 × 10^5 | 8.65 × 10^3 | 1.94 × 10^4 | 5.69 × 10^3
F19 | avg | 1.99 × 10^3 | 3.94 × 10^3 | 1.44 × 10^4 | 5.19 × 10^7 | 2.24 × 10^8 | 6.06 × 10^4 | 1.10 × 10^6 | 1.06 × 10^4 | 3.65 × 10^4 | 7.69 × 10^3
F20 | min | 2.53 × 10^3 | 2.53 × 10^3 | 3.25 × 10^3 | 3.74 × 10^3 | 2.86 × 10^3 | 2.99 × 10^3 | 3.57 × 10^3 | 3.88 × 10^3 | 2.84 × 10^3 | 2.47 × 10^3
F20 | std | 1.47 × 10^2 | 2.11 × 10^2 | 1.88 × 10^2 | 1.81 × 10^2 | 1.68 × 10^2 | 3.05 × 10^2 | 1.74 × 10^2 | 2.32 × 10^2 | 3.44 × 10^2 | 2.22 × 10^2
F20 | avg | 2.82 × 10^3 | 2.82 × 10^3 | 3.55 × 10^3 | 4.03 × 10^3 | 3.21 × 10^3 | 3.40 × 10^3 | 3.76 × 10^3 | 4.20 × 10^3 | 3.44 × 10^3 | 2.80 × 10^3
Table 3. Performance comparison of optimization algorithms on CEC 2017 benchmark functions, part 3.

Func. | Type | EPIMO | PIMO | ALA | FATA | BKA | PSO | DE | WUTP | KEO | DOA
F21 | min | 2.49 × 10^3 | 2.51 × 10^3 | 2.46 × 10^3 | 2.92 × 10^3 | 2.71 × 10^3 | 2.55 × 10^3 | 2.72 × 10^3 | 2.72 × 10^3 | 2.53 × 10^3 | 2.45 × 10^3
F21 | std | 5.07 × 10^1 | 4.70 × 10^1 | 5.63 × 10^1 | 4.44 × 10^1 | 7.27 × 10^1 | 7.38 × 10^1 | 2.05 × 10^1 | 1.28 × 10^1 | 4.14 × 10^1 | 7.13
F21 | avg | 2.54 × 10^3 | 2.57 × 10^3 | 2.55 × 10^3 | 2.98 × 10^3 | 2.81 × 10^3 | 2.65 × 10^3 | 2.76 × 10^3 | 2.74 × 10^3 | 2.58 × 10^3 | 2.46 × 10^3
F22 | min | 6.50 × 10^3 | 7.20 × 10^3 | 1.08 × 10^4 | 1.33 × 10^4 | 1.03 × 10^4 | 9.21 × 10^3 | 1.52 × 10^4 | 1.59 × 10^4 | 8.76 × 10^3 | 7.70 × 10^3
F22 | std | 6.16 × 10^2 | 5.22 × 10^2 | 1.11 × 10^3 | 1.26 × 10^3 | 2.12 × 10^3 | 8.76 × 10^2 | 3.85 × 10^2 | 5.33 × 10^2 | 8.78 × 10^2 | 5.03 × 10^2
F22 | avg | 7.55 × 10^3 | 7.84 × 10^3 | 1.30 × 10^4 | 1.68 × 10^4 | 1.17 × 10^4 | 1.02 × 10^4 | 1.61 × 10^4 | 1.68 × 10^4 | 1.03 × 10^4 | 8.39 × 10^3
F23 | min | 2.71 × 10^3 | 2.99 × 10^3 | 2.94 × 10^3 | 3.56 × 10^3 | 3.47 × 10^3 | 3.33 × 10^3 | 3.16 × 10^3 | 3.13 × 10^3 | 2.98 × 10^3 | 2.87 × 10^3
F23 | std | 1.02 × 10^2 | 3.37 × 10^1 | 9.30 × 10^1 | 8.72 × 10^1 | 3.02 × 10^2 | 1.35 × 10^2 | 1.99 × 10^1 | 5.04 × 10^1 | 8.22 × 10^1 | 2.59 × 10^1
F23 | avg | 2.97 × 10^3 | 3.04 × 10^3 | 3.04 × 10^3 | 3.69 × 10^3 | 3.86 × 10^3 | 3.58 × 10^3 | 3.19 × 10^3 | 3.21 × 10^3 | 3.08 × 10^3 | 2.91 × 10^3
F24 | min | 3.28 × 10^3 | 3.29 × 10^3 | 3.10 × 10^3 | 4.14 × 10^3 | 3.46 × 10^3 | 3.44 × 10^3 | 3.34 × 10^3 | 3.30 × 10^3 | 3.08 × 10^3 | 3.06 × 10^3
F24 | std | 7.16 × 10^1 | 6.47 × 10^1 | 9.57 × 10^1 | 2.35 × 10^2 | 1.53 × 10^2 | 1.57 × 10^2 | 1.19 × 10^1 | 3.73 × 10^1 | 5.29 × 10^1 | 4.95 × 10^1
F24 | avg | 3.35 × 10^3 | 3.38 × 10^3 | 3.26 × 10^3 | 4.32 × 10^3 | 3.81 × 10^3 | 3.58 × 10^3 | 3.36 × 10^3 | 3.34 × 10^3 | 3.16 × 10^3 | 3.12 × 10^3
F25 | min | 2.96 × 10^3 | 3.03 × 10^3 | 3.13 × 10^3 | 4.76 × 10^3 | 4.68 × 10^3 | 3.33 × 10^3 | 3.23 × 10^3 | 3.06 × 10^3 | 3.12 × 10^3 | 3.06 × 10^3
F25 | std | 1.46 × 10^1 | 2.75 × 10^1 | 8.24 × 10^1 | 4.90 × 10^2 | 2.95 × 10^3 | 2.21 × 10^2 | 4.85 × 10^1 | 6.30 × 10^1 | 5.19 × 10^1 | 2.06 × 10^1
F25 | avg | 2.97 × 10^3 | 3.07 × 10^3 | 3.21 × 10^3 | 5.44 × 10^3 | 6.63 × 10^3 | 3.67 × 10^3 | 3.32 × 10^3 | 3.15 × 10^3 | 3.23 × 10^3 | 3.09 × 10^3
F26 | min | 2.90 × 10^3 | 3.26 × 10^3 | 6.00 × 10^3 | 7.73 × 10^3 | 8.96 × 10^3 | 8.67 × 10^3 | 7.54 × 10^3 | 8.34 × 10^3 | 6.03 × 10^3 | 3.38 × 10^3
F26 | std | 1.83 × 10^3 | 1.59 × 10^3 | 8.58 × 10^2 | 2.92 × 10^3 | 1.72 × 10^3 | 1.17 × 10^3 | 2.73 × 10^2 | 1.94 × 10^2 | 9.63 × 10^2 | 1.01 × 10^3
F26 | avg | 4.62 × 10^3 | 5.65 × 10^3 | 7.26 × 10^3 | 1.02 × 10^4 | 1.27 × 10^4 | 1.02 × 10^4 | 8.28 × 10^3 | 8.49 × 10^3 | 7.42 × 10^3 | 4.87 × 10^3
F27 | min | 3.26 × 10^3 | 3.32 × 10^3 | 3.40 × 10^3 | 4.22 × 10^3 | 3.92 × 10^3 | 3.70 × 10^3 | 3.50 × 10^3 | 3.26 × 10^3 | 3.49 × 10^3 | 3.28 × 10^3
F27 | std | 7.14 × 10^1 | 6.29 × 10^1 | 1.37 × 10^2 | 3.02 × 10^2 | 4.00 × 10^2 | 1.62 × 10^2 | 5.54 × 10^1 | 1.17 × 10^2 | 9.58 × 10^1 | 7.54 × 10^1
F27 | avg | 3.40 × 10^3 | 3.41 × 10^3 | 3.59 × 10^3 | 4.63 × 10^3 | 4.29 × 10^3 | 3.95 × 10^3 | 3.57 × 10^3 | 3.47 × 10^3 | 3.61 × 10^3 | 3.40 × 10^3
F28 | min | 3.26 × 10^3 | 3.27 × 10^3 | 3.44 × 10^3 | 6.05 × 10^3 | 4.59 × 10^3 | 3.90 × 10^3 | 4.11 × 10^3 | 5.24 × 10^3 | 3.48 × 10^3 | 3.32 × 10^3
F28 | std | 1.80 × 10^1 | 2.38 × 10^1 | 1.76 × 10^3 | 3.88 × 10^2 | 7.78 × 10^2 | 5.46 × 10^2 | 5.35 × 10^2 | 1.19 × 10^3 | 4.32 × 10^2 | 3.01 × 10^1
F28 | avg | 3.27 × 10^3 | 3.31 × 10^3 | 4.45 × 10^3 | 6.62 × 10^3 | 6.25 × 10^3 | 4.47 × 10^3 | 4.81 × 10^3 | 6.45 × 10^3 | 3.78 × 10^3 | 3.36 × 10^3
F29 | min | 3.62 × 10^3 | 3.77 × 10^3 | 4.04 × 10^3 | 6.29 × 10^3 | 5.98 × 10^3 | 5.08 × 10^3 | 5.23 × 10^3 | 5.43 × 10^3 | 4.25 × 10^3 | 3.58 × 10^3
F29 | std | 2.72 × 10^2 | 2.44 × 10^2 | 6.91 × 10^2 | 5.90 × 10^2 | 3.95 × 10^2 | 5.25 × 10^2 | 2.90 × 10^2 | 2.31 × 10^2 | 5.82 × 10^2 | 2.77 × 10^2
F29 | avg | 4.07 × 10^3 | 4.08 × 10^3 | 4.93 × 10^3 | 7.26 × 10^3 | 6.80 × 10^3 | 5.72 × 10^3 | 5.81 × 10^3 | 5.72 × 10^3 | 5.08 × 10^3 | 3.99 × 10^3
F30 | min | 5.82 × 10^5 | 6.11 × 10^5 | 1.63 × 10^6 | 2.61 × 10^8 | 3.35 × 10^7 | 1.36 × 10^7 | 3.18 × 10^7 | 1.67 × 10^6 | 5.81 × 10^6 | 9.19 × 10^5
F30 | std | 2.64 × 10^4 | 2.11 × 10^5 | 1.70 × 10^6 | 1.19 × 10^8 | 4.05 × 10^8 | 2.09 × 10^7 | 1.40 × 10^7 | 3.07 × 10^6 | 3.49 × 10^6 | 2.43 × 10^5
F30 | avg | 6.13 × 10^5 | 9.25 × 10^5 | 3.28 × 10^6 | 4.68 × 10^8 | 2.05 × 10^8 | 3.43 × 10^7 | 5.27 × 10^7 | 5.65 × 10^6 | 8.55 × 10^6 | 1.42 × 10^6
Table 4. The results of the Wilcoxon rank-sum test on the CEC 2017 benchmark.

| Func. | EPIMO | PIMO | ALA | FATA | BKA | PSO | DE | WUTP | KEO | DOA |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | 1.00 | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ |
| F3 | 1.00 | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ |
| F4 | 1.00 | 1.01 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ |
| F5 | 1.00 | 1.40 × 10⁻¹ | 2.20 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 2.46 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 7.69 × 10⁻⁴ | 1.31 × 10⁻³ |
| F6 | 1.00 | 1.01 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 2.83 × 10⁻³ |
| F7 | 1.00 | 1.73 × 10⁻² | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 2.57 × 10⁻² |
| F8 | 1.00 | 2.11 × 10⁻² | 2.83 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 3.30 × 10⁻⁴ | 1.83 × 10⁻⁴ |
| F9 | 1.00 | 3.12 × 10⁻² | 1.71 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 3.30 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 2.46 × 10⁻⁴ | 3.85 × 10⁻¹ |
| F10 | 1.00 | 2.41 × 10⁻¹ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 2.46 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 8.90 × 10⁻² |
| F11 | 1.00 | 1.71 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 3.30 × 10⁻⁴ |
| F12 | 1.00 | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ |
| F13 | 1.00 | 2.12 × 10⁻¹ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 2.46 × 10⁻⁴ | 1.83 × 10⁻⁴ | 2.83 × 10⁻³ |
| F14 | 1.00 | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ |
| F15 | 1.00 | 2.11 × 10⁻² | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ |
| F16 | 1.00 | 6.78 × 10⁻¹ | 7.69 × 10⁻⁴ | 1.83 × 10⁻⁴ | 4.40 × 10⁻⁴ | 3.61 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 5.39 × 10⁻² | 6.78 × 10⁻¹ |
| F17 | 1.00 | 5.21 × 10⁻¹ | 1.31 × 10⁻³ | 1.83 × 10⁻⁴ | 3.30 × 10⁻⁴ | 1.71 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.31 × 10⁻³ | 6.78 × 10⁻¹ |
| F18 | 1.00 | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ |
| F19 | 1.00 | 2.11 × 10⁻² | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ |
| F20 | 1.00 | 8.50 × 10⁻¹ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 5.83 × 10⁻⁴ | 5.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.31 × 10⁻³ | 7.34 × 10⁻¹ |
| F21 | 1.00 | 1.40 × 10⁻¹ | 7.34 × 10⁻¹ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.71 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 2.11 × 10⁻² | 1.83 × 10⁻⁴ |
| F22 | 1.00 | 5.21 × 10⁻¹ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.73 × 10⁻² |
| F23 | 1.00 | 4.52 × 10⁻² | 2.41 × 10⁻¹ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.73 × 10⁻² | 9.11 × 10⁻³ |
| F24 | 1.00 | 2.41 × 10⁻¹ | 5.39 × 10⁻² | 1.83 × 10⁻⁴ | 2.46 × 10⁻⁴ | 5.83 × 10⁻⁴ | 1.86 × 10⁻¹ | 9.70 × 10⁻¹ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ |
| F25 | 1.00 | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ |
| F26 | 1.00 | 8.90 × 10⁻² | 1.31 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.71 × 10⁻³ | 1.00 |
| F27 | 1.00 | 8.50 × 10⁻¹ | 2.83 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 5.83 × 10⁻⁴ | 1.86 × 10⁻¹ | 3.30 × 10⁻⁴ | 9.70 × 10⁻¹ |
| F28 | 1.00 | 1.71 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ |
| F29 | 1.00 | 9.10 × 10⁻¹ | 1.31 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 4.40 × 10⁻⁴ | 6.23 × 10⁻¹ |
| F30 | 1.00 | 7.69 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ |
Table 5. Performance comparison of optimization algorithms on the CEC 2022 benchmark functions.

| Func. | Type | EPIMO | PIMO | ALA | FATA | BKA | PSO | DE | WUTP | KEO | DOA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | min | 3.00 × 10² | 3.00 × 10² | 3.00 × 10² | 4.93 × 10² | 3.00 × 10² | 3.00 × 10² | 2.91 × 10³ | 1.87 × 10³ | 3.00 × 10² | 3.00 × 10² |
| | std | 7.71 × 10⁻⁵ | 1.13 × 10¹ | 6.32 × 10⁻¹ | 1.39 × 10³ | 2.64 × 10³ | 1.55 × 10⁻¹⁰ | 3.18 × 10³ | 1.88 × 10³ | 1.95 × 10⁻¹ | 3.71 × 10² |
| | avg | 3.00 × 10² | 3.09 × 10² | 3.00 × 10² | 1.84 × 10³ | 1.36 × 10³ | 3.00 × 10² | 8.34 × 10³ | 4.19 × 10³ | 3.00 × 10² | 4.62 × 10² |
| F2 | min | 4.00 × 10² | 4.00 × 10² | 4.04 × 10² | 4.11 × 10² | 4.00 × 10² | 4.00 × 10² | 4.08 × 10² | 4.06 × 10² | 4.00 × 10² | 4.00 × 10² |
| | std | 1.68 | 2.01 | 2.10 | 2.96 × 10¹ | 5.57 × 10¹ | 8.41 × 10¹ | 4.75 × 10¹ | 1.04 | 1.98 × 10¹ | 3.20 |
| | avg | 4.01 × 10² | 4.01 × 10² | 4.07 × 10² | 4.44 × 10² | 4.26 × 10² | 4.54 × 10² | 4.08 × 10² | 4.07 × 10² | 4.12 × 10² | 4.02 × 10² |
| F3 | min | 6.00 × 10² | 6.00 × 10² | 6.00 × 10² | 6.13 × 10² | 6.18 × 10² | 6.00 × 10² | 6.00 × 10² | 6.00 × 10² | 6.00 × 10² | 6.00 × 10² |
| | std | 3.59 × 10⁻¹ | 1.92 | 4.15 × 10⁻¹ | 9.96 | 1.25 × 10¹ | 1.32 × 10¹ | 7.77 × 10⁻⁶ | 5.17 × 10⁻¹ | 3.67 | 6.70 × 10⁻³ |
| | avg | 6.00 × 10² | 6.02 × 10² | 6.00 × 10² | 6.29 × 10² | 6.34 × 10² | 6.09 × 10² | 6.00 × 10² | 6.00 × 10² | 6.02 × 10² | 6.00 × 10² |
| F4 | min | 8.03 × 10² | 8.08 × 10² | 8.04 × 10² | 8.25 × 10² | 8.09 × 10² | 8.08 × 10² | 8.23 × 10² | 8.26 × 10² | 8.10 × 10² | 8.08 × 10² |
| | std | 8.77 | 8.34 | 8.50 | 8.30 | 8.38 | 6.47 | 4.25 | 5.99 | 4.50 | 8.17 |
| | avg | 8.16 × 10² | 8.20 × 10² | 8.22 × 10² | 8.36 × 10² | 8.23 × 10² | 8.19 × 10² | 8.29 × 10² | 8.35 × 10² | 8.17 × 10² | 8.18 × 10² |
| F5 | min | 9.01 × 10² | 9.56 × 10² | 9.00 × 10² | 9.10 × 10² | 9.66 × 10² | 9.00 × 10² | 9.00 × 10² | 9.00 × 10² | 9.00 × 10² | 9.00 × 10² |
| | std | 3.92 × 10¹ | 7.08 × 10¹ | 1.40 | 1.46 × 10² | 1.04 × 10² | 1.19 × 10¹ | 1.23 | 7.00 × 10⁻⁶ | 6.34 × 10¹ | 5.80 × 10¹ |
| | avg | 9.53 × 10² | 1.04 × 10³ | 9.01 × 10² | 1.04 × 10³ | 1.09 × 10³ | 9.09 × 10² | 9.02 × 10² | 9.00 × 10² | 9.44 × 10² | 9.39 × 10² |
| F6 | min | 1.80 × 10³ | 1.80 × 10³ | 1.83 × 10³ | 4.62 × 10⁴ | 1.83 × 10³ | 1.94 × 10³ | 3.38 × 10³ | 4.96 × 10³ | 1.95 × 10³ | 1.83 × 10³ |
| | std | 4.92 | 1.14 × 10¹ | 9.16 × 10¹ | 5.98 × 10⁵ | 2.41 × 10³ | 2.10 × 10³ | 5.14 × 10³ | 3.29 × 10⁴ | 1.46 × 10³ | 3.20 × 10² |
| | avg | 1.81 × 10³ | 1.82 × 10³ | 1.95 × 10³ | 5.59 × 10⁵ | 3.40 × 10³ | 3.91 × 10³ | 9.43 × 10³ | 2.02 × 10⁴ | 2.70 × 10³ | 2.08 × 10³ |
| F7 | min | 2.00 × 10³ | 2.02 × 10³ | 2.00 × 10³ | 2.04 × 10³ | 2.02 × 10³ | 2.01 × 10³ | 2.01 × 10³ | 2.02 × 10³ | 2.02 × 10³ | 2.00 × 10³ |
| | std | 9.62 | 1.38 | 6.88 | 1.55 × 10¹ | 2.74 × 10¹ | 1.08 × 10¹ | 5.17 | 6.17 | 3.90 × 10¹ | 8.92 |
| | avg | 2.01 × 10³ | 2.02 × 10³ | 2.02 × 10³ | 2.06 × 10³ | 2.05 × 10³ | 2.03 × 10³ | 2.01 × 10³ | 2.03 × 10³ | 2.04 × 10³ | 2.01 × 10³ |
| F8 | min | 2.20 × 10³ | 2.20 × 10³ | 2.22 × 10³ | 2.23 × 10³ | 2.21 × 10³ | 2.22 × 10³ | 2.22 × 10³ | 2.23 × 10³ | 2.22 × 10³ | 2.20 × 10³ |
| | std | 7.34 | 8.46 | 1.30 | 2.02 | 6.50 | 6.21 × 10¹ | 1.73 | 2.29 | 1.16 | 5.27 |
| | avg | 2.21 × 10³ | 2.21 × 10³ | 2.22 × 10³ | 2.23 × 10³ | 2.23 × 10³ | 2.27 × 10³ | 2.22 × 10³ | 2.23 × 10³ | 2.22 × 10³ | 2.22 × 10³ |
| F9 | min | 2.40 × 10³ | 2.40 × 10³ | 2.53 × 10³ | 2.55 × 10³ | 2.53 × 10³ | 2.53 × 10³ | 2.53 × 10³ | 2.53 × 10³ | 2.53 × 10³ | 2.53 × 10³ |
| | std | 4.09 × 10¹ | 4.89 × 10¹ | 1.76 × 10⁻¹ | 1.36 × 10¹ | 4.64 × 10¹ | 4.62 × 10¹ | 2.14 × 10⁻⁸ | 9.18 × 10⁻¹² | 9.64 × 10⁻⁹ | 1.68 |
| | avg | 2.52 × 10³ | 2.50 × 10³ | 2.53 × 10³ | 2.57 × 10³ | 2.54 × 10³ | 2.55 × 10³ | 2.53 × 10³ | 2.53 × 10³ | 2.53 × 10³ | 2.53 × 10³ |
| F10 | min | 2.40 × 10³ | 2.41 × 10³ | 2.50 × 10³ | 2.50 × 10³ | 2.50 × 10³ | 2.50 × 10³ | 2.50 × 10³ | 2.50 × 10³ | 2.50 × 10³ | 2.40 × 10³ |
| | std | 1.18 × 10¹ | 4.07 × 10¹ | 1.76 × 10² | 7.17 × 10¹ | 6.25 × 10¹ | 1.63 × 10² | 5.46 × 10¹ | 2.54 × 10² | 5.91 × 10¹ | 4.17 × 10¹ |
| | avg | 2.41 × 10³ | 2.46 × 10³ | 2.61 × 10³ | 2.56 × 10³ | 2.59 × 10³ | 2.62 × 10³ | 2.50 × 10³ | 2.65 × 10³ | 2.54 × 10³ | 2.42 × 10³ |
| F11 | min | 2.60 × 10³ | 2.60 × 10³ | 2.60 × 10³ | 2.76 × 10³ | 2.60 × 10³ | 2.90 × 10³ | 2.74 × 10³ | 2.60 × 10³ | 2.90 × 10³ | 2.60 × 10³ |
| | std | 4.03 × 10⁻⁵ | 1.11 × 10² | 1.66 × 10² | 3.81 × 10² | 1.50 × 10² | 9.87 × 10¹ | 4.98 × 10¹ | 1.55 × 10² | 5.12 | 1.38 × 10² |
| | avg | 2.60 × 10³ | 2.70 × 10³ | 2.79 × 10³ | 3.22 × 10³ | 2.75 × 10³ | 2.93 × 10³ | 2.88 × 10³ | 2.78 × 10³ | 2.90 × 10³ | 2.78 × 10³ |
| F12 | min | 2.86 × 10³ | 2.86 × 10³ | 2.86 × 10³ | 2.87 × 10³ | 2.86 × 10³ | 2.86 × 10³ | 2.86 × 10³ | 2.86 × 10³ | 2.86 × 10³ | 2.86 × 10³ |
| | std | 1.20 | 9.26 × 10⁻¹ | 1.66 | 7.48 | 3.95 | 2.81 | 1.00 | 1.90 | 2.52 | 3.51 |
| | avg | 2.86 × 10³ | 2.86 × 10³ | 2.86 × 10³ | 2.87 × 10³ | 2.87 × 10³ | 2.87 × 10³ | 2.86 × 10³ | 2.86 × 10³ | 2.87 × 10³ | 2.87 × 10³ |
Table 6. The results of the Wilcoxon rank-sum test on the CEC 2022 benchmark.

| Func. | EPIMO | PIMO | ALA | FATA | BKA | PSO | DE | WUTP | KEO | DOA |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | 1.00 | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.77 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ |
| F2 | 1.00 | 1.40 × 10⁻² | 1.81 × 10⁻⁴ | 1.83 × 10⁻⁴ | 3.61 × 10⁻³ | 5.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 5.83 × 10⁻⁴ | 1.73 × 10⁻² |
| F3 | 1.00 | 5.80 × 10⁻³ | 8.50 × 10⁻¹ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 5.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.21 × 10⁻¹ | 2.11 × 10⁻² | 8.90 × 10⁻² |
| F4 | 1.00 | 2.41 × 10⁻¹ | 7.57 × 10⁻² | 4.40 × 10⁻⁴ | 6.40 × 10⁻² | 3.85 × 10⁻¹ | 2.20 × 10⁻³ | 5.83 × 10⁻⁴ | 5.20 × 10⁻¹ | 4.73 × 10⁻¹ |
| F5 | 1.00 | 3.61 × 10⁻³ | 4.40 × 10⁻⁴ | 2.57 × 10⁻² | 1.01 × 10⁻³ | 5.80 × 10⁻³ | 1.31 × 10⁻³ | 1.83 × 10⁻⁴ | 2.73 × 10⁻¹ | 1.86 × 10⁻¹ |
| F6 | 1.00 | 9.11 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ |
| F7 | 1.00 | 7.69 × 10⁻⁴ | 2.83 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.01 × 10⁻³ | 6.78 × 10⁻¹ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 4.27 × 10⁻¹ |
| F8 | 1.00 | 1.40 × 10⁻² | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 3.30 × 10⁻⁴ | 1.83 × 10⁻⁴ | 3.30 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 7.69 × 10⁻⁴ |
| F9 | 1.00 | 8.61 × 10⁻² | 3.38 × 10⁻² | 1.59 × 10⁻⁴ | 1.59 × 10⁻⁴ | 4.78 × 10⁻¹ | 1.54 × 10⁻¹ | 2.90 × 10⁻³ | 2.90 × 10⁻³ | 1.59 × 10⁻⁴ |
| F10 | 1.00 | 1.31 × 10⁻³ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 4.73 × 10⁻¹ |
| F11 | 1.00 | 1.83 × 10⁻⁴ | 5.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.59 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ | 1.83 × 10⁻⁴ |
| F12 | 1.00 | 3.60 × 10⁻³ | 2.55 × 10⁻² | 1.82 × 10⁻⁴ | 1.82 × 10⁻⁴ | 1.82 × 10⁻⁴ | 2.73 × 10⁻¹ | 1.40 × 10⁻¹ | 2.45 × 10⁻⁴ | 1.70 × 10⁻³ |
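The p-values in Tables 4 and 6 come from the two-sided Wilcoxon rank-sum test applied to the per-run final results of EPIMO versus each competitor (values below 0.05 indicate a statistically significant difference; 1.00 marks EPIMO compared with itself). As a dependency-free illustration of how such a p-value is obtained, the test can be sketched with the usual normal approximation; the two samples below are hypothetical, not taken from the tables, and `scipy.stats.ranksums` computes the same statistic:

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation.

    `a` and `b` are the per-run final objective values of two algorithms.
    """
    pooled = sorted((v, i) for i, v in enumerate(list(a) + list(b)))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        # Group tied values and give each the average (mid) rank.
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        mid_rank = (i + j) / 2 + 1.0  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = mid_rank
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                      # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2.0          # E[W] under the null hypothesis
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mean) / sd
    # Two-sided p from the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Hypothetical final errors of two optimizers over 30 runs (illustration only).
runs_a = [2410.0 + 0.5 * i for i in range(30)]
runs_b = [2600.0 + 0.5 * i for i in range(30)]
p = rank_sum_p(runs_a, runs_b)  # well-separated samples give p far below 0.05
```

Because the test compares ranks rather than raw values, it is robust to the heavy-tailed error distributions that metaheuristic runs often produce.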
Table 7. Himmelblau function’s best outcomes from the various algorithms.

| Algorithm | x₁ | x₂ | x₃ | x₄ | x₅ | Optimal Cost |
|---|---|---|---|---|---|---|
| EPIMO | 78.00 | 33.00 | 30.00 | 45.00 | 36.78 | −30,665.54 |
| PIMO | 78.00 | 33.00 | 30.00 | 45.00 | 36.78 | −30,665.54 |
| ALA | 78.00 | 33.00 | 30.00 | 45.00 | 36.78 | −30,665.54 |
| FATA | 78.00 | 33.60 | 30.66 | 45.00 | 35.72 | −30,517.93 |
| BKA | 78.00 | 33.00 | 30.00 | 45.00 | 36.78 | −30,665.54 |
| PSO | 78.00 | 33.00 | 30.00 | 45.00 | 36.78 | −30,665.54 |
| DE | 78.00 | 33.00 | 30.02 | 44.94 | 36.75 | −30,660.23 |
| WUTP | 78.00 | 33.00 | 30.00 | 45.00 | 36.78 | −30,665.54 |
| KEO | 78.00 | 33.00 | 30.00 | 45.00 | 36.78 | −30,665.54 |
| DOA | 78.01 | 33.21 | 30.80 | 43.62 | 35.41 | −30,492.21 |
Table 8. Himmelblau function index statistics.

| Alg. | Min | Worst | Std | Avg | Median |
|---|---|---|---|---|---|
| EPIMO | −30,665.54 | −30,665.44 | 0.03 | −30,665.53 | −30,665.54 |
| PIMO | −30,665.54 | −30,662.23 | 1.02 | −30,665.12 | −30,665.48 |
| ALA | −30,665.54 | −30,665.52 | 0.01 | −30,665.54 | −30,665.54 |
| FATA | −30,517.93 | −29,967.57 | 180.22 | −30,281.67 | −30,298.78 |
| BKA | −30,665.53 | −30,184.75 | 151.72 | −30,616.54 | −30,665.02 |
| PSO | −30,665.54 | −30,544.29 | 38.34 | −30,653.41 | −30,665.54 |
| DE | −30,660.23 | −30,576.85 | 25.83 | −30,639.75 | −30,648.48 |
| WUTP | −30,665.54 | −30,665.53 | 0.00 | −30,665.54 | −30,665.54 |
| KEO | −30,665.54 | −30,665.53 | 0.00 | −30,665.54 | −30,665.54 |
| DOA | −30,492.21 | −29,799.07 | 227.73 | −30,152.39 | −30,145.38 |
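The Himmelblau problem used here is the classical five-variable constrained benchmark (often labeled G04 in the constrained-optimization literature), whose best-known optimum is approximately −30,665.539 with constraints g₁ and g₃ active at their bounds. A minimal sketch, assuming that standard formulation (the function names and the unrounded x₃, x₅ values are ours), evaluates the solution reported in Table 7:

```python
def himmelblau_objective(x):
    # Objective of the classical constrained Himmelblau problem (G04 form).
    x1, x2, x3, x4, x5 = x
    return 5.3578547 * x3**2 + 0.8356891 * x1 * x5 + 37.293239 * x1 - 40792.141

def himmelblau_constraints(x):
    """Feasible iff 0 <= g1 <= 92, 90 <= g2 <= 110, and 20 <= g3 <= 25."""
    x1, x2, x3, x4, x5 = x
    g1 = 85.334407 + 0.0056858 * x2 * x5 + 0.0006262 * x1 * x4 - 0.0022053 * x3 * x5
    g2 = 80.51249 + 0.0071317 * x2 * x5 + 0.0029955 * x1 * x2 + 0.0021813 * x3**2
    g3 = 9.300961 + 0.0047026 * x3 * x5 + 0.0012547 * x1 * x3 + 0.0019085 * x3 * x4
    return g1, g2, g3

# Best-known solution; x3 and x5 are the unrounded values behind Table 7's row.
x_best = (78.0, 33.0, 29.995256, 45.0, 36.775813)
f_best = himmelblau_objective(x_best)        # close to -30665.54
g1, g2, g3 = himmelblau_constraints(x_best)  # g1 near 92, g3 near 20 (active)
```

Since g₁ sits at its upper bound and g₃ at its lower bound at the optimum, even small rounding of the reported variables shifts the cost by a few hundredths, which explains the near-identical −30,665.54 values across the converged algorithms in Table 7.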
Table 9. Step-cone pulley design best outcomes for the various algorithms.

| Algorithm | l₁ | l₂ | l₃ | l₄ | ω | Optimal Cost |
|---|---|---|---|---|---|---|
| EPIMO | 38.42 | 52.86 | 70.48 | 84.50 | 90.00 | 16.16 |
| PIMO | 40.92 | 56.31 | 75.07 | 90.00 | 84.48 | 17.14 |
| ALA | 40.41 | 55.61 | 74.14 | 88.89 | 85.94 | 17.18 |
| FATA | 39.89 | 54.04 | 73.61 | 90.00 | 90.00 | 3.64 × 10¹⁰ |
| BKA | 39.80 | 54.77 | 73.02 | 87.54 | 87.19 | 16.74 |
| PSO | 39.02 | 53.70 | 71.59 | 85.83 | 90.00 | 16.60 |
| DE | 40.92 | 56.31 | 75.07 | 90.00 | 84.48 | 2578.28 |
| WUTP | 40.92 | 56.31 | 75.07 | 90.00 | 87.69 | 17.79 |
| KEO | 38.44 | 52.90 | 70.52 | 84.56 | 89.94 | 16.10 |
| DOA | 39.59 | 54.48 | 72.64 | 87.09 | 88.43 | 1346.21 |
Table 10. Step-cone pulley design index statistics.

| Alg. | Min | Worst | Std | Avg | Median |
|---|---|---|---|---|---|
| EPIMO | 1.62 × 10¹ | 1.71 × 10¹ | 3.02 × 10⁻¹ | 1.70 × 10¹ | 1.71 × 10¹ |
| PIMO | 1.71 × 10¹ | 1.84 × 10¹³ | 5.83 × 10¹² | 1.84 × 10¹² | 1.71 × 10¹ |
| ALA | 1.72 × 10¹ | 8.60 × 10² | 2.62 × 10² | 1.17 × 10² | 2.68 × 10¹ |
| FATA | 3.64 × 10¹⁰ | 2.75 × 10¹¹ | 7.04 × 10¹⁰ | 1.27 × 10¹¹ | 1.23 × 10¹¹ |
| BKA | 1.67 × 10¹ | 3.80 × 10¹¹ | 1.20 × 10¹¹ | 3.80 × 10¹⁰ | 1.75 × 10¹ |
| PSO | 1.66 × 10¹ | 1.83 × 10¹ | 6.69 × 10⁻¹ | 1.79 × 10¹ | 1.83 × 10¹ |
| DE | 2.58 × 10³ | 1.42 × 10⁷ | 4.48 × 10⁶ | 5.29 × 10⁶ | 4.38 × 10⁶ |
| WUTP | 1.78 × 10¹ | 2.15 × 10⁴ | 6.77 × 10³ | 3.01 × 10³ | 1.83 × 10¹ |
| KEO | 1.61 × 10¹ | 1.82 × 10¹ | 6.44 × 10⁻¹ | 1.72 × 10¹ | 1.71 × 10¹ |
| DOA | 1.35 × 10³ | 1.28 × 10⁵ | 4.99 × 10⁴ | 4.23 × 10⁴ | 1.56 × 10⁴ |
Table 11. Hydrostatic thrust bearing design problem’s best outcomes for the various algorithms.

| Algorithm | R | R₀ | μ × 10⁶ | Q | Optimal Cost |
|---|---|---|---|---|---|
| EPIMO | 5.9634 | 5.3956 | 5.36 | 2.2648 | 19,455.12 |
| PIMO | 5.9558 | 5.3890 | 5.3587 | 2.2697 | 19,505.48 |
| ALA | 5.9569 | 5.3892 | 5.4021 | 2.3015 | 19,587.65 |
| FATA | 6.2710 | 5.6050 | 5.6050 | 2.9380 | 23,403.51 |
| BKA | 5.9576 | 5.3907 | 5.3587 | 2.2781 | 19,645.78 |
| PSO | 5.9569 | 5.3892 | 5.4021 | 2.3015 | 19,586.68 |
| DE | 5.9558 | 5.3890 | 5.3587 | 2.2697 | 19,623.45 |
| WUTP | 5.9576 | 5.3907 | 5.3587 | 2.2781 | 19,789.34 |
| KEO | 5.9558 | 5.3890 | 5.3587 | 2.2697 | 19,876.56 |
| DOA | 7.1551 | 6.6887 | 8.3208 | 9.1685 | 29,221.45 |
Table 12. Hydrostatic thrust bearing design problem index statistics.

| Alg. | Min | Worst | Std | Avg | Median |
|---|---|---|---|---|---|
| EPIMO | 1.95 × 10⁴ | 1.99 × 10⁴ | 1.26 × 10² | 1.96 × 10⁴ | 1.95 × 10⁴ |
| PIMO | 1.95 × 10⁴ | 2.15 × 10⁸ | 6.34 × 10⁷ | 5.27 × 10⁷ | 1.95 × 10⁴ |
| ALA | 1.96 × 10⁴ | 3.42 × 10⁷ | 8.56 × 10⁶ | 1.24 × 10⁷ | 1.96 × 10⁴ |
| FATA | 2.34 × 10⁴ | 2.98 × 10⁸ | 9.87 × 10⁷ | 1.56 × 10⁸ | 2.36 × 10⁴ |
| BKA | 1.96 × 10⁴ | 4.56 × 10⁷ | 1.23 × 10⁷ | 2.34 × 10⁷ | 1.98 × 10⁴ |
| PSO | 1.96 × 10⁴ | 1.98 × 10⁴ | 8.94 × 10¹ | 1.97 × 10⁴ | 1.96 × 10⁴ |
| DE | 1.96 × 10⁴ | 3.78 × 10⁷ | 1.02 × 10⁷ | 1.89 × 10⁷ | 1.97 × 10⁴ |
| WUTP | 1.98 × 10⁴ | 2.34 × 10⁷ | 6.78 × 10⁶ | 9.87 × 10⁶ | 1.99 × 10⁴ |
| KEO | 1.99 × 10⁴ | 3.45 × 10⁷ | 9.23 × 10⁶ | 1.78 × 10⁷ | 2.00 × 10⁴ |
| DOA | 2.92 × 10⁴ | 8.77 × 10¹³ | 2.77 × 10¹³ | 8.80 × 10¹² | 1.36 × 10⁹ |
Table 13. Three-bar truss design problem’s best outcomes for the various algorithms.

| Algorithm | A₁ | A₂ | Optimal Cost f(x) |
|---|---|---|---|
| EPIMO | 0.7887 | 0.4083 | 263.8958 |
| PIMO | 0.7887 | 0.4083 | 263.8958 |
| ALA | 0.7887 | 0.4082 | 263.8958 |
| FATA | 0.7940 | 0.3936 | 263.9307 |
| BKA | 0.7885 | 0.4088 | 263.8959 |
| PSO | 0.7885 | 0.4086 | 263.8959 |
| DE | 0.7886 | 0.4086 | 263.8962 |
| WUTP | 0.7884 | 0.4090 | 263.8967 |
| KEO | 0.7887 | 0.4083 | 263.8958 |
| DOA | 0.7876 | 0.4115 | 263.9193 |
Table 14. Three-bar truss design problem index statistics.

| Alg. | Min | Worst | Std | Avg | Median |
|---|---|---|---|---|---|
| EPIMO | 263.8958 | 263.8958 | 6.80 × 10⁻⁸ | 263.8958 | 263.8958 |
| PIMO | 263.8958 | 263.8982 | 7.14 × 10⁻⁴ | 263.8966 | 263.8966 |
| ALA | 263.8958 | 263.8959 | 1.28 × 10⁻⁵ | 263.8959 | 263.8958 |
| FATA | 263.9307 | 264.6200 | 1.90 × 10⁻¹ | 264.2606 | 264.3130 |
| BKA | 263.8959 | 263.9007 | 1.68 × 10⁻³ | 263.8978 | 263.8978 |
| PSO | 263.8959 | 264.0441 | 4.57 × 10⁻² | 263.9149 | 263.8997 |
| DE | 263.8962 | 263.8993 | 8.91 × 10⁻⁴ | 263.8969 | 263.8968 |
| WUTP | 263.8967 | 263.9018 | 1.75 × 10⁻³ | 263.8988 | 263.8981 |
| KEO | 263.8958 | 263.9220 | 7.97 × 10⁻³ | 263.9004 | 263.8971 |
| DOA | 263.9193 | 265.7496 | 6.25 × 10⁻¹ | 264.4569 | 264.1958 |
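The three-bar truss problem minimizes the weight-proportional quantity f(A₁, A₂) = (2√2·A₁ + A₂)·L subject to three stress constraints on the bars. A minimal feasibility check, assuming the standard parameterization from the engineering-design literature (bar length L = 100 cm, load P = 2 kN/cm², allowable stress σ = 2 kN/cm²; the function names are ours), evaluates EPIMO's reported design from Table 13:

```python
import math

def truss_objective(a1, a2, length=100.0):
    # Weight-proportional objective of the standard three-bar truss problem.
    return (2.0 * math.sqrt(2.0) * a1 + a2) * length

def truss_constraints(a1, a2, p=2.0, sigma=2.0):
    """Stress constraints of the standard formulation; each must be <= 0."""
    r2 = math.sqrt(2.0)
    denom = r2 * a1**2 + 2.0 * a1 * a2
    g1 = (r2 * a1 + a2) / denom * p - sigma  # stress in bar 1 (active at optimum)
    g2 = a2 / denom * p - sigma              # stress in bar 2
    g3 = 1.0 / (r2 * a2 + a1) * p - sigma    # stress in bar 3
    return g1, g2, g3

a1, a2 = 0.7887, 0.4083        # EPIMO's reported design (Table 13)
f = truss_objective(a1, a2)    # close to the reported 263.8958
g = truss_constraints(a1, a2)  # all three values non-positive: feasible
```

The first stress constraint is active at the optimum, so the near-identical costs of EPIMO, PIMO, ALA, and KEO in Table 13 reflect convergence onto the same constraint boundary.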

Share and Cite

MDPI and ACS Style

Zhu, X.; Peng, H.; Cai, H.; Liu, Y.; Li, S.; Peng, W. An Enhanced Projection-Iterative-Methods-Based Optimizer for Complex Constrained Engineering Design Problems. Computation 2026, 14, 45. https://doi.org/10.3390/computation14020045

