Article

Multi-Strategy Improved Pied Kingfisher Optimizer for Solving Constrained Optimization Problems

1 School of Mathematics and Physics, Hulunbuir University, Hailar 021008, China
2 School of Engineering, Hulunbuir University, Hailar 021008, China
3 School of Artificial Intelligence and Big Data, Hulunbuir University, Hailar 021008, China
* Author to whom correspondence should be addressed.
Biomimetics 2026, 11(5), 335; https://doi.org/10.3390/biomimetics11050335
Submission received: 21 March 2026 / Revised: 21 April 2026 / Accepted: 3 May 2026 / Published: 11 May 2026
(This article belongs to the Section Biological Optimisation and Management)

Abstract

This paper proposes a multi-strategy improved pied kingfisher optimizer (MSIPKO), a novel metaheuristic algorithm designed to address constrained optimization problems (COPs). COPs are widely encountered in engineering and industrial applications and are characterized by complex constraints that restrict the feasible solution space and often lead to multiple local optima. To enhance the performance of the original pied kingfisher optimizer (PKO), three strategies are incorporated: (i) a reverse differential crossover mechanism to improve global exploration and maintain population diversity; (ii) an enhanced diving-fishing operator to strengthen local exploitation; and (iii) an improved commensalism phase to enrich search directions and increase robustness. The performance of MSIPKO is evaluated on 12 benchmark functions from the IEEE Congress on Evolutionary Computation 2006 (CEC 2006) test suite and six classical engineering optimization problems. Experimental results demonstrate that MSIPKO outperforms several state-of-the-art algorithms in terms of optimization accuracy, convergence speed, and stability, particularly for high-dimensional, nonlinear, and multi-constrained problems. Moreover, MSIPKO achieves superior or comparable solutions with fewer function evaluations, indicating its high efficiency and adaptability. These results confirm that MSIPKO is a promising tool for solving complex real-world constrained optimization problems. Future work will focus on extending the proposed algorithm to multi-objective and large-scale optimization scenarios.

1. Introduction

Constrained optimization problems are a common challenge in the field of optimization. In real-world engineering scenarios, such as production planning and scheduling, most problems are subject to various constraints due to the scarcity of resources in the production process or technological limitations. Therefore, effectively solving such problems is of critical importance.
Compared to unconstrained optimization problems, constrained optimization problems are more complex due to the presence of constraints. When there are numerous constraints, the feasible solution space may become very limited. Additionally, if the feasible regions are disjoint, the problem may have multiple local optimal solutions. As a result, constrained optimization problems are significantly more difficult to solve than unconstrained ones. Research on the solution of these problems holds great theoretical significance and practical value.

1.1. Description of Constrained Optimization

Generally, a constrained optimization problem can be described by Formula (1), which is widely adopted in the literature and was also used in our previous work [1].
$$\begin{aligned} \min \quad & f(x) \\ \text{s.t.} \quad & g_i(x) \le 0, \quad i = 1, 2, \ldots, p \\ & h_j(x) = 0, \quad j = 1, 2, \ldots, q \\ & u_k \le x_k \le v_k, \quad x \in \mathbb{R}^n, \; k = 1, 2, \ldots, n \end{aligned}$$
where $f(x)$ represents the objective function. $g_i(x) \le 0$ is the $i$-th inequality constraint of the optimization problem in Formula (1), and $p$ is the number of inequality constraints. $h_j(x) = 0$ is the $j$-th equality constraint, and $q$ is the number of equality constraints. $u_k$ and $v_k$ are the lower and upper bounds of $x_k$, respectively. The set $D = \{ x \in S \mid g_i(x) \le 0,\ h_j(x) = 0,\ i = 1, 2, \ldots, p,\ j = 1, 2, \ldots, q \}$ that satisfies all inequality and equality constraints in the search space $S = \{ x \in \mathbb{R}^n \mid u_k \le x_k \le v_k,\ k = 1, 2, \ldots, n \}$ is called the feasible region of the constrained optimization problem in Formula (1). If a solution $x \in D$, $x$ is called a feasible solution; otherwise, it is called an infeasible solution.
For Formula (1), the distance from an individual $x$ to the feasible region $D$ is defined by Formula (2).
$$G(x) = \sum_{i=1}^{p} \max\{ g_i(x), 0 \} + \sum_{j=1}^{q} \max\{ |h_j(x)| - \delta, 0 \}$$
Here, $G(x)$ is called the violation function. $\max\{ g_i(x), 0 \}$ $(i = 1, 2, \ldots, p)$ represents the violation degree of $x$ on the $i$-th inequality constraint, and $\max\{ |h_j(x)| - \delta, 0 \}$ $(j = 1, 2, \ldots, q)$ represents the violation degree of $x$ on the $j$-th equality constraint, where $\delta$ is the tolerance parameter for the equality constraints and $\delta = 0.0001$.
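For concreteness, the violation function in Formula (2) can be sketched in Python as follows. The constraint functions `g1` and `h1` below are hypothetical examples chosen for illustration, not constraints from the paper.

```python
def violation(x, ineq_constraints, eq_constraints, delta=1e-4):
    """Violation function G(x) from Formula (2).

    ineq_constraints: callables g_i, satisfied when g_i(x) <= 0.
    eq_constraints:   callables h_j, satisfied when h_j(x) = 0,
                      relaxed by the tolerance parameter delta.
    """
    g_viol = sum(max(g(x), 0.0) for g in ineq_constraints)
    h_viol = sum(max(abs(h(x)) - delta, 0.0) for h in eq_constraints)
    return g_viol + h_viol

# Hypothetical constraints: g1(x) = x0 + x1 - 1 <= 0 and h1(x) = x0 - x1 = 0
g1 = lambda x: x[0] + x[1] - 1.0
h1 = lambda x: x[0] - x[1]
print(violation([0.8, 0.8], [g1], [h1]))  # g1 is violated by 0.6; h1 is satisfied
```

A solution with $G(x) = 0$ is feasible under the tolerance $\delta$; larger values measure its distance to the feasible region.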

1.2. Related Work

When applying metaheuristic algorithms to constrained optimization problems (COPs), existing studies mainly involve the design of effective constraint-handling strategies, the development of high-performance optimization algorithms, and their applications in various engineering domains.
(1) Constraint-handling methods
From the perspective of constraint handling, classical approaches mainly include the penalty function method [2] and Deb’s rules [3]. The penalty function method incorporates constraint violations into the objective function, while Deb’s rules rank solutions based on feasibility and objective values. These methods are widely used due to their simplicity and ease of implementation. However, they are often sensitive to parameter settings and may suffer from slow convergence or premature stagnation when dealing with complex constrained problems.
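As a generic illustration of the second classical approach (not the implementation used in this paper), Deb's rules can be expressed as a pairwise comparison between two candidates, each characterized by an objective value and a total constraint violation:

```python
def deb_better(f1, v1, f2, v2):
    """Deb's feasibility rules: True if candidate 1 beats candidate 2.

    f: objective value (minimization); v: total constraint violation (>= 0).
    Rule 1: a feasible solution beats an infeasible one.
    Rule 2: between two feasible solutions, the lower objective wins.
    Rule 3: between two infeasible solutions, the lower violation wins.
    """
    feas1, feas2 = v1 == 0, v2 == 0
    if feas1 and not feas2:
        return True
    if feas2 and not feas1:
        return False
    if feas1 and feas2:
        return f1 < f2
    return v1 < v2

print(deb_better(5.0, 0.0, 1.0, 0.3))  # True: feasible beats infeasible
```

Because no penalty coefficient is involved, this comparison is parameter-free, which is one reason for its popularity; its sensitivity appears instead in how the violation measure is defined.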
To overcome these limitations, several advanced strategies have been proposed, such as transforming constraints into bi-objective formulations [4], the ε-constraint method [5], and dual-population strategies [6]. These approaches improve the balance between constraint satisfaction and objective optimization, thereby enhancing adaptability to complex constrained problems. Nevertheless, they usually introduce additional computational overhead and increase algorithmic complexity.
(2) Swarm intelligence algorithms and their improvements
From the perspective of algorithm design, a wide range of metaheuristic algorithms have been developed in recent years, particularly in the field of swarm intelligence optimization. In the past five years, more than 100 swarm-based optimization algorithms have been proposed, reflecting the rapidly growing research interest in this area [7,8]. Representative examples include Hawkfish Optimization Algorithm (HFOA) [9], Bezier Curve Optimization (BCO) [10], Dung Beetle Optimization (DBO) [11], Polar Lights Optimization (PLO) [12] and Sequoia-ecology-based Metaheuristic Optimization Algorithm (SMOA) [13]. Despite differences in inspiration and mathematical modeling, these algorithms share several common characteristics. Most of them adopt population-based search frameworks and utilize iterative update mechanisms to balance global exploration and local exploitation. In general, they exhibit strong exploration capability in the early search stages and can effectively avoid premature convergence to some extent.
However, these algorithms still face several inherent limitations when applied to constrained optimization problems. Complex constraints often lead to irregular, narrow, or even disconnected feasible regions, which significantly increase the difficulty of locating feasible and high-quality solutions. In such scenarios, many candidate solutions tend to fall into infeasible regions, making it difficult to maintain population diversity within the feasible space. In addition, the presence of multiple disjoint feasible regions may cause algorithms to converge prematurely to local feasible areas, thereby limiting global search capability. Furthermore, achieving an effective balance between exploration and exploitation under strict constraints remains challenging.
Therefore, developing metaheuristic algorithms with stronger adaptability, better diversity maintenance, and more effective search mechanisms for complex constrained spaces remains an important research direction.
To further enhance the performance of metaheuristic algorithms in constrained scenarios, many researchers have proposed improved or hybrid approaches by integrating multiple strategies. For example, Yu [14] proposed an enhanced Aquila Optimizer Algorithm (EAOA) incorporating restart strategies, opposition-based learning, and chaotic local search, and validated its effectiveness on several engineering optimization problems. Sun [15] developed a fuzzy logic-constrained particle swarm optimization algorithm (FILPSO-SCAε), which integrates a Spearman correlation-based adaptive ε constraint-handling method to dynamically balance constraint and objective information. In addition, fuzzy logic is employed to adaptively adjust key parameters, thereby improving search performance. Li [16] introduced an adaptive multi-objective transformation technique (AMaOTCO), which transforms constrained optimization problems into multi-objective ones by combining objective functions with weighted constraint violations, achieving improved convergence efficiency. Furio [17] proposed a hybrid grey wolf–JAYA algorithm (SHGWJA), which integrates elite strategies and adaptive perturbation mechanisms, and employs a penalty function to handle multi-constraint problems. The proposed method has been successfully applied to various engineering applications, such as structural optimization and path planning.
Although these approaches improve convergence behavior and solution quality to some extent, their performance gains often rely on complex hybrid strategies or multiple mechanisms. This may increase algorithmic complexity and reduce general applicability, especially in problems with highly complex constraint structures.
(3) Engineering applications
In practical applications, some studies have integrated algorithm improvements with constraint-handling strategies to address complex engineering optimization problems. These applications span a wide range of domains, including energy system scheduling [18,19,20] and optimal power flow in power systems [21,22]. In addition, various studies have explored more complex constrained scenarios.
For instance, Boualem [23] proposed an adaptive coordinate system-constrained differential evolution algorithm (ACS-CDE), which utilizes multiple coordinate systems and adaptive selection mechanisms to balance exploration and exploitation. Yang [24] developed an improved sandworm optimization algorithm (ISSA) for structural optimization problems, incorporating enhanced initialization and update strategies combined with penalty-based constraint handling. Abdollahzadeh [25] introduced a hybrid algorithm integrating the puma optimizer with neighborhood search for marine route optimization under multiple constraints. Similarly, Wang [26] proposed a multi-population adaptive optimizer (MACSGWO) for UAV path planning in complex three-dimensional environments. Other studies [27,28] have also applied metaheuristic algorithms to solve constrained optimization problems in similar engineering domains. In addition, Chen et al. [29] and Huang et al. [30] demonstrated the effectiveness of improved metaheuristic algorithms in industrial scheduling and portfolio optimization problems, respectively. These studies show that metaheuristic algorithms have achieved promising performance in various engineering applications with complex constraints.
However, despite these successes, most existing approaches are highly problem-dependent and often rely on carefully designed hybrid strategies or constraint-handling mechanisms. As a result, their generalization ability and robustness in handling diverse and highly complex constrained optimization problems remain limited.
(4) Summary of existing studies
Based on the above review, existing studies on constrained optimization problems can be broadly categorized into three main groups: constraint-handling strategies, swarm intelligence optimization algorithms, and their applications in engineering domains.
Constraint-handling strategies, including classical methods such as penalty functions and Deb’s rules, as well as advanced approaches like ε-constraint and multi-objective transformation techniques, provide effective mechanisms for handling feasibility conditions. However, their performance often depends heavily on parameter settings and may introduce additional computational complexity.
Swarm intelligence algorithms and their improved variants have demonstrated strong global exploration capabilities and have been widely applied in solving complex optimization problems. Nevertheless, these algorithms still face challenges in maintaining population diversity and achieving a proper balance between exploration and exploitation, especially in constrained search spaces.
In terms of engineering applications, metaheuristic algorithms have achieved promising results in various domains, such as energy systems, power systems, and structural optimization. However, many of these methods are highly problem-dependent and rely on carefully designed hybrid strategies, which may limit their generalization ability in diverse constrained optimization scenarios.
To further synthesize the above studies, a structured comparison of representative methods in terms of their advantages and limitations is provided in Table 1.
(5) Discussion and research gaps
Despite the significant progress achieved in constraint-handling strategies, swarm intelligence algorithms, and their engineering applications, several critical challenges remain in solving constrained optimization problems.
First, existing constraint-handling methods often struggle to effectively guide the search process in complex feasible spaces. In problems with irregular, narrow, or disconnected feasible regions, many algorithms tend to generate a large number of infeasible solutions, which reduces search efficiency and hinders convergence toward high-quality feasible solutions.
Second, most swarm intelligence algorithms still face difficulties in maintaining population diversity and achieving a proper balance between global exploration and local exploitation under strict constraint conditions. As a result, these algorithms are prone to premature convergence within local feasible regions, limiting their ability to explore multiple promising areas.
Third, although hybrid and improved algorithms can enhance performance by integrating multiple strategies, their effectiveness often relies on problem-specific designs and carefully tuned parameters. This reduces their robustness and generalization ability when applied to diverse and complex constrained optimization scenarios.
Therefore, there is a need to develop more effective optimization algorithms that can better handle complex constraint structures, maintain population diversity within feasible regions, and achieve a more balanced and robust search process. These challenges motivate the development of the proposed method in this study.

1.3. The Introduction of PKO

The pied kingfisher optimizer (PKO) is a novel swarm intelligence algorithm proposed by Bouaouda [31], inspired by the unique foraging behavior of the pied kingfisher, including hovering, diving, and its symbiotic relationship with Eurasian otters. The algorithm innovatively incorporates three phases: the hovering phase for global search, the diving phase for local exploitation, and the symbiotic phase for balancing exploration and exploitation. By mathematically modeling these behaviors into optimization strategies, PKO has been validated on 29 CEC-2017 benchmark functions and 10 CEC-2020 benchmark functions. Experimental results demonstrate that PKO outperforms many advanced algorithms in terms of solution quality, convergence speed, and the ability to avoid local optima.
However, several limitations have been observed in the original PKO. First, it lacks any explicit mutation or perturbation mechanism throughout its iterative process, which limits the algorithm’s ability to escape local optima. As a result, the population tends to converge prematurely, especially when navigating complex, multimodal, or high-dimensional search spaces where diverse exploratory behavior is crucial. Second, the hovering and diving phases adopt a centralized update strategy that relies heavily on elite or globally best individuals. While this may accelerate convergence in simple landscapes, it significantly reduces the diversity of the population and restricts the algorithm’s capacity to explore less-exploited regions of the search space. In scenarios involving rugged fitness landscapes or disjoint feasible regions, this behavior can severely compromise the global search capability. Third, the symbiotic phase lacks directional learning mechanisms guided by fitness differences or relative advantage among individuals. The update rule often leads individuals to randomly drift toward other peers without considering solution quality, resulting in inefficient local exploitation and slow convergence in the later stages of the search. This inefficiency becomes more pronounced in fine-tuning phases where precision and gradient-like adjustment are essential.

1.4. The Main Contributions and Contents of This Paper

To address the limitations of existing methods in solving complex constrained optimization problems, this paper proposes a multi-strategy improved pied kingfisher optimizer (MSIPKO). The main contributions of this study are summarized as follows:
(1) A reverse differential crossover mechanism is proposed by integrating reverse learning with differential mutation, which effectively enhances global exploration capability and maintains population diversity, especially in complex feasible regions.
(2) An enhanced diving-fishing operator is designed to strengthen local exploitation performance, thereby improving convergence accuracy and accelerating the search process.
(3) An improved commensalism phase with a novel guidance mechanism is introduced to enrich search directions, leading to improved solution quality and robustness of the algorithm.
(4) Extensive experiments on IEEE Congress on Evolutionary Computation 2006 (CEC 2006) benchmark functions and classical engineering optimization problems demonstrate that the proposed MSIPKO achieves competitive or superior performance in terms of optimization accuracy, convergence speed, and stability compared with several state-of-the-art algorithms.
These contributions distinguish MSIPKO from existing hybrid metaheuristic approaches and provide an effective solution for complex constrained optimization problems.
The remainder of this paper is organized as follows. Section 2 introduces the basic pied kingfisher optimizer (PKO). Section 3 presents the proposed MSIPKO in detail. Section 4 reports the experimental results, including benchmark functions and engineering optimization problems. Finally, Section 5 concludes the paper and discusses future research directions.

2. The Process of PKO

This algorithm is inspired by the living habits of the pied kingfisher, considering behaviors such as perching, hovering, diving, and symbiosis. The specific process is described below. The mathematical formulations presented in this section follow the original PKO algorithm proposed in [31].

2.1. Initialization

Similar to many swarm intelligence optimization algorithms, the PKO algorithm initiates the search process by randomly generating a set of initial solutions in the search space as the first attempt. The initial population generation is shown in Formula (3).
$$X_{i,j} = LB + rand \times (UB - LB), \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, \dim$$
where X i , j means the position of the i-th individual in the j-th dimension. rand is a random number within the interval [0, 1]. LB and UB respectively represent the lower and upper bounds of the search range.
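Formula (3) is a standard uniform random initialization; a minimal NumPy sketch is given below (the function name and the assumption that `LB`/`UB` may be scalars or per-dimension arrays are ours):

```python
import numpy as np

def initialize_population(N, dim, LB, UB, rng=None):
    """Random initialization per Formula (3): X = LB + rand * (UB - LB).

    LB and UB may be scalars or per-dimension arrays of length dim.
    """
    rng = np.random.default_rng() if rng is None else rng
    return np.asarray(LB) + rng.random((N, dim)) * (np.asarray(UB) - np.asarray(LB))

X = initialize_population(N=30, dim=5, LB=-10.0, UB=10.0)
print(X.shape)  # (30, 5); every entry lies within [-10, 10]
```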

2.2. Perching and Hovering Strategies (Exploration Phase)

The exploration phase of the PKO algorithm is inspired by the perching and hovering behavior of the pied kingfisher. Based on observations in their natural habitats, pied kingfishers alternate between perching attacks and hovering attacks according to different factors. The position of the searching individual is updated according to the foraging activities of the pied kingfisher, as shown in Formula (4).
$$X_i^{t+1} = \begin{cases} X_i^t + \alpha \, T_1 \, (X_k^t - X_i^t), & rand \ge 0.5 \\ X_i^t + \alpha \, T_2 \, (X_k^t - X_i^t), & rand < 0.5 \end{cases} \quad i, k = 1, 2, \ldots, N, \; i \ne k$$
where $X_i^t$ and $X_i^{t+1}$ respectively represent the solution vector of the $i$-th individual at iterations $t$ and $t+1$. $X_k^t$ denotes the solution vector of the $k$-th individual at iteration $t$. The population size is $N$.
The calculation method for parameter α is shown in Formula (5).
$$\alpha = 2 \times randn(1, \dim) - 1$$
where $randn(1, \dim)$ denotes a vector of random numbers drawn from the standard normal distribution, and $\dim$ is the length of the solution vector $X_i$, i.e., the dimension of the objective function.
The parameter T represents the strategy selection of perching or hovering, and the calculation method is shown in Formulas (6)–(9).

2.2.1. Perching

In the perching strategy, pied kingfishers typically perch on objects such as trees, rocks, or electrical wires to search for prey. The calculation is given in Formulas (6) and (7).
$$T_1 = \left( \exp(1) - \exp\left( \left( \frac{t-1}{Max\_Iter} \right)^{1/BF} \right) \right) \cos(crest\_angles)$$
$$crest\_angles = 2\pi \times rand$$
where $Max\_Iter$ is the maximum number of iterations. $BF$ stands for beating factor, which is set to 8 in this algorithm. The crest angle of a pied kingfisher enhances its field of view for detecting prey from a distance and aids in focusing on prey during hunting.

2.2.2. Hovering

In the hovering strategy, the pied kingfisher maintains its position in the air by rapidly flapping its wings, and its mathematical model is shown in Formulas (8) and (9).
$$T_2 = BeatingRate \times \left( \frac{t}{Max\_Iter} \right)^{1/BF}$$
$$BeatingRate = rand \times \frac{PKO\_Fitness_j}{PKO\_Fitness_i}$$
where $PKO\_Fitness_i$ and $PKO\_Fitness_j$ are the fitness values of the $i$-th and $j$-th individuals, respectively.

2.3. Diving Strategy (Exploitation Phase)

Pied kingfishers are known for their diving and hunting behavior, which is used for local search during the exploitation phase. The calculation is given in Formulas (10)–(13).
$$X_i^{t+1} = X_i^t + HA \times o \times (\alpha \, b - X_{best}^t), \quad i = 1, 2, \ldots, N$$
$$HA = rand \times \frac{PKO\_Fitness_i}{Best\_Fitness}$$
$$o = \exp\left( -\left( \frac{t}{Max\_Iter} \right)^2 \right)$$
$$b = X_i^t + o^2 \times randn \times X_{best}^t$$
where $HA$ and $o$ are parameters related to hunting ability.

2.4. Commensalism Phase (Local Escape Phase)

The symbiotic relationship between pied kingfisher and otters was used to simulate the escape phase, and the calculation formula is shown in Formulas (14) and (15).
$$X_i^{t+1} = \begin{cases} X_m^t + o \times \alpha \times (X_i^t - X_n^t), & rand > 1 - PE \\ X_i^t, & rand \le 1 - PE \end{cases}$$
$$PE = PE_{\max} - (PE_{\max} - PE_{\min}) \times \frac{t}{Max\_Iter}$$
where $X_m^t$ and $X_n^t$ respectively denote two random individuals at iteration $t$. $PE$ represents the predatory efficiency of the pied kingfisher, with $PE_{\max} = 0.5$ and $PE_{\min} = 0$.
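Read literally, Formulas (14) and (15) give the following per-individual update. The sketch below is a hedged interpretation: how $o$ (from Formula (12)) and $\alpha$ (from Formula (5)) are wired into this phase is our assumption, not something the paper states explicitly.

```python
import numpy as np

def commensalism_update(X, i, o, t, max_iter, pe_max=0.5, pe_min=0.0, rng=None):
    """One commensalism-phase step for individual i (Formulas (14) and (15)).

    Sketch only: o is taken from the diving phase (Formula (12)) and
    alpha is redrawn per Formula (5); both wirings are assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    N, dim = X.shape
    pe = pe_max - (pe_max - pe_min) * t / max_iter      # Formula (15)
    if rng.random() > 1.0 - pe:                         # first branch of (14)
        m, n = rng.choice(N, size=2, replace=False)     # two random individuals
        alpha = 2.0 * rng.standard_normal(dim) - 1.0    # Formula (5)
        return X[m] + o * alpha * (X[i] - X[n])
    return X[i].copy()                                  # second branch: stay put
```

Note that as $t$ approaches $Max\_Iter$, $PE$ decays to $PE_{\min} = 0$, so the second (stationary) branch dominates late in the search; this is precisely the behavior the improved commensalism phase in Section 3.2.3 is designed to address.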

2.5. The Flow of PKO

This section presents PKO’s framework in pseudocode form, as shown in Algorithm 1. The pseudocode outlines the three main phases of PKO—hovering exploration, diving exploitation, and commensalism cooperation—and details how these phases are executed within the iterative optimization process.
Algorithm 1: Pseudocode of PKO
Input: Population size N; maximum number of iterations MaxIt.
Output: The optimal fitness of PKO (Best_fitness) and the optimal solution vector Xbest.
1: Initialize the PKO population positions according to Section 2.1.
2: Calculate the pied kingfisher fitness values.
3: while (t < MaxIt + 1) do
4:   for i = 1:N do
5:     if (rand < 0.8) then
6:       % Exploration phase
7:       if (rand > 0.5) then
8:         Compute T1 according to Formulas (6) and (7)
9:         Update the position of the pied kingfisher using Formula (4)
10:      else
11:        Compute T2 according to Formulas (8) and (9)
12:        Update the position of the pied kingfisher using Formula (4)
13:      end
14:    else
15:      % Exploitation phase
16:      Update the position of the pied kingfisher using Formula (10)
17:    end
18:    If the newly generated solution is superior to the previous one, replace it. Set the best position as the location of the best fitness.
19:    if (rand > (1 − PE)) then
20:      Update the position of the pied kingfisher using the first branch of Formula (14), with PE computed by Formula (15)
21:    else
22:      Keep the current position unchanged (the second branch of Formula (14))
23:    end
24:    Calculate the fitness values of the pied kingfisher
25:    If the newly generated solution is superior to the previous one, replace it. Set the best position as the location of the best fitness.
26:    t = t + 1
27:  end for
28: end while
29: Return Best_fitness and Xbest.

3. Materials and Methodology

3.1. Overview of the Proposed MSIPKO Algorithm

The original PKO algorithm has demonstrated promising performance in solving optimization problems. However, similar to many population-based metaheuristic methods, it still suffers from several limitations, such as insufficient exploration ability in early iterations, lack of effective mutation mechanisms, and potential premature convergence or slow search speed in later stages when dealing with complex functions.
To address these issues, a multi-strategy improved PKO algorithm (MSIPKO) is proposed in this study. The core idea of MSIPKO is to enhance the balance between global exploration and local exploitation through the integration of multiple complementary strategies.
Specifically, three improvement strategies are introduced. First, a reverse differential crossover mechanism is incorporated to compensate for the lack of mutation in the original PKO and enhance population diversity. Second, during the symbiotic stage, a detour-foraging mechanism inspired by zebrafish behavior is embedded into the second branch of Formula (14) to improve the search capability during the commensalism phase. Finally, a diving-foraging module inspired by pied kingfisher behavior is introduced to strengthen the search efficiency and maintain convergence performance in later iterations.
Through the integration of these strategies, MSIPKO is able to achieve a better balance between exploration and exploitation, thereby improving convergence stability and optimization accuracy. The detailed formulations of the proposed strategies are presented in the following subsections.

3.2. Improvement Strategies

3.2.1. Reverse Differential Crossover Mechanism

In this operator, first, reverse learning is used to simulate the process of reverse searching for food in pied kingfisher, and the calculation formula is shown in Formula (16).
$$X_i^{opt} = LB + UB - X_i^t$$
where $X_i^{opt}$ represents the reverse (opposite) solution of $X_i^t$ at the $t$-th iteration.
Subsequently, after merging the original population and the reverse population, differential mutation is performed on the resulting population $X^{new,t}$. The specific calculation is given in Formulas (17) and (18). This step simulates the process in which pied kingfishers expand their search range and interact with each other within the group to exchange information.
$$X_i^{v,t} = X_i^{new,t} + rand(1, \dim) \times (X_i^{new,t} - X_k^{new,t})$$
$$X_{i,j}^{t+1} = \begin{cases} X_{i,j}^{new,t}, & rand \le 0.5 \\ X_{i,j}^{v,t}, & rand > 0.5 \end{cases} \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, \dim$$
where $X_i^{new,t}$ and $X_k^{new,t}$ are the $i$-th and $k$-th individuals of the merged population, respectively. $rand(1, \dim)$ denotes a random vector of length $\dim$ with entries in [0, 1].
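The three steps of the mechanism (reverse learning, merging, differential mutation with crossover) can be sketched as follows. Bound clipping and the subsequent greedy selection back to $N$ individuals are our assumptions about how the operator is embedded in Algorithm 2.

```python
import numpy as np

def reverse_differential_crossover(X, LB, UB, rng=None):
    """Reverse differential crossover, a sketch of Formulas (16)-(18).

    Returns the 2N trial individuals; greedy selection back to N
    (comparing with the previous solutions) is assumed to follow,
    and clipping to the bounds is our assumption.
    """
    rng = np.random.default_rng() if rng is None else rng
    X_opp = LB + UB - X                     # reverse learning, Formula (16)
    X_new = np.vstack([X, X_opp])           # merge original and reverse populations
    M, dim = X_new.shape
    out = np.empty_like(X_new)
    for i in range(M):
        k = int(rng.integers(M))
        while k == i:                       # pick a partner k != i
            k = int(rng.integers(M))
        # differential mutation, Formula (17)
        v = X_new[i] + rng.random(dim) * (X_new[i] - X_new[k])
        # dimension-wise uniform crossover, Formula (18)
        mask = rng.random(dim) > 0.5
        out[i] = np.where(mask, v, X_new[i])
    return np.clip(out, LB, UB)
```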

3.2.2. Enhanced Diving-Fishing Operator

An enhanced operator, as defined in Formulas (19)–(21), is proposed to improve the search efficiency of the pied kingfisher’s diving behavior during the prey-capturing process in the PKO algorithm.
$$X_i^{t+1} = \lambda_1 X_i^t - \lambda_2 X_{rand\_ind}^t + \lambda_3 X_{step}^t$$
$$X_{step}^t = (UB - LB) \times \lambda_4$$
$$\lambda_4 = \exp\left( 0.1 \dim \left( \frac{t}{Max\_Iter} \right)^2 - 0.2 \dim \frac{t}{Max\_Iter} \right)$$
where $\lambda_1$, $\lambda_2$, $\lambda_3$ are three random numbers within the interval [0, 1]. $X_{rand\_ind}^t$ represents any individual in the population other than $X_i$ itself at the $t$-th iteration. $X_{step}$ is a disturbance variable.
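A minimal sketch of this operator follows. The sign convention inside $\lambda_4$ follows the reconstruction of Formula (21) above and should be verified against the authors' reference code before reuse.

```python
import numpy as np

def diving_fishing_update(X, i, t, max_iter, LB, UB, rng=None):
    """Enhanced diving-fishing update for individual i, Formulas (19)-(21).

    Sketch only: the exact form of lambda_4 is a reconstruction.
    """
    rng = np.random.default_rng() if rng is None else rng
    N, dim = X.shape
    l1, l2, l3 = rng.random(3)                # lambda_1, lambda_2, lambda_3 in [0, 1]
    ratio = t / max_iter
    l4 = np.exp(0.1 * dim * ratio**2 - 0.2 * dim * ratio)   # Formula (21)
    x_step = (UB - LB) * l4                                 # Formula (20)
    k = int(rng.integers(N))
    while k == i:                             # random individual other than i
        k = int(rng.integers(N))
    return l1 * X[i] - l2 * X[k] + l3 * x_step              # Formula (19)
```

Since $\lambda_4$ shrinks as $t$ grows, the disturbance term $X_{step}^t$ contracts over the run, which matches the operator's role of fine-grained exploitation in later iterations.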

3.2.3. Improvement of the Commensalism Phase

To enhance the search performance of the PKO algorithm, the second branch of Formula (14) was improved to address the issue where individuals might not search toward a better direction. The modified formula is presented as Formulas (22)–(25).
$$X_i^{t+1} = \begin{cases} X_m^t + o \times \alpha \times (X_i^t - X_n^t), & rand > 1 - PE \\ X_{rand\_ind}^t + \lambda_5, & rand \le 1 - PE \end{cases}$$
$$\lambda_5 = R \times (X_i^t - X_{rand\_ind}^t) + \delta$$
$$R = \left( \exp(1) - \exp\left( \left( \frac{t-1}{Max\_Iter} \right)^2 \right) \right) \sin(2\pi \times rand)$$
$$\delta = round\big(0.5 \times (0.05 + rand)\big) \times randn(1, \dim)$$
where $R$ is a zoom factor, $\delta$ is a disturbance variable, and $round$ denotes rounding to the nearest integer.
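The modified second branch can be sketched as below, based on the reconstructed Formulas (22)–(25); the exact form of the disturbance $\delta$ should be checked against the original source.

```python
import numpy as np

def improved_escape_step(X, i, t, max_iter, rng=None):
    """Second branch of the improved commensalism phase, Formulas (22)-(25).

    Sketch only: a reconstruction of the guided escape move, which replaces
    the stationary branch of the original Formula (14).
    """
    rng = np.random.default_rng() if rng is None else rng
    N, dim = X.shape
    k = int(rng.integers(N))
    while k == i:                               # X_rand_ind: any individual != i
        k = int(rng.integers(N))
    # zoom factor R, Formula (24)
    R = (np.exp(1.0) - np.exp(((t - 1) / max_iter) ** 2)) * np.sin(2.0 * np.pi * rng.random())
    # disturbance delta, Formula (25): round(...) is 0 or 1, i.e. an occasional kick
    delta = np.round(0.5 * (0.05 + rng.random())) * rng.standard_normal(dim)
    lam5 = R * (X[i] - X[k]) + delta            # Formula (23)
    return X[k] + lam5                          # Formula (22), second branch
```

Unlike the original branch, which leaves $X_i$ unchanged, this move always relocates the individual relative to a random peer, which is what enriches the search directions in the local escape phase.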

3.3. The Flow of MSIPKO

MSIPKO is designed as a unified hybrid optimization framework, as shown in Algorithm 2, in which three carefully selected strategies function in a complementary and coordinated manner. This integration is not a simple stacking of heuristics, but a structured synergy aimed at addressing key limitations in the original PKO algorithm. Specifically, the reverse differential crossover enhances global exploration in early stages, the enhanced diving-fishing operator improves local exploitation during convergence, and the improved commensalism phase ensures guided adaptation in later iterations. These strategies work together to improve search diversity, convergence speed, and solution accuracy under complex constraints. Based on this integrated framework, the complete pseudocode of MSIPKO is presented below to illustrate its iterative process.
Algorithm 2: Pseudocode of MSIPKO
Input: Population total number N; the number of optimization iterations MaxIt.
Output: The optimal solution of PKO (Best_fitness), optimal solution vector Xbest.
1:  Initialize the PKO population positions according to Section 2.1.
2:  Calculate the fitness values of the pied kingfishers.
3:  while (t < MaxIt + 1) do
4:    for i = 1:N do
5:      Implement the reverse differential crossover mechanism according to Formulas (16)–(18).
6:      If the newly generated solutions are superior to the previous ones, replace them and set the best position to the location of the best fitness.
7:      if (rand < 0.8) then
8:        % Exploration phase
9:        if (rand > 0.5) then
10:         Compute T1 according to Formulas (6) and (7).
11:         Update the position of the pied kingfisher using Formula (4).
12:       else
13:         Compute T2 according to Formulas (8) and (9).
14:         Update the position of the pied kingfisher using Formula (4).
15:       end
16:     else
17:       % Exploitation phase
18:       Update the position of the pied kingfisher using Formula (10).
19:     end
20:     If the newly generated solutions are superior to the previous ones, replace them and set the best position to the location of the best fitness.
21:     if (rand > (1 − PE)) then
22:       Update the position of the pied kingfisher using the first branch of Formulas (14) and (15).
23:     else
24:       Update the position of the pied kingfisher using Formulas (19)–(21).
25:     end
26:     Calculate the fitness values of the pied kingfishers.
27:     If the newly generated solutions are superior to the previous ones, replace them and set the best position to the location of the best fitness.
28:     % Local escape phase
29:     Implement the commensalism phase according to Formulas (22)–(25) and (15).
30:     If the newly generated solutions are superior to the previous ones, replace them and set the best position to the location of the best fitness.
31:   end for
32:   t = t + 1
33: end while
34: Return Best_fitness and Xbest.
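The repeated "replace if superior" step in Algorithm 2 is a simple greedy selection. A minimal Python sketch of this pattern illustrates how each phase's candidates are accepted; the sphere function and the random perturbation are stand-ins for the PKO/MSIPKO update formulas, which are not reproduced here:

```python
import numpy as np

def greedy_replace(pop, fit, cand, cand_fit):
    # Accept a candidate only where it improves the current individual;
    # this is the "replace if superior" step repeated after every phase.
    better = cand_fit < fit
    pop[better] = cand[better]
    fit[better] = cand_fit[better]
    return pop, fit

# Toy usage on the sphere function; the random perturbation stands in
# for a PKO/MSIPKO position-update formula (an assumption for brevity).
rng = np.random.default_rng(0)
sphere = lambda X: np.sum(X ** 2, axis=1)
pop = rng.uniform(-5, 5, (20, 4))
fit = sphere(pop)
for _ in range(50):
    cand = pop + rng.normal(0.0, 0.5, pop.shape)
    pop, fit = greedy_replace(pop, fit, cand, sphere(cand))
best = fit.min()
```

Because replacement is elitist at the individual level, the best fitness is monotonically non-increasing across phases, which is what makes the interleaving of the three strategies safe.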

3.4. Computational Complexity Analysis of MSIPKO

To assess the computational efficiency of the proposed MSIPKO algorithm, this section compares its theoretical time complexity with the original PKO algorithm. Both algorithms operate under the same population size N, problem dimension D, and maximum number of iterations T.
Table 2 summarizes the time complexity of the key modules in both algorithms. Structurally, PKO performs three stages per generation (hovering/perching, diving, and commensalism), each involving O(ND) position updates and evaluations. Therefore, the complexity over all iterations is O(3TND); including initialization, the total complexity is O((3T + 1)ND).
On top of retaining these three stages, MSIPKO introduces one additional module: the reverse differential crossover mechanism, which generates reverse individuals and applies differential mutation, contributing an extra O(ND) computational load per generation. It is important to emphasize that the enhanced diving-fishing operator and the improved commensalism phase in MSIPKO are refined versions of the original PKO operators. These enhancements only adjust local update rules without introducing new loops or global operations, and thus do not increase the asymptotic complexity. Consequently, MSIPKO's per-generation complexity is O(4ND), giving O(4TND) over all iterations and a total complexity of O((4T + 1)ND) including initialization. While the constant factor increases slightly, the overall asymptotic order remains unchanged. Note that this analysis concerns asymptotic time complexity; the additional module mainly affects the constant cost per iteration rather than the complexity order.
In addition to the asymptotic time complexity analysis, the computational burden of MSIPKO is further evaluated from a practical perspective. Although the additional module increases the constant computational cost per iteration, the experimental results indicate that MSIPKO generally requires fewer function evaluations to achieve comparable or better solution quality than most competing algorithms. This suggests that the improved search efficiency effectively compensates for the additional computational overhead.
Therefore, MSIPKO maintains a reasonable computational burden while achieving improved optimization performance, demonstrating a favorable trade-off between computational cost and solution quality. Table 2 presents a module-level comparison of computational cost, reflecting the relative overhead of each component rather than the asymptotic complexity order.
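The complexity expressions above can be checked with a small worked example; the population size, dimension, and iteration count below are arbitrary illustrative values:

```python
# Counting O(ND)-scale update stages: PKO runs three stages per
# iteration, MSIPKO runs four (the extra reverse differential
# crossover), and both pay one initialization pass.
def pko_cost(N, D, T):
    return (3 * T + 1) * N * D      # O((3T + 1) N D)

def msipko_cost(N, D, T):
    return (4 * T + 1) * N * D      # O((4T + 1) N D)

# With N = 30, D = 10, T = 1000 (hypothetical settings), the overhead
# ratio approaches 4/3: a larger constant, the same asymptotic order.
ratio = msipko_cost(30, 10, 1000) / pko_cost(30, 10, 1000)
```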

3.5. Experimental Settings and Constraint-Handling Strategy

To ensure a fair and reliable comparison, all experiments are conducted under a unified setting: all algorithms are implemented in MATLAB R2024a and executed on a macOS-based system equipped with an Apple M3 Pro processor and 18 GB RAM. For each test problem, every algorithm is independently run 30 times to reduce the influence of randomness.
For constrained optimization problems, a static penalty function is adopted. The penalized objective function is defined as Equation (26).
F ( x ) = f ( x ) + ω G ( x )
where F(x) is the fitness function, f(x) is the original objective function, and G(x) is the violation function, whose calculation is given in Equation (2). ω is the penalty coefficient; in this study, ω is set to 10⁶ for all constrained problems. This formulation strongly penalizes infeasible solutions and guides the population toward the feasible region during the optimization process.
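A minimal Python sketch of this static penalty scheme follows; the exact form of the violation function G(x) uses the common sum-of-violations convention with an equality tolerance of 10⁻⁴ (the paper's Equation (2) is assumed to be of this form):

```python
import numpy as np

def violation(g_vals, h_vals=(), tol=1e-4):
    # G(x): summed violation of inequality constraints g(x) <= 0 and
    # equality constraints |h(x)| <= tol (a common convention; the
    # exact form of the paper's Equation (2) is assumed here).
    g = np.sum(np.maximum(0.0, np.asarray(g_vals, dtype=float)))
    h = np.sum(np.maximum(0.0, np.abs(np.asarray(h_vals, dtype=float)) - tol))
    return g + h

def penalized_fitness(f_val, g_vals, h_vals=(), omega=1e6):
    # F(x) = f(x) + omega * G(x), the static penalty of Equation (26).
    return f_val + omega * violation(g_vals, h_vals)

# A feasible point keeps its objective value; an infeasible one is
# pushed far above the feasible region's fitness range.
feasible = penalized_fitness(1.0, g_vals=[-0.2, -1.0])
infeasible = penalized_fitness(1.0, g_vals=[0.3, -1.0])
```

With ω = 10⁶, even a small violation (0.3 here) dominates the objective, so selection pressure steers the population back into the feasible region.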
All compared algorithms use the parameter settings recommended in their original studies to ensure fairness of comparison.

4. Numerical Experiment

This section presents a comprehensive set of experiments to evaluate the performance of the proposed MSIPKO algorithm on constrained optimization problems (COPs). The evaluation is divided into two parts: benchmark problems from the CEC 2006 suite (G01–G12) and six classical engineering design problems.
For the benchmark tests, MSIPKO is compared against nine representative algorithms, including recent methods such as the enzyme action optimizer (EAO) and the starfish optimization algorithm (SFOA), forming a set of ten algorithms in total. All algorithms are tested under unified simulation conditions and use a static penalty function as the default constraint-handling mechanism. Performance is assessed through metrics such as average fitness, best solution, worst solution, and standard deviation. Statistical significance is verified using Wilcoxon signed-rank tests with Bonferroni correction.
Next, a component-wise ablation study is conducted to assess the contribution of each improvement in MSIPKO, by sequentially integrating the proposed strategies into the baseline PKO. Following that, a sensitivity analysis under varying evaluation budgets is performed, examining the algorithm’s ability to converge under six different function evaluations (FEs) limits, ranging from 1000 to 200,000.
Furthermore, a detailed comparison of different constraint-handling strategies (CHMs) is presented. MSIPKO is embedded within multiple CHMs, including static penalty, dynamic penalty, feasibility rules, ε-constrained method, and equality relaxation. This comparison highlights the algorithm’s robustness across diverse constraint environments.
In the second part, six widely studied engineering problems are used to further validate the real-world applicability of MSIPKO. The results confirm its strong and stable performance under practical design constraints.

4.1. Benchmark Functions

In this section, the performance of the proposed MSIPKO algorithm is thoroughly examined. Twelve benchmark functions are used to evaluate its optimization ability, and nine state-of-the-art algorithms are employed for comparison. This section includes five parts: (1) introduction of benchmark functions; (2) simulation setup and statistical analysis; (3) component-wise ablation study of MSIPKO; (4) sensitivity to evaluation budget; and (5) comparison of constraint-handling strategies.

4.1.1. Introduction of Benchmark Functions

The main characteristics of the 12 functions are shown in Table 3. Among them, n is the number of decision variables. ρ is the estimated ratio between the feasible region and the search space. LI is the number of linear inequality constraints. NI is the number of nonlinear inequality constraints. LE is the number of linear equality constraints, and NE is the number of nonlinear equality constraints. a is the number of active constraints.

4.1.2. Comparative Evaluation of 10 Algorithms: Simulation Setup and Statistical Analysis

(1)
Simulation settings
To verify MSIPKO's performance, it is compared with the original PKO and eight algorithms that have shown high quality in the last two years: the flood algorithm (FLA) [32], black-winged kite algorithm (BKA) [33], triangulation topology aggregation optimizer (TTAO) [34], FOX-inspired optimization algorithm (FOX) [35], crayfish optimization algorithm (COA) [36], Newton–Raphson-based optimizer (NRBO) [37], enzyme action optimizer (EAO) [38], and starfish optimization algorithm (SFOA) [7].
To ensure fairness, all 10 algorithms are run under identical simulation conditions on the 12 benchmark functions, with 30 independent executions each. The specific parameters and constraint-handling strategies are shown in Table 4, where FEs denotes the number of function evaluations (population size multiplied by the number of iterations). The parameters of each algorithm are consistent with those in its original publication.
(2)
Experimental results
According to the simulation conditions in Table 4, the performance on G01–G12, including the mean value, standard deviation, and the best and worst values, is summarized in Table 5 and Table 6 and visualized through the box plots in Figure 1. Specifically, Table 5 presents the results of the first five algorithms, while Table 6 shows those of the remaining five.
From the numerical results, MSIPKO demonstrates a strong ability to consistently reach near-optimal or best-known solutions in most test cases. Specifically, on G01, G02, G06, and G08, the algorithm shows both low standard deviation and excellent best-case performance, indicating reliable convergence behavior. In constrained equality-dominant problems such as G03 and G07, MSIPKO still maintains competitive quality with relatively tight result distributions.
Figure 1 complements this analysis by highlighting the solution distributions of all algorithms. MSIPKO exhibits compact box ranges on most problems, reflecting high stability. For functions like G04 and G09, although multiple algorithms show feasible outputs, the interquartile range of MSIPKO is narrower, with fewer outliers. This suggests the multi-strategy enhancement successfully mitigates population stagnation and maintains solution diversity in high-constraint scenarios.
Overall, the simulation results validate the effectiveness of MSIPKO across different categories of constrained optimization problems.
To further validate these observations, non-parametric statistical tests were conducted. The next subsection presents the ranking results, Wilcoxon signed-rank test outcomes, and Bonferroni-corrected significance analysis to confirm the robustness and reliability of MSIPKO's superiority.
In the present study, a static penalty function with a penalty factor of 10⁶ is adopted for both the CEC 2006 benchmark problems and the engineering design problems. Under this setting, infeasible solutions would result in substantially inflated objective values. From the reported results, most algorithms obtain objective values within the expected feasible range on the majority of problems, indicating that they generally reach the feasible region or remain very close to it. A notable exception is EAO on G06, where the unusually large objective values suggest possible difficulty in reaching feasible solutions in some runs.
(3)
Statistical analysis
To further validate the algorithm’s superiority and robustness, statistical analyses including rank-based comparison and non-parametric Wilcoxon signed-rank test with Bonferroni correction were conducted on the results across all benchmark functions.
As shown in Table 7, the proposed MSIPKO algorithm achieves rank 1 on most benchmark functions, and secures rank 2 on a few functions. While the EAO ranks first on some functions including G05, G07, G09, etc., its performance significantly deteriorates on others (e.g., G01–G03, G06), resulting in an overall mean rank of 3.33, which is inferior to MSIPKO (mean rank = 1.42). This indicates that EAO may exhibit strength in specific local patterns but lacks general applicability. In contrast, MSIPKO demonstrates stable and universal performance, suggesting its broader suitability for solving diverse constrained optimization problems.
Furthermore, other algorithms such as PKO, BKA, TTAO, and FOX present relatively fluctuating ranks, showing inconsistent effectiveness depending on the problem structure. Notably, the original PKO ranks much lower (mean rank = 5.92), highlighting the significant improvements gained through the enhancements introduced in MSIPKO.
To further validate the statistical significance of the observed performance advantages, a Wilcoxon signed-rank test is performed between MSIPKO and each of the other nine algorithms. Additionally, the Bonferroni correction was applied to adjust the significance levels in the context of multiple comparisons. The detailed results of these non-parametric statistical tests are presented in Table 8.
As shown in Table 8, the Wilcoxon signed-rank test is employed to assess the statistical significance of performance differences between MSIPKO and the comparison algorithms. For each benchmark function, all algorithms are independently executed 30 times, and the mean objective value is used as the representative performance metric.
For each pair of algorithms, a paired sample is constructed by comparing their mean objective values on each benchmark function. That is, each function provides one paired data point, and the Wilcoxon signed-rank test is conducted across all benchmark functions. This pairing scheme enables a fair comparison by evaluating algorithm performance consistently on the same set of problem instances.
The significance level is set to α = 0.05, and the Bonferroni correction is applied with m = 9, representing the total number of pairwise comparisons between MSIPKO and each of the nine competing algorithms, to control the family-wise error rate.
The results indicate that MSIPKO generally achieves better performance compared to most of the considered algorithms. For PKO, FLA, BKA, TTAO, FOX, COA, NRBO, and SFOA, the obtained p-values (e.g., 0.0005) indicate statistically significant differences at the 0.05 significance level before correction, and these differences remain significant after applying the Bonferroni correction.
For EAO, the p-value is 0.0312, indicating relatively weaker statistical evidence. After Bonferroni correction, the adjusted p-value (0.2808) does not meet the significance thresholds. However, the rank-sum results (R+ > R−) still suggest that MSIPKO tends to outperform EAO in most cases.
Overall, these results suggest that MSIPKO demonstrates competitive and generally superior performance compared with the majority of the considered algorithms, while maintaining statistical robustness under multiple comparison conditions.
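The Bonferroni adjustment used here is straightforward to reproduce; the sketch below uses the p-values reported above (0.0005 for eight comparisons, 0.0312 for EAO) with m = 9 comparisons:

```python
def bonferroni(p_values):
    # Bonferroni adjustment for m comparisons: p_adj = min(1, m * p),
    # which controls the family-wise error rate at the chosen alpha.
    m = len(p_values)
    return [min(1.0, m * p) for p in p_values]

# p-values reported in the text: 0.0005 for eight pairwise comparisons
# and 0.0312 for EAO, with m = 9 comparisons in total.
raw = [0.0005] * 8 + [0.0312]
adjusted = bonferroni(raw)
# The eight small p-values stay below alpha = 0.05 after adjustment
# (0.0045 each), while EAO's becomes 0.2808 and loses significance.
```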

4.1.3. Component-Wise Ablation Study of MSIPKO

To further validate the effectiveness of each improvement component in the MSIPKO algorithm, this section conducts an ablation study involving five variants: the original PKO, PKO enhanced with Component 1 (PKO + C1), Component 2 (PKO + C2), Component 3 (PKO + C3), and the full version MSIPKO, where C1 denotes the reverse differential crossover mechanism, C2 represents the enhanced diving-fishing operator, and C3 introduces the improvement of the commensalism phase. The comparative results across 12 benchmark functions are summarized in Table 9 and visualized in Figure 2 using boxplots. Additionally, a ranking-based evaluation is conducted in Table 10 to assess the effectiveness of each component and their synergy.
From the data in Table 10, MSIPKO achieves the best performance on 10 out of the 12 benchmark functions, and ranks second on the remaining two (G02 and G07), resulting in the lowest total ranking score (14) and an average rank of 1.17. This indicates that the integration of all three components leads to consistently superior performance and robustness. By contrast, the original PKO ranks worst overall (average rank = 4.83), confirming the necessity of the component enhancements.
Detailed analysis reveals the individual and synergistic contributions of each component. Component C1 plays a decisive role in improving the optimization accuracy for functions G01 and G02. Component C2 dominates the improvements in G04, G05, G07, G09, and G10, showcasing its ability to enhance convergence and robustness in complex constrained scenarios. Component C3, though not solely responsible for the best results on any single function, contributes a stabilizing effect, particularly in the combination setting.
Moreover, some benchmark functions benefit from the synergy of multiple components. Specifically, G03 and G12 are significantly improved only when all three components are integrated, with the most notable contribution in G03 coming from C3. Functions G06, G08, and G11 are enhanced primarily by the combined effect of C1 and C2, indicating their complementary strengths.
It is worth noting that in the cases of G02 and G07, MSIPKO performs slightly worse than the single-module-enhanced versions (PKO + C1 and PKO + C2, respectively). This is likely due to the slight trade-off effect introduced by full-component integration, where the balancing mechanism of C3 may suppress the localized strength of individual modules in specific scenarios.
Overall, the results from Table 9 and Table 10 and Figure 2 demonstrate that each component contributes uniquely to the algorithm. The combined use of C1, C2, and C3 in MSIPKO ensures both high accuracy and strong stability, making it a robust and well-rounded approach for constrained optimization problems.

4.1.4. Sensitivity to Evaluation Budget

To further evaluate the response sensitivity and efficiency of the 10 algorithms under varying computational budgets, this section analyzes the average objective values across six levels of function evaluations (1000, 5000, 10,000, 50,000, 100,000, and 200,000), as visualized in Figure 3.
When the number of function evaluations (FEs) is less than 10,000, most algorithms—such as PKO, FLA, and FOX—exhibit relatively slow convergence, with noticeable gaps between initial and near-optimal values. In contrast, some algorithms, including EAO, BKA, and FLA, show relatively faster descent on certain functions (e.g., G02, G07, and G09). Among them, EAO often demonstrates rapid improvement at the early stage for several functions, although its subsequent progress becomes limited in some cases (e.g., G02 and G03). Similarly, BKA shows good initial performance but tends to plateau relatively early.
In comparison, MSIPKO exhibits efficient early-stage convergence behavior, achieving objective values close to the optimum on several functions (e.g., G01, G02, G06, and G12) within FEs = 10,000.
As the evaluation budget increases to FEs = 50,000 and beyond, the performance differences become more apparent. MSIPKO continues to improve its solutions and generally maintains lower objective values on most functions. For example, on G02, MSIPKO gradually surpasses other algorithms. In the later stages, FOX and COA demonstrate relatively better fine-tuning ability on some functions (e.g., G03, G05, and G10), suggesting stronger local exploitation capability, although their overall performance remains inferior to MSIPKO.
Overall, MSIPKO demonstrates competitive and generally superior convergence behavior across most test functions. The convergence curves exhibit relatively smooth trends in many cases, although some fluctuations may still be observed. No obvious premature convergence is observed in most functions. These results suggest that MSIPKO achieves a reasonable balance between global exploration and local exploitation under different computational budgets.

4.1.5. Comparison of Constraint-Handling Strategies

To evaluate the performance of MSIPKO in comparison with algorithms utilizing different constraint-handling strategies, this study conducts a comparative analysis against five representative algorithms. These algorithms adopt various constraint-handling approaches, including the HMICA [1] based on Deb’s criterion, the BSA-SAε algorithm [39] employing the ε-constrained method, and three algorithms—SMA-GM [40], AGWO [41], and IChoA [42]—using the dynamic penalty approach.
The simulation conditions and constraint-handling strategies for these algorithms are summarized in Table 11, and the corresponding results are presented in Table 12. It should be emphasized that the results in Table 12 are partially collected from the literature, where different algorithms were evaluated under different function evaluation (FE) budgets. Therefore, these results should be interpreted as indicative comparisons rather than strictly controlled evaluations.
As shown in Table 12, MSIPKO demonstrates competitive performance compared with the other algorithms. However, some algorithms exhibit slightly better reported average or optimal values for certain benchmark functions, such as G04, G05, G06, G08, and G11. These discrepancies are mainly attributed to differences in numerical precision and rounding conventions across studies rather than intrinsic performance differences.
For example, for the G04 benchmark function, the theoretical optimum is reported as f(x*) = −30,665.53867178332 in the CEC 2006 benchmark definition, while some studies present rounded values such as −30,665.539 or even −30,666.
According to the official CEC 2006 evaluation criteria, solution quality is assessed based on function error rather than decimal precision. Specifically, a solution is considered successful if $|f(x) - f(x^*)| \le 10^{-4}$ while satisfying the feasibility conditions.
Therefore, in this study, solutions that satisfy this error threshold are regarded as equivalent to the theoretical optimum. This criterion ensures consistency with the benchmark evaluation protocol and avoids misleading comparisons caused by differences in reported decimal precision.
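This success criterion can be expressed compactly; the sketch below applies it to the G04 optimum discussed above (the two reported values are illustrative test points, not results from any cited study):

```python
def is_successful(f_x, f_star, viol, tol=1e-4):
    # CEC 2006-style check: a run succeeds if the solution is feasible
    # (zero constraint violation) and |f(x) - f(x*)| <= tol.
    return viol <= 0.0 and abs(f_x - f_star) <= tol

# G04's theoretical optimum; one illustrative report lies within the
# 1e-4 error band, the other is rounded too far to qualify.
f_star_g04 = -30665.53867178332
ok = is_successful(-30665.5387, f_star_g04, viol=0.0)
bad = is_successful(-30666.0, f_star_g04, viol=0.0)
```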
Another important issue is the inconsistency in FE budgets. Specifically, the three algorithms employing dynamic penalty strategies were evaluated under a fixed FE budget of 30,000 in their original studies, which is lower than that used by MSIPKO for several benchmark functions (G01, G02, G03, G05, G07, G09, and G10). To ensure a fairer comparison, MSIPKO was additionally evaluated under the same FE budget (30,000) for these functions, and the results are presented in Table 13.
Compared with Table 12, Table 13 provides a more reliable and directly comparable assessment under unified computational conditions. Based on these results, MSIPKO maintains competitive performance under equal evaluation budgets.
Given the aforementioned scenarios, a simple comparison using average rankings or the Wilcoxon test is not suitable when evaluating against studies that adopt different constraint-handling strategies. Instead, a detailed comparison of the results obtained by MSIPKO and other algorithms on each benchmark function is necessary. Additionally, when MSIPKO and another algorithm reach the theoretical optimal value for a benchmark function, the precision retained in the literature is disregarded. In this case, the results of the two algorithms are considered equivalent.
The comparison rules are as follows:
(1)
If the results of both algorithms are identical, their performance is further compared based on the number of function evaluations, with the algorithms requiring fewer evaluations being deemed superior.
(2)
If one algorithm achieves better results while requiring no more function evaluations than the other, it is considered to have superior performance. In this study, MSIPKO demonstrates that its number of function evaluations for all benchmark functions (G01–G12) is consistently less than or equal to that of the comparison algorithms.
(3)
If one algorithm outperforms the other in the comparison, it is marked with a “+”; if the other method is superior, it is marked with a “−”; and if both methods yield equivalent results, it is marked with an “=”.
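A simplified Python sketch of these comparison rules (treating results within the 10⁻⁴ error threshold as equivalent and assuming minimization; rule (2)'s budget condition is folded into the quality comparison for brevity):

```python
def compare(f_a, fes_a, f_b, fes_b, tol=1e-4):
    # '+' if algorithm A wins, '-' if B wins, '=' if equivalent.
    # Results within tol are treated as identical (the equivalence
    # convention above); quality ties are broken by the evaluation
    # budget, fewer evaluations being better (rule 1).
    if abs(f_a - f_b) <= tol:
        if fes_a < fes_b:
            return '+'
        if fes_a > fes_b:
            return '-'
        return '='
    return '+' if f_a < f_b else '-'

# Identical optima reached with a smaller budget yield '+' for A.
verdict = compare(-30665.5387, 30000, -30665.5387, 100000)
```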
Table 14 presents the comparison results of MSIPKO and five other algorithms across 12 benchmark functions. It should be noted that the results presented in Table 14 are derived based on a combination of the results from Table 12 and Table 13. For benchmark functions with consistent function evaluation settings, results from Table 13 are used, while results from Table 12 are adopted for the remaining functions. Combined with the data and analysis rules discussed earlier, MSIPKO shows competitive performance with consistent advantages across most benchmark functions.
First, MSIPKO achieved 53 positive outcomes (“+” marks), 3 ties (“=” marks), and 4 negative outcomes (“−” marks) in the comparisons. Its overall performance across the 12 benchmark functions outperformed other algorithms, particularly in nonlinear complex constrained problems where it showcased exceptional optimization capabilities. For instance, in complex functions such as G02 and G10, MSIPKO not only achieved the theoretical optimal values but also significantly reduced the number of function evaluations (up to 200,000, compared to 350,000 for BSA-SAε), achieving an excellent balance between efficiency and accuracy.
In the comparison with HMICA, MSIPKO performed better on most benchmark functions, with tied results on G04, G08, and G12 (“=” marks), highlighting its superior efficiency and robustness. Against BSA-SAε, MSIPKO outperformed in all benchmark functions, demonstrating its strong adaptability to different types of constraints. While SMA-GM showed slightly better performance on G01 and G03, MSIPKO surpassed it on all other benchmark functions. Although MSIPKO showed a slight disadvantage in G03 when fewer function evaluations were used compared to algorithms employing dynamic penalty functions, results with 200,000 function evaluations showed that MSIPKO still achieved the theoretical optimal value, indicating its competitive search precision. Particularly in complex constrained problems such as G05, G07, and G09, MSIPKO obtained superior solutions with fewer function evaluations. Additionally, MSIPKO significantly outperformed AGWO and IChoA, especially in nonlinear problems, where its mean and optimal values were markedly better, further emphasizing its precision advantages.
In summary, MSIPKO demonstrated computational efficiency, robustness, and optimization capabilities in the benchmark tests, validating its exceptional performance as an effective constrained optimization algorithm.

4.2. Engineering Problems

The method proposed in this paper addresses six common engineering optimization problems, including the I-beam vertical deflection problem, the speed reducer design problem, the three-bar truss design problem, the welded beam design problem, the tension/compression spring design problem, and the pressure vessel design optimization problem. The performance of MSIPKO is validated by comparing its results with those of nine other algorithms across these problems.
This section is divided into two parts. The first part provides a brief introduction to the six engineering problems, while the second part presents the computational results of various algorithms and offers an analysis and discussion of their performance.

4.2.1. Introduction of Six Engineering Problems

(1)
I-beam vertical deflection problem
The I-beam vertical deflection problem is a classic engineering optimization problem. The objective is to minimize the vertical deflection of the I-beam by optimizing four geometric parameters, namely length ( x 1 ), height ( x 2 ), web thickness ( x 3 ), and flange thickness ( x 4 ), while satisfying constraints on cross-sectional area and material stress. The problem is constrained and nonlinear, requiring a balance between structural strength, stiffness, and material efficiency, and each design variable has a significant impact on the structural performance.
I-beams are widely used in construction, mechanical engineering, and transportation, such as in bridges, floors, equipment frames, and railway tracks. Optimizing the design of I-beams not only reduces material consumption and costs but also enhances load-bearing capacity and structural safety. However, the problem involves complex constraints that require comprehensive consideration of strength, stiffness, and stability. Additionally, the optimization process often exhibits nonlinear and multimodal characteristics, increasing the difficulty of finding solutions.
The mathematical formulation of this problem is given in Formulas (27)–(29).
$\min f(x) = \dfrac{5000}{\dfrac{x_3 (x_2 - 2x_4)^3}{12} + \dfrac{x_1 x_4^3}{6} + 2 x_1 x_4 \left(\dfrac{x_2 - x_4}{2}\right)^2}$
s.t.
$g_1(x) = 2 x_1 x_4 + x_3 (x_2 - 2 x_4) - 300 \le 0$
$g_2(x) = \dfrac{18 \times 10^4 \, x_2}{x_3 y^3 + 2 x_1 x_4 (4 x_4^2 + 3 x_2 y)} + \dfrac{15 \times 10^3 \, x_1}{y x_3^3 + 2 x_4 x_1^3} - 56 \le 0$
where $y = x_2 - 2 x_4$, $10 \le x_1 \le 50$, $10 \le x_2 \le 80$, $0 \le x_3, x_4 \le 9.5$.
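For reference, the I-beam formulation can be evaluated directly; the sketch below assumes the standard form of Formulas (27)–(29), and the trial point is an arbitrary in-bounds feasible choice (not an optimum):

```python
def ibeam(x):
    # Objective and constraints of Formulas (27)-(29), standard form
    # assumed; x = (x1, x2, x3, x4) = (length, height, web thickness,
    # flange thickness).
    x1, x2, x3, x4 = x
    y = x2 - 2.0 * x4
    f = 5000.0 / (x3 * y**3 / 12.0 + x1 * x4**3 / 6.0
                  + 2.0 * x1 * x4 * ((x2 - x4) / 2.0)**2)
    g1 = 2.0 * x1 * x4 + x3 * y - 300.0              # cross-sectional area
    g2 = (18e4 * x2 / (x3 * y**3 + 2.0 * x1 * x4 * (4.0 * x4**2 + 3.0 * x2 * y))
          + 15e3 * x1 / (y * x3**3 + 2.0 * x4 * x1**3) - 56.0)  # stress
    return f, (g1, g2)

# An arbitrary in-bounds trial point; both constraints satisfied (g <= 0).
f, (g1, g2) = ibeam((35.0, 60.0, 1.5, 2.0))
```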
(2)
Speed reducer design problem
The speed reducer is a critical component in mechanical transmission systems and is widely used in industrial machinery, automotive power systems, and energy equipment such as wind turbines. It serves to regulate speed and transmit torque. Optimizing the design of speed reducers not only helps reduce their manufacturing cost and weight but also significantly enhances their efficiency and reliability.
This problem represents a classic constrained mixed-integer optimization challenge, where the primary difficulty lies in achieving the optimal balance between performance and weight while addressing the computational complexities introduced by nonlinear constraints and mixed variables. The objective is to minimize the weight of the speed reducer while satisfying mechanical performance constraints. These constraints primarily involve gear bending stress, surface contact stress, lateral deflection of the shaft, and internal stress within the shaft. The problem involves seven design variables: gear width ( x 1 ), module ( x 2 ), number of pinion teeth ( x 3 ), length of the first shaft ( x 4 ), length of the second shaft ( x 5 ), diameter of the first shaft ( x 6 ), and diameter of the second shaft ( x 7 ). These variables include both continuous variables (e.g., shaft diameters) and discrete variables (e.g., number of teeth), further increasing the complexity of the problem. For the speed reducer design problem, the variable x 3 , representing the number of pinion teeth, is treated as an integer variable. In this study, an integer constraint is imposed on x 3 during function evaluation and result reporting to ensure consistency with the original problem formulation.
The mathematical formulation of this problem is given in Formulas (30)–(41).
$\min f(x) = 0.7854 x_1 x_2^2 (3.3333 x_3^2 + 14.9334 x_3 - 43.0934) - 1.508 x_1 (x_6^2 + x_7^2) + 7.4777 (x_6^3 + x_7^3) + 0.7854 (x_4 x_6^2 + x_5 x_7^2)$
s.t.
$g_1(x) = \dfrac{27}{x_1 x_2^2 x_3} - 1 \le 0$
$g_2(x) = \dfrac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0$
$g_3(x) = \dfrac{1.93 x_4^3}{x_2 x_3 x_6^4} - 1 \le 0$
$g_4(x) = \dfrac{1.93 x_5^3}{x_2 x_3 x_7^4} - 1 \le 0$
$g_5(x) = \dfrac{1}{110 x_6^3} \sqrt{\left(\dfrac{745 x_4}{x_2 x_3}\right)^2 + 16.9 \times 10^6} - 1 \le 0$
$g_6(x) = \dfrac{1}{85 x_7^3} \sqrt{\left(\dfrac{745 x_5}{x_2 x_3}\right)^2 + 157.5 \times 10^6} - 1 \le 0$
$g_7(x) = \dfrac{x_2 x_3}{40} - 1 \le 0$
$g_8(x) = \dfrac{5 x_2}{x_1} - 1 \le 0$
$g_9(x) = \dfrac{x_1}{12 x_2} - 1 \le 0$
$g_{10}(x) = \dfrac{1.5 x_6 + 1.9}{x_4} - 1 \le 0$
$g_{11}(x) = \dfrac{1.1 x_7 + 1.9}{x_5} - 1 \le 0$
where $2.6 \le x_1 \le 3.6$, $0.7 \le x_2 \le 0.8$, $17 \le x_3 \le 28$, $7.3 \le x_4, x_5 \le 8.3$, $2.9 \le x_6 \le 3.9$, $5 \le x_7 \le 5.5$.
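The speed reducer formulation translates directly into code; the sketch below assumes the standard Formulas (30)–(41), and the evaluation point is a commonly reported near-optimal design (f close to the widely cited 2994.47; the active constraints sit near zero there):

```python
import math

def speed_reducer(x):
    # Weight objective and the 11 constraints of Formulas (30)-(41),
    # standard form assumed; x3 (pinion teeth) is rounded to enforce
    # the integer restriction noted in the text.
    x1, x2, x3, x4, x5, x6, x7 = x
    x3 = round(x3)
    f = (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
         - 1.508 * x1 * (x6**2 + x7**2)
         + 7.4777 * (x6**3 + x7**3)
         + 0.7854 * (x4 * x6**2 + x5 * x7**2))
    g = [27 / (x1 * x2**2 * x3) - 1,
         397.5 / (x1 * x2**2 * x3**2) - 1,
         1.93 * x4**3 / (x2 * x3 * x6**4) - 1,
         1.93 * x5**3 / (x2 * x3 * x7**4) - 1,
         math.sqrt((745 * x4 / (x2 * x3))**2 + 16.9e6) / (110 * x6**3) - 1,
         math.sqrt((745 * x5 / (x2 * x3))**2 + 157.5e6) / (85 * x7**3) - 1,
         x2 * x3 / 40 - 1,
         5 * x2 / x1 - 1,
         x1 / (12 * x2) - 1,
         (1.5 * x6 + 1.9) / x4 - 1,
         (1.1 * x7 + 1.9) / x5 - 1]
    return f, g

# Near-optimal design (rounded values): the active constraints
# g5, g6, g11 evaluate very close to zero here.
f, g = speed_reducer((3.5, 0.7, 17, 7.3, 7.7154, 3.3505, 5.2866))
```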
(3)
Three-bar truss design problem
The three-bar truss design problem is a structural optimization problem with wide-ranging applications. In construction engineering, three-bar trusses are commonly used in bridges, roofs, and high-rise buildings, where optimized designs improve material utilization and reduce costs. In mechanical manufacturing, lightweight truss structures are applied to machine supports and frames, reducing weight while maintaining strength. In aerospace engineering, optimizing truss designs is crucial for reducing the structural weight of aircraft and spacecraft while enhancing their performance.
The three-bar truss design problem is a classic structural optimization problem. The objective is to minimize the total volume of the three-bar truss by optimizing its cross-sectional areas while satisfying the bearing capacity constraints of each truss member. In this problem, the objective function represents the total material volume of the truss, calculated as the sum of the products of the cross-sectional areas and the lengths of the truss members. The constraints ensure that the truss satisfies the required bearing capacity and structural stability while achieving volume minimization.
The main challenge of the three-bar truss design problem lies in handling the complex nonlinear constraints and interdependencies among variables, while achieving an optimal balance between strength, stability, and volume minimization. Its mathematical formulation is provided in Formulas (42)–(45).
min f(x) = (2√2 x1 + x2) × l
s.t.
g1(x) = ((√2 x1 + x2)/(√2 x1² + 2 x1 x2)) P − σ ≤ 0
g2(x) = (x2/(√2 x1² + 2 x1 x2)) P − σ ≤ 0
g3(x) = (1/(x1 + √2 x2)) P − σ ≤ 0
where 0 ≤ x1, x2 ≤ 1, l = 100 cm, P = σ = 2 kN/cm².
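The volume objective and the three stress constraints can be sketched as follows (an illustrative evaluation, not part of the original study); gi ≤ 0 denotes feasibility.

```python
import math

def three_bar_truss(x, l=100.0, P=2.0, sigma=2.0):
    """Volume objective and stress constraints of the three-bar truss.

    x = [x1, x2] are the cross-sectional areas; g_i <= 0 means feasible.
    """
    x1, x2 = x
    s2 = math.sqrt(2.0)
    f = (2 * s2 * x1 + x2) * l
    g = [
        (s2 * x1 + x2) / (s2 * x1**2 + 2 * x1 * x2) * P - sigma,
        x2 / (s2 * x1**2 + 2 * x1 * x2) * P - sigma,
        1.0 / (x1 + s2 * x2) * P - sigma,
    ]
    return f, g
```

At the widely reported best-known design x ≈ (0.7887, 0.4082), the volume is approximately 263.896 and the first stress constraint is active (g1 ≈ 0).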
(4)
Welded beam design problem
The welded beam design problem is a classic engineering optimization problem widely applied in scenarios such as crane beams, bridge structures, steel frame buildings, and heavy equipment supports. Through optimized design, it is possible to reduce costs, decrease weight, and improve structural strength.
The objective of this problem is to minimize the manufacturing cost of the welded beam by optimizing geometric parameters such as weld thickness ( x 1 ), beam width ( x 2 ), beam thickness ( x 3 ) and weld length ( x 4 ). At the same time, it must satisfy multiple mechanical performance constraints, including limitations on shear strength, bending load, and buckling stress. Additionally, the weld width and deflection must remain within allowable limits to ensure the stiffness and stability of the beam.
min f(x) = 1.10471 x1² x2 + 0.04811 x3 x4 (14 + x2)
s.t.
g1(x) = τ(x) − τmax ≤ 0
g2(x) = σ(x) − σmax ≤ 0
g3(x) = x1 − x4 ≤ 0
g4(x) = 0.10471 x1² + 0.04811 x3 x4 (14 + x2) − 5.0 ≤ 0
g5(x) = 0.125 − x1 ≤ 0
g6(x) = δ(x) − δmax ≤ 0
g7(x) = P − Pc(x) ≤ 0
where the fixed parameters are P = 6000, L = 14, E = 30 × 10⁶, G = 12 × 10⁶, τmax = 13,600, σmax = 30,000, and δmax = 0.25. The intermediate quantities are defined as follows.
τ(x) = √(τ′² + 2 τ′ τ″ x2/(2R) + τ″²), τ′ = P/(√2 x1 x2), τ″ = M R/J, M = P (L + x2/2), R = √(x2²/4 + ((x1 + x3)/2)²), J = 2 {√2 x1 x2 [x2²/12 + ((x1 + x3)/2)²]}, δ(x) = 4 P L³/(E x3³ x4), σ(x) = 6 P L/(x3² x4), Pc(x) = (4.013 E x3 x4³/(6 L²)) (1 − (x3/(4L)) √(E/G)). The range of values for the decision variables is 0.1 ≤ x1, x4 ≤ 2, 0.1 ≤ x2, x3 ≤ 10.
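Because the welded beam problem chains several intermediate quantities (shear, bending, deflection, buckling), a compact evaluation sketch may help; this is an illustrative aid under the standard formulation, not the authors' code, and gi ≤ 0 denotes feasibility.

```python
import math

def welded_beam(x, P=6000.0, L=14.0, E=30e6, G=12e6,
                tau_max=13600.0, sigma_max=30000.0, delta_max=0.25):
    """Cost objective and the seven constraints of the welded beam problem."""
    x1, x2, x3, x4 = x
    f = 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14 + x2)
    tau_p = P / (math.sqrt(2) * x1 * x2)                 # primary shear tau'
    M = P * (L + x2 / 2)                                 # bending moment
    R = math.sqrt(x2**2 / 4 + ((x1 + x3) / 2)**2)
    J = 2 * (math.sqrt(2) * x1 * x2 * (x2**2 / 12 + ((x1 + x3) / 2)**2))
    tau_pp = M * R / J                                   # secondary shear tau''
    tau = math.sqrt(tau_p**2 + 2 * tau_p * tau_pp * x2 / (2 * R) + tau_pp**2)
    sigma = 6 * P * L / (x3**2 * x4)                     # bending stress
    delta = 4 * P * L**3 / (E * x3**3 * x4)              # end deflection
    Pc = (4.013 * E * x3 * x4**3 / (6 * L**2)            # buckling load
          * (1 - x3 / (4 * L) * math.sqrt(E / G)))
    g = [
        tau - tau_max,
        sigma - sigma_max,
        x1 - x4,
        0.10471 * x1**2 + 0.04811 * x3 * x4 * (14 + x2) - 5.0,
        0.125 - x1,
        delta - delta_max,
        P - Pc,
    ]
    return f, g
```

At the widely reported best-known design x ≈ (0.2057, 3.4705, 9.0366, 0.2057), the cost is about 1.7249, with the shear, bending, and buckling constraints essentially active.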
(5)
Tension/compression spring design problem
The tension/compression spring design problem is a classic engineering optimization problem with wide-ranging applications across various fields. For example, in mechanical equipment, springs are used for vibration control, energy storage, and impact absorption. In the automotive industry, they are an integral part of suspension systems, providing shock absorption and structural support. In the aerospace sector, springs are employed for high-precision damping and load-bearing. In consumer electronics, they are commonly used in buttons, connectors, and mechanical latches.
The objective of this problem is to minimize the weight of the spring by optimizing its geometric parameters while satisfying multiple performance constraints, including spring deflection, shear stress, natural frequency, and outer diameter limitations. The problem involves three design variables: wire diameter ( x 1 ), mean coil diameter ( x 2 ), and the number of active coils ( x 3 ), all of which directly affect the performance and service life of the spring. The mathematical expression is shown as Formulas (54)–(58).
min f(x) = (x3 + 2) x2 x1²
s.t.
g1(x) = 1 − x2³ x3/(71,785 x1⁴) ≤ 0
g2(x) = (4 x2² − x1 x2)/(12,566 (x2 x1³ − x1⁴)) + 1/(5108 x1²) − 1 ≤ 0
g3(x) = 1 − 140.45 x1/(x2² x3) ≤ 0
g4(x) = (x1 + x2)/1.5 − 1 ≤ 0
where 0.05 ≤ x1 ≤ 2, 0.25 ≤ x2 ≤ 1.3, 2 ≤ x3 ≤ 15.
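The spring formulation above can be sketched as follows (an illustrative aid under the standard formulation, not the authors' implementation); gi ≤ 0 denotes feasibility.

```python
def spring(x):
    """Weight objective and four constraints of the tension/compression
    spring problem; g_i <= 0 means feasible."""
    x1, x2, x3 = x   # wire diameter, mean coil diameter, number of active coils
    f = (x3 + 2) * x2 * x1**2
    g = [
        1 - x2**3 * x3 / (71785 * x1**4),
        (4 * x2**2 - x1 * x2) / (12566 * (x2 * x1**3 - x1**4))
            + 1 / (5108 * x1**2) - 1,
        1 - 140.45 * x1 / (x2**2 * x3),
        (x1 + x2) / 1.5 - 1,
    ]
    return f, g
```

At the widely reported best-known design x ≈ (0.0517, 0.3567, 11.289), the weight is about 0.012665, with the deflection and shear-stress constraints essentially active.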
(6)
Pressure vessel design optimization problem
The pressure vessel design problem is widely applied in fields such as chemical engineering, energy, food and pharmaceutical industries, and aerospace. Examples include reactors, gas storage tanks, and steam generators, which are responsible for storing and transporting liquids and gases under high-pressure conditions. The challenge of this problem lies in handling mixed variables and multiple constraints while minimizing the cost and maintaining the performance of the vessel.
The objective is to minimize the manufacturing cost of a cylindrical pressure vessel, including welding, material, and forming costs. The design variables include shell thickness ( x 1 ), head thickness ( x 2 ), inner radius ( x 3 ), and vessel length ( x 4 ). For the pressure vessel design problem, the shell thickness x 1 and head thickness x 2 are integer-multiple variables with a discretization step of 0.0625, which reflects standard manufacturing requirements in practical engineering design. In this study, these two variables are mapped to the nearest feasible discrete values during function evaluation and result reporting, while x 3 and x 4 are treated as continuous variables. This treatment ensures that the evaluated solutions strictly satisfy the original mixed-integer design requirements.
The problem is subject to various constraints, including geometric, material, and mechanical performance constraints, to ensure that the pressure vessel achieves sufficient strength and stability at the lowest possible cost.
min f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3² + 3.1661 x1² x4 + 19.84 x1² x3
s.t.
g1(x) = −x1 + 0.0193 x3 ≤ 0
g2(x) = −x2 + 0.00954 x3 ≤ 0
g3(x) = −π x3² x4 − (4/3) π x3³ + 1,296,000 ≤ 0
g4(x) = x4 − 240 ≤ 0
where 0 ≤ x1, x2 ≤ 100, 0 ≤ x3, x4 ≤ 200.
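The mixed discrete–continuous treatment described above (snapping x1 and x2 to the 0.0625 grid during evaluation) can be sketched as follows. This is an illustrative aid, not the authors' code; it uses the standard head-thickness coefficient 0.00954 in g2, and nearest-multiple rounding is one simple mapping choice.

```python
import math

def snap_to_step(v, step=0.0625):
    """Map a continuous thickness to the nearest feasible multiple of step."""
    return round(v / step) * step

def pressure_vessel(x):
    """Cost objective and four constraints of the pressure vessel problem.

    x1 and x2 are snapped to the 0.0625 grid before evaluation, so every
    candidate satisfies the discrete design requirement; g_i <= 0 is feasible.
    """
    x1 = snap_to_step(x[0])
    x2 = snap_to_step(x[1])
    x3, x4 = x[2], x[3]
    f = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
         + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -math.pi * x3**2 * x4 - 4.0 / 3.0 * math.pi * x3**3 + 1296000,
        x4 - 240,
    ]
    return f, g
```

At the widely reported best-known grid-feasible design x ≈ (0.8125, 0.4375, 42.0984, 176.6366), the cost is about 6059.71, consistent in magnitude with the results reported for this problem in Section 4.2.2.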

4.2.2. Results and Discussions of Solving Engineering Problems

This section conducts simulation experiments on six representative engineering optimization problems. The numerical simulation conditions and constraint-handling strategies are shown in Table 15. The computational results are summarized in Table 16 and Table 17, where Table 16 presents the results of the first five algorithms and Table 17 lists those of the remaining five algorithms. The distribution of solutions across different algorithms is visualized in Figure 4.
It should be noted that, owing to the large dispersion of certain algorithms on some problems, the boxplots in Figure 4 may show compressed regions for the other methods. To improve clarity and allow a more detailed comparison among the leading algorithms, Figure 5 presents boxplots of the top five algorithms for each problem, selected and ordered by mean fitness value (consistent with the rankings reported in Table 18), with the leftmost box representing the best-performing algorithm.
The best objective values and their corresponding solution vectors are listed in Table 19. In order to analyze the quality of various algorithms when solving engineering optimization problems, the ranking results based on mean values are shown in Table 18.
For all experiments, a unified setting is adopted to ensure a fair comparison: all algorithms are executed with the same population size and maximum number of iterations, and each problem is independently run 30 times.
For problems involving discrete variables, namely the speed reducer design problem (P2) and the pressure vessel design problem (P6), the corresponding integer or discrete constraints are explicitly handled within the objective function evaluation. This ensures that all candidate solutions satisfy the problem-specific feasibility requirements throughout the optimization process. For constrained optimization problems, the original constraint-handling mechanisms of each algorithm are retained, ensuring that the comparison reflects their intrinsic performance under standard configurations.
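The static penalty approach listed in the simulation conditions (Table 15, and Table 4 for the benchmark experiments) folds constraint violations into the objective. A minimal generic sketch is given below; the penalty coefficient rho and equality tolerance eps are illustrative values, not the settings used in this study.

```python
def penalized_fitness(f, g, h=(), rho=1e6, eps=1e-4):
    """Static penalty: add rho times the total constraint violation to f.

    g: inequality constraint values (feasible when g_i <= 0).
    h: equality constraint values (feasible when |h_j| <= eps).
    rho, eps: illustrative settings, not the paper's.
    """
    viol = sum(max(0.0, gi) for gi in g)                 # inequality violations
    viol += sum(max(0.0, abs(hj) - eps) for hj in h)     # equality violations
    return f + rho * viol
```

A feasible solution keeps its raw objective value, while an infeasible one is penalized in proportion to how far it violates the constraints, which steers the population back toward the feasible region.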
Based on the ranking results presented in Table 18, a clear overall performance pattern can be observed. MSIPKO achieves the best overall ranking with the lowest total rank score, indicating its strong competitiveness across different types of engineering optimization problems. Specifically, MSIPKO ranks first in four out of six problems, while maintaining competitive performance in the remaining two cases. Compared with other algorithms, such as PKO and EAO, which perform well on certain individual problems, MSIPKO demonstrates a more balanced and consistent performance across all test cases. In contrast, algorithms such as FLA and FOX obtain relatively poor rankings, reflecting their limited adaptability to complex constrained optimization scenarios.
A more detailed analysis of each problem is given as follows.
For the I-beam deflection problem (P1), MSIPKO ranks third in mean performance, behind TTAO and EAO; nevertheless, its solution remains very close to the theoretical optimum. Meanwhile, the distribution shown in Figure 4 indicates that MSIPKO maintains relatively stable convergence with limited fluctuations. Compared with algorithms that exhibit larger deviations, MSIPKO provides more reliable performance.
For the speed reducer design problem (P2), MSIPKO achieves the best overall performance. The obtained mean value is highly competitive, and the variance is relatively small, indicating both high accuracy and stability. Compared with algorithms such as FLA and FOX, which show larger deviations and fluctuations, MSIPKO demonstrates stronger adaptability to nonlinear and mixed-variable constraints.
For the three-bar truss structure problem (P3), most algorithms obtain results very close to the theoretical optimum. MSIPKO achieves the best mean performance, although the performance gap among the top algorithms is relatively small. From Figure 4 and Figure 5, it can be observed that MSIPKO exhibits relatively stable distributions with fewer outliers, indicating consistent convergence behavior.
For the welded beam design problem (P4), MSIPKO ranks first and achieves the best mean performance among all algorithms. Compared with recent algorithms such as COA and TTAO, MSIPKO shows better convergence accuracy and lower variance, demonstrating its effectiveness in handling nonlinear engineering constraints.
For the spring design problem (P5), MSIPKO ranks third, slightly behind the top-performing algorithms (including EAO and SFOA). However, the difference in objective value is extremely small, indicating that MSIPKO still maintains a competitive performance. In addition, its solution distribution remains relatively concentrated, suggesting stable convergence across multiple runs.
Finally, in the pressure vessel design problem (P6), MSIPKO achieves the best performance with a mean value of 6059.976678 and a very low standard deviation of 0.419671. This indicates that the proposed algorithm maintains both high solution quality and strong stability under the mixed discrete–continuous design constraints. In contrast, algorithms such as FOX, FLA, and BKA exhibit much larger fluctuations, suggesting weaker robustness on this problem. For the pressure vessel problem, the values of x 1 and x 2 reported in Table 19 are the discrete feasible values obtained after applying the 0.0625-step mapping.

4.3. Discussion

Based on the comprehensive experimental results presented in Section 4.1 and Section 4.2, the proposed MSIPKO demonstrates competitive and generally superior performance across both benchmark functions and engineering optimization problems.
In Section 4.1, MSIPKO is evaluated on 12 constrained functions from the CEC 2006 test suite. The results show that the proposed algorithm achieves the best or near-best performance on most functions in terms of mean value, best value, and robustness. The convergence analysis further indicates that MSIPKO exhibits efficient search behavior across different evaluation budgets, maintaining a good balance between global exploration and local exploitation. In addition, the ablation study confirms that each component (C1, C2, and C3) contributes positively to the overall performance, and their integration leads to further improvements.
Despite these promising results, several limitations should be acknowledged. First, the current evaluation is mainly conducted on medium-scale constrained optimization problems (CEC 2006), and the performance of MSIPKO on high-dimensional or large-scale problems has not yet been fully investigated. Second, the introduction of additional modules leads to a slight increase in per-iteration computational overhead, which may affect efficiency in scenarios with strict real-time requirements. Third, the constraint-handling strategy adopted in this study is relatively standard, and its effectiveness in more complex or dynamic constraint environments still requires further validation.
Beyond these limitations, it is also important to consider the applicability of MSIPKO to more complex and emerging constrained optimization scenarios. In recent studies, constrained optimization problems have been formulated in increasingly complex forms, such as energy system scheduling under operational constraints and security-oriented optimization problems with implicit or hidden constraints. These problems are often characterized by highly irregular feasible regions, strong coupling among variables, and complex constraint structures.
Although MSIPKO demonstrates strong performance in the tested problems, its applicability to such emerging scenarios has not yet been fully investigated. Nevertheless, due to its enhanced exploration capability and adaptability, MSIPKO has the potential to be extended to these complex optimization tasks, which represents a promising direction for future research.

5. Conclusions and Future Work

This paper proposes a multi-strategy improved pied kingfisher optimizer (MSIPKO) to enhance the performance of the original PKO for constrained optimization problems. By integrating a reverse differential crossover mechanism, an enhanced diving-fishing operator, and a refined commensalism phase, the proposed algorithm improves global exploration, local exploitation, and convergence stability without changing the asymptotic computational complexity.
Comprehensive experiments on 12 constrained benchmark functions from the CEC 2006 test suite and six classical engineering design problems demonstrate that MSIPKO achieves competitive and generally superior performance compared with several state-of-the-art algorithms. In particular, the results indicate improvements in solution quality, robustness, and convergence behavior across most test cases.
Additional analyses, including ablation studies and sensitivity evaluations under different function evaluation budgets, further verify the effectiveness of the proposed components and the stability of the algorithm. Moreover, comparisons with different constraint-handling strategies confirm the robustness of MSIPKO across diverse constraint environments.
Despite these promising results, several limitations remain, as discussed in Section 4.3, and addressing them will be an important direction for future work.
In future research, we will focus on extending MSIPKO to high-dimensional and large-scale constrained optimization problems, improving its computational efficiency through parallelization, and enhancing its adaptability by integrating more advanced constraint-handling mechanisms. Furthermore, extending the proposed method to multi-objective optimization problems and real-world applications, such as energy scheduling and UAV path planning, will also be investigated.

Author Contributions

Conceptualization, J.L.; methodology, J.L.; software, J.L.; validation, J.L.; formal analysis, J.L.; investigation, J.L.; resources, J.L.; data curation, J.L.; writing—original draft preparation, J.L.; writing—review and editing, J.L., H.B. and T.W.; visualization, J.L.; project administration and discussions, H.B., T.W., J.L. and N.T.; funding acquisition, H.B., T.W., J.L. and N.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ph.D. Foundation of Hulunbuir University—Multi-strategy improved swarm optimization algorithm and engineering applications, grant number 2024BSJJ16; the Ph.D. Foundation of Hulunbuir University—Research on the prediction model of ionospheric characteristic parameters based on artificial intelligence, grant number 2022BS01; Key project of Hulun Buir city’s “Technological Revitalization of the City” action—Research and development of key technologies and equipment for mechanization of green, efficient, and intelligent production of soybeans; Research and development of vegetation adaptability for the historical geological environment and grassland vegetation ecological restoration project in Tuwei Mountain, Manzhouli City; Foundation of Hulunbuir University—Research on intelligent segmentation technology for medical imaging, grant number 2024BSJJ05.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data supporting the findings of this study are available within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Luo, J.; Zhou, J.; Jiang, X. A modification of the imperialist competitive algorithm with hybrid methods for constrained optimization problems. IEEE Access 2021, 9, 161745–161760. [Google Scholar] [CrossRef]
  2. Daneshyari, M.; Yen, G.G. Constrained multiple-swarm particle swarm optimization within a cultural framework. IEEE Trans. Syst. Man Cybern.-Syst. 2012, 42, 475–490. [Google Scholar] [CrossRef]
  3. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Meth. Appl. Mech. Eng. 2000, 186, 311–338. [Google Scholar] [CrossRef]
  4. Dong, N.; Wang, Y. Novel bi-objective model-based evolutionary algorithm for constrained optimization problems. Control Theor. Appl. 2014, 31, 577–583. [Google Scholar] [CrossRef]
  5. Bi, X.; Zhang, L. Constrained multi-objective optimization algorithm with adaptive ε truncation strategy. J. Electr. Inf. Technol. 2016, 36, 2047–2053. [Google Scholar] [CrossRef]
  6. Zuo, W.; Gao, Y. Solving numerical and engineering optimization problems using a dynamic dual-population differential evolution algorithm. Int. J. Mach. Learn. Cybern. 2024, 9, 1701–1760. [Google Scholar] [CrossRef]
  7. Zhong, C.; Li, G.; Meng, Z. Starfish optimization algorithm (SFOA): A bio-inspired metaheuristic algorithm for global optimization compared with 100 optimizers. Neural Comput. Appl. 2025, 37, 3641–3683. [Google Scholar] [CrossRef]
  8. Xiao, Y.; Cui, H.; Khurma, R.A.; Castillo, P.A. Artificial lemming algorithm: A novel bionic meta-heuristic technique for solving real-world engineering optimization problems. Artif. Intell. Rev. 2025, 58, 84. [Google Scholar] [CrossRef]
  9. Alkharsan, A.; Ata, O. Hawkfish optimization algorithm: A gender-bending approach for solving complex optimization problems. Electronics 2025, 14, 611. [Google Scholar] [CrossRef]
  10. Zhao, W.; Xie, Y.; Wang, L. An effective Bezier curve-based optimization (BCO) for large-scale numerical problems and 3D unmanned aerial vehicle path planning with efficient multiple threats evasion. Adv. Eng. Inform. 2026, 73, 104524. [Google Scholar] [CrossRef]
  11. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  12. Yuan, C.; Zhao, D.; Heidari, A.A. Polar lights optimizer: Algorithm and applications in image segmentation and feature selection. Neurocomputing 2024, 607, 128427. [Google Scholar] [CrossRef]
  13. Fan, S.; Wang, R.; Kang, S. A sequoia-ecology-based metaheuristic optimisation algorithm for multi-constraint engineering design and UAV path planning. Results Eng. 2025, 26, 105130. [Google Scholar] [CrossRef]
  14. Yu, H.; Jia, H.; Zhou, J. Enhanced aquila optimizer algorithm for global optimization and constrained engineering problems. Math. Biosci. Eng. 2022, 19, 14173–14211. [Google Scholar] [CrossRef]
  15. Sun, B.; Peng, P.; Tan, G. A fuzzy logic constrained particle swarm optimization algorithm for industrial design problems. Appl. Soft Comput. 2024, 167, 112456. [Google Scholar] [CrossRef]
  16. Li, G.; Wang, Z.; Gao, W. Adaptive multi/many-objective transformation for constrained optimization. IEEE Trans. Syst. Man Cybern.-Syst. 2025, 55, 721–734. [Google Scholar] [CrossRef]
  17. Furio, C.; Lamberti, L.; Pruncu, C.I. Mechanical and civil engineering optimization with a very simple hybrid grey wolf–Jaya metaheuristic optimizer. Mathematics 2024, 12, 3464. [Google Scholar] [CrossRef]
  18. Meng, K.; Zhang, J.; Xu, Z. Ship power system network reconfiguration based on swarm exchange particle swarm optimization algorithm. Appl. Sci. 2024, 14, 9960. [Google Scholar] [CrossRef]
  19. Zhao, M.; He, Y.; Tian, Y. Capacity optimization of wind–solar–storage multi-power microgrid based on two-layer model and an improved snake optimization algorithm. Electronics 2024, 13, 4315. [Google Scholar] [CrossRef]
  20. Boroumandfar, G.; Khajehzadeh, A.; Eslami, M. A single and multiobjective robust optimization of a microgrid in distribution network considering uncertainty risk. Sci. Rep. 2024, 14, 28195. [Google Scholar] [CrossRef]
  21. Farhat, M.; Kamel, S.; Abdelaziz, A.Y. Modified Tasmanian devil optimization for solving single and multiobjective optimal power flow in conventional and advanced power systems. Clust. Comput. 2025, 28, 105. [Google Scholar] [CrossRef]
  22. Dora, B.K.; Bhat, S.; Halder, S. A solution to multi-objective stochastic optimal power flow problem using mutualism and elite strategy based pelican optimization algorithm. Appl. Soft Comput. 2024, 158, 111548. [Google Scholar] [CrossRef]
  23. Boualem, S.A.E.M.; Meftah, B.; Debbat, F. An adaptive coordinate system for constrained differential evolution. Clust. Comput. 2025, 28, 37. [Google Scholar] [CrossRef]
  24. Yang, H.; Ren, Y.; Xu, G. Optimization of rotary drilling rig mast structure based on multi-dimensional improved salp swarm algorithm. Appl. Sci. 2024, 14, 10040. [Google Scholar] [CrossRef]
  25. Abdollahzadeh, B.; Javadi, H. The green marine waste collector routing optimization with puma selection-based neighborhood search algorithm. Clust. Comput. 2025, 28, 80. [Google Scholar] [CrossRef]
  26. Wang, C.; Hu, A.; Gao, Q. UAV swarm path planning approach based on integration of multi-population strategy and adaptive evolutionary optimizer. Meas. Sci. Technol. 2024, 35, 126204. [Google Scholar] [CrossRef]
  27. Wang, X.; Feng, Y.; Tang, J. A UAV path planning method based on the framework of multi-objective jellyfish search algorithm. Sci. Rep. 2024, 14, 28058. [Google Scholar] [CrossRef] [PubMed]
  28. You, G.; Hu, Y.; Lian, C.; Yang, Z. Mixed-strategy Harris hawk optimization algorithm for UAV path planning and engineering applications. Appl. Sci. 2024, 14, 10581. [Google Scholar] [CrossRef]
  29. Chen, Y.; Li, J.; Zhou, L. An improved dung beetle optimizer for the twin stacker cranes’ scheduling problem. Biomimetics 2024, 9, 683. [Google Scholar] [CrossRef]
  30. Huang, Z.; Zhang, Z.; Hua, C. Leveraging enhanced egret swarm optimization algorithm and artificial intelligence-driven prompt strategies for portfolio selection. Sci. Rep. 2024, 14, 26881. [Google Scholar] [CrossRef] [PubMed]
  31. Bouaouda, A.; Hashim, F.A.; Sayouti, Y. Pied kingfisher optimizer: A new bio-inspired algorithm for solving numerical optimization and industrial engineering problems. Neural Comput. Appl. 2024, 36, 15455–15513. [Google Scholar] [CrossRef]
  32. Ghasemi, M.; Golalipour, K.; Zare, M. Flood algorithm (FLA): An efficient inspired meta-heuristic for engineering optimization. J. Supercomput. 2024, 80, 22913–23017. [Google Scholar] [CrossRef]
  33. Wang, J.; Wang, W.; Hu, X. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 57, 98. [Google Scholar] [CrossRef]
  34. Zhao, S.; Zhang, T. Triangulation topology aggregation optimizer: A novel mathematics-based meta-heuristic algorithm for continuous optimization and engineering applications. Expert Syst. Appl. 2024, 238, 121744. [Google Scholar] [CrossRef]
  35. Mohammed, H.; Rashid, T. FOX: A FOX-inspired optimization algorithm. Appl. Intell. 2023, 53, 1030–1050. [Google Scholar] [CrossRef]
  36. Jia, H.; Rao, H.; Mirjalili, S. Crayfish optimization algorithm. Artif. Intell. Rev. 2023, 56, 1919–1979. [Google Scholar] [CrossRef]
  37. Sowmya, R.; Premkumar, M. Newton–Raphson-based optimizer: A new population-based metaheuristic algorithm for continuous optimization problems. Eng. Appl. Artif. Intell. 2024, 128, 107532. [Google Scholar] [CrossRef]
  38. Rodan, A.; Al-Tamimi, A.K.; Al-Alnemer, L.; Mirjalili, S.; Tiňo, P. Enzyme action optimizer: A novel bio-inspired optimization algorithm. J. Supercomput. 2025, 81, 686. [Google Scholar] [CrossRef]
  39. Zhang, C.; Lin, Q.; Gao, L. Backtracking search algorithm with three constraint handling methods for constrained optimization problems. Expert Syst. Appl. 2015, 42, 7831–7845. [Google Scholar] [CrossRef]
  40. Thakur, G.; Pal, A.; Mittal, N. Slime mould algorithm based on a Gaussian mutation for solving constrained optimization problems. Mathematics 2024, 12, 1470. [Google Scholar] [CrossRef]
  41. Ma, C.; Huang, H.; Fan, Q.; Wei, J.; Du, Y.; Gao, W. Grey wolf optimizer based on Aquila exploration method. Expert Syst. Appl. 2022, 205, 117629. [Google Scholar] [CrossRef]
  42. Preeti; Kaur, R.; Singh, D. Dimension learning-based chimp optimizer for energy efficient wireless sensor networks. Sci. Rep. 2022, 12, 14968. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Boxplot of MSIPKO and 9 competitor algorithms for 12 benchmark functions of CEC 2006.
Figure 2. Boxplot of MSIPKO’s component-wise ablation study for 12 benchmark functions of CEC 2006.
Figure 3. Sensitivity analysis of 10 algorithms under varying FEs.
Figure 4. Boxplot of MSIPKO and competitor algorithms for 6 engineering problems.
Figure 5. Boxplot of the top five algorithms selected based on mean fitness values for each engineering design problem.
Table 1. Summary of representative methods for constrained optimization problems.
Category | Representative Methods | Key Advantages | Limitations
Constraint-handling methods | Penalty method [2], Deb’s rules [3], multi-objective transformation [4], ε-constraint [5], dual-population strategies [6] | Effective feasibility handling; simple or flexible implementation | Sensitive to parameters; may introduce additional computational complexity
Swarm intelligence algorithms | HFOA [9], BCO [10], DBO [11], PLO [12], SMOA [13] | Strong global exploration capability; easy to implement | Difficulty in maintaining population diversity; imbalance between exploration and exploitation
Hybrid/Improved algorithms | EAOA [14], FILPSO-SCAε [15], AMaOTCO [16], SHGWJA [17] | Improved convergence accuracy and solution quality | Increased algorithmic complexity; reduced generality due to multiple strategies
Engineering applications | ACS-CDE [23], ISSA [24], MACSGWO [26], IDBO [29], NBESOA [30] | Effective in solving real-world constrained problems | Strong problem dependence; limited generalization ability
Table 2. Comparison of computational complexity of PKO and MSIPKO modules.
Algorithms | PKO | MSIPKO
Initialization | O(ND) | O(ND)
Reverse differential crossover mechanism | — | O(ND)
Hovering/perching | O(ND) | O(ND)
Diving | O(ND) | O(ND)
Commensalism | O(ND) | O(ND)
Per-iteration complexity | O(3ND) | O(4ND)
Total | O((3T + 1)ND) | O((4T + 1)ND)
Table 3. The characteristics of CEC 2006 benchmark functions (G01–G12).
Function | n | Type of Function | ρ | Optimal | LI | NI | LE | NE | a
G01 | 13 | Quadratic | 0.0111% | −15 | 9 | 0 | 0 | 0 | 6
G02 | 20 | Non-linear | 99.9971% | −0.8036019 | 0 | 2 | 0 | 0 | 1
G03 | 10 | Polynomial | 0.0000% | −1.0005 | 0 | 0 | 0 | 1 | 1
G04 | 5 | Quadratic | 52.1230% | −30,665.5387 | 0 | 6 | 0 | 0 | 2
G05 | 4 | Cubic | 0.0000% | 5126.4967 | 2 | 0 | 0 | 3 | 3
G06 | 2 | Cubic | 0.0066% | −6961.8139 | 0 | 2 | 0 | 0 | 2
G07 | 10 | Quadratic | 0.0003% | 24.3062 | 3 | 5 | 0 | 0 | 6
G08 | 2 | Non-linear | 0.8560% | −0.095825 | 0 | 2 | 0 | 0 | 0
G09 | 7 | Polynomial | 0.5121% | 680.630057 | 0 | 4 | 0 | 0 | 2
G10 | 8 | Linear | 0.0010% | 7049.2480 | 3 | 3 | 0 | 0 | 6
G11 | 2 | Quadratic | 0.0000% | 0.7499 | 0 | 0 | 0 | 1 | 1
G12 | 3 | Quadratic | 4.7713% | −1 | 0 | 1 | 0 | 0 | 0
Table 4. Numerical simulation conditions of 10 algorithms for solving the CEC 2006 benchmark functions.
Function | Population Size | Iterations | FEs | Constraint-Handling Strategies
G01 | 100 | 500 | 50,000 | Static penalty functions
G02 | 100 | 2000 | 200,000 | Static penalty functions
G03 | 100 | 2000 | 200,000 | Static penalty functions
G04 | 100 | 200 | 20,000 | Static penalty functions
G05 | 100 | 500 | 50,000 | Static penalty functions
G06 | 100 | 100 | 10,000 | Static penalty functions
G07 | 100 | 2000 | 200,000 | Static penalty functions
G08 | 50 | 20 | 1000 | Static penalty functions
G09 | 100 | 500 | 50,000 | Static penalty functions
G10 | 100 | 2000 | 200,000 | Static penalty functions
G11 | 100 | 200 | 20,000 | Static penalty functions
G12 | 40 | 50 | 2000 | Static penalty functions
Table 5. Comparison results of MSIPKO and nine other algorithms for 12 benchmark functions (1).
Functions | Statistics | MSIPKO | PKO | FLA | BKA | TTAO
G01 | Mean | −15 | −12.9328 | −10.6969 | −4.6878 | −15.0000
G01 | SD | 0 | 1.5953 | 3.2672 | 1.7670 | 1.0739 × 10^−8
G01 | Best | −15 | −15 | −15 | −9.6339 | −15
G01 | Worst | −15 | −9 | −5 | −2 | −15.0000
G02 | Mean | −0.801095 | −0.784224 | −0.584017 | −0.605223 | −0.622355
G02 | SD | 4.5681 × 10^−3 | 1.6510 × 10^−2 | 9.2062 × 10^−2 | 9.1242 × 10^−2 | 1.1827 × 10^−1
G02 | Best | −0.803619 | −0.803489 | −0.777835 | −0.750609 | −0.792564
G02 | Worst | −0.787683 | −0.752728 | −0.434697 | −0.330256 | −0.482724
G03 | Mean | −1.000499 | −0.260171 | −0.899194 | −0.953200 | −0.900470
G03 | SD | 1.0675 × 10^−6 | 2.1712 × 10^−1 | 2.3262 × 10^−1 | 1.4425 × 10^−1 | 5.3299 × 10^−2
G03 | Best | −1.000500 | −0.869920 | −0.999118 | −1.000481 | −0.969864
G03 | Worst | −1.000497 | −0.045100 | −0.132929 | −0.482825 | −0.781659
G04 | Mean | −30,665.5387 | −30,664.4202 | −30,643.9203 | −30,654.0000 | −30,665.5382
G04 | SD | 1.6229 × 10^−7 | 9.4643 × 10^−1 | 9.1300 × 10^1 | 6.1857 × 10^1 | 4.5989 × 10^−4
G04 | Best | −30,665.5387 | −30,665.4131 | −30,665.5292 | −30,665.5387 | −30,665.5387
G04 | Worst | −30,665.5387 | −30,661.8847 | −30,164.6536 | −30,326.5193 | −30,665.5369
G05 | Mean | 5126.4967 | 5131.8063 | 5297.8303 | 5167.6899 | 5172.7790
G05 | SD | 3.4771 × 10^−6 | 4.1160 × 10^0 | 6.7773 × 10^2 | 1.1289 × 10^2 | 2.6924 × 10^1
G05 | Best | 5126.4967 | 5126.5416 | 5129.2076 | 5126.5046 | 5134.1553
G05 | Worst | 5126.4967 | 5146.5100 | 8880.0000 | 5668.8127 | 5232.5777
G06 | Mean | −6961.8139 | −6911.6279 | −6950.9430 | −6961.8033 | −6942.0068
G06 | SD | 4.6252 × 10^−12 | 4.5712 × 10^1 | 2.0330 × 10^1 | 3.6077 × 10^−2 | 1.9922 × 10^1
G06 | Best | −6961.8139 | −6959.7614 | −6961.8028 | −6961.8139 | −6960.9252
G06 | Worst | −6961.8139 | −6754.1389 | −6864.8359 | −6961.6417 | −6855.1227
G07 | Mean | 24.3198 | 25.0260 | 27.5585 | 64.5204 | 24.8721
G07 | SD | 1.7424 × 10^−2 | 3.7751 × 10^−1 | 7.6560 × 10^0 | 6.4747 × 10^1 | 3.8262 × 10^−1
G07 | Best | 24.3072 | 24.5466 | 24.6259 | 24.9487 | 24.3303
G07 | Worst | 24.4005 | 25.9218 | 66.1651 | 255.2172 | 25.6463
G08 | Mean | −0.095825 | −0.087289 | −0.089219 | −0.095631 | −0.090976
G08 | SD | 4.7744 × 10^−8 | 1.4150 × 10^−2 | 1.6868 × 10^−2 | 6.5436 × 10^−4 | 1.5199 × 10^−2
G08 | Best | −0.095825 | −0.095773 | −0.095825 | −0.095825 | −0.095825
G08 | Worst | −0.095825 | −0.028362 | −0.021295 | −0.092850 | −0.028383
G09 | Mean | 680.630069 | 680.7278 | 680.6743 | 686.0756 | 680.6838
G09 | SD | 1.4812 × 10^−5 | 6.8481 × 10^−2 | 3.8327 × 10^−2 | 1.2373 × 10^1 | 3.3064 × 10^−2
G09 | Best | 680.630058 | 680.6538 | 680.6352 | 680.6658 | 680.6444
G09 | Worst | 680.630109 | 680.9750 | 680.8504 | 749.9112 | 680.8121
G10 | Mean | 7049.2556 | 7477.0739 | 8007.5129 | 8062.1063 | 7381.5745
G10 | SD | 1.3602 × 10^−2 | 1.6362 × 10^2 | 2.1611 × 10^3 | 1.3527 × 10^3 | 1.0448 × 10^2
G10 | Best | 7049.2480 | 7248.9473 | 7101.2679 | 7100.9835 | 7127.7024
G10 | Worst | 7049.2985 | 7837.7985 | 19,262.2455 | 14,183.4060 | 7507.3094
G11 | Mean | 0.749900 | 0.749917 | 0.749943 | 0.749900 | 0.749922
G11 | SD | 1.1292 × 10^−16 | 1.1002 × 10^−5 | 1.8628 × 10^−4 | 4.5718 × 10^−8 | 3.0927 × 10^−5
G11 | Best | 0.749900 | 0.749902 | 0.749900 | 0.749900 | 0.749900
G11 | Worst | 0.749900 | 0.749943 | 0.750918 | 0.749900 | 0.750063
G12 | Mean | −1 | −0.9885 | −0.9808 | −0.9969 | −0.9963
G12 | SD | 0 | 9.4945 × 10^−3 | 4.9019 × 10^−2 | 4.2583 × 10^−3 | 4.1849 × 10^−3
G12 | Best | −1 | −0.999857 | −0.999996 | −1.000000 | −0.999963
G12 | Worst | −1 | −0.959461 | −0.748302 | −0.986384 | −0.988343
Table 6. Comparison results of MSIPKO and nine other algorithms for 12 benchmark functions (2).
| Functions | Statistics | FOX | COA | NRBO | EAO | SFOA |
|---|---|---|---|---|---|---|
| G01 | Mean | −14.752 | −7.7689 | −7.9520 | −1.4333 | −3.0620 |
| | SD | 7.2542e−1 | 2.4114 | 2.7684 | 2.1764e0 | 2.2961e0 |
| | Best | −14.9970 | −12.9997 | −14.5628 | −9.0000 | −6.1204 |
| | Worst | −12.0019 | −5 | −4 | 0 | 0.9308 |
| G02 | Mean | −0.378660 | −0.520760 | −0.450921 | −0.643042 | −0.649821 |
| | SD | 9.6609e−2 | 7.1374e−2 | 5.4564e−2 | 4.3071e−2 | 3.9996e−2 |
| | Best | −0.631265 | −0.633044 | −0.564679 | −0.769960 | −0.760860 |
| | Worst | −0.242575 | −0.360306 | −0.409150 | −0.544209 | −0.604702 |
| G03 | Mean | 0.189077 | −0.479319 | −0.611842 | −0.784701 | −1.000494 |
| | SD | 1.5044e−1 | 2.5810e−1 | 2.3377e−1 | 1.2331e−1 | 3.9296e−6 |
| | Best | 0.000264 | −0.971089 | −0.993403 | −0.997542 | −1.000500 |
| | Worst | 0.605330 | −0.087333 | −0.194243 | −0.581804 | −1.000487 |
| G04 | Mean | −30,665.5252 | −30,659.4466 | −30,647.7918 | −30,665.5387 | −29,370.2824 |
| | SD | 1.9102e−2 | 3.5737e0 | 3.3011e1 | 0.0000 | 258.7623 |
| | Best | −30,665.5383 | −30,663.9331 | −30,665.5387 | −30,665.5387 | −29,821.7131 |
| | Worst | −30,665.4357 | −30,649.9932 | −30,505.8409 | −30,665.5387 | −28,923.8142 |
| G05 | Mean | 5895.9748 | 5174.2454 | 5163.6804 | 5126.4967 | 5127.1523 |
| | SD | 4.3731e2 | 3.2125e1 | 4.3675e1 | 9.2504e−13 | 3.8612e−1 |
| | Best | 5217.8893 | 5131.1896 | 5126.5740 | 5126.4967 | 5126.6045 |
| | Worst | 7138.3198 | 5255.6888 | 5320.2518 | 5126.4967 | 5128.2331 |
| G06 | Mean | −6961.8093 | −6861.8147 | −6961.7942 | 47,315.0427 | −6948.2226 |
| | SD | 3.6448e−3 | 1.0728e2 | 6.0161e−2 | 5.2476e4 | 1.2603e1 |
| | Best | −6961.8136 | −6948.3252 | −6961.8137 | −6961.8139 | −6960.8697 |
| | Worst | −6961.8007 | −6436.4211 | −6961.4908 | 102,027 | −6906.5417 |
| G07 | Mean | 24.6501 | 24.6876 | 51.5447 | 24.3062 | 24.3399 |
| | SD | 2.2702e−1 | 2.0969e−1 | 1.5166e1 | 3.3741e−10 | 1.5397e−2 |
| | Best | 24.3697 | 24.3890 | 33.6358 | 24.3062 | 24.3149 |
| | Worst | 25.4220 | 25.1874 | 103.1055 | 24.3062 | 24.3928 |
| G08 | Mean | −0.084489 | −0.073303 | −0.095753 | −0.095825 | −0.086815 |
| | SD | 2.5789e−2 | 2.7167e−2 | 3.8616e−4 | 7.0748e−7 | 8.7847e−3 |
| | Best | −0.095825 | −0.095823 | −0.095825 | −0.095825 | −0.094781 |
| | Worst | −0.025812 | −0.018921 | −0.093709 | −0.095823 | −0.066377 |
| G09 | Mean | 680.6878 | 680.7354 | 688.8294 | 680.630057 | 680.7385 |
| | SD | 6.8670e−2 | 5.8676e−2 | 5.0177e0 | 5.7815e−13 | 4.0766e−2 |
| | Best | 680.6329 | 680.6516 | 681.2589 | 680.630057 | 680.6628 |
| | Worst | 680.9834 | 680.8638 | 699.2373 | 680.630057 | 680.8253 |
| G10 | Mean | 15,982.4803 | 7525.6372 | 9692.0477 | 7049.2480 | 7074.2383 |
| | SD | 3.3113e3 | 2.4202e2 | 1.2261e3 | 2.3256e−7 | 3.6626e1 |
| | Best | 10,408.6164 | 7116.1744 | 7838.2872 | 7049.2480 | 7054.3972 |
| | Worst | 24,921.1137 | 8303.2140 | 12,886.3826 | 7049.2480 | 7251.9313 |
| G11 | Mean | 0.749900 | 0.750092 | 0.749900 | 0.749900 | 0.749942 |
| | SD | 3.2616e−8 | 1.8700e−4 | 2.4650e−8 | 2.2775e−12 | 6.7297e−5 |
| | Best | 0.749900 | 0.749904 | 0.749900 | 0.749900 | 0.749900 |
| | Worst | 0.749900 | 0.750653 | 0.749900 | 0.749900 | 0.750134 |
| G12 | Mean | −0.9508 | −0.9908 | −0.9971 | −0.9999 | −0.9935 |
| | SD | 4.4286e−2 | 1.2031e−2 | 3.8968e−3 | 6.0000e−7 | 2.7150e−3 |
| | Best | −1.0000 | −1.0000 | −1.0000 | −0.9999 | −0.9996 |
| | Worst | −0.8500 | −0.9547 | −0.9833 | −0.9999 | −0.9877 |
Table 7. Ranking results of the 10 algorithms by the mean value and standard deviation.
| Function | MSIPKO | PKO | FLA | BKA | TTAO | FOX | COA | NRBO | EAO | SFOA |
|---|---|---|---|---|---|---|---|---|---|---|
| G01 | 1 | 4 | 5 | 8 | 2 | 3 | 7 | 6 | 10 | 9 |
| G02 | 1 | 2 | 7 | 6 | 5 | 10 | 8 | 9 | 4 | 3 |
| G03 | 1 | 9 | 5 | 3 | 4 | 10 | 8 | 7 | 6 | 2 |
| G04 | 2 | 5 | 9 | 7 | 3 | 4 | 6 | 8 | 1 | 10 |
| G05 | 2 | 4 | 9 | 6 | 7 | 10 | 8 | 5 | 1 | 3 |
| G06 | 1 | 8 | 5 | 3 | 7 | 2 | 9 | 4 | 10 | 6 |
| G07 | 2 | 7 | 8 | 10 | 6 | 4 | 5 | 9 | 1 | 3 |
| G08 | 1 | 7 | 6 | 4 | 5 | 9 | 10 | 3 | 2 | 8 |
| G09 | 2 | 6 | 3 | 9 | 4 | 5 | 7 | 10 | 1 | 8 |
| G10 | 2 | 5 | 7 | 8 | 4 | 10 | 6 | 9 | 1 | 3 |
| G11 | 1 | 6 | 9 | 1 | 7 | 1 | 10 | 1 | 1 | 8 |
| G12 | 1 | 8 | 9 | 4 | 5 | 10 | 7 | 3 | 2 | 6 |
| Total rank | 17 | 71 | 82 | 69 | 59 | 78 | 91 | 74 | 40 | 69 |
| Mean rank | 1.42 | 5.92 | 6.83 | 5.75 | 4.92 | 6.5 | 7.58 | 6.17 | 3.33 | 5.75 |
| Final rank | 1 | 6 | 9 | 4 | 3 | 8 | 10 | 7 | 2 | 4 |
Table 8. Results of the Wilcoxon test including Bonferroni correction between MSIPKO and other algorithms.
Table 8. Results of the Wilcoxon test including Bonferroni correction between MSIPKO and other algorithms.
MSIPKO vs.R+R−p-ValueBonferroni-Corrected p-Value α = 0.05 α = 0.1 α = 0.2
PKO7800.00050.0045H1H1H1
FLA7800.00050.0045H1H1H1
BKA7800.00050.0045H1H1H1
TTAO7800.00050.0045H1H1H1
FOX7800.00050.0045H1H1H1
COA7800.00050.0045H1H1H1
NRBO7800.00050.0045H1H1H1
EAO53250.03120.2808H0H0H0
SFOA7800.00050.0045H1H1H1
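The correction step behind Table 8 can be reproduced directly, assuming the usual Bonferroni procedure: each raw Wilcoxon p-value is multiplied by the number of pairwise comparisons (nine here), capped at 1, and the null hypothesis is rejected (H1) only if the corrected value stays below the significance level. The function name below is illustrative.

```python
# Bonferroni correction of a raw p-value over m pairwise comparisons,
# followed by the accept/reject decision at significance level alpha.

def bonferroni_decision(p_value, num_comparisons, alpha):
    corrected = min(1.0, p_value * num_comparisons)
    return corrected, ("H1" if corrected < alpha else "H0")

# Values from Table 8: p = 0.0005 over 9 comparisons stays significant
# (0.0045), while the EAO row's p = 0.0312 does not (0.2808).
corrected, decision = bonferroni_decision(0.0005, 9, 0.05)
print(round(corrected, 4), decision)  # prints: 0.0045 H1
corrected, decision = bonferroni_decision(0.0312, 9, 0.05)
print(round(corrected, 4), decision)  # prints: 0.2808 H0
```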
Table 9. Comparison results of MSIPKO’s component-wise ablation study for 12 benchmark functions.
| Functions | Statistics | MSIPKO | PKO | PKO + C1 | PKO + C2 | PKO + C3 |
|---|---|---|---|---|---|---|
| G01 | Mean | −15.0000 | −12.9328 | −15.0000 | −13.8151 | −13.0992 |
| | SD | 0 | 1.5953e0 | 0 | 1.6750e0 | 2.5502e0 |
| | Best | −15.0000 | −15.0000 | −15.0000 | −15.0000 | −15.0000 |
| | Worst | −15.0000 | −9.0000 | −15.0000 | −9.0000 | −6.0000 |
| G02 | Mean | −0.801095 | −0.784224 | −0.801510 | −0.787246 | −0.793628 |
| | SD | 4.5681e−3 | 1.6510e−2 | 3.0839e−3 | 1.2168e−2 | 5.6274e−3 |
| | Best | −0.803619 | −0.803489 | −0.803567 | −0.802704 | −0.802903 |
| | Worst | −0.787683 | −0.752728 | −0.794606 | −0.763718 | −0.784999 |
| G03 | Mean | −1.000499 | −0.260171 | −0.990002 | −1.000499 | −0.999379 |
| | SD | 1.0675e−6 | 2.1712e−1 | 3.7131e−3 | 5.2850e−6 | 6.9107e−4 |
| | Best | −1.000500 | −0.869920 | −1.000419 | −1.000500 | −1.000199 |
| | Worst | −1.000497 | −0.045100 | −0.980334 | −1.000497 | −0.997557 |
| G04 | Mean | −30,665.5387 | −30,664.4202 | −30,665.4029 | −30,665.5387 | −30,664.7662 |
| | SD | 1.6229e−7 | 9.4643e−1 | 1.3348e−1 | 3.0327e−6 | 5.7969e−1 |
| | Best | −30,665.5387 | −30,665.4131 | −30,665.5338 | −30,665.5387 | −30,665.3512 |
| | Worst | −30,665.5387 | −30,661.8847 | −30,664.9830 | −30,665.5387 | −30,663.5512 |
| G05 | Mean | 5126.4967 | 5131.8063 | 5128.6840 | 5126.4967 | 5129.8834 |
| | SD | 3.4771e−6 | 4.1160e0 | 1.4021e0 | 4.8691e−5 | 1.4218e0 |
| | Best | 5126.4967 | 5126.5416 | 5126.5351 | 5126.4967 | 5128.2228 |
| | Worst | 5126.4967 | 5146.5100 | 5132.1536 | 5126.4970 | 5133.0762 |
| G06 | Mean | −6961.8139 | −6911.6279 | −6961.7910 | −6961.8084 | −6928.3668 |
| | SD | 4.6252e−12 | 4.5712e1 | 3.7083e−2 | 4.5509e−3 | 2.0436e1 |
| | Best | −6961.8139 | −6959.7614 | −6961.8139 | −6961.8136 | −6953.5894 |
| | Worst | −6961.8139 | −6754.1389 | −6961.6505 | −6961.7974 | −6894.9480 |
| G07 | Mean | 24.3198 | 25.0260 | 24.3751 | 24.3063 | 24.4275 |
| | SD | 1.7424e−2 | 3.7751e−1 | 3.3608e−2 | 7.2269e−15 | 5.9871e−2 |
| | Best | 24.3072 | 24.5466 | 24.3227 | 24.3063 | 24.3462 |
| | Worst | 24.4005 | 25.9218 | 24.4719 | 24.3063 | 24.5663 |
| G08 | Mean | −0.095825 | −0.087289 | −0.095811 | −0.095793 | −0.093862 |
| | SD | 4.7744e−8 | 1.4150e−2 | 1.1623e−5 | 4.9728e−5 | 2.0669e−3 |
| | Best | −0.095825 | −0.095773 | −0.095825 | −0.095825 | −0.095691 |
| | Worst | −0.095825 | −0.028362 | −0.095779 | −0.095619 | −0.087634 |
| G09 | Mean | 680.630069 | 680.727834 | 680.693395 | 680.630147 | 680.716837 |
| | SD | 1.4812e−5 | 6.8481e−2 | 3.5785e−2 | 1.8069e−4 | 4.5802e−2 |
| | Best | 680.630058 | 680.653825 | 680.639797 | 680.630063 | 680.671101 |
| | Worst | 680.630109 | 680.974952 | 680.762383 | 680.630612 | 680.805783 |
| G10 | Mean | 7049.2556 | 7477.0739 | 7225.0307 | 7049.7022 | 7323.9559 |
| | SD | 1.3602e−2 | 1.6362e2 | 9.8351e1 | 1.5699e0 | 1.0776e2 |
| | Best | 7049.2480 | 7248.9473 | 7057.5817 | 7049.2482 | 7087.7342 |
| | Worst | 7049.2985 | 7837.7985 | 7491.3307 | 7055.4735 | 7511.1648 |
| G11 | Mean | 0.749900 | 0.749917 | 0.749915 | 0.749900 | 0.753134 |
| | SD | 1.1292e−16 | 1.1002e−5 | 2.5707e−5 | 1.9352e−13 | 2.7618e−3 |
| | Best | 0.749900 | 0.749902 | 0.749900 | 0.749900 | 0.750276 |
| | Worst | 0.749900 | 0.749943 | 0.749982 | 0.749900 | 0.759891 |
| G12 | Mean | −1.0000 | −0.9885 | −1.0000 | −1.0000 | −1.0000 |
| | SD | 0 | 9.4945e−3 | 0 | 5.9989e−6 | 1.1720e−5 |
| | Best | −1.000000 | −0.999857 | −1.000000 | −1.000000 | −0.999999 |
| | Worst | −1.000000 | −0.959461 | −1.000000 | −0.999975 | −0.999970 |
Table 10. Ranking results of the component-wise ablation by the mean value and standard deviation.
| Function | MSIPKO | PKO | PKO + C1 | PKO + C2 | PKO + C3 |
|---|---|---|---|---|---|
| G01 | 1 | 4 | 1 | 3 | 5 |
| G02 | 2 | 5 | 1 | 4 | 3 |
| G03 | 1 | 5 | 4 | 2 | 3 |
| G04 | 1 | 5 | 3 | 2 | 4 |
| G05 | 1 | 5 | 3 | 2 | 4 |
| G06 | 1 | 5 | 3 | 2 | 4 |
| G07 | 2 | 5 | 3 | 1 | 4 |
| G08 | 1 | 5 | 2 | 3 | 4 |
| G09 | 1 | 5 | 3 | 2 | 4 |
| G10 | 1 | 5 | 3 | 2 | 4 |
| G11 | 1 | 4 | 3 | 2 | 5 |
| G12 | 1 | 5 | 1 | 3 | 4 |
| Total rank | 14 | 58 | 30 | 28 | 48 |
| Mean rank | 1.167 | 4.83 | 2.5 | 2.33 | 4.00 |
| Final rank | 1 | 5 | 3 | 2 | 4 |
Table 11. Function evaluation budgets and constraint-handling strategies of the compared algorithms on the 12 CEC 2006 constrained benchmark functions (G01–G12).
| Function | MSIPKO | HMICA | BSA-SAε | SMA-GM | AGWO | IChoA |
|---|---|---|---|---|---|---|
| G01 | 50,000 | 200,000 | 350,000 | 30,000 | 30,000 | 30,000 |
| G02 | 200,000 | 200,000 | 350,000 | 30,000 | 30,000 | 30,000 |
| G03 | 200,000 | 200,000 | 350,000 | 30,000 | 30,000 | 30,000 |
| G04 | 20,000 | 20,000 | 350,000 | 30,000 | 30,000 | 30,000 |
| G05 | 50,000 | 200,000 | 350,000 | 30,000 | 30,000 | 30,000 |
| G06 | 10,000 | 80,000 | 350,000 | 30,000 | 30,000 | 30,000 |
| G07 | 200,000 | 200,000 | 350,000 | 30,000 | 30,000 | 30,000 |
| G08 | 1000 | 1000 | 350,000 | 30,000 | 30,000 | 30,000 |
| G09 | 50,000 | 200,000 | 350,000 | 30,000 | 30,000 | 30,000 |
| G10 | 200,000 | 200,000 | 350,000 | 30,000 | 30,000 | 30,000 |
| G11 | 20,000 | 25,000 | 350,000 | 30,000 | 30,000 | 30,000 |
| G12 | 2000 | 2000 | 350,000 | 30,000 | 30,000 | 30,000 |
| Constraint-handling strategy | Static penalty functions | Deb's rules | ε-constrained method | Dynamic penalty | Dynamic penalty | Dynamic penalty |
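MSIPKO handles constraints with static penalty functions; the paper's exact penalty coefficients are not reproduced here, so the following is only a generic sketch of the idea. A fixed weight (the assumed value below is illustrative) converts constraint violations into an additive cost, so the optimizer minimizes an unconstrained surrogate; the `penalized` helper name is hypothetical.

```python
# Generic static-penalty wrapper: each inequality constraint g(x) <= 0
# contributes a squared-violation term scaled by a fixed weight.

def penalized(f, inequality_constraints, weight=1e6):
    """Wrap objective f with a static penalty surrogate."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in inequality_constraints)
        return f(x) + weight * violation
    return wrapped

# Toy problem: minimize x^2 subject to x >= 1 (written as 1 - x <= 0).
surrogate = penalized(lambda x: x * x, [lambda x: 1.0 - x])
print(surrogate(2.0))                    # feasible point, no penalty: 4.0
print(surrogate(0.0) > surrogate(1.0))   # infeasible point costs more: True
```

Dynamic penalties (used by SMA-GM, AGWO, and IChoA in Table 11) differ only in that the weight grows with the iteration counter instead of staying fixed.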
Table 12. Comparison results of MSIPKO and five other algorithms using different constraint-handling strategies on 12 benchmark functions.
| Functions | Statistics | MSIPKO | HMICA | BSA-SAε | SMA-GM | AGWO | IChoA |
|---|---|---|---|---|---|---|---|
| G01 | Mean | −15 | −15 | −15 | −14.834 | −7.8403 | −12.915 |
| | SD | 0 | 0 | 0 | 0.0591 | 1.7856 | 1.5116 |
| | Best | −15 | −15 | −15 | −15 | −11.854 | −14.954 |
| | Worst | −15 | −15 | −15 | −14.692 | −5 | −1.5116 |
| G02 | Mean | −0.801095 | −0.7942587 | −0.791922 | −0.5378 | −0.5622 | −0.775 |
| | SD | 4.5681e−3 | 0.006486 | 5.48e−3 | 0.1125 | 0.0535 | 0.0143 |
| | Best | −0.803619 | −0.803619 | −0.803599 | −0.7779 | −0.7127 | −0.7916 |
| | Worst | −0.787683 | −0.7826 | −0.77988 | −0.3011 | 0.0535 | −0.733 |
| G03 | Mean | −1.000499 | −0.992825 | −1.000486 | −1 | −0.9658 | −0.9936 |
| | SD | 1.0675e−6 | 0.0108249 | 1.64e−5 | 2.51e−8 | 0.0123 | 0.0019 |
| | Best | −1.000500 | −1.0004 | −1.000498 | −1 | −0.9863 | −0.9963 |
| | Worst | −1.000497 | −0.95767 | −1.000419 | −1 | −0.9363 | −0.9897 |
| G04 | Mean | −30,665.5387 | −30,665.539 | −30,665.54 | −30,666 | −30,652 | −30,664 |
| | SD | 1.6229e−7 | 0 | 0 | 1.7e−3 | 7.8038 | 1.1844 |
| | Best | −30,665.5387 | −30,665.539 | −30,665.54 | −30,666 | −30,663 | −30,665 |
| | Worst | −30,665.5387 | −30,665.539 | −30,665.54 | −30,665 | −30,628 | −30,663 |
| G05 | Mean | 5126.4967 | 5127.8809 | 5126.497 | 5239.8 | 5263.2 | 5161.4 |
| | SD | 3.4771e−6 | 1.75067 | 0 | 96.915 | 48.62 | 11.303 |
| | Best | 5126.4967 | 5126.497 | 5126.497 | 5126.5 | 5156.5 | 5140.1 |
| | Worst | 5126.4967 | 5133.3178 | 5126.497 | 5466.4 | 5311.4 | 5186.9 |
| G06 | Mean | −6961.8139 | −6961.814 | −6961.814 | −6961.8 | 1.4141e18 | −6958.8 |
| | SD | 4.6252e−12 | 0 | 0 | 0.0236 | 7.7453e18 | 2.2065 |
| | Best | −6961.8139 | −6961.814 | −6961.814 | −6961.8 | −6950.4 | −6960.7 |
| | Worst | −6961.8139 | −6961.814 | −6961.814 | −6961.7 | 4.24e19 | −6949.8 |
| G07 | Mean | 24.3198 | 24.4886 | 24.3463 | 25.207 | 32.642 | 26.184 |
| | SD | 1.7424e−2 | 0.1743 | 4.05e−2 | 0.4071 | 396.15 | 0.4666 |
| | Best | 24.3072 | 24.3068 | 24.3061 | 24.383 | 2.642 | 25.374 |
| | Worst | 24.4005 | 24.8539 | 24.5316 | 27.376 | 969 | 27.474 |
| G08 | Mean | −0.095825 | −0.095825 | −0.095825 | −0.0846 | −0.0958 | −0.0958 |
| | SD | 4.7744e−8 | 0 | 0 | 0.0256 | 2.21e−6 | 1.61e−17 |
| | Best | −0.095825 | −0.095825 | −0.095825 | −0.0954 | −0.0958 | −0.0958 |
| | Worst | −0.095825 | −0.095825 | −0.095825 | −0.0255 | −0.0958 | −0.0958 |
| G09 | Mean | 680.630069 | 680.6329 | 680.6302 | 680.8 | 712.36 | 680.88 |
| | SD | 1.4812e−5 | 0.001461 | 1.95e−4 | 0.1062 | 48.318 | 0.0755 |
| | Best | 680.630058 | 680.6308 | 680.6301 | 680.65 | 684.36 | 680.76 |
| | Worst | 680.630109 | 680.6357 | 680.6310 | 681.12 | 901.86 | 681.13 |
| G10 | Mean | 7049.2556 | 7283.9543 | 7053.177 | 7844.4 | 8489.1 | 8197.3 |
| | SD | 1.3602e−2 | 117.9793 | 6.04e0 | 349.86 | 358.25 | 395.94 |
| | Best | 7049.2480 | 7050.5895 | 7049.278 | 7065.2 | 7774.5 | 7651.4 |
| | Worst | 7049.2985 | 7469.1997 | 7071.253 | 8546.4 | 9092.6 | 8686.3 |
| G11 | Mean | 0.749900 | 0.7499 | 0.7499 | 0.75 | 0.7501 | 0.75 |
| | SD | 1.1292e−16 | 0 | 0 | 1.77e−5 | 8.05e−5 | 1.09e−5 |
| | Best | 0.749900 | 0.7499 | 0.7499 | 0.75 | 0.7501 | 0.75 |
| | Worst | 0.749900 | 0.7499 | 0.7499 | 0.7501 | 0.7503 | 0.7501 |
| G12 | Mean | −1 | −1 | −1 | −1 | −1 | −1 |
| | SD | 0 | 0 | 0 | 0 | 1.69e−7 | 0 |
| | Best | −1 | −1 | −1 | −1 | −1 | −1 |
| | Worst | −1 | −1 | −1 | −1 | −1 | −1 |
Table 13. Comparison results of MSIPKO and three dynamic penalty function-based algorithms under the same simulation settings.
| Functions | Statistics | MSIPKO | SMA-GM | AGWO | IChoA |
|---|---|---|---|---|---|
| G01 | Mean | −14.8000 | −14.834 | −7.8403 | −12.915 |
| | SD | 7.6112e−1 | 0.0591 | 1.7856 | 1.5116 |
| | Best | −15.0000 | −15 | −11.854 | −14.954 |
| | Worst | −12.0000 | −14.692 | −5 | −1.5116 |
| G02 | Mean | −0.795288 | −0.5378 | −0.5622 | −0.775 |
| | SD | 7.1825e−3 | 0.1125 | 0.0535 | 0.0143 |
| | Best | −0.803531 | −0.7779 | −0.7127 | −0.7916 |
| | Worst | −0.777743 | −0.3011 | 0.0535 | −0.733 |
| G03 | Mean | −0.900114 | −1 | −0.9658 | −0.9936 |
| | SD | 6.8601e−4 | 2.51e−8 | 0.0123 | 0.0019 |
| | Best | −0.900461 | −1 | −0.9863 | −0.9963 |
| | Worst | −0.896726 | −1 | −0.9363 | −0.9897 |
| G05 | Mean | 5126.4967 | 5239.8 | 5263.2 | 5161.4 |
| | SD | 4.6252e−12 | 96.915 | 48.62 | 11.303 |
| | Best | 5126.4967 | 5126.5 | 5156.5 | 5140.1 |
| | Worst | 5126.4967 | 5466.4 | 5311.4 | 5186.9 |
| G07 | Mean | 24.3752 | 25.207 | 32.642 | 26.184 |
| | SD | 5.9393e−2 | 0.4071 | 396.15 | 0.4666 |
| | Best | 24.312329 | 24.383 | 2.642 | 25.374 |
| | Worst | 24.574807 | 27.376 | 969 | 27.474 |
| G09 | Mean | 680.630375 | 680.8 | 712.36 | 680.88 |
| | SD | 5.1841e−4 | 0.1062 | 48.318 | 0.0755 |
| | Best | 680.630063 | 680.65 | 684.36 | 680.76 |
| | Worst | 680.632301 | 681.12 | 901.86 | 681.13 |
| G10 | Mean | 7128.0197 | 7844.4 | 8489.1 | 8197.3 |
| | SD | 7.9819e1 | 349.86 | 358.25 | 395.94 |
| | Best | 7053.124282 | 7065.2 | 7774.5 | 7651.4 |
| | Worst | 7259.656336 | 8546.4 | 9092.6 | 8686.3 |
Table 14. Comparison results between MSIPKO and other algorithms using different constraint-handling strategies.
Table 14. Comparison results between MSIPKO and other algorithms using different constraint-handling strategies.
MSIPKO vs.HMICABSA-SAεSMA-GMAGWOIChoA
G01++++
G02+++++
G03++
G04=++++
G05+++++
G06+++++
G07+++++
G08=++++
G09+++++
G10+++++
G11+++++
G12=++++
Table 15. Numerical simulation conditions and constraints on strategies of 10 comparative algorithms for solving six engineering problems.
Table 15. Numerical simulation conditions and constraints on strategies of 10 comparative algorithms for solving six engineering problems.
Engineering ProblemsPopulation SizeIterationsFEsConstraints on Strategies
P1: I-beam vertical deflection problem10050050,000Static penalty functions
P2: Speed reducer design problem10050050,000Static penalty functions
P3: Three-bar truss design problem 10050050,000Static penalty functions
P4: Welded beam design problem10050050,000Static penalty functions
P5: Tension/compression spring design problem10050050,000Static penalty functions
P6: Pressure vessel design problem10050050,000Static penalty functions
Table 16. Comparison results of MSIPKO and nine other algorithms for six engineering problems (1).
| Problems | Statistics | MSIPKO | PKO | FLA | BKA | TTAO |
|---|---|---|---|---|---|---|
| P1 | Mean | 0.011539088 | 0.011539141 | 0.011539098 | 0.011669859 | 0.011539086 |
| | SD | 5.8307e−9 | 1.3315e−7 | 2.0822e−8 | 4.0341e−4 | 2.3164e−9 |
| | Best | 0.011539085 | 0.011539085 | 0.011539085 | 0.011539085 | 0.011539085 |
| | Worst | 0.011539106 | 0.011539528 | 0.011539153 | 0.013109394 | 0.011539098 |
| P2 | Mean | 2994.47107 | 2994.47113 | 2994.64841 | 3003.33858 | 2994.47108 |
| | SD | 5.2520e−6 | 1.2865e−4 | 3.5592e−1 | 4.2623e0 | 6.0137e−5 |
| | Best | 2994.47107 | 2994.47107 | 2994.47236 | 2995.39174 | 2994.47107 |
| | Worst | 2994.47109 | 2994.47153 | 2995.80022 | 3009.72877 | 2994.4713 |
| P3 | Mean | 263.89585 | 263.895895 | 263.896001 | 263.898092 | 263.896034 |
| | SD | 1.4485e−5 | 5.8965e−5 | 2.2680e−4 | 7.8132e−3 | 2.4729e−4 |
| | Best | 263.895843 | 263.895854 | 263.895846 | 263.895844 | 263.895866 |
| | Worst | 263.895888 | 263.896061 | 263.896506 | 263.926822 | 263.896882 |
| P4 | Mean | 1.72491801 | 1.72507121 | 1.7278333 | 1.72637376 | 1.72494056 |
| | SD | 7.1110e−5 | 1.0226e−4 | 6.8226e−3 | 7.4049e−4 | 2.1646e−4 |
| | Best | 1.72485231 | 1.72488643 | 1.72491893 | 1.72549628 | 1.72485231 |
| | Worst | 1.72512691 | 1.72527729 | 1.74828083 | 1.72776065 | 1.72579598 |
| P5 | Mean | 0.01266728 | 0.01267281 | 0.01279955 | 0.01272766 | 0.01268665 |
| | SD | 2.7694e−6 | 6.2121e−6 | 1.0614e−4 | 1.1118e−4 | 1.9847e−5 |
| | Best | 0.01266523 | 0.01266652 | 0.01269552 | 0.01266557 | 0.01266527 |
| | Worst | 0.01267428 | 0.01268969 | 0.01304903 | 0.01312713 | 0.01272224 |
| P6 | Mean | 6059.97668 | 6062.43921 | 6589.03996 | 6544.23934 | 6429.36137 |
| | SD | 4.1967e−1 | 7.6779e0 | 5.3153e2 | 5.5518e2 | 2.8135e2 |
| | Best | 6059.71434 | 6059.74833 | 6059.71727 | 6059.71435 | 6077.12065 |
| | Worst | 6061.76402 | 6091.25867 | 7903.67564 | 7932.61173 | 7332.84151 |
Table 17. Comparison results of MSIPKO and nine other algorithms for six engineering problems (2).
| Problems | Statistics | FOX | COA | NRBO | EAO | SFOA |
|---|---|---|---|---|---|---|
| P1 | Mean | 0.013348175 | 0.01154031 | 0.011543584 | 0.01153909 | 0.01153911 |
| | SD | 1.0497e−3 | 8.6685e−7 | 9.3752e−6 | 7.7422e−9 | 1.4253e−8 |
| | Best | 0.012028399 | 0.01153914 | 0.011539085 | 0.01153908 | 0.01153909 |
| | Worst | 0.015631203 | 0.01154205 | 0.01157396 | 0.01153911 | 0.01153914 |
| P2 | Mean | 3141.042184 | 2994.92613 | 3008.606028 | 2994.47107 | 2994.47121 |
| | SD | 2.5363e2 | 3.3144e−1 | 8.7472e0 | 2.0211e−5 | 1.7063e−4 |
| | Best | 3001.032039 | 2994.55934 | 2996.321267 | 2994.47107 | 2994.47107 |
| | Worst | 3983.427481 | 2995.56962 | 3026.688498 | 2994.47114 | 2994.47169 |
| P3 | Mean | 263.8960462 | 263.896454 | 263.8958547 | 263.895859 | 263.895853 |
| | SD | 1.5251e−4 | 5.2429e−4 | 2.9546e−5 | 3.4669e−5 | 1.8380e−5 |
| | Best | 263.8958469 | 263.895897 | 263.8958434 | 263.895843 | 263.895843 |
| | Worst | 263.8963818 | 263.897321 | 263.8959382 | 263.895945 | 263.895903 |
| P4 | Mean | 2.02162212 | 1.72936175 | 1.772266732 | 1.72492991 | 1.7259699 |
| | SD | 1.8985e−1 | 8.0685e−3 | 4.3106e−2 | 1.4279e−4 | 4.1770e−4 |
| | Best | 1.745668937 | 1.72504399 | 1.72526954 | 1.72485231 | 1.72549173 |
| | Worst | 2.427841873 | 1.75641958 | 1.85240287 | 1.72530084 | 1.72700692 |
| P5 | Mean | 0.013214746 | 0.01281789 | 0.012951362 | 0.01266593 | 0.0126655 |
| | SD | 1.2683e−3 | 1.5319e−4 | 3.7438e−4 | 2.3154e−6 | 1.8155e−7 |
| | Best | 0.012670854 | 0.01268791 | 0.012669029 | 0.01266523 | 0.01266527 |
| | Worst | 0.017733777 | 0.01330557 | 0.013934811 | 0.01267628 | 0.012666 |
| P6 | Mean | 25,205.90876 | 6498.98538 | 6560.191745 | 6063.48687 | 6061.11232 |
| | SD | 3.5080e4 | 5.0267e2 | 5.4950e2 | 1.2704e1 | 1.2666e0 |
| | Best | 6118.982612 | 6059.90948 | 6059.714335 | 6059.71434 | 6059.7767 |
| | Worst | 118,920.0015 | 7903.67567 | 8135.496657 | 6120.46923 | 6064.47063 |
Table 18. Ranking results of the 10 algorithms for the six engineering problems by the mean value.
| Problem | MSIPKO | PKO | FLA | BKA | TTAO | FOX | COA | NRBO | EAO | SFOA |
|---|---|---|---|---|---|---|---|---|---|---|
| P1 | 3 | 6 | 4 | 9 | 1 | 10 | 7 | 8 | 2 | 5 |
| P2 | 1 | 4 | 6 | 8 | 3 | 10 | 7 | 9 | 2 | 5 |
| P3 | 1 | 5 | 6 | 10 | 7 | 8 | 9 | 3 | 4 | 2 |
| P4 | 1 | 4 | 7 | 6 | 3 | 10 | 8 | 9 | 2 | 5 |
| P5 | 3 | 4 | 7 | 6 | 5 | 10 | 8 | 9 | 2 | 1 |
| P6 | 1 | 3 | 9 | 7 | 5 | 10 | 6 | 8 | 4 | 2 |
| Total rank | 10 | 26 | 39 | 46 | 24 | 58 | 45 | 46 | 16 | 20 |
| Mean rank | 1.6667 | 4.3333 | 6.5 | 7.6667 | 4 | 9.6667 | 7.5 | 7.6667 | 2.6667 | 3.3333 |
| Final rank | 1 | 5 | 6 | 8 | 4 | 10 | 7 | 9 | 2 | 3 |
Table 19. Six engineering problems’ solution vectors obtained by MSIPKO.
| Engineering Problem | x1 | x2 | x3 | x4 | x5 | x6 | x7 | F(x) |
|---|---|---|---|---|---|---|---|---|
| P1 | 50.0000 | 80 | 0.1707 | 2.8732 | | | | 0.0115390849 |
| P2 | 3.5000000 | 0.7000000 | 17 | 7.30000 | 7.715319 | 3.350214 | 5.286654 | 2994.471066 |
| P3 | 0.788675 | 0.408248 | | | | | | 263.895843 |
| P4 | 0.2057296 | 3.4704886 | 9.0366239 | 0.2057296 | | | | 1.7248523 |
| P5 | 0.0516859 | 0.3566426 | 11.2933711 | | | | | 0.01266523 |
| P6 | 0.8125 | 0.4375 | 42.0984456 | 176.636596 | | | | 6059.714335 |
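The solution vectors in Table 19 can be spot-checked by plugging them into the objective functions of the corresponding problems. The sketch below does this for three of the problems, assuming the standard textbook formulations (truss bar length L = 100, the common welded-beam cost model, and the rolled-steel pressure-vessel cost model); it reproduces the reported F(x) values to the printed precision.

```python
# Evaluate three classical engineering objectives at the Table 19 solutions.
import math

def three_bar_truss(x1, x2, L=100.0):
    # Structural weight: (2*sqrt(2)*A1 + A2) * L
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L

def welded_beam(x1, x2, x3, x4):
    # Fabrication cost: weld cost + bar cost
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def pressure_vessel(x1, x2, x3, x4):
    # Material, forming, and welding cost
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

print(three_bar_truss(0.788675, 0.408248))                      # ≈ 263.8958 (P3)
print(welded_beam(0.2057296, 3.4704886, 9.0366239, 0.2057296))  # ≈ 1.724852 (P4)
print(pressure_vessel(0.8125, 0.4375, 42.0984456, 176.636596))  # ≈ 6059.714 (P6)
```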
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Bai, H.; Wu, T.; Luo, J.; Ta, N. Multi-Strategy Improved Pied Kingfisher Optimizer for Solving Constrained Optimization Problems. Biomimetics 2026, 11, 335. https://doi.org/10.3390/biomimetics11050335
