Article

Black-Winged Kite Algorithm Integrating Opposition-Based Learning and Quasi-Newton Strategy

Key Laboratory of Data Science and Artificial Intelligence of Jiangxi Education Institutes, Gannan Normal University, Ganzhou 341000, China
*
Author to whom correspondence should be addressed.
Biomimetics 2026, 11(1), 68; https://doi.org/10.3390/biomimetics11010068
Submission received: 20 November 2025 / Revised: 25 December 2025 / Accepted: 10 January 2026 / Published: 14 January 2026
(This article belongs to the Section Biological Optimisation and Management)

Abstract

To address the deficiencies in global search capability and population diversity decline of the black-winged kite algorithm (BKA), this paper proposes an enhanced black-winged kite algorithm integrating opposition-based learning and quasi-Newton strategy (OQBKA). The algorithm introduces a mirror imaging strategy based on convex lens imaging (MOBL) during the migration phase to enhance the population’s spatial distribution and assist individuals in escaping local optima. In later iterations, it incorporates the quasi-Newton method to enhance local optimization precision and convergence performance. Ablation studies on the CEC2017 benchmark set confirm the strong complementarity between the two integrated strategies, with OQBKA achieving an average ranking of 1.34 across all 29 test functions. Comparative experiments on the CEC2022 benchmark suite further verify its superior exploration–exploitation balance and optimization accuracy: under 10- and 20-dimensional settings, OQBKA attains the best average rankings of 2.5 and 2.17 across all 12 test functions, outperforming ten state-of-the-art metaheuristic algorithms. Moreover, evaluations on three constrained engineering design problems, including step-cone pulley optimization, corrugated bulkhead design, and reactor network design, demonstrate the practicality and robustness of the proposed approach in generating feasible solutions under complex constraints.

1. Introduction

As an important branch of artificial intelligence, swarm intelligence algorithms [1] have become a crucial tool for solving complex optimization problems due to their strong global search capability and good adaptability. In the face of challenges such as multi-objective optimization and uncertainty, traditional optimization algorithms often fail to meet the requirements. In contrast, swarm intelligence algorithms, with their flexibility and gradient-free mechanism, can achieve global optimization in complex search spaces. They have been widely applied in fields such as engineering problems, machine learning, and data mining [2,3], where they demonstrate remarkable advantages. Common swarm intelligence algorithms include particle swarm optimization (PSO) [4], the Harris hawks optimizer (HHO) [5], the grey wolf optimizer (GWO) [6], and the sparrow search algorithm (SSA) [7].
In recent years, inspired by natural phenomena and collective behaviors, numerous emerging swarm intelligence algorithms have been proposed, offering new feasible approaches for addressing complex optimization tasks. Examples include the artificial lemming algorithm (ALA) [8], mirage search optimization (MSO) [9], whale migration algorithm (WMA) [10], and black-winged kite algorithm (BKA) [11]. Among them, the BKA is a swarm intelligence optimization algorithm inspired by the migration and predation behaviors of black-winged kites. By integrating the Cauchy mutation strategy and leader-based mechanism, BKA enhances global search capability while significantly improving convergence speed, thereby achieving a good balance between global exploration and local exploitation. However, BKA suffers from insufficient population diversity, making it prone to premature convergence or repetitive convergence, which limits its performance when dealing with complex optimization problems in high-dimensional search spaces.
To address the aforementioned shortcomings, numerous researchers have proposed improvements to the BKA. Dai et al. [12] proposed the hybrid black-winged kite algorithm (BWOA), which combines the precise search mechanism of BKA with the spiral encircling strategy of the whale optimization algorithm (WOA). By dividing high-fitness individuals into subgroups for parallel optimization and introducing an elitism mechanism and levy flight perturbation, the algorithm effectively balances global exploration and local exploitation, preventing premature convergence. Zhang et al. [13] proposed the osprey-combined black-winged kite algorithm (OCBKA), which uses Logistic chaotic mapping for population initialization and integrates the osprey optimization algorithm (OOA) to enhance global search ability and overall performance. Zhou et al. [14] proposed the BKA optimization algorithm based on sine-cosine guidelines (SCBKA), which significantly improves convergence speed and solution accuracy. Zhang et al. [15] developed an Improved Black-winged Kite Algorithm (IBKA) for network security situation assessment. Their method integrates Gaussian mutation, opposition-based learning, and an optimal individual oscillation strategy, achieving superior convergence accuracy compared with other swarm intelligence algorithms. Zhu et al. [16] proposed an improved hybrid algorithm that combines the global exploration capability of BKA with the local exploitation characteristics of PSO, while introducing the differential mutation operator from differential evolution (DE) to enhance population diversity and prevent premature convergence. Li et al. [17] addressed the insufficient accuracy of BKA in solving practical problems by proposing an improved version that integrates the OOA with vertical and horizontal crossover enhancements, effectively improving the algorithm’s performance. To more clearly compare the performance differences among various algorithms, we present the comparison in Table 1.
Zhang et al. [18] proposed a PIO–L-BFGS hybrid strategy, which combines the pigeon-inspired optimization (PIO) algorithm with the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm [19], effectively improving local search accuracy and convergence performance. Cuevas et al. [20] introduced a cheetah optimizer hybrid approach based on opposition-based learning and diversity metrics (CO-DO). In this approach, an opposition-based learning (OBL) mechanism is incorporated into the Cheetah Optimizer (CO), where candidate solutions are updated by considering their counterparts in opposite regions of the search space, thereby expanding the search range and enhancing population diversity. In addition, the BHJO algorithm proposed by Zitouni et al. [21] also integrates an OBL strategy within a hybrid metaheuristic framework, improving the global exploration capability of the algorithm and further validating the effectiveness of OBL in expanding the search space. These studies collectively indicate that quasi-Newton-based refinement and OBL mechanisms are effective in enhancing local exploitation accuracy and global exploration capability, respectively.
Despite the fact that the aforementioned improvements have achieved certain success in enhancing the performance of the BKA, issues such as insufficient global exploration capability and inadequate local exploitation accuracy still persist. To address these problems, this paper proposes a black-winged kite algorithm integrating opposition-based learning and quasi-Newton strategy (OQBKA) to further improve the overall optimization performance of the algorithm. Specifically, a mirror imaging strategy based on convex lens imaging (MOBL) [22] is introduced during the migration phase to generate dynamic opposite solutions, thereby enhancing global search capability and effectively improving population diversity. In the later stage of iteration, the L-BFGS algorithm with boundary constraints is integrated to perform local optimization on the top five individuals with the best fitness, thereby strengthening the algorithm’s local exploitation ability.
To verify the effectiveness of the integrated strategies, ablation experiments were conducted on the CEC2017 benchmark function set to analyze the impact of the MOBL and L-BFGS strategies on algorithm performance. In addition, comparative experiments on the CEC2022 benchmark set demonstrate that the proposed OQBKA algorithm significantly outperforms the original BKA in terms of convergence speed and optimization accuracy. Furthermore, to evaluate its potential for practical engineering applications, the OQBKA was applied to several constrained engineering optimization problems, including the step-cone pulley optimization problem [23], corrugated bulkhead design [24], and reactor network design [25]. The experimental results confirm the applicability of OQBKA in solving real-world constrained engineering optimization problems.
The main contributions of this study are as follows:
(1)
The novel integration of MOBL and L-BFGS into BKA, creating a more balanced and powerful optimizer;
(2)
A comprehensive empirical validation using standardized benchmarks, showing significant improvements in convergence speed and solution accuracy;
(3)
The demonstration of OQBKA’s effectiveness in solving complex, constrained real-world engineering problems.

2. Materials and Methods

2.1. Review of BKA

The black-winged kite exhibits distinctive migratory and predatory behaviors, characterized by long-distance movement and precise hovering during hunting. Drawing inspiration from these biological characteristics, Wang et al. [11] developed the BKA, which models these behaviors through three main computational components.
(1)
Population Initialization
In the BKA, the first step of population initialization is to generate a set of random solutions. The position of each black-winged kite in the population can be represented by the following matrix:
$$BK = \begin{bmatrix} BK_{1,1} & BK_{1,2} & \cdots & BK_{1,\dim} \\ BK_{2,1} & BK_{2,2} & \cdots & BK_{2,\dim} \\ \vdots & \vdots & \ddots & \vdots \\ BK_{pop,1} & BK_{pop,2} & \cdots & BK_{pop,\dim} \end{bmatrix}$$
where $pop$ denotes the population size, $\dim$ represents the problem dimension, and $BK_{i,j}$ indicates the position of the $i$-th black-winged kite in the $j$-th dimension. Each individual is initialized uniformly at random within the search bounds:
$$X_i = BK_{lb} + r \times (BK_{ub} - BK_{lb})$$
where $BK_{lb}$ and $BK_{ub}$ denote the lower and upper bounds of the search space and $r$ is a uniform random number in $[0, 1]$.
During the initialization process, the BKA selects the individual with the best fitness value, denoted as $X_L$, from the initial population as the leader. The mathematical expression is given as follows:
$$f_{best} = \min(f(X_i)),$$
$$X_L = X(\mathrm{find}(f_{best} == f(X_i)))$$
where $f_{best}$ denotes the optimal fitness value in the population, $X_i$ represents the $i$-th individual in the population, and $\mathrm{find}$ refers to the search operation used to locate individuals that meet specific conditions.
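As a concrete illustration, the initialization and leader-selection steps above can be sketched in Python. This is a minimal sketch; the function and variable names are ours, not taken from the reference implementation.

```python
import numpy as np

def initialize_population(pop, dim, lb, ub, fobj, rng):
    """Randomly initialize the BKA population within [lb, ub] and pick the leader."""
    r = rng.random((pop, dim))
    X = lb + r * (ub - lb)                 # X_i = BK_lb + r * (BK_ub - BK_lb)
    fitness = np.apply_along_axis(fobj, 1, X)
    leader = X[np.argmin(fitness)].copy()  # X_L: individual with the best fitness
    return X, fitness, leader
```

For example, on a sphere objective `lambda x: float(np.sum(x**2))`, the returned `leader` is the population member with the smallest fitness value.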
(2)
Attack Behavior
As predators, black-winged kites adjust the angles of their wings and tails according to wind speed, hovering in the air to observe their prey before swiftly diving at the appropriate moment to capture it. In the algorithm, the predation strategy is modeled as attack behaviors with distinct characteristics, corresponding to the global search and local search processes, respectively. The mathematical model can be expressed as follows:
$$y_{t+1}^{i,j} = \begin{cases} y_t^{i,j} + n \times (1 + \sin(r)) \times y_t^{i,j}, & \text{if } p < r \\ y_t^{i,j} + n \times (2r - 1) \times y_t^{i,j}, & \text{otherwise} \end{cases}$$
$$n = 0.05 \times e^{-2(t/T)^2}$$
where $y_t^{i,j}$ and $y_{t+1}^{i,j}$ represent the positions of the $i$-th black-winged kite in the $j$-th dimension at the $t$-th and $(t+1)$-th iterations, respectively; $n$ denotes a parameter that varies with the number of iterations and is used to adjust the step size; $p$ is a constant with a value of 0.9; $r$ is a uniform random number in $(0, 1)$; $t$ represents the current iteration number, and $T$ denotes the maximum number of iterations.
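The two attack branches and the decaying step-size schedule can be sketched as a vectorized update. This is an illustrative sketch under the assumption that $r$ is drawn independently per dimension and $p = 0.9$:

```python
import numpy as np

def attack_step(X, t, T, rng, p=0.9):
    """One attack-phase update of the whole population (hover vs. dive branches)."""
    n = 0.05 * np.exp(-2.0 * (t / T) ** 2)    # step size shrinks over iterations
    r = rng.random(X.shape)
    hover = X + n * (1.0 + np.sin(r)) * X     # global-search branch, taken when p < r
    dive = X + n * (2.0 * r - 1.0) * X        # local-search branch, otherwise
    return np.where(p < r, hover, dive)
```

Since $n \le 0.05$ and both multipliers are bounded, each component moves by at most roughly $0.1 \times |y_t^{i,j}|$ per step, which keeps the attack phase a mild perturbation around the current position.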
(3)
Migration Behavior
Bird migration is a complex collective behavior that is typically influenced by environmental factors such as climate change and food availability. Inspired by this phenomenon, the BKA designs a dynamic leader migration strategy. The basic idea is as follows: if the current leader’s fitness value is inferior to that of a randomly selected candidate individual, the leader relinquishes its leadership and joins the migrating population. Conversely, when the leader’s fitness value is superior to that of the random individual, it continues to guide the population toward the target region. This strategy allows for the dynamic selection of high-quality leaders, thereby ensuring successful migration. The mathematical model of the migration behavior is as follows:
$$y_{t+1}^{i,j} = \begin{cases} y_t^{i,j} + C(0,1) \times (y_t^{i,j} - L_t^j), & \text{if } F_i < F_{ri} \\ y_t^{i,j} + C(0,1) \times (L_t^j - m \times y_t^{i,j}), & \text{otherwise} \end{cases}$$
$$m = 2 \times \sin\left(r + \frac{\pi}{2}\right)$$
where $L_t^j$ denotes the position of the leading black-winged kite in the $j$-th dimension at the $t$-th iteration; $y_t^{i,j}$ and $y_{t+1}^{i,j}$ represent the positions of the $i$-th black-winged kite in the $j$-th dimension at the $t$-th and $(t+1)$-th iterations, respectively; $F_i$ denotes the fitness value of the $i$-th black-winged kite at the $t$-th iteration; $F_{ri}$ represents the fitness value of a randomly selected individual at the $t$-th iteration; and $C(0,1)$ is a random variable following the Cauchy distribution, whose probability density function is defined as follows:
$$f(x; \delta, \mu) = \frac{1}{\pi} \cdot \frac{\delta}{\delta^2 + (x - \mu)^2}, \quad -\infty < x < +\infty$$
where $x$ denotes the random variable, $\delta$ is the scale parameter that controls the width of the distribution, and $\mu$ represents the location parameter.
When $\delta = 1$ and $\mu = 0$, it simplifies to the standard Cauchy distribution:
$$f(x) = \frac{1}{\pi} \cdot \frac{1}{1 + x^2}, \quad -\infty < x < +\infty$$
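The migration update above can be sketched with NumPy's standard-Cauchy sampler supplying the $C(0,1)$ perturbation. This is a sketch under our assumptions (one Cauchy draw per dimension, and the random comparator drawn uniformly from the population); the names are ours:

```python
import numpy as np

def migration_step(X, fitness, leader, rng):
    """One migration-phase update: compare each kite's fitness with a random peer."""
    r = rng.random()
    m = 2.0 * np.sin(r + np.pi / 2.0)                    # migration factor
    C = rng.standard_cauchy(X.shape)                     # heavy-tailed C(0,1) perturbation
    f_rand = fitness[rng.integers(len(X), size=len(X))]  # fitness of random comparators
    better = (fitness < f_rand)[:, None]
    return np.where(better,
                    X + C * (X - leader),                # explore away from the leader
                    X + C * (leader - m * X))            # move toward the leader
```

The heavy tails of the Cauchy distribution occasionally produce large jumps, which is what gives this phase its ability to escape local optima.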

2.2. MOBL

Opposition-based learning (OBL) is an optimization strategy that expands the search range by generating opposite solutions based on the current ones, effectively enhancing population diversity and improving global search capability. In the case of BKA, its global search ability tends to decline in the later stages of the search process, leading to reduced population diversity and a higher risk of becoming trapped in local optima. Although the algorithm incorporates Cauchy mutation to increase perturbation amplitude, its effect is limited when dealing with high-dimensional multimodal functions, making it difficult to significantly improve search performance. Therefore, a stronger exploration mechanism needs to be introduced in the later stages of the algorithm to prevent premature convergence.
In this study, the MOBL method proposed by Yao et al. [22] is adopted. The core idea of this approach is to simulate the physical principle of convex lens imaging, enabling the opposite solutions to be uniformly distributed within the search space, thereby improving search efficiency. Specifically, for a current solution $X$, its opposite solution $X'$ is calculated as follows:
$$X' = \frac{ub + lb}{2} + \frac{ub + lb}{2q} - \frac{X}{q}$$
where $ub$ and $lb$ denote the upper and lower bounds of the search space, respectively; $q$ is the mirror factor, calculated as $q = 10 \times (1 - 2t/T)^2$; $T$ represents the total number of iterations, and $t$ is the current iteration number.
Compared with the standard OBL strategy, the MOBL strategy can generate higher-quality opposite solutions within a broader search space, effectively enhancing the global exploration capability of the population.
By introducing the MOBL strategy after the attacking behavior and migration behavior of the BKA, population diversity can be effectively improved, helping the algorithm escape from local optima. Meanwhile, the mirror factor in MOBL gradually decreases with the number of iterations, leading to a convergence in the amplitude of the generated opposite solutions. This facilitates more refined local search and convergence control in the later stages of iteration, thereby achieving a balanced trade-off between global exploration and local exploitation capabilities.
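A minimal sketch of the convex-lens opposite solution follows. We take the mirror factor as $q = 10 \times (1 - 2t/T)^2$ (our reading of the formula in Section 2.2) and handle out-of-bounds components by clipping to the search bounds; both choices are assumptions, not the authors' reference code:

```python
import numpy as np

def mobl_opposite(X, lb, ub, t, T):
    """Convex-lens-imaging opposite solution of X within [lb, ub]."""
    q = 10.0 * (1.0 - 2.0 * t / T) ** 2   # mirror factor (assumed schedule)
    mid = (ub + lb) / 2.0
    X_opp = mid + mid / q - X / q         # mirror X through the lens centre
    return np.clip(X_opp, lb, ub)         # project back into the search space
```

Note that the midpoint of the search space is a fixed point of this mapping, while points far from the centre are reflected to the opposite side with an amplitude controlled by $q$.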

2.3. L-BFGS

The Newton method is a classical unconstrained optimization approach with favorable convergence properties. Its core idea is to compute the Hessian matrix of the objective function to obtain a more accurate search direction. However, since each iteration requires the computation and inversion of the Hessian matrix, the computational cost is high, which limits its applicability to high-dimensional problems. To reduce computational complexity, the quasi-Newton method [26] was proposed. The quasi-Newton method iteratively constructs a symmetric positive definite matrix to approximate the inverse of the Hessian matrix of the objective function, thereby significantly reducing computational and storage overhead.
The Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm [27] is one of the most representative methods in the quasi-Newton family. In each iteration, it dynamically updates an approximation of the inverse Hessian matrix using gradient information to determine the descent direction. To further reduce storage and computational overhead, the limited-memory BFGS (L-BFGS) method retains only a limited number of historical information vectors and implicitly approximates the inverse Hessian matrix, making it particularly suitable for large-scale optimization problems. The updating formula for the inverse Hessian matrix is as follows:
$$H_{k+1} = (I - \rho_k s_k y_k^T) H_k (I - \rho_k y_k s_k^T) + \rho_k s_k s_k^T$$
where $H_k$ denotes the approximate inverse Hessian matrix at the $k$-th iteration, $s_k = x_{k+1} - x_k$ represents the position change vector, $y_k = \nabla f_{k+1} - \nabla f_k$ denotes the gradient change vector, and $\rho_k = 1 / (y_k^T s_k)$ is the scaling factor. This formula is used to update the approximation of the inverse Hessian matrix, thereby obtaining a new search direction and accelerating the convergence process.
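The update formula can be checked numerically: it preserves symmetry of $H_k$ and enforces the secant condition $H_{k+1} y_k = s_k$ exactly. A small NumPy sketch (names are ours):

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """One BFGS update of the approximate inverse Hessian H using s_k and y_k."""
    rho = 1.0 / (y @ s)                 # scaling factor rho_k = 1 / (y_k^T s_k)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)        # V = I - rho_k s_k y_k^T
    return V @ H @ V.T + rho * np.outer(s, s)
```

Verifying the secant condition on a concrete pair of vectors confirms the algebra: $V^T y = y - \rho\, y\,(s^T y) = 0$, so $H_{k+1} y = \rho\, s\,(s^T y) = s$.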
In the original version of the BKA, the leader update strategy retains only the individual with the best current fitness, which may cause the algorithm to become trapped in local optima, reduce population diversity, and eventually lead to premature convergence. To enhance the local exploitation capability and improve the solution quality, this study introduces the L-BFGS algorithm in the later stages of iteration to perform refined local searches on several high-quality individuals.
To prevent local search from interfering prematurely with global exploration, this strategy is activated only during the later stages of the iteration process. Once the triggering condition is met, the top $K$ individuals with the best fitness values in the current population are selected, and each of their current positions is used as an initial point for local refinement of the objective function with the L-BFGS algorithm. Since L-BFGS is a local optimization method with relatively high computational cost, it is not suitable to apply it to the entire population; in this work, L-BFGS is performed only on the top five individuals. This elitist strategy ensures that local refinement is conducted in promising regions of the search space, improving solution quality while significantly reducing computational overhead.
Experimental analysis indicates that refining more individuals can improve search accuracy but significantly increases computational cost and reduces efficiency; conversely, refining fewer individuals may lead to insufficient local exploitation, degrading solution precision. Therefore, $K$ is set to 5 to balance effectiveness and efficiency. Additionally, the maximum number of L-BFGS iterations is set to 20, and the function tolerance is fixed at $10^{-6}$. Boundary constraints are handled by projecting any infeasible solution back into the predefined search space. During the local search, if the optimized result outperforms the original individual, the individual's position and fitness value are updated with the new solution; otherwise, the original solution is retained. By incorporating this strategy, the algorithm maintains strong global exploration capability while performing deeper exploitation of high-quality solutions, effectively enhancing overall optimization accuracy and convergence performance.
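This elite-refinement step can be sketched with SciPy's bounded L-BFGS-B implementation, using the settings described above (at most 20 inner iterations, function tolerance $10^{-6}$, greedy acceptance). It is an approximation of the described procedure, not the authors' code; the function name and interface are ours:

```python
import numpy as np
from scipy.optimize import minimize

def refine_elites(X, fitness, fobj, lb, ub, K=5, maxiter=20):
    """Refine the top-K individuals with bounded L-BFGS-B; keep only improvements."""
    bounds = [(lb, ub)] * X.shape[1]
    elites = np.argsort(fitness)[:min(K, len(X))]
    for i in elites:
        res = minimize(fobj, X[i], method='L-BFGS-B', bounds=bounds,
                       options={'maxiter': maxiter, 'ftol': 1e-6})
        if res.fun < fitness[i]:          # greedy acceptance: keep the better one
            X[i], fitness[i] = res.x, res.fun
    return X, fitness
```

Because `L-BFGS-B` enforces box bounds natively, no separate projection step is needed in this sketch; on a smooth objective such as the sphere function, a handful of inner iterations already drives the elite fitness close to the optimum.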

2.4. OQBKA

The specific implementation steps of the proposed OQBKA are presented in Algorithm 1.
Algorithm 1: OQBKA
Input: maximum number of iterations $T$, population size $N$, lower and upper bounds of the search space $lb$ and $ub$, problem dimension $\dim$, and fitness function $f_{obj}$.
Output: best solution $X_{best}$ and its fitness $F_{best}$.
1. Initialize the population $X_i$ ($i = 1, 2, \ldots, N$) randomly within $[lb, ub]$.
2. Evaluate the fitness $F_i = f_{obj}(X_i)$ of each black-winged kite.
3. Set iteration counter $t = 1$.
4. while $t \le T$ do
5.   Sort the population according to fitness values.
6.   Set the best individual as $X_L$.
7.   /* Attacking behavior */
8.   for $i = 1$ to $N$ do
9.     Compute the attack factor $n$ using Equation (6).
10.    Generate a random number $r \in (0, 1)$, with $p = 0.9$.
11.    if $p < r$ then
12.      $X_i^{t+1} = X_i^t + n \times (1 + \sin(r)) \times X_i^t$
13.    else
14.      $X_i^{t+1} = X_i^t + n \times (2r - 1) \times X_i^t$
15.    Apply boundary control.
16.    Evaluate the new solution and keep the better one.
17.    /* Migration behavior */
18.    Compute the migration factor $m$ using Equation (8).
19.    if $F_i < F_{ri}$ then
20.      $X_i^{t+1} = X_i^t + C(0,1) \times (X_i^t - X_L)$
21.    else
22.      $X_i^{t+1} = X_i^t + C(0,1) \times (X_L - m \times X_i^t)$
23.    Apply boundary control.
24.    Evaluate the new solution and keep the better one.
25.    /* MOBL */
26.    Compute the mirror factor $q = 10 \times (1 - 2t/T)^2$.
27.    Compute the opposite solution $X'$ using Equation (11).
28.    Apply boundary control.
29.    Evaluate the new solution and keep the better one.
30.  end for
31.  /* L-BFGS */
32.  if $t \ge 0.7T$ then
33.    Select the top $K = \min(5, N)$ elite individuals.
34.    Apply L-BFGS local search to each elite solution.
35.    Replace the original solution if improvement is achieved.
36.  end if
37.  Update the global best solution $X_{best}$ and $F_{best}$.
38.  Set $t = t + 1$.
39. end while
40. return $X_{best}$ and $F_{best}$.
The source code of OQBKA (commit 46442f3) is publicly available on GitHub at https://github.com/saturnus120/OQBKA (accessed on 25 December 2025). Figure 1 shows the flowchart of OQBKA.
Although rigorous global convergence proofs for metaheuristic algorithms remain challenging, OQBKA enhances its convergence behavior through several key mechanisms. First, the algorithm inherits the elitism mechanism from BKA, ensuring that the best solution found so far is never lost during the iterative process. Second, the MOBL strategy strengthens global exploration and effectively mitigates premature convergence. Furthermore, the integration of the L-BFGS method in the late stage of optimization leverages its well-established local convergence guarantees under the assumption of objective function smoothness, significantly improving solution accuracy.
These improvements in convergence behavior could, however, increase computational cost. To verify that OQBKA remains efficient, we analyze its time complexity per iteration. Let $N$ denote the population size and $D$ the problem dimensionality. The main computational cost of OQBKA arises from the attack, migration, and mirror-based opposition learning operations during population evolution. In each generation, these operators update $N$ individuals of dimensionality $D$, resulting in a per-iteration complexity of $O(ND)$. The L-BFGS local search is applied only to the top five individuals during the final 30% of iterations, introducing an additional overhead of $O(TD)$ over the whole run (constant factors omitted). Consequently, the overall time complexity of OQBKA is $O(TND)$, which is of the same order as the original BKA and does not impose a significant computational burden.

3. Results

3.1. Experiment Setup

The experimental setup of this study consists of five parts:
(1)
Perform parameter sensitivity experiments on selected CEC2022 benchmark functions to determine optimal algorithm parameter settings.
(2)
Ablation experiments are conducted on the CEC2017 benchmark set to analyze the effectiveness of the introduced improvement strategies.
(3)
Comparative experiments are performed on the CEC2022 standard benchmark function set against other swarm intelligence algorithms to comprehensively evaluate the overall performance of the proposed improved algorithm.
(4)
Perform population diversity experiments on unimodal and multimodal functions to evaluate the algorithm’s exploration capability.
(5)
Engineering design experiments are conducted on three constrained optimization problems to evaluate the applicability and robustness of the proposed algorithm, where general nonlinear constraints are handled using a penalty-based objective function that penalizes infeasible solutions by adding a large constant to their fitness values, effectively guiding the search toward the feasible region.
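The penalty-based constraint handling described in item (5) can be sketched as a static-penalty wrapper around the objective. The interface below is hypothetical (we assume inequality constraints expressed as callables $g(x) \le 0$); the large constant added to infeasible solutions follows the description above:

```python
def penalized(fobj, constraints, penalty=1e10):
    """Wrap an objective with a static penalty for violated constraints.

    `constraints` is a list of callables representing g(x) <= 0
    (a hypothetical interface, for illustration only)."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        # Infeasible solutions receive a large constant penalty,
        # steering the search toward the feasible region.
        return fobj(x) + (penalty if violation > 0 else 0.0)
    return wrapped
```

Any of the population-update steps can then optimize `penalized(fobj, constraints)` in place of the raw objective without further modification.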

3.2. Parameter Sensitivity Analysis

Among the key parameters in OQBKA, the L-BFGS activation threshold $k$ (i.e., L-BFGS is triggered when $t \ge kT$) has a significant impact on the balance between exploration and exploitation. To determine its optimal value, we evaluate $k \in \{0.1, 0.4, 0.7, 0.9\}$ on selected CEC2022 benchmark functions (F1, F4, F7, F10) in 10 dimensions over 30 independent runs. These functions represent four distinct types of optimization landscapes: the unimodal F1, the multimodal F4, the hybrid F7, and the composition function F10, ensuring a comprehensive evaluation. As summarized in Table 2, $k = 0.7$ consistently achieves the best performance in terms of solution accuracy and convergence stability. Smaller values lead to premature convergence due to early local search, while larger values delay refinement and result in suboptimal final solutions. Therefore, $k = 0.7$ is adopted in the final algorithm.
In addition, the number of elite individuals refined by the L-BFGS local search is set to $K = 5$, as a larger value would lead to significantly higher computational overhead, while a smaller value may result in insufficient local exploitation and premature convergence. The mirror factor $q$ in the MOBL strategy adopts the original formulation $q = 10 \times (1 - 2t/T)^2$, which has been validated in the original study and confirmed effective by our preliminary experiments.

3.3. Ablation Experiments on CEC2017

To evaluate the effectiveness of each improvement strategy, ablation experiments are conducted on 29 benchmark functions from the CEC2017 test suite. Among these functions, F1–F3 are unimodal functions mainly used to test the convergence accuracy of the algorithm; F4–F10 are multimodal functions designed to assess the algorithm's global search capability in escaping local optima; F11–F20 are hybrid functions, and F21–F29 are composition functions with higher overall complexity, which are employed to comprehensively evaluate the algorithm's adaptability to complex optimization problems.
Based on the original BKA, two variants were developed by separately incorporating the L-BFGS and MOBL strategies. The algorithm integrated with the L-BFGS strategy is denoted as LBKA, while the one incorporating the MOBL strategy is denoted as OBKA. These two variants, along with the OQBKA and the original BKA, are compared in this experiment. By statistically analyzing the mean fitness values, standard deviations, and average rankings of each algorithm across all benchmark functions, the contribution of each strategy to the overall performance is evaluated.
All experiments were conducted strictly following the parameter settings recommended in the original papers of each algorithm. The maximum number of iterations was uniformly set to 200, and each experiment was independently run 30 times to ensure the reliability and statistical validity of the results. The statistical outcomes are summarized in Table 3, where bold values indicate the best results, and the convergence behavior of the algorithms is illustrated in Figure 2.
According to the experimental results, the following conclusions can be drawn:
(1)
For complex functions such as F5, F6, F8, F9, F11, F14, F17, and F18, OBKA performs significantly better than BKA, indicating that the incorporation of the MOBL strategy effectively enhances the algorithm’s global search capability, particularly when dealing with complex multimodal functions.
(2)
LBKA achieves a substantially better average ranking than BKA and obtains optimal or near-optimal results on functions F1, F3, F11, F12, F14, F17, and F18, demonstrating that the quasi-Newton strategy improves the algorithm’s local search efficiency and convergence accuracy, thereby strengthening its ability to locate high-quality solutions.
(3)
The OQBKA algorithm, which integrates both improvement strategies, demonstrates overall performance that far surpasses either variant using a single strategy. It achieves the best results on 20 benchmark functions, with an average ranking of 1.34, significantly outperforming OBKA, LBKA, and the original BKA. Specifically, OQBKA and LBKA exhibit comparable performance on the unimodal functions F1–F3, both clearly outperforming the other algorithms; OQBKA achieves the best performance on six of the seven multimodal benchmark functions F4–F10; it ranks first on seven of the ten hybrid functions F11–F20, showing a distinct advantage particularly on F14–F19; and it obtains the best results on seven of the nine composition functions F21–F29. These findings indicate that the integration of the two strategies produces a complementary effect: the MOBL strategy enhances the algorithm's ability to explore a broader solution space, while the L-BFGS strategy improves solution accuracy and convergence speed. As a result, OQBKA demonstrates superior optimization performance across various types of benchmark functions, particularly excelling in solving multimodal and complex optimization problems.
(4)
In terms of standard deviation, OQBKA exhibits strong stability across most functions. Notably, for F7, F15, and F25, the standard deviations are exceptionally small, indicating that the algorithm produces stable and reliable results.

3.4. Comparison Experiments on CEC2022

The CEC2022 test suite encompasses a variety of complex optimization problems and is widely used for standardized evaluation of swarm intelligence algorithms. The suite contains 12 functions, which can be categorized into four types based on their characteristics: F1–F2 are unimodal functions used to test the algorithm’s local search accuracy and convergence speed; F3–F5 are multimodal functions designed to assess the algorithm’s ability to escape local optima; F6–F8 are hybrid functions that combine multiple basic functions to increase problem complexity and evaluate the algorithm’s adaptability in non-uniform search spaces; F9–F12 are composition functions in which multiple functions with different characteristics are nested and integrated, providing a comprehensive assessment of the algorithm’s global search capability and robustness.
On the CEC2022 test suite, the proposed OQBKA algorithm is evaluated against the original BKA and several mainstream swarm intelligence algorithms. The comparison algorithms include four classical swarm intelligence algorithms: PSO, HHO, GWO, and SSA; three recently proposed algorithms: ALA, MSO, and WMA; and two improved variants of BKA: SCBKA and IBKA. In addition, to verify the statistical significance of the results, the performance of each algorithm on the test functions is analyzed using the Wilcoxon rank-sum test.
All algorithms are compared under a unified parameter configuration: a common population size $N$ and a maximum of $T = 200$ iterations. To ensure statistical reliability, each algorithm is independently executed 30 times on the 12 test functions, with the best fitness value recorded for each run and the global best results highlighted in bold. The experimental results are presented in Table 4 and Table 5, showing the performance of OQBKA compared with the ten benchmark algorithms on the CEC2022 test suite.
To comprehensively evaluate algorithm performance, a multi-dimensional evaluation metric system is employed, including:
(1)
Statistical significance test: The Wilcoxon rank-sum test (significance level α = 0.05) is used to evaluate differences in algorithm performance. In the results, the symbols “−”, “=”, and “+” indicate that a comparison algorithm is statistically significantly worse than, equivalent to, or significantly better than OQBKA, respectively.
(2)
Solution ranking: Algorithms are ranked based on their average fitness values on the test functions, with lower fitness values corresponding to higher ranks. When average fitness values are equal, the algorithm with the smaller standard deviation receives a higher rank, providing a comprehensive reflection of solution quality and algorithm stability. Moreover, the Wilcoxon test is not involved in the ranking process. It is used solely for post hoc analysis to assess whether the observed performance differences are statistically significant.
(3)
Convergence efficiency analysis: The convergence curves are used to compare the dynamic optimization capabilities of the algorithms throughout the iteration process.
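As an illustration of the ranking rule in metric (2), the following sketch ranks algorithms by mean fitness, breaking ties on the mean with the smaller standard deviation; the algorithm names and numerical values are hypothetical and serve only to show the ordering logic:

```python
def rank_algorithms(results):
    """Rank by mean fitness (ascending); ties on the mean are broken by
    the smaller standard deviation. The Wilcoxon rank-sum test plays no
    part here -- it is applied only post hoc to the ranked results."""
    ordered = sorted(results.items(), key=lambda kv: (kv[1][0], kv[1][1]))
    return {name: pos for pos, (name, _) in enumerate(ordered, start=1)}

# Hypothetical (mean, std) pairs on a single test function
results = {
    "A": (3.00e2, 1.5e-11),
    "B": (3.00e2, 2.9e2),   # same mean as A, larger std -> ranked below A
    "C": (1.40e3, 2.4e3),
    "D": (5.07e3, 9.9e2),
}
ranks = rank_algorithms(results)
```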
As shown in Table 4, which presents the experimental results on the 10-dimensional test functions, OQBKA achieves the best rankings on eight functions: F1, F2, F6, F8, F9, F10, F11, and F12, with an overall average ranking of 2.5. Although it does not obtain the best fitness value on F10, the Wilcoxon rank-sum test indicates that the difference between OQBKA and the best-performing algorithm is not statistically significant. On F1, F2, and F11, OQBKA successfully converges to the theoretical global optimum, demonstrating its precise search capability in complex search spaces. Convergence curve analysis shows that OQBKA exhibits the fastest convergence speed on all functions except F3–F5. In particular, for functions F6–F12, the slopes of OQBKA’s convergence curves are significantly steeper than those of the comparison algorithms, indicating that the integrated strategies effectively accelerate the population’s convergence toward the optimal regions. To further illustrate the convergence behavior, Figure 3 shows the detailed convergence curves of OQBKA. The algorithm exhibits a distinct “two-stage decline” pattern: during the early iterations, the MOBL strategy enables it to rapidly reduce the fitness value, achieving efficient global exploration; in the middle and later stages, as the mirror factor gradually converges and triggers the L-BFGS local search, OQBKA maintains a slow yet steady downward trend, demonstrating strong local exploitation and continuous convergence capability. This pattern indicates that OQBKA achieves a good dynamic balance between exploration and exploitation.
Table 5 presents the experimental results of different algorithms on 20-dimensional optimization problems. The results show that OQBKA achieves the best rankings on eight test functions. Although it performs slightly worse than PSO on F7, its best fitness value is close to that of PSO, remaining highly competitive. Furthermore, as the problem dimensionality increases, OQBKA maintains a leading advantage, particularly demonstrating excellent optimization capability on unimodal functions and complex composition functions.
In terms of stability, OQBKA achieves the lowest standard deviation on more than half of the test functions, indicating that its optimization results are more consistent and can maintain high reproducibility across multiple independent runs. As shown by the convergence curves in Figure 4, OQBKA demonstrates faster convergence on all test functions except F3–F5, with slopes significantly steeper than those of the comparison algorithms, further confirming its efficient global search and local exploitation capabilities.
Moreover, the convergence curves of OQBKA exhibit a distinct “two-stage descent” pattern: in the early iterations, the MOBL strategy enables the algorithm to rapidly reduce the fitness value, achieving efficient global exploration; in the middle-to-late stages, as the mirror factor gradually converges and triggers the L-BFGS local search, the algorithm maintains a slow yet steady descent, demonstrating strong local exploitation and sustained convergence capability. This behavior indicates that OQBKA achieves a well-balanced dynamic trade-off between exploration and exploitation.
In terms of statistical significance analysis, the Wilcoxon rank-sum test results indicate that OQBKA demonstrates a significant advantage over the other algorithms on most test functions. In particular, for functions F1, F2, F6, and F8–F12, OQBKA performs significantly better than the comparison algorithms, clearly highlighting its superior capability in handling high-dimensional complex optimization problems.

3.5. Search Dynamics Visualization

To quantitatively analyze the impact of the proposed enhancement strategies on population diversity and convergence behavior, comparative experiments are conducted on representative unimodal and multimodal benchmark functions. The diversity curves and convergence trajectories of BKA and OQBKA are presented for analysis.
As shown in Figure 5 and Figure 6, OQBKA consistently maintains significantly higher population diversity than BKA throughout the optimization process. The diversity metric adopted in this study, denoted as D c ( t ) , is defined as the average Euclidean distance among individuals in the population and is used to characterize the spatial distribution of the population. This metric remains at a relatively high level during the middle and late stages of OQBKA, indicating that the incorporation of the MOBL strategy effectively suppresses premature convergence and preserves the global exploration capability of the algorithm.
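A minimal sketch of this diversity metric, assuming the population is stored as a list of equal-length real-valued vectors:

```python
import math
from itertools import combinations

def population_diversity(population):
    """D_c(t): average pairwise Euclidean distance among the individuals,
    characterizing the spatial spread of the population at iteration t."""
    pairs = list(combinations(population, 2))
    if not pairs:                     # a single individual has no spread
        return 0.0
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

# A tightly clustered population yields a small D_c(t) ...
clustered = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]
# ... while a well-spread population yields a large one.
spread = [[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]]
```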
In contrast, the population diversity of BKA decreases rapidly after approximately 50 iterations, revealing an evident early stagnation phenomenon. This issue becomes more pronounced in the multimodal case, as shown in Figure 6, where the diversity of BKA collapses sharply around iteration 150, whereas OQBKA is able to maintain a relatively stable diversity level until convergence.

The corresponding convergence curves, as shown in Figure 7 and Figure 8, further demonstrate that higher population diversity can be translated into superior solution quality and more stable convergence behavior. OQBKA exhibits a smoother convergence process and achieves lower final fitness values than BKA on both types of functions. In particular, in the multimodal scenario, BKA tends to be trapped in local optima, while OQBKA continues to improve and ultimately approaches the global optimum.
Figure 5. Population diversity curve on unimodal functions.
Figure 6. Population diversity curve on multimodal functions.
Figure 7. Convergence curve on unimodal functions.
Figure 8. Convergence curve on multimodal functions.

3.6. Engineering Design Problems

For general nonlinear constraints in engineering problems, a penalty-based objective function is adopted: infeasible solutions are penalized by adding a large constant to their fitness value, effectively guiding the search toward the feasible region.
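A sketch of this penalty scheme; the penalty constant and the equality tolerance below are illustrative assumptions, not values taken from the paper:

```python
PENALTY = 1e10   # large constant added to infeasible solutions (assumed value)
EQ_TOL = 1e-4    # tolerance for treating an equality constraint as satisfied

def penalized_fitness(objective, x, ineq_constraints=(), eq_constraints=()):
    """Return f(x) plus a large constant if any constraint g(x) <= 0
    or h(x) = 0 (within EQ_TOL) is violated, steering the search
    toward the feasible region."""
    violated = any(g(x) > 0 for g in ineq_constraints) or \
               any(abs(h(x)) > EQ_TOL for h in eq_constraints)
    return objective(x) + (PENALTY if violated else 0.0)

# Toy example: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
f = lambda x: x * x
g = lambda x: 1.0 - x
```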
(1) Step-Cone Pulley Problem
A step-cone pulley is a stepped conical structure composed of a series of pulleys of different diameters; such pulleys are used in pairs to change the speed ratio between shafts, with power transmitted from one shaft to another distant shaft by a belt or rope running over the pulleys. The primary objective of this problem is to minimize the weight of a four-step cone pulley using five design variables: four corresponding to the diameters of the steps and a fifth representing the pulley width. The problem includes 11 nonlinear constraints to ensure that the transmitted power is at least 0.75 hp. The mathematical formulation of the problem is defined as follows:
W = (π/4) ρ w [ d_1^2 (1 + (N_1/N)^2) + d_2^2 (1 + (N_2/N)^2) + d_3^2 (1 + (N_3/N)^2) + d_4^2 (1 + (N_4/N)^2) ]
where W denotes the weight of the four-step cone pulley; ρ = 7200 kg/m^3 denotes the material density; w denotes the radial width of the pulley, with range 16 mm ≤ w ≤ 100 mm; d_i denotes the diameter of the i-th pulley, with range 40 mm ≤ d_i ≤ 100 mm; N is the input rotational speed; and N_i is the output rotational speed of the i-th pulley.
The constraints are as follows:
h_1(x) = c_1 − c_2 = 0
h_2(x) = c_1 − c_3 = 0
h_3(x) = c_1 − c_4 = 0
g_{1,2,3,4}(x) = R_i − 2 ≥ 0, i = 1, …, 4
g_{5,6,7,8}(x) = P_i − 0.75 × 745.6998 ≥ 0, i = 1, …, 4
c_i = (π d_i / 2)(1 + N_i/N) + ((N_i/N − 1)^2 d_i^2)/(4a) + 2a, i = 1, …, 4
P_i = s t w [1 − exp(−μ(π − 2 sin^{−1}((N_i/N − 1) d_i/(2a))))] (π d_i N_i / 60), i = 1, …, 4
where h_1(x), h_2(x), h_3(x) represent the nonlinear equality constraints; g_i(x) represents the nonlinear inequality constraints; c_i is the belt length of the i-th pulley; R_i is the tension ratio on the i-th pulley; P_i is the power transmitted to the i-th pulley; a is the center distance of the pulleys, representing the distance between the centers of the two pulleys, with a value of 3 m; s is the permissible material strength, with a value of 1.75 MPa; t is the axial thickness of the pulley, with a value of 8 mm; and μ is the coefficient of dynamic friction, with a value of 0.35.
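As a sketch, the weight objective W can be evaluated as follows; the diameters, width, and speed ratios N_i/N used at the bottom are illustrative values only, and the constraint machinery is omitted for brevity:

```python
import math

RHO = 7200.0  # material density rho, kg/m^3

def pulley_weight(diameters, width, speed_ratios):
    """W = (pi/4) * rho * w * sum_i d_i^2 * (1 + (N_i/N)^2),
    with all lengths in metres."""
    return (math.pi / 4.0) * RHO * width * sum(
        d**2 * (1.0 + r**2) for d, r in zip(diameters, speed_ratios)
    )

# Illustrative point: four 60 mm steps, 20 mm width,
# and assumed output/input speed ratios N_i/N
d = [0.06, 0.06, 0.06, 0.06]
ratios = [0.75, 1.0, 1.5, 2.0]
W = pulley_weight(d, 0.02, ratios)
```

Doubling the pulley width doubles the weight, since W is linear in w.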
Table 6 and Figure 9 present the performance of 11 algorithms in solving the step-cone pulley problem. Each algorithm was independently executed 20 times, and the best value, worst value, standard deviation, and mean value were recorded. It can be observed that the OQBKA algorithm achieved the best results among all algorithms across the 20 independent runs, significantly outperforming the other compared methods. The extremely small standard deviation indicates that OQBKA exhibits high stability with minimal result fluctuations across multiple runs. Moreover, even its worst result remains competitive, fully demonstrating the superior optimization accuracy, stability, and robustness of the OQBKA algorithm.
Figure 9. Performance on step-cone pulley problem.
(2) Corrugated Bulkhead Design
The corrugated bulkhead design problem involves four design variables, denoted as x 1 , x 2 , x 3 and x 4 . The optimization objective is to minimize the weight of the corrugated bulkhead of the tanker. The mathematical model for this problem is as follows:
Objective function:
f(x) = 5.885 x_4 (x_1 + x_3) / (x_1 + (x_3^2 − x_2^2)^{1/2})
The constraints are as follows:
g_1(x) = x_2 x_4 (0.4 x_1 + (1/6) x_3) − 8.94 (x_1 + (x_3^2 − x_2^2)^{1/2}) ≥ 0
g_2(x) = x_2^2 x_4 (0.2 x_1 + (1/12) x_3) − 2.2 (8.94 (x_1 + (x_3^2 − x_2^2)^{1/2}))^{4/3} ≥ 0
g_3(x) = x_4 − 0.0156 x_1 − 0.15 ≥ 0
g_4(x) = x_4 − 0.0156 x_3 − 0.15 ≥ 0
g_5(x) = x_4 − 1.05 ≥ 0
g_6(x) = x_3 − x_2 ≥ 0
where x 1 , x 2 , x 3 , x 4 represent the width, depth, length, and thickness of the corrugated bulkhead plate, respectively; g i x represents the nonlinear inequality constraints.
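A sketch of this model in code; the signs and the |x_3^2 − x_2^2| form follow the formulation of this benchmark that is common in the literature, and the sample point is an assumed near-optimal design, not a result reported in this paper:

```python
import math

def bulkhead_weight(x):
    """x = (width x1, depth x2, length x3, thickness x4).
    f = 5.885 * x4 * (x1 + x3) / (x1 + sqrt(|x3^2 - x2^2|))."""
    x1, x2, x3, x4 = x
    return 5.885 * x4 * (x1 + x3) / (x1 + math.sqrt(abs(x3**2 - x2**2)))

def thickness_feasible(x):
    """Plate-thickness constraints g3-g5: x4 >= 0.0156*x1 + 0.15,
    x4 >= 0.0156*x3 + 0.15, and x4 >= 1.05."""
    x1, _, x3, x4 = x
    return (x4 >= 0.0156 * x1 + 0.15
            and x4 >= 0.0156 * x3 + 0.15
            and x4 >= 1.05)

# Assumed near-optimal design often quoted in the literature for this benchmark
x_star = (57.692, 34.147, 57.692, 1.05)
```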
Table 7 and Figure 10 present the experimental results. From the results, it can be observed that OQBKA consistently achieved excellent optimal values and relatively low worst values across multiple independent runs, with a very small standard deviation, demonstrating good optimization accuracy, stability, and robustness.
Figure 10. Performance on corrugated bulkhead design problem.
(3) Reactor Network Design
The optimization of a two-stage continuous stirred-tank reactor (CSTR) system aims to maximize the concentration of substance B in the second reactor by adjusting the reactor parameters. As illustrated in Figure 11, species A is fed into the first reactor, where it is sequentially converted into intermediate B and final product C. The outlet streams from each reactor contain mixtures of species A, B, and C, as indicated by the labels on the connecting arrows. The mathematical model of the system can be formulated as follows:
Objective function:
f(x̄) = −x_4
The constraints are as follows:
h_1(x̄) = k_1 x_5 x_2 + x_1 − 1 = 0
h_2(x̄) = k_3 x_5 x_3 + x_3 + x_1 − 1 = 0
h_3(x̄) = k_2 x_6 x_2 − x_1 + x_2 = 0
h_4(x̄) = k_4 x_6 x_4 + x_2 − x_1 + x_4 − x_3 = 0
g_1(x̄) = x_5^{1/2} + x_6^{1/2} − 4 ≤ 0
where h i ( x ¯ ) denotes nonlinear equality constraint functions and g i ( x ¯ ) denotes nonlinear inequality constraint functions. x 1 and x 2 represent the concentrations of substances A and B in the first vessel, respectively, with ranges 0 x 1 , x 2 1 . x 3 and x 4 represent the concentrations of substances A and B in the second vessel, respectively, with ranges 0 x 3 , x 4 1 . x 5 and x 6 denote the volumes of the first and second reactor vessels, respectively, with ranges 0.00001 x 5 , x 6 16 . k 1 and k 2 denote the rate constants for the conversion of substance A to substance B in the first and second reactor vessels, respectively, with values k 1 = 0.09755988 and k 2 = 0.99 k 1 . k 3 and k 4 are additional rate constants in the first and second reactor vessels, respectively, with values k 3 = 0.0397908 and k 4 = 0.9 k 3 .
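The constraint system can be evaluated directly, following the standard literature form of this benchmark. The sketch below checks a degenerate zero-volume point only to sanity-check the algebra; that point sits below the lower bound on x_5 and x_6, so it is not a valid design:

```python
def reactor_constraints(x, k1=0.09755988, k3=0.0397908):
    """Evaluate h1-h4 and g1 at x = (x1, ..., x6); k2 and k4 are
    derived as k2 = 0.99*k1 and k4 = 0.9*k3. A feasible point has
    every h_i equal to 0 and g1 <= 0."""
    x1, x2, x3, x4, x5, x6 = x
    k2, k4 = 0.99 * k1, 0.9 * k3
    h = (
        k1 * x5 * x2 + x1 - 1.0,            # h1
        k3 * x5 * x3 + x3 + x1 - 1.0,       # h2
        k2 * x6 * x2 - x1 + x2,             # h3
        k4 * x6 * x4 + x2 - x1 + x4 - x3,   # h4
    )
    g1 = x5**0.5 + x6**0.5 - 4.0
    return h, g1

def objective(x):
    """Maximizing x4 is cast as minimizing -x4."""
    return -x[3]
```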
As shown in Table 8, all listed values correspond to the objective function value −x_4. Since the original goal is to maximize x_4, this is equivalent to minimizing −x_4. From the table, it can be observed that the OQBKA algorithm not only achieves the objective function value closest to the theoretical optimum but also outperforms other algorithms in terms of mean and standard deviation, demonstrating excellent stability. As shown in Figure 12, certain algorithms such as HHO, MSO, and WMA yield results that deviate significantly from the theoretical optimal range, suggesting that these methods may have failed to strictly satisfy all feasibility constraints during the optimization process, resulting in solutions that violate the mathematical model definitions. In contrast, the OQBKA algorithm consistently produces output values within a reasonable negative range throughout the optimization process, indicating stronger constraint-handling capability and higher solution reliability.
Figure 12. Performance on reactor network design problem. Red values with wavy lines and arrows indicate abnormally high objective values due to algorithmic divergence or poor performance.

4. Conclusions

This study proposes an improved BKA, named OQBKA, which integrates opposition-based learning and a quasi-Newton strategy. By incorporating the convex lens opposition-based mechanism and the L-BFGS local refinement process, OQBKA effectively enhances the global exploration and local exploitation capabilities of the original BKA. Experimental results on the CEC2022 benchmark functions demonstrate that OQBKA achieves superior performance in both convergence accuracy and speed. Moreover, through engineering applications such as the step-cone pulley optimization, corrugated bulkhead design, and reactor network design problems, OQBKA exhibits strong stability and high efficiency in solving various constrained optimization problems. These findings confirm its potential for addressing complex optimization tasks. Future research may further investigate the optimization and extended applications of OQBKA in higher-dimensional and more complex problem domains.

Author Contributions

Conceptualization, N.Z.; Methodology, N.Z. and T.W.; Software, N.Z.; Validation, Y.Z.; Resources, T.W.; Data curation, T.W. and Y.Z.; Writing—original draft, N.Z.; Writing—review and editing, N.Z. and T.W.; Supervision, T.W.; Project administration, T.W.; Funding acquisition, T.W. and N.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the Key Program of Jiangxi Provincial Natural Science Foundation (20242BAB26024) and Jiangxi Provincial Postgraduate Innovation Specialty Fund Program (YC2024-S815).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon request. There are no restrictions on data availability.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Chakraborty, A.; Kar, A.K. Swarm intelligence: A review of algorithms. In Nature-Inspired Computing and Optimization; Springer: Cham, Switzerland, 2017; pp. 475–494.
  2. El-Kenawy, E.S.M.; Mirjalili, S.; Alassery, F.; Zhang, Y.D.; Eid, M.M.; El-Mashad, S.Y.; Ghasemi, M.; Deriche, M.; Trojovský, P.; Mansor, Z.; et al. Novel meta-heuristic algorithm for feature selection, unconstrained functions and engineering problems. IEEE Access 2022, 10, 40536–40555.
  3. Nguyen, B.H.; Xue, B.; Zhang, M. A survey on swarm intelligence approaches to feature selection in data mining. Swarm Evol. Comput. 2020, 54, 100663.
  4. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
  5. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
  6. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  7. Gharehchopogh, F.S.; Namazi, M.; Ebrahimi, L.; Abdollahzadeh, B. Advances in sparrow search algorithm: A comprehensive survey. Arch. Comput. Methods Eng. 2023, 30, 427–455.
  8. Xiao, Y.; Cui, H.; Khurma, R.A.; Castillo, P.A. Artificial lemming algorithm: A novel bionic meta-heuristic technique for solving real-world engineering optimization problems. Artif. Intell. Rev. 2025, 58, 84.
  9. He, J.; Zhao, S.; Ding, J.; Wang, Y. Mirage search optimization: Application to path planning and engineering design problems. Adv. Eng. Softw. 2025, 203, 103883.
  10. Ghasemi, M.; Deriche, M.; Trojovský, P.; Mansor, Z.; Zare, M.; Trojovská, E.; Abualigah, L.; Ezugwu, A.E.; Kadkhoda Mohammadi, S. An efficient bio-inspired algorithm based on humpback whale migration for constrained engineering optimization. Results Eng. 2025, 25, 104215.
  11. Wang, J.; Wang, W.C.; Hu, X.X.; Qiu, L.; Zang, H.F. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 57, 98.
  12. Dai, F.; Ma, T.; Gao, S. Optimal design of a fractional order PIDD2 controller for an AVR system using hybrid black-winged kite algorithm. Electronics 2025, 14, 2315.
  13. Zhang, Z.; Wang, X.; Yue, Y. Heuristic optimization algorithm of black-winged kite fused with osprey and its engineering application. Biomimetics 2024, 9, 595.
  14. Zhou, Y.; Wu, X.; Liu, Y.; Jiang, X. BKA optimization algorithm based on sine-cosine guidelines. In Proceedings of the 2024 4th International Symposium on Computer Technology and Information Science (ISCTIS), Dalian, China, 20–22 April 2024; IEEE: Piscataway, NJ, USA; pp. 480–484.
  15. Zhang, S.; Fu, Z.; An, D.; Yi, H. Network security situation assessment based on BKA and cross dual-channel. J. Supercomput. 2025, 81, 461.
  16. Zhu, X.; Zhang, J.; Jia, C.; Liu, Y.; Fu, M. A hybrid black-winged kite algorithm with PSO and differential mutation for superior global optimization and engineering applications. Biomimetics 2025, 10, 236.
  17. Li, Y.; Shi, B.; Qiao, W.; Du, Z. A black-winged kite optimization algorithm enhanced by osprey optimization and vertical and horizontal crossover improvement. Sci. Rep. 2025, 15, 6737.
  18. Zhang, S.; Zheng, S.; Zhang, X.; Li, Y. A hybrid optimization approach combining PIO and L-BFGS for phase diversity image restoration. In Proceedings of the 2024 6th International Conference on Frontier Technologies of Information and Computer (ICFTIC), Qingdao, China, 13–15 December 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 587–590.
  19. Liu, D.C.; Nocedal, J. On the limited memory BFGS method for large scale optimization. Math. Program. 1989, 45, 503–528.
  20. Cuevas, E.; Barba, O.; Escobar, H. A novel cheetah optimizer hybrid approach based on opposition-based learning and diversity metrics. Computing 2025, 107, 62.
  21. Zitouni, F.; Harous, S.; Almazyad, A.; Ali, M.; Xiong, G.; Khechiba, F.; Kherchouche, K. BHJO: A novel hybrid metaheuristic algorithm combining the Beluga Whale, Honey Badger, and Jellyfish Search optimizers for solving engineering design problems. Comput. Model. Eng. Sci. 2024, 141, 219–247.
  22. Yao, L.; Yuan, P.; Tsai, C.Y.; Zhang, T.; Lu, Y.; Ding, S. ESO: An enhanced snake optimizer for real-world engineering problems. Expert Syst. Appl. 2023, 230, 120594.
  23. Liu, J.; Li, W.; Li, Y. LWMEO: An efficient equilibrium optimizer for complex functions and engineering design problems. Expert Syst. Appl. 2022, 198, 116828.
  24. Bayzidi, H.; Talatahari, S.; Saraee, M.; Lamarche, C.P. Social network search for solving engineering optimization problems. Comput. Intell. Neurosci. 2021, 2021, 8548639.
  25. Kumar, A.; Wu, G.; Ali, M.Z.; Mallipeddi, R.; Suganthan, P.N.; Das, S. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 2020, 56, 100693.
  26. Nocedal, J.; Wright, S.J. Quasi-Newton methods. In Numerical Optimization; Springer: New York, NY, USA, 2006; pp. 135–163.
  27. Head, J.D.; Zerner, M.C. A Broyden–Fletcher–Goldfarb–Shanno optimization procedure for molecular geometries. Chem. Phys. Lett. 1985, 122, 264–270.
Figure 1. Algorithm flowchart.
Figure 2. Convergence curves of CEC2017 test functions.
Figure 3. Convergence curves of CEC2022 test functions (dim = 10).
Figure 4. Convergence curves of CEC2022 test functions (dim = 20).
Figure 11. Schematic of a two-stage CSTR system in series. Letters above the arrows indicate stream composition: A (reactant), B (intermediate), C (final product).
Table 1. Comparative overview of BKA variants and their key design features.
Algorithm | Primary Enhancement Strategy | Role of OBL | Hybridization Framework | Core Characteristics
BWOA | WOA-based hybrid strategy with Lévy flight | None | Cross-algorithm hybrid | Avoids premature convergence and improves solution accuracy
OCBKA | OOA-based hybrid strategy with chaotic initialization | None | Cross-algorithm hybrid | Enhances global search capability
SCBKA | Sine–cosine guided search | None | Single-strategy enhancement | Improves convergence speed
IBKA | Gaussian mutation and oscillation with OBL | Escape from local optima | Multi-operator hybrid | Improves solution accuracy
BKAPI | PSO hybridization with DE mutation | None | Cross-algorithm hybrid | Avoids premature convergence
OQBKA | MOBL with L-BFGS refinement | Escape from local optima and maintain population diversity in the later stages of iteration | Multi-operator hybrid | Balances exploration and exploitation and improves solution accuracy
Table 2. Sensitivity analysis of the L-BFGS activation threshold k in OQBKA.
F(x) | k | Mean | Std
F1 | 0.1 | 3.00 × 10^2 | 5.56 × 10^-9
F1 | 0.4 | 3.00 × 10^2 | 2.36 × 10^-9
F1 | 0.7 | 3.00 × 10^2 | 2.24 × 10^-9
F1 | 0.9 | 3.00 × 10^2 | 2.31 × 10^-9
F5 | 0.1 | 9.71 × 10^2 | 6.42 × 10^1
F5 | 0.4 | 9.37 × 10^2 | 1.96 × 10^1
F5 | 0.7 | 9.24 × 10^2 | 1.56 × 10^1
F5 | 0.9 | 9.40 × 10^2 | 7.83 × 10^1
F8 | 0.1 | 2.23 × 10^3 | 2.57 × 10^0
F8 | 0.4 | 2.23 × 10^3 | 4.19 × 10^0
F8 | 0.7 | 2.23 × 10^3 | 2.32 × 10^0
F8 | 0.9 | 2.23 × 10^3 | 2.34 × 10^0
F11 | 0.1 | 2.62 × 10^3 | 4.70 × 10^1
F11 | 0.4 | 2.62 × 10^3 | 5.85 × 10^1
F11 | 0.7 | 2.60 × 10^3 | 1.74 × 10^0
F11 | 0.9 | 2.61 × 10^3 | 2.74 × 10^1
Bold values indicate the best performance in each column.
Table 3. Test results of different improved strategies.
F(x) | | BKA | OBKA | LBKA | OQBKA
F1 | Mean | 7.94 × 10^8 | 4.07 × 10^7 | 1.00 × 10^2 | 1.00 × 10^2
F1 | Std | 2.42 × 10^8 | 1.70 × 10^7 | 2.75 × 10^-7 | 1.97 × 10^-4
F1 | Rank | 4 | 3 | 1 | 2
F2 | Mean | 1.20 × 10^3 | 5.22 × 10^3 | 3.00 × 10^2 | 3.00 × 10^2
F2 | Std | 1.24 × 10^3 | 5.60 × 10^0 | 1.05 × 10^-11 | 8.80 × 10^-12
F2 | Rank | 3 | 4 | 2 | 1
F3 | Mean | 4.97 × 10^2 | 4.19 × 10^2 | 4.00 × 10^2 | 4.00 × 10^2
F3 | Std | 1.28 × 10^2 | 1.31 × 10^1 | 1.27 × 10^-13 | 3.94 × 10^-12
F3 | Rank | 4 | 3 | 1 | 2
F4 | Mean | 5.26 × 10^2 | 5.29 × 10^2 | 5.51 × 10^2 | 5.28 × 10^2
F4 | Std | 5.58 × 10^0 | 1.82 × 10^0 | 2.88 × 10^1 | 2.04 × 10^1
F4 | Rank | 1 | 3 | 4 | 2
F5 | Mean | 6.33 × 10^2 | 6.21 × 10^2 | 6.38 × 10^2 | 6.06 × 10^2
F5 | Std | 1.03 × 10^0 | 3.95 × 10^0 | 2.82 × 10^0 | 1.39 × 10^0
F5 | Rank | 3 | 2 | 4 | 1
F6 | Mean | 7.65 × 10^2 | 7.51 × 10^2 | 7.80 × 10^2 | 7.39 × 10^2
F6 | Std | 7.01 × 10^0 | 1.90 × 10^1 | 7.37 × 10^0 | 8.62 × 10^0
F6 | Rank | 3 | 2 | 4 | 1
F7 | Mean | 8.24 × 10^2 | 8.32 × 10^2 | 8.31 × 10^2 | 8.21 × 10^2
F7 | Std | 6.84 × 10^-1 | 8.39 × 10^0 | 9.15 × 10^0 | 1.43 × 10^-5
F7 | Rank | 2 | 4 | 3 | 1
F8 | Mean | 1.13 × 10^3 | 1.08 × 10^3 | 1.30 × 10^3 | 9.35 × 10^2
F8 | Std | 1.30 × 10^1 | 4.00 × 10^1 | 6.92 × 10^1 | 4.14 × 10^1
F8 | Rank | 3 | 2 | 4 | 1
F9 | Mean | 1.83 × 10^3 | 1.78 × 10^3 | 1.98 × 10^3 | 1.42 × 10^3
F9 | Std | 3.26 × 10^2 | 2.00 × 10^1 | 1.77 × 10^1 | 3.76 × 10^2
F9 | Rank | 3 | 2 | 4 | 1
F10 | Mean | 1.15 × 10^3 | 1.16 × 10^3 | 1.19 × 10^3 | 1.11 × 10^3
F10 | Std | 1.37 × 10^1 | 3.16 × 10^0 | 8.37 × 10^1 | 8.44 × 10^0
F10 | Rank | 2 | 3 | 4 | 1
F11 | Mean | 1.24 × 10^6 | 2.65 × 10^5 | 1.38 × 10^3 | 1.42 × 10^3
F11 | Std | 1.72 × 10^6 | 1.16 × 10^5 | 7.60 × 10^1 | 1.45 × 10^2
F11 | Rank | 4 | 3 | 1 | 2
F12 | Mean | 4.10 × 10^3 | 7.18 × 10^3 | 1.49 × 10^3 | 1.52 × 10^3
F12 | Std | 2.68 × 10^3 | 1.50 × 10^3 | 1.51 × 10^2 | 1.70 × 10^1
F12 | Rank | 3 | 4 | 1 | 2
F13 | Mean | 1.48 × 10^3 | 1.49 × 10^3 | 1.46 × 10^3 | 1.45 × 10^3
F13 | Std | 4.27 × 10^1 | 8.90 × 10^0 | 2.78 × 10^1 | 6.81 × 10^0
F13 | Rank | 3 | 4 | 2 | 1
F14 | Mean | 4.58 × 10^3 | 2.30 × 10^3 | 1.62 × 10^3 | 1.55 × 10^3
F14 | Std | 4.23 × 10^3 | 2.05 × 10^2 | 1.57 × 10^2 | 1.63 × 10^1
F14 | Rank | 4 | 3 | 2 | 1
F15 | Mean | 1.80 × 10^3 | 1.65 × 10^3 | 1.67 × 10^3 | 1.60 × 10^3
F15 | Std | 7.11 × 10^1 | 7.77 × 10^-1 | 7.63 × 10^1 | 3.79 × 10^-3
F15 | Rank | 4 | 2 | 3 | 1
F16 | Mean | 1.77 × 10^3 | 1.75 × 10^3 | 1.76 × 10^3 | 1.75 × 10^3
F16 | Std | 2.09 × 10^1 | 2.66 × 10^0 | 2.99 × 10^1 | 9.15 × 10^0
F16 | Rank | 4 | 1 | 3 | 2
F17 | Mean | 3.12 × 10^3 | 1.07 × 10^4 | 1.94 × 10^3 | 1.91 × 10^3
F17 | Std | 1.40 × 10^3 | 1.34 × 10^3 | 7.13 × 10^1 | 4.93 × 10^1
F17 | Rank | 3 | 4 | 2 | 1
F18 | Mean | 2.00 × 10^3 | 3.43 × 10^3 | 1.96 × 10^3 | 1.95 × 10^3
F18 | Std | 5.06 × 10^1 | 1.30 × 10^3 | 4.03 × 10^1 | 5.28 × 10^0
F18 | Rank | 3 | 4 | 2 | 1
F19 | Mean | 2.06 × 10^3 | 2.13 × 10^3 | 2.17 × 10^3 | 2.04 × 10^3
F19 | Std | 5.27 × 10^1 | 4.78 × 10^1 | 1.11 × 10^2 | 1.50 × 10^1
F19 | Rank | 2 | 3 | 4 | 1
F20 | Mean | 2.32 × 10^3 | 2.30 × 10^3 | 2.29 × 10^3 | 2.32 × 10^3
F20 | Std | 3.15 × 10^0 | 5.13 × 10^1 | 1.17 × 10^2 | 3.15 × 10^0
F20 | Rank | 4 | 2 | 1 | 3
F21 | Mean | 2.36 × 10^3 | 2.32 × 10^3 | 2.30 × 10^3 | 2.30 × 10^3
F21 | Std | 6.40 × 10^1 | 1.98 × 10^0 | 7.30 × 10^-1 | 1.23 × 10^0
F21 | Rank | 4 | 3 | 2 | 1
F22 | Mean | 2.65 × 10^3 | 2.63 × 10^3 | 2.69 × 10^3 | 2.62 × 10^3
F22 | Std | 1.94 × 10^1 | 1.31 × 10^1 | 6.75 × 10^1 | 1.50 × 10^1
F22 | Rank | 3 | 2 | 4 | 1
F23 | Mean | 2.77 × 10^3 | 2.75 × 10^3 | 2.78 × 10^3 | 2.74 × 10^3
F23 | Std | 2.83 × 10^1 | 7.00 × 10^0 | 3.43 × 10^1 | 6.94 × 10^0
F23 | Rank | 3 | 2 | 4 | 1
F24 | Mean | 2.91 × 10^3 | 2.94 × 10^3 | 2.92 × 10^3 | 2.92 × 10^3
F24 | Std | 1.32 × 10^0 | 9.34 × 10^0 | 3.65 × 10^1 | 3.36 × 10^1
F24 | Rank | 1 | 4 | 3 | 2
F25 | Mean | 3.08 × 10^3 | 3.12 × 10^3 | 3.03 × 10^3 | 2.90 × 10^3
F25 | Std | 1.95 × 10^2 | 2.37 × 10^0 | 3.80 × 10^1 | 8.49 × 10^-8
F25 | Rank | 3 | 4 | 2 | 1
F26 | Mean | 3.12 × 10^3 | 3.09 × 10^3 | 3.10 × 10^3 | 3.08 × 10^3
F26 | Std | 3.84 × 10^0 | 7.68 × 10^-1 | 2.02 × 10^1 | 4.99 × 10^0
F26 | Rank | 4 | 2 | 3 | 1
F27 | Mean | 3.41 × 10^3 | 3.27 × 10^3 | 3.14 × 10^3 | 3.14 × 10^3
F27 | Std | 3.60 × 10^0 | 3.76 × 10^1 | 5.99 × 10^1 | 5.99 × 10^1
F27 | Rank | 4 | 3 | 1 | 2
F28 | Mean | 3.25 × 10^3 | 3.21 × 10^3 | 3.29 × 10^3 | 3.19 × 10^3
F28 | Std | 1.13 × 10^2 | 6.32 × 10^1 | 3.36 × 10^1 | 6.89 × 10^1
F28 | Rank | 3 | 2 | 4 | 1
F29 | Mean | 4.89 × 10^4 | 4.87 × 10^5 | 3.63 × 10^3 | 3.44 × 10^3
F29 | Std | 6.41 × 10^4 | 1.26 × 10^5 | 8.22 × 10^1 | 1.20 × 10^1
F29 | Rank | 3 | 4 | 2 | 1
Average Rank | | 3.10 | 2.90 | 2.66 | 1.34
Bold values indicate the best performance in each row.
Table 4. Comparison of test results on CEC2022 test functions (dim = 10).
F(x) | | PSO | HHO | GWO | SSA | ALA | MSO | WMA | BKA | SCBKA | IBKA | OQBKA
F1 | Mean | 5.95 × 10^2 | 5.07 × 10^3 | 5.13 × 10^3 | 2.18 × 10^3 | 9.30 × 10^2 | 1.09 × 10^4 | 3.39 × 10^2 | 1.40 × 10^3 | 9.99 × 10^2 | 6.85 × 10^3 | 3.00 × 10^2
F1 | Rank | 3 | 8 | 9 | 7 | 4 | 11 | 2 | 6 | 5 | 10 | 1
F1 | Std | 2.91 × 10^2 | 9.88 × 10^2 | 3.43 × 10^3 | 1.69 × 10^3 | 6.43 × 10^2 | 6.14 × 10^3 | 1.93 × 10^2 | 2.40 × 10^3 | 1.41 × 10^3 | 3.07 × 10^3 | 1.48 × 10^-11
F1 | Wilcoxon | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | N/A
F1 | +/−/= | − | − | − | − | − | − | − | − | − | − | N/A
F2 | Mean | 4.22 × 10^2 | 5.05 × 10^2 | 4.38 × 10^2 | 4.16 × 10^2 | 4.10 × 10^2 | 4.35 × 10^2 | 4.09 × 10^2 | 4.17 × 10^2 | 4.26 × 10^2 | 4.13 × 10^2 | 4.00 × 10^2
F2 | Rank | 7 | 11 | 10 | 5 | 3 | 9 | 2 | 6 | 8 | 4 | 1
F2 | Std | 2.47 × 10^1 | 9.88 × 10^1 | 2.62 × 10^1 | 2.44 × 10^1 | 1.55 × 10^1 | 3.14 × 10^1 | 1.24 × 10^1 | 2.59 × 10^1 | 3.90 × 10^1 | 1.33 × 10^1 | 1.04 × 10^-11
F2 | Wilcoxon | 1.33 × 10^-10 | 4.97 × 10^-11 | 3.02 × 10^-11 | 3.47 × 10^-10 | 4.97 × 10^-11 | 4.97 × 10^-11 | 3.02 × 10^-11 | 1.41 × 10^-9 | 8.15 × 10^-11 | 4.97 × 10^-11 | N/A
F2 | +/−/= | − | − | − | − | − | − | − | − | − | − | N/A
F3 | Mean | 6.00 × 10^2 | 6.42 × 10^2 | 6.03 × 10^2 | 6.05 × 10^2 | 6.01 × 10^2 | 6.19 × 10^2 | 6.01 × 10^2 | 6.28 × 10^2 | 6.28 × 10^2 | 6.06 × 10^2 | 6.07 × 10^2
F3 | Rank | 1 | 11 | 4 | 5 | 3 | 8 | 2 | 10 | 9 | 6 | 7
F3 | Std | 1.12 × 10^0 | 1.23 × 10^1 | 2.81 × 10^0 | 7.37 × 10^0 | 1.23 × 10^0 | 8.93 × 10^0 | 1.30 × 10^0 | 9.07 × 10^0 | 9.78 × 10^0 | 1.95 × 10^0 | 2.50 × 10^0
F3 | Wilcoxon | 6.72 × 10^-10 | 3.02 × 10^-11 | 1.00 × 10^-4 | 3.68 × 10^-2 | 8.10 × 10^-10 | 1.41 × 10^-9 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 1.06 × 10^-1 | N/A
F3 | +/−/= | + | − | + | + | + | − | + | − | − | = | N/A
F4 | Mean | 8.15 × 10^2 | 8.28 × 10^2 | 8.20 × 10^2 | 8.29 × 10^2 | 8.22 × 10^2 | 8.27 × 10^2 | 8.26 × 10^2 | 8.20 × 10^2 | 8.24 × 10^2 | 8.25 × 10^2 | 8.23 × 10^2
F4 | Rank | 1 | 10 | 2 | 11 | 4 | 9 | 8 | 3 | 6 | 7 | 5
F4 | Std | 7.61 × 10^0 | 7.44 × 10^0 | 1.04 × 10^1 | 9.54 × 10^0 | 6.74 × 10^0 | 1.03 × 10^1 | 9.72 × 10^0 | 8.52 × 10^0 | 8.34 × 10^0 | 9.85 × 10^0 | 5.40 × 10^0
F4 | Wilcoxon | 6.00 × 10^-4 | 6.00 × 10^-3 | 6.87 × 10^-2 | 1.75 × 10^-2 | 5.17 × 10^-1 | 8.59 × 10^-2 | 1.71 × 10^-1 | 2.85 × 10^-2 | 6.58 × 10^-1 | 7.50 × 10^-1 | N/A
F4 | +/−/= | + | − | = | − | = | = | = | + | = | = | N/A
F5 | Mean | 9.01 × 10^2 | 1.47 × 10^3 | 9.33 × 10^2 | 1.29 × 10^3 | 9.20 × 10^2 | 1.21 × 10^3 | 9.07 × 10^2 | 1.08 × 10^3 | 1.10 × 10^3 | 9.67 × 10^2 | 9.62 × 10^2
F5 | Rank | 1 | 11 | 4 | 10 | 3 | 9 | 2 | 7 | 8 | 6 | 5
F5 | Std | 1.30 × 10^0 | 1.89 × 10^2 | 6.00 × 10^1 | 2.33 × 10^2 | 2.38 × 10^1 | 2.40 × 10^2 | 1.01 × 10^1 | 8.82 × 10^1 | 1.10 × 10^2 | 5.05 × 10^1 | 7.16 × 10^1
F5 | Wilcoxon | 2.61 × 10^-10 | 3.16 × 10^-10 | 6.00 × 10^-3 | 1.56 × 10^-8 | 1.40 × 10^-3 | 5.00 × 10^-9 | 1.78 × 10^-10 | 1.01 × 10^-8 | 1.10 × 10^-8 | 3.29 × 10^-1 | N/A
F5 | +/−/= | + | − | + | − | + | − | + | − | − | = | N/A
F6 | Mean | 4.31 × 10^3 | 1.61 × 10^4 | 1.16 × 10^4 | 4.53 × 10^3 | 3.94 × 10^3 | 3.78 × 10^3 | 4.62 × 10^3 | 3.59 × 10^3 | 4.45 × 10^3 | 6.34 × 10^4 | 1.83 × 10^3
F6 | Rank | 5 | 10 | 9 | 7 | 4 | 3 | 8 | 2 | 6 | 11 | 1
F6 | Std | 2.48 × 10^3 | 1.58 × 10^4 | 6.20 × 10^3 | 2.21 × 10^3 | 2.06 × 10^3 | 1.93 × 10^3 | 2.11 × 10^3 | 2.10 × 10^3 | 2.03 × 10^3 | 1.65 × 10^5 | 2.22 × 10^1
F6 | Wilcoxon | 4.08 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 9.76 × 10^-10 | 3.02 × 10^-11 | 9.76 × 10^-10 | 3.02 × 10^-11 | 3.69 × 10^-11 | 7.39 × 10^-11 | 3.02 × 10^-11 | N/A
F6 | +/−/= | − | − | − | − | − | − | − | − | − | − | N/A
F7 | Mean | 2.02 × 10^3 | 2.09 × 10^3 | 2.04 × 10^3 | 2.03 × 10^3 | 2.03 × 10^3 | 2.06 × 10^3 | 2.02 × 10^3 | 2.06 × 10^3 | 2.05 × 10^3 | 2.03 × 10^3 | 2.03 × 10^3
F7 | Rank | 1 | 11 | 7 | 6 | 3 | 10 | 2 | 9 | 8 | 5 | 4
F7 | Std | 7.89 × 10^0 | 2.96 × 10^1 | 1.31 × 10^1 | 2.65 × 10^1 | 7.67 × 10^0 | 3.92 × 10^1 | 8.69 × 10^0 | 2.36 × 10^1 | 2.23 × 10^1 | 7.14 × 10^0 | 8.59 × 10^0
F7 | Wilcoxon | 5.09 × 10^-6 | 1.33 × 10^-10 | 1.25 × 10^-2 | 8.45 × 10^-1 | 1.92 × 10^-1 | 1.00 × 10^-4 | 7.30 × 10^-3 | 4.20 × 10^-10 | 8.84 × 10^-7 | 1.16 × 10^-1 | N/A
F7 | +/−/= | + | − | − | = | = | − | + | − | − | = | N/A
F8 | Mean | 2.24 × 10^3 | 2.24 × 10^3 | 2.23 × 10^3 | 2.23 × 10^3 | 2.22 × 10^3 | 2.25 × 10^3 | 2.24 × 10^3 | 2.23 × 10^3 | 2.22 × 10^3 | 2.23 × 10^3 | 2.22 × 10^3
F8 | Rank | 9 | 10 | 6 | 7 | 3 | 11 | 8 | 5 | 2 | 4 | 1
F8 | Std | 3.78 × 10^1 | 1.34 × 10^1 | 2.20 × 10^1 | 3.78 × 10^1 | 3.71 × 10^0 | 4.86 × 10^1 | 2.54 × 10^1 | 2.29 × 10^1 | 5.32 × 10^0 | 4.96 × 10^0 | 3.19 × 10^0
F8 | Wilcoxon | 8.97 × 10^-2 | 3.37 × 10^-5 | 1.78 × 10^-4 | 4.53 × 10^-1 | 6.87 × 10^-2 | 6.56 × 10^-2 | 3.02 × 10^-11 | 1.00 × 10^-3 | 4.95 × 10^-2 | 4.40 × 10^-3 | N/A
F8 | +/−/= | = | − | − | = | = | = | − | − | = | − | N/A
F9 | Mean | 2.55 × 10^3 | 2.66 × 10^3 | 2.59 × 10^3 | 2.54 × 10^3 | 2.53 × 10^3 | 2.56 × 10^3 | 2.53 × 10^3 | 2.57 × 10^3 | 2.55 × 10^3 | 2.53 × 10^3 | 2.49 × 10^3
F9 | Rank | 6 | 11 | 10 | 5 | 3 | 8 | 2 | 9 | 7 | 4 | 1
F9 | Std | 4.68 × 10^1 | 4.44 × 10^1 | 3.78 × 10^1 | 3.73 × 10^1 | 1.70 × 10^-1 | 1.86 × 10^1 | 8.44 × 10^-14 | 4.31 × 10^1 | 3.09 × 10^1 | 2.56 × 10^0 | 1.04 × 10^-10
F9 | Wilcoxon | 2.97 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 1.71 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | N/A
F9 | +/−/= | − | − | − | − | − | − | − | − | − | − | N/A
F10 | Mean | 2.57 × 10^3 | 2.64 × 10^3 | 2.56 × 10^3 | 2.64 × 10^3 | 2.58 × 10^3 | 2.58 × 10^3 | 2.58 × 10^3 | 2.61 × 10^3 | 2.57 × 10^3 | 2.51 × 10^3 | 2.51 × 10^3
F10 | Rank | 4 | 10 | 3 | 11 | 6 | 7 | 8 | 9 | 5 | 1 | 2
F10 | Std | 6.64 × 10^1 | 2.40 × 10^2 | 6.11 × 10^1 | 2.09 × 10^2 | 1.80 × 10^2 | 1.15 × 10^2 | 7.86 × 10^1 | 1.58 × 10^2 | 6.96 × 10^1 | 3.18 × 10^1 | 3.51 × 10^1
F10 | Wilcoxon | 1.00 × 10^-4 | 4.20 × 10^-10 | 1.32 × 10^-2 | 6.28 × 10^-6 | 2.07 × 10^-2 | 3.20 × 10^-9 | 4.00 × 10^-4 | 2.78 × 10^-7 | 8.84 × 10^-7 | 8.59 × 10^-2 | N/A
F10 | +/−/= | − | − | − | − | − | − | − | − | − | = | N/A
F11 | Mean | 2.86 × 10^3 | 2.99 × 10^3 | 3.04 × 10^3 | 2.81 × 10^3 | 2.83 × 10^3 | 3.00 × 10^3 | 2.93 × 10^3 | 2.81 × 10^3 | 2.79 × 10^3 | 2.96 × 10^3 | 2.60 × 10^3
F11 | Rank | 6 | 9 | 11 | 3 | 5 | 10 | 7 | 4 | 2 | 8 | 1
F11 | Std | 1.86 × 10^2 | 2.47 × 10^2 | 1.51 × 10^2 | 1.34 × 10^2 | 1.06 × 10^2 | 3.95 × 10^2 | 2.74 × 10^2 | 1.56 × 10^2 | 1.45 × 10^2 | 1.32 × 10^2 | 7.63 × 10^-7
F11 | Wilcoxon | 2.87 × 10^-10 | 3.69 × 10^-11 | 3.69 × 10^-11 | 8.15 × 10^-11 | 9.92 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 1.46 × 10^-10 | 8.10 × 10^-10 | 4.50 × 10^-11 | N/A
F11 | +/−/= | − | − | − | − | − | − | − | − | − | − | N/A
F12 | Mean | 2.87 × 10^3 | 2.95 × 10^3 | 2.87 × 10^3 | 2.88 × 10^3 | 2.86 × 10^3 | 2.88 × 10^3 | 2.87 × 10^3 | 2.87 × 10^3 | 2.87 × 10^3 | 2.86 × 10^3 | 2.85 × 10^3
F12 | Rank | 8 | 11 | 6 | 9 | 2 | 10 | 4 | 7 | 5 | 3 | 1
F12 | Std | 1.30 × 10^1 | 5.95 × 10^1 | 1.00 × 10^1 | 4.05 × 10^1 | 1.79 × 10^0 | 1.62 × 10^1 | 1.89 × 10^0 | 1.30 × 10^1 | 7.41 × 10^0 | 9.30 × 10^-1 | 1.09 × 10^0
F12 | Wilcoxon | 2.99 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.01 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | N/A
F12 | +/−/= | − | − | − | − | − | − | − | − | − | − | N/A
Average Rank | | 4.33 | 10.3 | 6.75 | 7.17 | 3.58 | 8.75 | 4.58 | 6.42 | 5.92 | 5.75 | 2.50
Bold values indicate the best performance in each row.
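The Rank and Average Rank rows above are pure bookkeeping over the Mean rows: on each function the eleven algorithms are ranked 1–11 by mean error, and each algorithm's ranks are then averaged over all functions. A minimal pure-Python sketch of that computation (the three-algorithm input below uses illustrative values, not the full tables):

```python
# Rank algorithms by mean error on each function (1 = best), then
# average the ranks across functions -- the "Average Rank" row.
def rank_table(means_per_function):
    """means_per_function: list of dicts {algorithm: mean error}."""
    totals = {}
    for means in means_per_function:
        # Sort algorithms by mean error; ties keep first-listed order here.
        ordered = sorted(means, key=means.get)
        for rank, alg in enumerate(ordered, start=1):
            totals[alg] = totals.get(alg, 0) + rank
    n = len(means_per_function)
    return {alg: total / n for alg, total in totals.items()}

# Toy example: three algorithms on two functions (illustrative values).
avg = rank_table([
    {"BKA": 8.20e2, "IBKA": 8.25e2, "OQBKA": 8.23e2},  # ranks 1, 3, 2
    {"BKA": 1.08e3, "IBKA": 9.67e2, "OQBKA": 9.62e2},  # ranks 3, 2, 1
])
print(avg)  # {'BKA': 2.0, 'OQBKA': 1.5, 'IBKA': 2.5}
```

The lowest average rank identifies the most consistent performer across the suite, which is how the 2.50 (dim = 10) and 2.17 (dim = 20) figures for OQBKA are obtained.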
Table 5. Comparison of test results on CEC2022 test functions (dim = 20).
| F(x) | Metric | PSO | HHO | GWO | SSA | ALA | MSO | WMA | BKA | SCBKA | IBKA | OQBKA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Mean | 2.38E+04 | 4.78E+04 | 2.22E+04 | 5.05E+04 | 1.99E+04 | 4.20E+04 | 1.98E+04 | 1.43E+04 | 1.51E+04 | 5.02E+04 | 3.00E+02 |
| | Rank | 7 | 9 | 6 | 11 | 5 | 8 | 4 | 2 | 3 | 10 | 1 |
| | Std | 1.06E+04 | 1.74E+04 | 6.38E+03 | 1.67E+04 | 6.09E+03 | 1.27E+04 | 8.25E+03 | 7.44E+03 | 1.08E+04 | 1.19E+04 | 1.79E-09 |
| | Wilcoxon | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | N/A |
| | +/−/= | − | − | − | − | − | − | − | − | − | − | N/A |
| F2 | Mean | 4.86E+02 | 7.26E+02 | 5.17E+02 | 4.64E+02 | 4.76E+02 | 5.46E+02 | 4.58E+02 | 8.24E+02 | 7.48E+02 | 5.73E+02 | 4.00E+02 |
| | Rank | 5 | 9 | 6 | 3 | 4 | 7 | 2 | 11 | 10 | 8 | 1 |
| | Std | 5.40E+01 | 8.88E+01 | 5.08E+01 | 2.23E+01 | 2.86E+01 | 6.29E+01 | 1.75E+01 | 4.09E+02 | 3.73E+02 | 6.17E+01 | 7.01E-11 |
| | Wilcoxon | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | N/A |
| | +/−/= | − | − | − | − | − | − | − | − | − | − | N/A |
| F3 | Mean | 6.06E+02 | 6.65E+02 | 6.10E+02 | 6.33E+02 | 6.14E+02 | 6.38E+02 | 6.12E+02 | 6.58E+02 | 6.52E+02 | 6.27E+02 | 6.34E+02 |
| | Rank | 1 | 11 | 2 | 6 | 4 | 8 | 3 | 10 | 9 | 5 | 7 |
| | Std | 4.25E+00 | 9.20E+00 | 4.55E+00 | 1.34E+01 | 6.22E+00 | 1.11E+01 | 6.70E+00 | 1.30E+01 | 9.83E+00 | 4.39E+00 | 1.33E+01 |
| | Wilcoxon | 4.40E-04 | 1.83E-04 | 1.01E-03 | 7.97E-01 | 6.74E-06 | 2.29E-01 | 1.83E-04 | 1.83E-04 | 1.83E-04 | 5.19E-02 | N/A |
| | +/−/= | + | − | + | = | + | = | + | − | − | = | N/A |
| F4 | Mean | 8.66E+02 | 8.91E+02 | 8.68E+02 | 8.90E+02 | 8.84E+02 | 8.79E+02 | 9.06E+02 | 8.87E+02 | 8.83E+02 | 9.03E+02 | 8.83E+02 |
| | Rank | 1 | 9 | 2 | 8 | 6 | 3 | 11 | 7 | 5 | 10 | 4 |
| | Std | 2.82E+01 | 1.06E+01 | 3.00E+01 | 1.92E+01 | 2.42E+01 | 2.33E+01 | 3.33E+01 | 1.82E+01 | 1.77E+01 | 2.13E+01 | 3.79E+00 |
| | Wilcoxon | 1.60E-03 | 3.40E-03 | 4.07E-02 | 2.70E-02 | 9.75E-01 | 3.82E-01 | 3.00E-03 | 3.09E-01 | 8.45E-01 | 5.57E-10 | N/A |
| | +/−/= | + | − | + | − | = | = | − | = | = | − | N/A |
| F5 | Mean | 1.06E+03 | 3.22E+03 | 1.35E+03 | 2.43E+03 | 1.91E+03 | 2.36E+03 | 1.24E+03 | 2.38E+03 | 2.46E+03 | 2.74E+03 | 2.20E+03 |
| | Rank | 1 | 11 | 3 | 8 | 4 | 6 | 2 | 7 | 9 | 10 | 5 |
| | Std | 3.46E+02 | 3.52E+02 | 3.52E+02 | 1.77E+02 | 6.19E+02 | 6.38E+02 | 3.92E+02 | 4.41E+02 | 5.37E+02 | 7.28E+02 | 1.59E+02 |
| | Wilcoxon | 5.80E-03 | 3.02E-11 | 3.08E-08 | 2.44E-09 | 2.30E-02 | 3.93E-01 | 3.02E-11 | 2.70E-02 | 1.50E-03 | 8.00E-04 | N/A |
| | +/−/= | + | − | + | − | + | = | + | − | − | − | N/A |
| F6 | Mean | 1.50E+06 | 1.51E+06 | 7.92E+06 | 6.51E+03 | 2.97E+04 | 7.26E+03 | 1.10E+04 | 3.00E+07 | 6.75E+06 | 3.80E+06 | 1.86E+03 |
| | Rank | 6 | 7 | 10 | 2 | 5 | 3 | 4 | 11 | 9 | 8 | 1 |
| | Std | 7.88E+06 | 1.87E+06 | 1.51E+07 | 4.48E+03 | 4.12E+04 | 5.58E+03 | 7.56E+03 | 1.18E+08 | 2.29E+07 | 7.49E+06 | 1.26E+01 |
| | Wilcoxon | 3.02E-11 | 3.02E-11 | 3.02E-11 | 6.07E-11 | 3.02E-11 | 7.39E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | N/A |
| | +/−/= | − | − | − | − | − | − | − | − | − | − | N/A |
| F7 | Mean | 2.09E+03 | 2.22E+03 | 2.12E+03 | 2.14E+03 | 2.12E+03 | 2.14E+03 | 2.11E+03 | 2.14E+03 | 2.14E+03 | 2.13E+03 | 2.09E+03 |
| | Rank | 1 | 11 | 4 | 7 | 5 | 10 | 3 | 8 | 9 | 6 | 2 |
| | Std | 5.00E+01 | 7.35E+01 | 4.94E+01 | 6.46E+01 | 4.28E+01 | 7.36E+01 | 4.98E+01 | 4.32E+01 | 4.07E+01 | 2.75E+01 | 2.06E+01 |
| | Wilcoxon | 2.99E-01 | 3.47E-10 | 3.50E-02 | 2.40E-03 | 1.75E-02 | 2.60E-03 | 3.49E-01 | 6.00E-04 | 2.53E-04 | 2.60E-05 | N/A |
| | +/−/= | = | − | − | − | − | − | = | − | − | − | N/A |
| F8 | Mean | 2.29E+03 | 2.33E+03 | 2.29E+03 | 2.29E+03 | 2.25E+03 | 2.32E+03 | 2.31E+03 | 2.29E+03 | 2.25E+03 | 2.24E+03 | 2.23E+03 |
| | Rank | 6 | 11 | 8 | 5 | 3 | 10 | 9 | 7 | 4 | 2 | 1 |
| | Std | 6.65E+01 | 1.07E+02 | 7.41E+01 | 5.89E+01 | 2.56E+01 | 1.07E+02 | 6.41E+01 | 7.55E+01 | 3.22E+01 | 9.02E+00 | 5.02E+00 |
| | Wilcoxon | 4.00E-04 | 9.21E-05 | 1.50E-03 | 3.00E-04 | 1.86E-01 | 2.00E-04 | 3.02E-11 | 1.44E-03 | 4.00E-04 | 3.76E-02 | N/A |
| | +/−/= | − | − | − | − | = | − | − | − | − | − | N/A |
| F9 | Mean | 2.51E+03 | 2.65E+03 | 2.53E+03 | 2.48E+03 | 2.49E+03 | 2.49E+03 | 2.48E+03 | 2.60E+03 | 2.57E+03 | 2.50E+03 | 2.47E+03 |
| | Rank | 7 | 11 | 8 | 3 | 4 | 5 | 2 | 10 | 9 | 6 | 1 |
| | Std | 4.94E+01 | 7.07E+01 | 4.29E+01 | 8.14E-02 | 6.64E+00 | 6.59E+00 | 2.11E-02 | 7.05E+01 | 6.05E+01 | 7.62E+00 | 4.22E-10 |
| | Wilcoxon | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | N/A |
| | +/−/= | − | − | − | − | − | − | − | − | − | − | N/A |
| F10 | Mean | 3.25E+03 | 4.72E+03 | 3.64E+03 | 3.89E+03 | 4.21E+03 | 3.54E+03 | 4.24E+03 | 4.31E+03 | 4.07E+03 | 3.48E+03 | 2.86E+03 |
| | Rank | 2 | 11 | 5 | 6 | 8 | 4 | 9 | 10 | 7 | 3 | 1 |
| | Std | 5.22E+02 | 1.10E+03 | 1.07E+03 | 7.15E+02 | 1.01E+03 | 5.64E+02 | 1.64E+03 | 1.05E+03 | 8.45E+02 | 1.15E+03 | 7.17E+02 |
| | Wilcoxon | 1.40E-02 | 1.21E-10 | 2.40E-03 | 1.00E-04 | 1.00E-04 | 1.80E-03 | 2.10E-03 | 1.86E-09 | 1.43E-08 | 4.00E-04 | N/A |
| | +/−/= | − | − | − | − | − | − | − | − | − | − | N/A |
| F11 | Mean | 3.01E+03 | 5.27E+03 | 3.79E+03 | 2.94E+03 | 3.89E+03 | 4.30E+03 | 2.90E+03 | 5.50E+03 | 5.36E+03 | 6.32E+03 | 2.80E+03 |
| | Rank | 4 | 8 | 5 | 3 | 6 | 7 | 2 | 10 | 9 | 11 | 1 |
| | Std | 3.25E+01 | 8.12E+02 | 4.40E+02 | 1.26E+02 | 4.20E+02 | 5.37E+02 | 5.41E+00 | 1.24E+03 | 1.26E+03 | 8.19E+02 | 1.47E+02 |
| | Wilcoxon | 2.38E-07 | 3.02E-11 | 3.02E-11 | 1.17E-03 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | N/A |
| | +/−/= | − | − | − | − | − | − | − | − | − | − | N/A |
| F12 | Mean | 3.02E+03 | 3.33E+03 | 2.99E+03 | 2.99E+03 | 2.97E+03 | 3.03E+03 | 2.99E+03 | 3.12E+03 | 3.06E+03 | 2.97E+03 | 2.90E+03 |
| | Rank | 7 | 11 | 6 | 5 | 2 | 8 | 4 | 10 | 9 | 3 | 1 |
| | Std | 7.81E+01 | 1.91E+02 | 2.89E+01 | 3.32E+01 | 3.07E+01 | 5.82E+01 | 3.77E+01 | 9.62E+01 | 7.63E+01 | 1.06E+01 | 3.96E+00 |
| | Wilcoxon | 4.08E-11 | 3.02E-11 | 4.08E-11 | 6.07E-11 | 5.57E-10 | 3.34E-11 | 3.02E-11 | 3.34E-11 | 3.34E-11 | 5.49E-11 | N/A |
| | +/−/= | − | − | − | − | − | − | − | − | − | − | N/A |
| | Average Rank | 4.00 | 9.92 | 5.42 | 5.58 | 4.67 | 6.58 | 4.58 | 8.58 | 7.67 | 6.83 | 2.17 |
Bold values indicate the best performance in each row.
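The +/−/= rows in Tables 4 and 5 pair each Wilcoxon p-value with the corresponding means. A hedged sketch of the decision rule these tables appear to follow (the 0.05 threshold and the meaning of the symbols are inferred from the reported p-values and ranks, not quoted from the text):

```python
ALPHA = 0.05  # assumed significance level for the Wilcoxon rank-sum test

def mark(p_value, competitor_mean, oqbka_mean):
    """Classify one pairwise comparison against OQBKA.

    '+'  competitor significantly better (lower mean error),
    '-'  competitor significantly worse,
    '='  no significant difference at ALPHA.
    """
    if p_value >= ALPHA:
        return "="
    return "+" if competitor_mean < oqbka_mean else "-"

# F4 (dim = 10): PSO vs OQBKA -- significant, and PSO's mean is lower.
print(mark(6.00e-4, 8.15e2, 8.23e2))  # +
# GWO vs OQBKA -- p = 6.87e-2 exceeds ALPHA, so the comparison is a tie.
print(mark(6.87e-2, 8.20e2, 8.23e2))  # =
```

Under this rule, a row of all "−" entries (e.g., F6 and F9–F12 above) means every competitor's result is significantly worse than OQBKA's on that function.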
Table 6. Experimental results of the step-cone pulley problem.
| Metric | PSO | HHO | GWO | SSA | ALA | MSO | WMA | BKA | SCBKA | IBKA | OQBKA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Best | 1.63E+01 | 1.84E+01 | 1.63E+01 | 1.63E+01 | 1.63E+01 | 1.67E+01 | 2.73E+01 | 1.63E+01 | 1.66E+01 | 1.65E+01 | 1.62E+01 |
| Worst | 2.19E+01 | 2.12E+02 | 1.69E+01 | 1.74E+01 | 1.84E+01 | 2.38E+01 | 4.73E+07 | 3.36E+01 | 5.63E+01 | 1.85E+01 | 1.66E+01 |
| Std | 1.69E+00 | 6.83E+01 | 1.82E-01 | 2.86E-01 | 4.77E-01 | 1.70E+00 | 1.45E+07 | 4.15E+00 | 9.28E+00 | 5.62E-01 | 8.70E-02 |
| Avg | 1.74E+01 | 5.69E+01 | 1.66E+01 | 1.65E+01 | 1.68E+01 | 1.86E+01 | 6.33E+06 | 1.96E+01 | 2.17E+01 | 1.73E+01 | 1.63E+01 |
Bold values indicate the best performance in each row.
Table 7. Experimental results of the corrugated bulkhead design problem.
| Metric | PSO | HHO | GWO | SSA | ALA | MSO | WMA | BKA | SCBKA | IBKA | OQBKA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Best | 6.85E+00 | 7.13E+00 | 6.91E+00 | 6.92E+00 | 6.92E+00 | 6.97E+00 | 8.83E+00 | 6.96E+00 | 6.89E+00 | 7.21E+00 | 6.84E+00 |
| Worst | 7.41E+00 | 9.85E+00 | 7.29E+00 | 1.03E+01 | 9.13E+00 | 1.27E+01 | 1.94E+01 | 9.06E+00 | 9.10E+00 | 9.29E+00 | 7.17E+00 |
| Std | 1.34E-01 | 7.37E-01 | 1.09E-01 | 7.51E-01 | 5.47E-01 | 1.35E+00 | 3.33E+00 | 5.37E-01 | 5.32E-01 | 5.80E-01 | 9.02E-02 |
| Avg | 6.99E+00 | 7.99E+00 | 7.05E+00 | 7.45E+00 | 7.51E+00 | 8.78E+00 | 1.34E+01 | 7.51E+00 | 7.49E+00 | 7.85E+00 | 6.92E+00 |
Bold values indicate the best performance in each row.
Table 8. Experimental results of the reactor network design problem.
| Metric | PSO | HHO | GWO | SSA | ALA | MSO | WMA | BKA | SCBKA | IBKA | OQBKA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Best | −3.51E-01 | 5.16E-01 | −2.65E-01 | −3.64E-01 | −3.81E-01 | 1.37E-02 | 3.24E+01 | −3.51E-01 | −5.56E-03 | −2.57E-01 | −3.82E-01 |
| Worst | 3.24E-01 | 1.54E+02 | 8.74E-01 | 2.34E+00 | 3.71E+01 | 4.45E+01 | 1.01E+03 | 3.29E+01 | 1.33E+01 | 1.33E+01 | −3.74E-01 |
| Std | 2.07E-01 | 4.67E+01 | 3.60E-01 | 6.31E-01 | 8.26E+00 | 1.22E+01 | 2.42E+02 | 7.40E+00 | 3.74E+00 | 4.81E+00 | 2.41E-03 |
| Avg | −4.32E-02 | 6.84E+01 | 2.78E-01 | 7.56E-02 | 2.30E+00 | 9.15E+00 | 3.05E+02 | 2.32E+00 | 2.01E+00 | 4.24E+00 | −3.75E-01 |
Bold values indicate the best performance in each row.
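Tables 6–8 report, for each algorithm, the Best, Worst, Std, and Avg objective value over repeated independent runs of each engineering problem (minimization, so Best is the smallest value). A self-contained sketch of that aggregation using only the standard library (the run count and values below are illustrative, not taken from the experiments):

```python
import statistics

def summarize(run_results):
    """Best/Worst/Std/Avg summary of one algorithm's objective values
    over independent runs (minimization: best = smallest)."""
    return {
        "Best": min(run_results),
        "Worst": max(run_results),
        "Std": statistics.pstdev(run_results),  # population std; papers vary
        "Avg": statistics.fmean(run_results),
    }

# Illustrative objective values from three hypothetical runs.
runs = [16.0, 16.5, 17.0]
s = summarize(runs)
print(s["Best"], s["Worst"], s["Avg"])  # 16.0 17.0 16.5
```

Whether the paper's Std uses the population or sample formula is not stated in these tables; the choice only rescales Std by a constant factor and does not affect the Best/Worst/Avg comparison.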

Share and Cite

Zhao, N.; Wang, T.; Zhu, Y. Black-Winged Kite Algorithm Integrating Opposition-Based Learning and Quasi-Newton Strategy. Biomimetics 2026, 11, 68. https://doi.org/10.3390/biomimetics11010068
