Article

A Hybrid Black-Winged Kite Algorithm with PSO and Differential Mutation for Superior Global Optimization and Engineering Applications

by
Xuemei Zhu
1,
Jinsi Zhang
2,*,
Chaochuan Jia
3,
Yu Liu
3 and
Maosheng Fu
3
1
Experimental Training Teaching Management Department, West Anhui University, Yu’an District, Lu’an 237012, China
2
School of Electrical and Optoelectronic Engineering, West Anhui University, Yu’an District, Lu’an 237012, China
3
School of Electronics and Information Engineering, West Anhui University, Yu’an District, Lu’an 237012, China
*
Author to whom correspondence should be addressed.
Biomimetics 2025, 10(4), 236; https://doi.org/10.3390/biomimetics10040236
Submission received: 26 February 2025 / Revised: 2 April 2025 / Accepted: 8 April 2025 / Published: 11 April 2025

Abstract

This study addresses the premature convergence issue of the Black-Winged Kite Algorithm (BKA) in high-dimensional optimization problems by proposing an enhanced hybrid algorithm (BKAPI). First, the BKA provides dynamic global exploration through its hovering and dive attack strategies, while Particle Swarm Optimization (PSO) enhances local exploitation via its velocity-based search mechanism. Then, PSO enables efficient local refinement, and Differential Evolution (DE) introduces a differential mutation strategy to maintain population diversity and prevent premature convergence. Finally, the integration ensures a balanced exploration–exploitation trade-off, overcoming the BKA's sensitivity to parameter settings and insufficient local search capabilities. By combining these mechanisms, the BKAPI achieves a robust balance, significantly improving convergence speed and computational accuracy. To validate its effectiveness, the performance of the enhanced hybrid algorithm is rigorously evaluated against seven other intelligent optimization algorithms using the CEC 2017 and CEC 2022 benchmark test functions. Experimental results demonstrate that the proposed integrated strategy surpasses other advanced algorithms, highlighting its superiority and strong application potential. Additionally, the algorithm's practical utility is further confirmed through its successful application to three real-world engineering problems: welded beam design, the Himmelblau function, and visible light positioning, underscoring the effectiveness and versatility of the proposed approach.

1. Introduction

Intelligent optimization algorithms, inspired by natural phenomena and biological behaviors, are increasingly being used to optimize various tasks, including design parameters, scheduling, resource allocation, path planning, control systems, and process optimization [1,2,3,4]. These algorithms are particularly effective for addressing complex challenges, such as improving performance, enhancing efficiency, and reducing costs across multiple industries [5,6,7] when compared to conventional methods like the Conjugate Gradient and Newton methods [8,9,10]. Meta-heuristic algorithms [11,12,13], a category of computational methods that mimic natural phenomena, are designed to find global optimal solutions in intricate search spaces, making them highly suitable for tackling high-dimensional, nonlinear, and multi-modal problems [14]. Examples of these algorithms include Genetic Algorithms (simulating biological evolution) [15], Particle Swarm Optimization (mimicking the collective behavior of birds or fish) [16], Gray Wolf Optimization (simulating the hunting strategies of gray wolves) [17], Differential Evolution (group-based optimization through cooperation and competition) [18], simulated annealing (mimicking the annealing process) [19], cuckoo search (based on bird breeding behaviors) [20], and Ant Colony Optimization (based on ants' foraging paths) [21]. An adaptive Differential Evolution algorithm (AdapDE) with adaptive operator selection and parameter control aims to optimize the input weights and hidden biases of extreme learning machines (ELMs), enhancing performance through improved adaptive operator selection mechanisms and parameter adjustment methods [22]. A multi-objective evolutionary algorithm based on decomposition (MOEA/D) has been combined with an adaptive operator selection strategy called FRRMAB, demonstrating superior performance on various test instances and emphasizing its effectiveness in balancing exploration and exploitation [23].
Each of these algorithms leverages unique aspects of nature to explore and exploit the solution space efficiently.
Despite their strengths, individual meta-heuristic algorithms often face challenges when applied to highly complex or high-dimensional problems. For example, the GA excels in global search but struggles with local optimization, while PSO converges quickly but may get trapped in local optimal solutions [24]. Hybrid intelligent optimization algorithms have emerged as a promising solution to overcome these limitations. Hybrid approaches achieve superior optimization performance compared to individual methods by combining the strengths of multiple algorithms. Examples include the fusion of the GA and PSO [25], DE and SA [26], and ABC and GWO [27]. These hybrid algorithms leverage the complementary strengths of their constituent methods, enhancing both global and local search efficiency and addressing the limitations of individual algorithms [28,29,30,31,32].
The Black-Winged Kite Algorithm (BKA) [33], one of the more recent optimization techniques, has attracted considerable interest due to its distinctive inspiration derived from the hunting strategies of the black-winged kite. The BKA emulates two primary search strategies, local search (hovering) and global search (dive attack), effectively balancing exploration and exploitation to circumvent local optima while efficiently locating the global optimal solution. However, the BKA faces significant challenges when applied to high-dimensional or complex nonlinear problems. Its tendency to converge prematurely, known as “precocious convergence”, reduces its effectiveness in navigating complex landscapes with multiple local optima [34]. Additionally, the BKA’s performance is highly sensitive to parameter settings, and its local search capabilities are often insufficient for high-dimensional problems, leading to increased computational costs and reduced solution quality [35].
To overcome these limitations, researchers have introduced a range of improvements to the BKA. Rasooli et al. advanced a clustering solution based on the BKA for data analysis and machine learning [36], while Zhang et al. combined the black-winged kite with another similar bird, the osprey, to solve optimization problems [37]. Xue et al. proposed a multi-strategic integrated model combining the BKA and artificial rabbit optimization [38], and Zhao et al. enhanced the BKA using chaotic mapping and confrontation learning to enhance its capability to avoid becoming trapped in local optima [39]. Fu et al. integrated the BKA with deep hybrid nuclear limit learning machines to enhance prediction accuracy for battery state-of-health [40]. These improvements highlight the potential of the BKA but also underscore the need for further enhancements to address its constraints in high-dimensional and sophisticated optimization tasks.
The hybridization of metaheuristic algorithms has been the subject of considerable academic attention, with various strategies combining different algorithms to achieve better performance [41]. For instance, hybrid algorithms like GA-PSO (Genetic Algorithm with PSO) [15] and DE-ABC (Differential Evolution with Artificial Bee Colony) [18] have demonstrated improved performance in specific contexts. Several novel bio-inspired algorithms have been proposed, each offering unique mechanisms for balancing exploration and exploitation. For instance, the Subtraction-Average-Based Optimizer (SABO) [42] introduces a novel subtraction-average mechanism to enhance population diversity and avoid premature convergence. Similarly, the Mantis Search Algorithm (MSA) [43] mimics the hunting behavior of mantises, incorporating a dynamic balance between exploration and exploitation through a unique prey–predator interaction model. The Kepler Algorithm (KA) [44] draws inspiration from Kepler’s laws of planetary motion, utilizing orbital dynamics to guide the search process. Additionally, the Improved Bio-Inspired Material Generation Algorithm (IMGMA) [45] enhances the original Material Generation Algorithm by incorporating adaptive mechanisms to improve convergence speed and solution accuracy. However, these hybrid algorithms often face challenges in consistently balancing exploration and exploitation across different types of problems. In contrast, the proposed BKAPI integrates the BKA’s dynamic exploration with PSO’s efficient exploitation, creating a more versatile and robust optimization framework. Additionally, the inclusion of the Random-Elite Difference Mutation strategy provides a unique mechanism for maintaining diversity, which is often lacking in other hybrid approaches. This makes the BKAPI particularly effective for solving optimization problems that are complex and multimodal.
This paper seeks to address the above research gap by proposing a novel hybrid algorithm, BKAPI, which integrates the superior aspects of the BKA in contrast with alternative optimization approaches to address its problems. The proposed algorithm integrates mechanisms to improve performance, thereby reducing susceptibility to premature settling and refining solution performance. The remainder of this paper is structured as follows: Section 2 clarifies the fundamental aspects of the BKA through a flow diagram and calculation formulas, describes the proposed BKAPI, and provides its pseudo-code. Section 3 analyzes the performance outcomes of eight distinct algorithms, including the BKAPI. Section 4 applies the algorithm to three engineering cases. Finally, this paper concludes with a summary and a discussion on the potential directions for future work.

2. Black-Winged Kite Algorithm

The Black-Winged Kite Algorithm (BKA) is a metaheuristic optimization algorithm based on the predation behavior of the black-winged kite, simulating its hovering foraging (local search) and dive attack (global search) behaviors to find the optimal solution in the solution space [33]. This equilibrium between local and global exploration is essential for enabling the algorithm to identify the global optimum in challenging optimization problems. Additionally, the BKA incorporates migration behavior, which introduces randomness based on the Cauchy distribution [33].

2.1. Population Initialization

In the BKA, the initialization matrix of Pop × Dim is generated: Pop is the population size, and Dim is the dimension of the problem. The population matrix BK is defined as follows:
$$ BK = \begin{bmatrix} BK_{1,1} & BK_{1,2} & \cdots & BK_{1,\mathrm{dim}} \\ BK_{2,1} & BK_{2,2} & \cdots & BK_{2,\mathrm{dim}} \\ \vdots & \vdots & \ddots & \vdots \\ BK_{\mathrm{pop},1} & BK_{\mathrm{pop},2} & \cdots & BK_{\mathrm{pop},\mathrm{dim}} \end{bmatrix} $$
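As an illustration, the population initialization step can be sketched in Python as follows (a minimal sketch; the paper's experiments used MATLAB, and the function name and per-dimension bound handling here are illustrative):

```python
import random

def init_population(pop, dim, lb, ub, seed=None):
    """Build the pop x dim matrix BK with uniform-random entries.

    BK[i][j] is drawn uniformly from [lb[j], ub[j]].
    """
    rng = random.Random(seed)
    return [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(dim)]
            for _ in range(pop)]

bk = init_population(pop=5, dim=3, lb=[-10, -10, -10], ub=[10, 10, 10], seed=1)
```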

2.2. Attacking Behavior

Local exploration: if $p < r$, the kite hovers near its current position, and the update formula is as follows:
$$ k_{t+1}^{i,j} = k_t^{i,j} + n \cdot (1 + \sin(r)) \cdot k_t^{i,j} $$
Here, $k_t^{i,j}$ represents the current position of the black-winged kite individual labeled $i$ and $k_{t+1}^{i,j}$ represents the new position after the update. This step searches the solution space in the immediate neighborhood of the current position.
Global exploration: if $p \geq r$, the kite performs a dive attack, and the update formula is as follows:
$$ k_{t+1}^{i,j} = k_t^{i,j} + n \cdot (2r - 1) \cdot k_t^{i,j} $$
where $r$ is a random number in $[0, 1]$ and $n$ is the adaptive step size:
$$ n = 0.05 \cdot \exp\left( -2 \left( \frac{t}{T} \right)^2 \right) $$
with $t$ the current iteration and $T$ the maximum number of iterations. Global exploration is achieved by introducing larger random disturbances that cover a wider region of the search space.
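The two attacking-behavior branches and the shrinking step size can be sketched together in Python (a hedged illustration, not the authors' code; the switching constant p = 0.9 is an assumption based on the original BKA description):

```python
import math
import random

def attack_update(k, t, T, rng, p=0.9):
    """One BKA attacking-behavior update for a position vector k.

    n = 0.05 * exp(-2 * (t / T)**2) is the shrinking adaptive step size.
    The branch on p < r switches between the hovering (local) update
    and the dive-attack (global) update.  p = 0.9 is an assumed value.
    """
    n = 0.05 * math.exp(-2.0 * (t / T) ** 2)
    r = rng.random()
    if p < r:   # local exploration near the current position
        return [x + n * (1 + math.sin(r)) * x for x in k]
    # global exploration with a larger per-dimension random disturbance
    return [x + n * (2 * rng.random() - 1) * x for x in k]

new_k = attack_update([1.0, -2.0, 3.0], t=10, T=100, rng=random.Random(0))
```

Because n decays with t/T, the perturbations shrink as the search progresses, shifting effort from exploration to exploitation.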

2.3. Migration Behavior

The leader steps down and joins the migrating population if the fitness of the current individual is lower than that of a randomly selected individual, but it retains its position and continues to guide the population if its fitness is higher.
A reference individual $r_i$ is generated randomly from the population, and the reference fitness is computed as $F_{r_i} = f(k_t^{r_i,j})$.
If $F_i < F_{r_i}$, the leader steps down, and the update formula is as follows:
$$ k_{t+1}^{i,j} = k_t^{i,j} + C(0,1) \cdot (k_t^{i,j} - L_t^j) $$
If $F_i \geq F_{r_i}$, the leader retains its position, and the update formula is as follows:
$$ k_{t+1}^{i,j} = k_t^{i,j} + C(0,1) \cdot (L_t^j - m \cdot k_t^{i,j}) $$
where $L_t^j$ represents the leader of the black-winged kites in the $j$th dimension at the $t$th iteration, $k_t^{i,j}$ represents the current position of the black-winged kite individual labeled $i$, $k_{t+1}^{i,j}$ represents the new position after the update, $m = 2 \cdot \sin(r + \pi/2)$, and $C(0,1) = \tan((u - 0.5) \cdot \pi)$ is a standard Cauchy-distributed random value generated from a uniform random number $u \in (0, 1)$.
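A minimal Python sketch of the migration step, including the Cauchy sample C(0,1) generated via the tangent transform of a uniform random number (function names and the argument layout are illustrative, not the authors' code):

```python
import math
import random

def cauchy01(rng):
    """Standard Cauchy sample via the inverse CDF: tan((u - 0.5) * pi)."""
    return math.tan((rng.random() - 0.5) * math.pi)

def migrate(k, leader, fit_i, fit_ref, rng):
    """BKA migration step for one individual (minimization assumed).

    fit_i is the individual's fitness and fit_ref the fitness of a
    randomly chosen reference individual; the two branches follow the
    step-down and retain-position updates described above.
    """
    m = 2 * math.sin(rng.random() + math.pi / 2)
    if fit_i < fit_ref:   # leader steps down
        return [x + cauchy01(rng) * (x - L) for x, L in zip(k, leader)]
    # leader retains its position
    return [x + cauchy01(rng) * (L - m * x) for x, L in zip(k, leader)]

new_k = migrate([1.0, 2.0], leader=[0.5, 1.5], fit_i=3.0, fit_ref=5.0,
                rng=random.Random(1))
```

The heavy tail of the Cauchy distribution occasionally produces very large steps, which is what lets migrating individuals escape local basins.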

3. Enhanced Black-Winged Kite Optimization

While the Black-Winged Kite Algorithm (BKA) presents some unique advantages, it may experience difficulties when facing problems with highly nonlinear or multiple local optimal solutions. Specifically, the BKA sometimes converges to suboptimal solutions too quickly, meaning that it may stop searching without fully exploring all potential solutions. This premature convergence phenomenon is particularly evident in high-dimensional and complex problems, which may result in the need for larger population sizes and more iterations to achieve satisfactory results, thereby increasing computational costs. Furthermore, the BKA is very sensitive to parameter selection, especially when dealing with high-dimensional problems. If the parameters are not set properly, the algorithm may not be able to search effectively and may even converge too prematurely. Moreover, as the problem dimension increases, the BKA’s local search ability appears to be relatively insufficient, and it struggles to escape from the local optimal solution and find the global optimal solution. It is possible to integrate the PSO mechanism and random elite differential mutation strategies into the basic BKA to enhance the BKA’s ability to tackle high-dimensional and complex nonlinear problems. This hybrid approach leverages the strengths of both methods to overcome stagnation and improve convergence during the BKA search process.

3.1. PSO

Particle Swarm Optimization (PSO) is an optimization algorithm based on group behavior. By simulating the social behavior of birds or fish, individuals follow the current optimal solution in the solution space and constantly update their own positions to find the global optimal solution. The PSO component updates the particles' velocities and positions using cognitive and social components.
Velocity Update:
$$ V_{t+1}^{i,j} = w \cdot V_t^{i,j} + c_1 \cdot rand \cdot (P_t^{i,j} - k_t^{i,j}) + c_2 \cdot rand \cdot (L_t^j - k_t^{i,j}) $$
Position Update:
$$ k_{t+1}^{i,j} = k_t^{i,j} + V_{t+1}^{i,j} $$
where $w$ is the inertia weight controlling the balance between exploration and exploitation, $c_1$ and $c_2$ are the learning factors for the personal and global components, $P_t^{i,j}$ is the particle's best-known (personal best) position, and $L_t^j$ is the global best position across all particles.
In the traditional BKA algorithm, the search process primarily focuses on local exploration, which, while efficient within a specific area of the solution space, tends to get stuck in local minima. The global search mechanism of PSO addresses this shortcoming by introducing a broader exploration strategy; this enables the algorithm to explore a broader spectrum of potential solutions. PSO achieves this by updating particle positions based on two primary influences, namely the global best (collective experience) and the personal best (individual experience), leading to more thorough global exploration across the solution space [46].
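The velocity and position updates described above correspond to the standard PSO rule, which can be sketched as follows (the default w, c1, and c2 are illustrative values, not the paper's tuned settings):

```python
import random

def pso_update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Standard PSO velocity and position update for one particle.

    v <- w*v + c1*rand*(pbest - x) + c2*rand*(gbest - x);  x <- x + v.
    """
    rng = rng or random.Random()
    new_v = [w * vi
             + c1 * rng.random() * (pb - xi)
             + c2 * rng.random() * (gb - xi)
             for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v

x2, v2 = pso_update([0.0, 0.0], [0.1, -0.1],
                    pbest=[1.0, 1.0], gbest=[2.0, 2.0],
                    rng=random.Random(0))
```

The cognitive term pulls the particle toward its own best-known position and the social term toward the swarm's global best, which is the broad exploration behavior the hybrid relies on.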

3.2. Random-Elite Difference Mutation

Random-Elite Difference Mutation is a probability-based variant strategy that guides individuals to evolve towards better solutions by combining global optimal solutions and randomly selected individual differences while introducing randomness to enhance population diversity. It works as follows:
(1)
Random Selection for Mutation: A value for rand is randomly generated within the range [0, 1]. If rand > p, the mutation operation is triggered. Otherwise, the individual solution remains unchanged. This probabilistic approach ensures that not every individual undergoes mutation, promoting exploration and stability.
(2)
Mutation Formula: When mutation occurs, the new solution is computed using the following formula:
$$ k_{t+1}^{i,j} = k_t^{i,j} + F \cdot (L_t^j - k_t^{i,j}) + F \cdot (BK_{r_1,j} - BK_{r_2,j}) $$
$k_t^{i,j}$ is the current position vector of an individual.
$L_t^j$ represents the global best solution found so far.
$BK$ is the matrix of all individuals in the population.
$F$ is a scaling factor that controls the magnitude of the mutation.
$r_1$ and $r_2$ are the indices of two distinct population members selected at random for the second mutation term.
The formula consists of two parts:
Global Best Influence: $F \cdot (L_t^j - k_t^{i,j})$ represents the difference between the global best solution and the current individual. By scaling this difference by the factor $F$, the individual is attracted towards the best solution found so far.
Random Population Influence: $F \cdot (BK_{r_1,j} - BK_{r_2,j})$ introduces a random component by selecting two different individuals from the population. This term introduces more diversity by altering the individual based on the difference between two randomly chosen solutions.
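Putting the trigger probability and the two difference terms together, the Random-Elite Difference Mutation can be sketched as follows (an illustrative Python sketch; the values of F and p are assumptions, not the paper's settings):

```python
import random

def elite_diff_mutation(pop, i, gbest, F=0.5, p=0.5, rng=None):
    """Random-Elite Difference Mutation for individual i (minimal sketch).

    The mutation fires only when rand > p; otherwise the individual is
    returned unchanged, which keeps part of the population stable.
    """
    rng = rng or random.Random()
    if rng.random() <= p:          # no mutation this time
        return pop[i][:]
    # two distinct random members (excluding i) for the difference term
    r1, r2 = rng.sample([j for j in range(len(pop)) if j != i], 2)
    return [x + F * (g - x) + F * (a - b)
            for x, g, a, b in zip(pop[i], gbest, pop[r1], pop[r2])]

population = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
mutated = elite_diff_mutation(population, i=0, gbest=[0.5, 0.5],
                              p=0.0, rng=random.Random(2))
unchanged = elite_diff_mutation(population, i=0, gbest=[0.5, 0.5],
                                p=1.0, rng=random.Random(3))
```

Setting p = 0 forces the mutation for demonstration, while p = 1 always leaves the individual untouched.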
Additional mutations using a custom mutation operator are applied to improve diversity [47]. Within the BKA phase, this differential mutation operator is applied to each newly generated position. The mutation mechanism introduces new solutions, enhances population diversity, and prevents premature convergence to a local optimum, helping the algorithm maintain strong exploration ability on complex problems. To increase population diversity, the Random-Elite Difference Mutation strategy is therefore incorporated into the algorithm, with the mutation operation governed by a probability p.
The Particle Swarm Optimization (PSO) mechanism was selected due to its established global search ability and effective convergence. The velocity update rule of PSO integrates both cognitive (personal best) and social (global best) components, allowing the algorithm to explore search spaces effectively while exploiting high-quality solutions. This feature makes PSO particularly suitable for accelerating convergence in complex optimization problems. To address the limitation that traditional mutation operators usually lack direction, the Random-Elite Difference Mutation strategy has been incorporated. This strategy introduces controlled diversity by utilizing the differences between elite and random solutions to investigate new areas of the search space without sacrificing convergence speed. By integrating these strategies, global search capabilities are improved and convergence is expedited, making the algorithm highly effective in addressing complex optimization challenges. The synergy between these mechanisms improves robustness and adaptability, ensuring strong performance in complex situations. The computational complexity of the BKAPI is analyzed to measure its runtime consumption. Its components are as follows: initialization O(N), position updates of the search agents O(Tmax × N × Dim), and fitness evaluation O(Tmax × N), where N stands for the population size, Dim represents the dimension, and Tmax is the maximum number of iterations. Consequently, the total computational complexity of the BKAPI is O(N) + O(Tmax × N × Dim) + O(Tmax × N) = O(Tmax × N × Dim).
In the hybrid improved BKA based on the PSO algorithm, PSO's global search is alternated with the BKA's local search, allowing the BKAPI to first explore the global search space and then focus on refining solutions through local searches. The pseudocode of the BKAPI (Algorithm 1) is provided, offering a clear description of the execution process, and the flow chart of the BKAPI is shown in Figure 1. Specifically, the hybrid strategy uses the PSO algorithm for a global search during the early iterations, where $w$ represents the inertia weight and $c_1$ and $c_2$ are the individual and global learning factors guiding particles toward the best solutions. In contrast, the BKA focuses on more fine-tuned local exploration, especially after promising global solutions are identified. By combining these two components, the algorithm explores the search space with greater effectiveness, allowing individuals to evolve toward effective solutions while maintaining diversity.
Algorithm 1 BKAPI algorithm
Require:  p o p : Population size.
d i m : Dimension of the problem.
u b , l b : Upper and lower bounds for each dimension.
T: Maximum number of iterations.
f o b j : Objective function.
Ensure:  X b e s t : The best quasi-optimal solution obtained by BKAPI for a given optimization problem.
F b e s t : The fitness value of the best solution.
  • Initialization phase: Initialize the population positions, PSO velocities, personal best positions, global best position, and global best fitness.
  • for  t = 1 to T do
  •     BKA Phase: Local Search (Attacking behavior)
  •     if  p < r  then
  •         Update the position applying Equation (2).
  •     else if  p ≥ r  then
  •         Update the position applying Equation (3).
  •     end if
  •     Differential Evolution
  •     Update the position applying Equation (9).
  •     Evaluate the fitness objective function.
  •     Migration behavior
  •     Generate a reference individual r i randomly from the population.
  •     Compute reference fitness F r i = f ( k t r i , j ) .
  •     if  F i < F r i  then
  •         Update the position applying Equation (5).
  •     else
  •         Update the position applying Equation (6).
  •     end if
  •     Evaluate the fitness objective function.
  •     PSO Phase: Global Search
  •     Update velocity and position using Equation (7).
  •     Boundary handling
  •     Ensure the position stays within bounds.
  •     Evaluate the fitness objective function.
  •     Select the best individual
  •     if  f ( k t + 1 i , j ) < f ( L t j )  then
  •          X b e s t = k t + 1 i , j , F b e s t = f ( k t + 1 i , j )
  •     else
  •          X b e s t = L t j , F b e s t = f ( L t j )
  •     end if
  •     Update global best
  • end for
  • return Best Fitness and Best Position.
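For concreteness, the overall loop of Algorithm 1 can be approximated with a compact, self-contained sketch on a simple sphere function (this is an illustrative reconstruction, not the authors' MATLAB implementation; all parameter values, the choice of reference fitness in the migration step, and the exact phase ordering are assumptions):

```python
import math
import random

def bkapi_minimize(fobj, dim, lb, ub, pop=20, T=100, seed=0):
    """Compact, illustrative BKAPI loop in the spirit of Algorithm 1.

    Per iteration: BKA attack update -> random-elite difference
    mutation -> migration against a random reference -> PSO refinement,
    always tracking personal and global bests (minimization).
    """
    rng = random.Random(seed)
    clip = lambda xs: [min(ub, max(lb, v)) for v in xs]
    X = [[lb + rng.random() * (ub - lb) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    P = [x[:] for x in X]                      # personal best positions
    Pf = [fobj(x) for x in X]                  # personal best fitness
    g = min(range(pop), key=lambda i: Pf[i])
    gbest, gfit = P[g][:], Pf[g]
    for t in range(1, T + 1):
        n = 0.05 * math.exp(-2.0 * (t / T) ** 2)
        for i in range(pop):
            r = rng.random()
            if 0.9 < r:                        # BKA hovering (local) branch
                cand = [x + n * (1 + math.sin(r)) * x for x in X[i]]
            else:                              # BKA dive-attack (global) branch
                cand = [x + n * (2 * rng.random() - 1) * x for x in X[i]]
            if rng.random() > 0.5:             # random-elite difference mutation
                r1, r2 = rng.sample([j for j in range(pop) if j != i], 2)
                cand = [x + 0.5 * (gb - x) + 0.5 * (a - b)
                        for x, gb, a, b in zip(cand, gbest, X[r1], X[r2])]
            cand = clip(cand)
            # migration: compare against a random reference individual
            C = math.tan((rng.random() - 0.5) * math.pi)
            m = 2 * math.sin(rng.random() + math.pi / 2)
            if fobj(cand) < Pf[rng.randrange(pop)]:
                cand = [x + C * (x - gb) for x, gb in zip(cand, gbest)]
            else:
                cand = [x + C * (gb - m * x) for x, gb in zip(cand, gbest)]
            cand = clip(cand)
            # PSO refinement toward personal and global bests
            V[i] = [0.7 * v + 1.5 * rng.random() * (p - x)
                    + 1.5 * rng.random() * (gb - x)
                    for v, x, p, gb in zip(V[i], cand, P[i], gbest)]
            X[i] = clip([x + v for x, v in zip(cand, V[i])])
            f = fobj(X[i])
            if f < Pf[i]:
                P[i], Pf[i] = X[i][:], f
            if f < gfit:
                gbest, gfit = X[i][:], f
    return gbest, gfit

sphere = lambda x: sum(v * v for v in x)
best, best_fit = bkapi_minimize(sphere, dim=5, lb=-10.0, ub=10.0, seed=0)
```

Even this stripped-down version shows the division of labor: the BKA branches and the Cauchy-based migration supply exploration, the mutation keeps diversity, and the PSO step exploits the bests found so far.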

3.3. Ablation Study of BKAPI

The ablation study of the BKAPI involves analyzing its comparative performance against related variants, namely the hybrid Black-Winged Kite Algorithm based on PSO (BKAP), the original BKA, and the improved Black-Winged Kite Algorithm based on mutation operations (BKAI), using the provided radar plots and ranking bar charts (Figure 2 for CEC 2017 and Figure 3 for CEC 2022). The radar plots depict the BKAPI's superior performance across multiple functions, with its position consistently closer to the center, indicating lower rankings and better stability. In contrast, the BKA exhibits the widest spread, suggesting higher variance and weaker performance. The ranking bar charts further confirm the BKAPI's effectiveness. For the CEC 2017 dataset, the BKAPI achieves the lowest average rank (1.79), followed by BKAP (2.03), BKAI (3.03), and BKA (3.14). Similarly, for the CEC 2022 dataset, the BKAPI maintains the best rank (1.33), with BKAP at 2.17, BKA at 3.08, and BKAI at 3.42. The consistent trend across both datasets highlights the BKAPI's robustness in comparison to its alternatives.
The study also provides insight into the impact of individual components within BKAPI. The significant improvement from BKAP to BKAPI suggests that the additional mechanisms incorporated into BKAPI play a crucial role in optimizing performance. The relatively poor performance of BKA further underscores the importance of these enhancements, as its lack of improvements results in greater instability and lower rankings. The combination of radar plots and ranking metrics solidifies BKAPI’s dominance, demonstrating its efficiency and stability across diverse test functions. Overall, the ablation study confirms that BKAPI is the most effective algorithm among its variants, making it the optimal choice for solving complex optimization problems.

4. Experimental Analysis

Test sets were selected based on research requirements and algorithm characteristics to assess the improved Black-Winged Kite Algorithm (BKAPI). Classic functions were used for preliminary verification, while the CEC series was employed for in-depth evaluation. Following the principle that multiple test sets offer a thorough evaluation of the performance, CEC 2017 was selected to validate the effectiveness for complex continuous optimization problems, and CEC 2022 was selected to evaluate its performance in the latest complex optimization scenarios. These test functions are used to assess the performance of optimization algorithms across various complex problem scenarios; to ensure fairness, all algorithms were executed on the same computational platform under consistent conditions during the experiments.

4.1. Experimental Setting

Choosing appropriate comparison algorithms is critical to ensuring that the results of the experiments are both scientific and comparable. Effective selection helps to evaluate the new algorithm's performance and offers insights for potential improvements [48]. The following criteria should be considered when selecting comparison algorithms. Classic benchmark algorithms: these are well-established algorithms that ensure the experimental results are widely accepted and comparable. Algorithms with similar mechanisms: these allow for a detailed study of the unique aspects of the new algorithm relative to methods with similar features. Recently proposed algorithms: incorporating the latest algorithms ensures that the results are timely and relevant to current optimization challenges. Specialized algorithms for specific problems: these are selected to evaluate the new algorithm's performance in specialized application scenarios, offering insights into its practical effectiveness. According to the above principles, seven algorithms were selected for comparison, namely the BKA, PSO, the Genetic Algorithm (GA) [25], the Cuckoo Optimization Algorithm (COA) [49], Beluga Whale Optimization (BWO) [50], the Aquila Optimizer (AO) [51], and the Starfish Optimization Algorithm (SFOA) [52], based on the test sets CEC 2017 and CEC 2022. The experiments were run on a machine equipped with an Intel® Core Ultra 9 185H 2.30 GHz processor (Intel Corporation, Santa Clara, CA, USA) and a Windows 11 operating system.
The implementation of all algorithms was carried out using MATLAB 2024a, with a population size set to 30 and 300 iterations per run, and each algorithm was executed 30 times. The dimensionality of the problem was set to 10 and 100. The selection of these parameters aimed to find an equilibrium between computational efficiency, statistical validity, and algorithm performance. This configuration represents a common experimental design in optimization algorithm research, effectively assessing algorithmic performance while conserving computational resources. If higher precision or the handling of more complex problems is required, these parameters can be appropriately adjusted (e.g., by increasing the population size).

4.2. Results and Analysis of the Test Functions for CEC 2017 with Dimensions of 10 and 100

Categories of test functions (F1–F30) include unimodal functions with Gaussian noise (F1–F3) that can test an algorithm’s convergence ability in smooth, unimodal landscapes. Simple multimodal functions (F4–F10) can simulate scenarios with multiple local minima. Hybrid functions (F11–F20) can combine several basic functions into one, creating heterogeneous landscapes. Composition functions (F21–F30) can compound multiple multimodal functions into a single problem to simulate highly complex landscapes [41]. Std and avg represent the standard deviation and average, respectively, as shown in Table 1 and Table 2.
The BKAPI was evaluated against seven other intelligent optimization algorithms: the BKA, PSO, the GA, the COA, BWO, AO, and the SFOA. The BKAPI consistently excels in both average performance and stability (standard deviation) across a range of functions. It frequently achieves the lowest avg and std values, demonstrating both superior optimization and stability. For F1, the BKAPI achieves the best results with an avg value of 5.71 × 10^3 and a std value of 5.05 × 10^3, showing high optimization capability and stability. Other algorithms, like BWO and AO, show extremely high avg and std values, indicating poor performance. For F3, the BKAPI and PSO deliver a similar avg value of 3.00 × 10^2, but the BKAPI has a much smaller std value, 3.74 × 10^-11, demonstrating far better stability. The GA and COA show higher variability and worse fitness, with an avg value reaching 5.47 × 10^4; this indicates that the BKAPI exhibits the strongest convergence capability in smooth, unimodal landscapes. For functions F4 to F10, the BKAPI achieves both the lowest average values and the smallest standard deviations, as indicated in bold. BWO and AO exhibit significantly higher avg and std values, reflecting inefficiency in tackling these problems. The analysis demonstrates that the BKAPI exhibits a robust overall optimization capability, effectively preventing premature convergence and avoiding local optima. The BKAPI can achieve the optimal solution within the search space, highlighting its efficiency in finding globally optimal solutions across various test functions. For F11–F20, while most algorithms perform poorly (e.g., an avg value of 1.04 × 10^4 for AO on F11), the BKAPI maintains impressive avg values (1.14 × 10^3) and small std values (4.88 × 10^1) on F11. The BKAPI also significantly outperforms the other algorithms on F18, with an avg value of 2.16 × 10^4 and a std value of 1.62 × 10^4.
The BKA follows closely, but algorithms like the GA demonstrate poor performance, which shows that the BKAPI features an extensive search range, high population diversity, and robust global exploration capabilities. Incorporating Random-Elite Difference Mutation enhances population diversity, effectively preventing premature convergence. This mechanism filters out the best search agents, thereby strengthening local search precision. The complementary advantages of these features ensure that the algorithm avoids search stagnation. For functions F21 to F30, the differences between the eight algorithms are not significant; however, overall, the BKAPI demonstrates the best performance. This indicates that the BKAPI is more capable of solving highly complex landscape problems compared to the BKA algorithm, which highlights one of the main limitations of the BKA.
The BKAPI has the lowest std values across most functions, highlighting its robustness and reliability. Algorithms like the COA, BWO, and AO often exhibit high std values, reflecting inconsistency in optimization performance. PSO performs reasonably well in terms of std values but falls short compared to the BKAPI. The BKAPI generally surpasses the BKA in both avg values and std values, showcasing the advantage of the hybrid integration with enhanced global search and faster convergence. PSO occasionally matches the BKAPI in avg values (e.g., F3), but its higher std values suggest lower reliability. The BKAPI consistently outperforms the GA, COA, BWO, AO, and SFOA in both the average fitness and spread of results across nearly all functions. Its superior optimization performance and stability make the BKAPI highly effective for addressing diverse and complex problems in the CEC 2017 benchmark. Algorithms like PSO and the BKA are competitive in specific cases but lack the consistency of the BKAPI. The GA, COA, and AO show significant limitations, with high fitness values and variability across functions, especially for complex or high-dimensional problems.
The Wilcoxon rank-sum test is employed to assess whether there is a statistically significant difference between two independent data sets. In this context, it was used to assess whether the performance of the BKAPI differed significantly from that of the other algorithms [53]. We examined the table for p-values greater than 0.05 (5 × 10⁻²), a commonly used significance threshold in statistics; the cases where the p-value exceeded this threshold are described in Table 3. A p-value greater than 5 × 10⁻² typically suggests that the difference is not statistically significant and may be attributed to random variation rather than a genuine effect. From the data, it is observed that the COA, BWO, AO, and SFOA frequently yield the smallest p-value, 1.73 × 10⁻⁶, indicating that the BKAPI demonstrates statistically significant differences on the corresponding test functions. Conversely, PSO, the BKA, and the SFOA display larger p-values for multiple functions, suggesting that their performance is occasionally similar. This similarity arises because the BKAPI builds upon and improves the features of the BKA and PSO. Additionally, the SFOA, as a newly proposed and effective algorithm, also shows competitive performance in certain cases, reflecting its potential alongside the BKAPI in achieving high optimization accuracy.
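The significance test used here can be reproduced with SciPy's rank-sum implementation. The per-run fitness arrays below are synthetic placeholders standing in for two algorithms' results over 30 independent runs:

```python
import numpy as np
from scipy.stats import ranksums

# Hypothetical per-run best fitness values for two algorithms on one function
# (30 independent runs each, as in the experimental protocol).
rng = np.random.default_rng(1)
bkapi_runs = rng.normal(loc=100.0, scale=1.0, size=30)
other_runs = rng.normal(loc=105.0, scale=5.0, size=30)

# Two-sided Wilcoxon rank-sum test: small p-value -> the two samples
# are unlikely to come from the same distribution.
stat, p_value = ranksums(bkapi_runs, other_runs)
significant = p_value < 0.05   # the 5 x 10^-2 threshold used in the paper
```

A p-value below the 0.05 threshold, as obtained here for clearly separated samples, is what the tables report as a statistically significant difference.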
The images above present parts of the fitness convergence plots comparing the optimization algorithms across benchmark objective functions of CEC 2017 with dimensions of 10 (Figure 4) and 100 (Figure 5). The number of iterations is represented on the x-axis, while the average fitness values are shown on the y-axis, on either a logarithmic or linear scale depending on the function. Performance improves as the fitness value decreases, since the goal is to minimize the objective function. The BKAPI (magenta line in Figure 4; dark wine red line in Figure 5) shows the fastest and most effective convergence, rapidly reaching the minimum fitness value on functions such as F1, F5, F6, F9, F10, F20, F23, and F29 with dimensions of 10 and F1, F5, F6, F9, F21, and F30 with dimensions of 100. Algorithms like the BKA and PSO exhibit slower convergence and higher final fitness values, indicating suboptimal performance. On certain other functions, the BKAPI is not the best performer, which is to be expected. Overall, the BKAPI consistently demonstrates superior convergence speed and optimization accuracy across the tested functions, making it the most robust and effective among the compared methods. Other algorithms, such as the BKA and AO, show moderate performance, while methods like the GA, PSO, and BWO generally underperform.
Parts of the ANOVA test results with dimensions of 10 are presented in Figure 6, while those with dimensions of 100 are presented in Figure 7. The standard deviation acts as a clear indicator of each algorithm’s stability in solving optimization problems; a smaller standard deviation reflects stronger overall optimization ability and stability. Consequently, the algorithms are ranked based on their standard deviations. The BKAPI integrates a hybrid algorithm strategy and a mutation strategy, which enhance its performance, resulting in improved convergence rates and calculation precision. For functions F1–F3, the BKAPI exhibits the smallest standard deviation among all algorithms, ranking it first. This indicates its strong stability, superior optimization ability, and practicality in solving unimodal functions. For functions F4–F10, the BKAPI employs additional strategies to expand the population space and avoid local optima, enhancing both its global and local search abilities. Its standard deviation remains smaller than those of other algorithms, further demonstrating its stability. Similarly, for functions F11–F20, BKAPI demonstrates strong overall search capabilities, enabling it to reliably identify precise solutions within the search space. Its performance, characterized by a low standard deviation, underscores its superiority and stability. For F21 and F23, the BKAPI and BKA are clearly dominant, with minimal variation and consistently low fitness values. For F24 to F26, the trend continues, with the BKAPI maintaining its edge. The SFOA occasionally shows competitive results but lacks consistency. For functions F27–F30, the BKAPI maintains consistent stability with relatively small standard deviations compared to the other algorithms. This stability highlights its effectiveness in tackling complex function problems. Metrics such as the mean value, standard deviation, and p-value are used as evaluation indicators to further verify its robustness. 
The robustness of BKAPI is evident in several aspects: it balances exploration and exploitation to achieve faster convergence and higher precision, maintains a relatively small standard deviation indicative of strong optimization ability, and ensures that even in cases of higher deviation, catastrophic or combinatorial failures are avoided.
Figure 8a shows a radar chart comparing the algorithms across 29 test functions with dimensions of 10, while Figure 9a shows the same with dimensions of 100. Each axis corresponds to a specific test function, and the larger the value along an axis, the worse the algorithm’s performance; a smaller overall shape in the radar chart therefore indicates better performance. Different colors and symbols represent the algorithms, with the BKAPI (blue dots) exhibiting the smallest enclosed area, signifying superior performance on most test functions.
The comparison chart of algorithm performance based on average ranks with dimensions of 10, where a smaller ranking value reflects better performance, is shown in Figure 8b. The x-axis lists the optimization algorithms (the BKAPI, BKA, PSO, GA, COA, BWO, AO, and SFOA), the y-axis represents the average rank of each algorithm, and the bars (one color per algorithm) show these average ranks; a trend line connecting the values provides a visual representation of ranking changes across algorithms. Among all algorithms, the BKAPI achieves the best results, consistently ranking lowest, with an average rank of 1.72 with dimensions of 10 and 1.52 with dimensions of 100 (Figure 9b). The SFOA (Starfish Optimization Algorithm) delivers a mid-range performance (its rank is 3.21 with dimensions of 100), indicating solid optimization capabilities, and PSO also performs competitively (its rank is 2.24 with dimensions of 100). In contrast, the GA, COA, AO, and BWO demonstrate relatively poor performance, with higher average rankings across the test functions. While these algorithms may excel on certain specific functions, their overall average rankings are not as favorable as those of the BKAPI, BKA, and SFOA.
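The average-rank statistic underlying these comparison charts can be computed as sketched below. The small mean-fitness table is hypothetical and serves only to illustrate the ranking procedure (rank per function, then average down each column):

```python
import numpy as np
from scipy.stats import rankdata

# Hypothetical mean-fitness table: rows = test functions, columns = algorithms.
algorithms = ["BKAPI", "BKA", "PSO", "GA"]
mean_fitness = np.array([
    [1.0, 2.5, 2.0, 9.0],   # F1
    [0.5, 0.7, 1.5, 4.0],   # F2
    [3.0, 3.2, 2.9, 8.0],   # F3
])

# Rank algorithms within each function (1 = best; ties get average ranks),
# then average the per-function ranks for each algorithm.
ranks = np.apply_along_axis(rankdata, 1, mean_fitness)
avg_rank = ranks.mean(axis=0)
best = algorithms[int(np.argmin(avg_rank))]
```

For this toy table, the BKAPI ranks first on F1 and F2 and second on F3, giving it the lowest average rank and thus the top overall position, mirroring how the 1.72 and 1.52 figures in the text are obtained.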
Overall, the BKAPI showcases significant advantages in optimization, reflecting strong and stable performance across multiple test functions. These observations are consistent with the results presented in Table 1, Table 2 and Table 3 and with Figure 4, Figure 6 and Figure 8 for dimensions of 10 and Figure 5, Figure 7 and Figure 9 for dimensions of 100, further confirming the BKAPI’s dominance and stability in function optimization tasks in high-dimensional spaces.

4.3. Results and Analysis of the Test Functions for CEC 2022

The CEC 2022 test suite contains 12 single-objective test functions (like CEC 2017): a unimodal function (F1), basic functions (F2–F5), hybrid functions (F6–F8), and composition functions (F9–F12) [54]. Table 4 shows the standard deviation and average fitness value results for the CEC 2022 test set.
The BKAPI achieves the lowest mean and standard deviation values across most functions, indicating its superior efficiency and robustness in optimization. It excels particularly in F1 (avg: 3.00 × 10², std: 1.07 × 10⁻¹⁰) and F5 (avg: 9.01 × 10², std: 1.48 × 10⁰). The SFOA emerges as a strong competitor, especially in F8, F10, F11, and F12, demonstrating excellent consistency (e.g., F12: std: 1.65 × 10⁰). The BKA and PSO show competitive but less consistent results, trailing the BKAPI in most cases. Algorithms like the GA, COA, and BWO generally underperform, with higher avg and std values, highlighting their instability in handling complex functions.
We analyze Table 5 for p-values greater than 0.05 (5 × 10⁻²) by identifying and comparing the cases where the p-value exceeds this threshold. From the data, it is observed that the GA, COA, BWO, and AO frequently yield the smallest p-value, 3.02 × 10⁻¹¹, indicating that the BKAPI demonstrates statistically significant differences on the corresponding test functions.
The results of the convergence curve and ANOVA test on partial functions of CEC 2022 for the algorithms are presented in Figure 10 and Figure 11. The BKAPI exhibits the fastest convergence speed and optimal fitness value among all test functions, making it the most stable and effective algorithm. The BKA also exhibits good performance but is slightly inferior to the BKAPI in some functions. PSO and the GA perform well on most functions, but not as well as the BKAPI and BKA. The COA, BWO, AO, and SFOA perform poorly on most functions. These analyses indicate that the BKAPI is the most reliable and effective algorithm suitable for these test functions.
The radar chart illustrates the performance comparison of various optimization algorithms across 12 test functions, as shown in Figure 12a. Each algorithm is represented by distinct colors and symbols, with the BKAPI (denoted by blue dots) showing the smallest enclosed area, which signifies its superior performance across most of the test functions. Figure 12b presents the average ranking of the algorithms, where a lower ranking number reflects better overall performance. Among all the algorithms evaluated, the BKAPI stands out with the best results, maintaining a consistently low rank of 2.00. The BKA and SFOA also perform admirably, achieving relatively low average rankings that highlight their robust optimization capabilities. PSO demonstrates competitive performance but ranks slightly below the BKA and SFOA. In contrast, the GA, COA, AO, and BWO exhibit relatively weaker performance, as evidenced by their higher average rankings. Although these algorithms might excel on specific functions, their overall performance does not match the consistency and effectiveness of the BKAPI, BKA, and SFOA.
In summary, the BKAPI exhibits significant advantages in optimization tasks, showcasing its strong and stable performance across multiple test functions. The consistent top rankings of the BKAPI are further supported by the data in Table 4 and Table 5, as well as Figure 10 and Figure 11. The BKA and SFOA also prove to be versatile and reliable optimization algorithms, suitable for diverse application scenarios. This analysis underscores the BKAPI’s dominance and stability in optimizing a wide range of functions. Building on these promising results, the next step is to validate the BKAPI’s effectiveness in real-world engineering problems, where complex constraints and practical challenges further test its robustness and applicability.

5. Engineering Optimization Application

The BKAPI is applied to classic engineering verification problems (the welded beam design problem and the Himmelblau function) and an actual engineering application (a visible light positioning (VLP) system) to further demonstrate the effectiveness of the improvement strategies and the engineering applicability of the BKAPI. This section presents the problem formulations, the application of the BKAPI, and the results obtained, emphasizing its performance in tackling these challenging engineering optimization tasks.

5.1. Welded Beam Project

The objective of the welded beam design problem is to minimize the cost function f(x) while satisfying a set of seven inequality constraints (for details on the relevant introduction and calculation formulas, see [55]). These constraints include limitations on the shear stress ( τ ), beam bending stress ( σ ), buckling load of the bar ( P C ), end deflection of the beam ( δ ), and others. As illustrated in Figure 13, the design involves four variables: the height of the weld h ( x 1 ), the length of the weld l ( x 2 ), the thickness of the beam t ( x 3 ), and the width of the beam b ( x 4 ) [55].
Mathematically, the optimization model can be formulated as follows.

Consider the variable vector $x = [x_1, x_2, x_3, x_4] = [h, l, t, b]$.

Minimize
$$f(x) = 1.10471\, x_1^2 x_2 + 0.04811\, x_3 x_4 (14.0 + x_2)$$
subject to
$$g_1(x) = \tau(x) - \tau_{\max} \le 0$$
$$g_2(x) = \sigma(x) - \sigma_{\max} \le 0$$
$$g_3(x) = x_1 - x_4 \le 0$$
$$g_4(x) = 0.10471\, x_1^2 + 0.04811\, x_3 x_4 (14.0 + x_2) - 5.0 \le 0$$
$$g_5(x) = 0.125 - x_1 \le 0$$
$$g_6(x) = \delta(x) - \delta_{\max} \le 0$$
$$g_7(x) = P - P_c(x) \le 0$$
Boundary constraints and related parameters:
$$\tau(x) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{x_2}{2R} + (\tau'')^2}, \quad \tau' = \frac{P}{\sqrt{2}\, x_1 x_2}, \quad \tau'' = \frac{MR}{J}$$
$$M = P\left(L + \frac{x_2}{2}\right), \quad R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2}$$
$$J = 2\left[\sqrt{2}\, x_1 x_2 \left(\frac{x_2^2}{12} + \left(\frac{x_1 + x_3}{2}\right)^2\right)\right]$$
$$\sigma(x) = \frac{6PL}{x_3^2 x_4}, \quad \delta(x) = \frac{4PL^3}{E x_3^3 x_4}$$
$$P_c(x) = \frac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)$$
$$P = 6000~\text{lb}, \quad L = 14~\text{in}, \quad E = 30 \times 10^6~\text{psi}, \quad G = 12 \times 10^6~\text{psi},$$
$$\tau_{\max} = 13{,}600~\text{psi}, \quad \sigma_{\max} = 30{,}000~\text{psi}, \quad \delta_{\max} = 0.25~\text{in},$$
$$0.1 \le x_1 \le 2.0, \quad 0.1 \le x_2 \le 10.0, \quad 0.1 \le x_3 \le 10.0, \quad 0.1 \le x_4 \le 2.0.$$
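For reference, the welded beam objective and constraints translate directly into code. This is a plain evaluation sketch (a design is feasible iff all gᵢ(x) ≤ 0), not the BKAPI solver itself:

```python
import numpy as np

# Constants from the problem statement.
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam_cost(x):
    """Fabrication cost f(x) for x = [h, l, t, b]."""
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def welded_beam_constraints(x):
    """Return g1..g7; the design is feasible iff all values are <= 0."""
    h, l, t, b = x
    tau_p = P / (np.sqrt(2.0) * h * l)               # primary shear stress
    M = P * (L + l / 2.0)                            # bending moment
    R = np.sqrt(l**2 / 4.0 + ((h + t) / 2.0) ** 2)
    J = 2.0 * (np.sqrt(2.0) * h * l * (l**2 / 12.0 + ((h + t) / 2.0) ** 2))
    tau_pp = M * R / J                               # secondary shear stress
    tau = np.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * l / (2.0 * R) + tau_pp**2)
    sigma = 6.0 * P * L / (t**2 * b)                 # bending stress
    delta = 4.0 * P * L**3 / (E * t**3 * b)          # end deflection
    Pc = (4.013 * E * np.sqrt(t**2 * b**6 / 36.0) / L**2) \
         * (1.0 - t / (2.0 * L) * np.sqrt(E / (4.0 * G)))  # buckling load
    return np.array([
        tau - TAU_MAX,
        sigma - SIGMA_MAX,
        h - b,
        0.10471 * h**2 + 0.04811 * t * b * (14.0 + l) - 5.0,
        0.125 - h,
        delta - DELTA_MAX,
        P - Pc,
    ])
```

A metaheuristic such as the BKAPI would minimize `welded_beam_cost` with these constraints handled, e.g., by a penalty term added to the objective.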
The BKAPI was applied to solve the welded beam design problem with the following parameter settings: a population size of 30, a maximum of 300 iterations, a movement probability of 0.9, and an exchange probability of 0.9. The algorithm was executed independently to obtain the optimal value, average value, and function evaluation count for the objective function. The convergence curves and index statistics are shown in Figure 14 and Table 6, respectively. All algorithms demonstrate good convergence speed, and the BKAPI, PSO, and SFOA show good global optimization capabilities in Figure 14. The standard deviation of the BKAPI is 0.05, which is much better than the corresponding value of 0.65 for the BKA. The data in Table 6 reveal that the BKAPI achieved the best results in both the optimal value and the average value across 30 independent runs while also requiring the fewest function evaluations. The results in Table 7 show that the BKAPI obtains the best optimal value, 1.6702, indicating that the BKAPI can provide the best solutions for the welded beam design problem. The BKAPI is the best choice when accuracy and stability are the top priorities, despite its slower computational time (2.95 s). In conclusion, the BKAPI and SFOA stand out as the most effective algorithms, balancing optimization accuracy, consistency, and computational efficiency.

5.2. Himmelblau Function

D. M. Himmelblau introduced this problem to model process design challenges in his work [56]. It has since become a widely used benchmark for evaluating nonlinear constrained optimization algorithms. The problem involves five variables and six nonlinear constraints, with its detailed description provided in [57].
Minimize
$$f(\bar{x}) = 5.3578547\, x_3^2 + 0.8356891\, x_1 x_5 + 37.293239\, x_1 - 40792.141$$
subject to
$$g_1(\bar{x}) = -G_1 \le 0, \quad g_2(\bar{x}) = G_1 - 92 \le 0,$$
$$g_3(\bar{x}) = 90 - G_2 \le 0, \quad g_4(\bar{x}) = G_2 - 110 \le 0,$$
$$g_5(\bar{x}) = 20 - G_3 \le 0, \quad g_6(\bar{x}) = G_3 - 25 \le 0,$$
where
$$G_1 = 85.334407 + 0.0056858\, x_2 x_5 + 0.0006262\, x_1 x_4 - 0.0022053\, x_3 x_5,$$
$$G_2 = 80.51249 + 0.0071317\, x_2 x_5 + 0.0029955\, x_1 x_2 + 0.0021813\, x_3^2,$$
$$G_3 = 9.300961 + 0.0047026\, x_3 x_5 + 0.00125447\, x_1 x_3 + 0.0019085\, x_3 x_4,$$
within the bounds of
$$78 \le x_1 \le 102, \quad 33 \le x_2 \le 45, \quad 27 \le x_i \le 45 \;(i = 3, 4, 5).$$
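The Himmelblau objective and constraints above can likewise be written as plain evaluation functions (feasibility again means all gᵢ ≤ 0); this sketch is the fitness model an optimizer would call, not the BKAPI itself:

```python
def himmelblau_objective(x):
    """Himmelblau process-design objective for x = [x1, ..., x5]."""
    x1, x2, x3, x4, x5 = x
    return (5.3578547 * x3**2 + 0.8356891 * x1 * x5
            + 37.293239 * x1 - 40792.141)

def himmelblau_constraints(x):
    """Return g1..g6; the point is feasible iff all values are <= 0."""
    x1, x2, x3, x4, x5 = x
    G1 = (85.334407 + 0.0056858 * x2 * x5
          + 0.0006262 * x1 * x4 - 0.0022053 * x3 * x5)
    G2 = (80.51249 + 0.0071317 * x2 * x5
          + 0.0029955 * x1 * x2 + 0.0021813 * x3**2)
    G3 = (9.300961 + 0.0047026 * x3 * x5
          + 0.00125447 * x1 * x3 + 0.0019085 * x3 * x4)
    # 0 <= G1 <= 92, 90 <= G2 <= 110, 20 <= G3 <= 25, rewritten as g_i <= 0.
    return [-G1, G1 - 92.0, 90.0 - G2, G2 - 110.0, 20.0 - G3, G3 - 25.0]
```

Evaluating a near-optimal point from the literature (approximately x = [78, 33, 29.995, 45, 36.776]) gives an objective value close to −30,665 with the G1 and G3 constraints nearly active.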
The BKAPI was applied to solve this problem using the following settings: a population size of 30, 300 iterations, and 30 independent runs to obtain the optimal value, average value, and function evaluation count for the objective function [58]. The outcomes were evaluated against seven alternative algorithms, with the convergence trends illustrated in Figure 15 and the corresponding performance metrics detailed in Table 8 and Table 9. All algorithms exhibit good convergence speeds, with the BKAPI, BKA, and SFOA demonstrating strong global optimization capabilities in Figure 15. The optimal value is −30,186.151 and the average value is −30,617 for the BKAPI, demonstrating a significant advantage over the other algorithms. The computation time of the BKAPI (3.68 s) was on the slower side, but its superior performance may justify the extra computational time. These data indicate that the BKAPI achieved the best performance in terms of both the optimal value and the average value over 30 independent runs, while also requiring the lowest number of function evaluations in Table 9.
While the proposed BKAPI demonstrates significant improvements in global exploration, local search capabilities, and solution quality, it is important to acknowledge and discuss its limitations. One key limitation is its computational complexity, which arises from the integration of multiple mechanisms, including PSO’s velocity-update rule and the Random-Elite Difference Mutation strategy. These components, while enhancing performance, increase the computational overhead compared to the basic BKA algorithm. Additionally, the BKAPI’s performance is sensitive to hyperparameter settings, such as the swarm size in PSO and mutation rates. The improper tuning of these parameters can result in suboptimal performance or increased computational costs.
To address these limitations, a runtime analysis was conducted to compare the computational efficiency of the BKAPI with other state-of-the-art algorithms, including the basic BKA, PSO, GA, and SFOA. The analysis was performed on the CEC 2017 and CEC 2022 benchmark functions, which include a variety of high-dimensional and complex optimization problems. The results indicate that BKAPI, while slightly more computationally expensive than the basic BKA, achieves faster convergence rates and higher solution accuracy. This trade-off between computational cost and solution quality is justified, particularly for complex and multimodal problems where traditional algorithms often struggle.
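A runtime analysis of this kind reduces to timing repeated independent runs and averaging both the wall-clock time and the best fitness found. The harness below is a minimal sketch; the random-search "optimizer" is only a stand-in for the BKAPI or BKA, and all names are illustrative:

```python
import time
import numpy as np

def time_optimizer(optimizer, objective, runs=5):
    """Time `runs` independent runs of an optimizer and report the mean
    wall-clock time and mean best fitness (illustrative harness only)."""
    times, bests = [], []
    for seed in range(runs):
        t0 = time.perf_counter()
        best = optimizer(objective, seed)
        times.append(time.perf_counter() - t0)
        bests.append(best)
    return float(np.mean(times)), float(np.mean(bests))

# Stand-in "optimizer": random search over [-5, 5]^10 (placeholder for BKAPI/BKA).
def random_search(objective, seed, iters=2000, dim=10):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-5.0, 5.0, size=(iters, dim))
    return min(objective(p) for p in pts)

sphere = lambda x: float(np.sum(x**2))   # simple unimodal test objective
mean_t, mean_best = time_optimizer(random_search, sphere)
```

Comparing `mean_t` and `mean_best` across algorithms on the same benchmark set is exactly the cost-versus-quality trade-off discussed above.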

5.3. Visible Light Positioning (VLP) System

Visible light communication (VLC) technology has advanced rapidly in recent years, leveraging light-emitting diodes (LEDs) for data transmission. VLC offers several benefits, making it a promising solution for indoor positioning. Visible light positioning (VLP) systems improve LED utilization and enable high-precision positioning. These systems are divided into two categories based on the receiver type: image sensor-based systems, which use imaging and geometric relationships for positioning, and photodiode (PD)-based systems, which rely on methods like TOA, TDOA, AOA, and RSS to calculate distances or angles and estimate locations through triangulation [59,60,61].
This study used MATLAB as an experimental platform to design an experimental framework for a visible light indoor positioning system and to evaluate and compare the performance of different single-point estimation algorithms based on received signal strength (RSS). The experimental space was a three-dimensional area of 5 m × 5 m × 2 m, and the sampling points were generated on a 0.5 m grid along the x- and y-axes at a fixed height of z = 2 m. This sampling method guarantees a uniform distribution of test points in space, enabling a comprehensive assessment of performance at various locations [62]. To guarantee the dependability of the experimental outcomes, the two algorithms were run separately at each sampling point, and their estimated positions and positioning errors were recorded.
During the data acquisition process, a single-point estimation function was used to perform single-point estimation. This took the sample point coordinates as the input and returned the estimated position and positioning error. By looping through all sample points, the performance data of the two algorithms on each point were collected. These data included estimated position coordinates and corresponding positioning errors, laying the foundations for subsequent performance evaluation and visual analysis.
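The single-point estimation step can be illustrated with a simplified RSS model. The Lambertian channel, the LED layout, and the grid-search estimator below are all assumptions standing in for the paper's metaheuristic estimator; in the actual system, the search over candidate positions is what the BKA/BKAPI performs:

```python
import numpy as np

# Hypothetical setup: four ceiling LEDs, vertical LED-receiver separation H,
# Lambertian order m. All values are illustrative, not the paper's.
LEDS = np.array([[1.0, 1.0], [1.0, 4.0], [4.0, 1.0], [4.0, 4.0]])
H = 2.0        # vertical LED-to-receiver distance (m)
M_ORDER = 1.0  # Lambertian order
C_GAIN = 1.0   # lumped transmit-power / detector-gain constant

def rss_model(pos):
    """Modelled received power at horizontal position `pos` from each LED
    (Lambertian line-of-sight channel: P ~ H^(m+1) / d^(m+3))."""
    d = np.sqrt(np.sum((LEDS - pos) ** 2, axis=1) + H**2)
    return C_GAIN * H ** (M_ORDER + 1) / d ** (M_ORDER + 3)

def estimate_position(rss, grid_step=0.05):
    """Single-point estimate: pick the candidate position whose modelled RSS
    best matches the measurement in the least-squares sense (grid search
    used here in place of a metaheuristic)."""
    xs = np.arange(0.0, 5.0 + 1e-9, grid_step)
    best, best_err = None, np.inf
    for x in xs:
        for y in xs:
            err = np.sum((rss_model(np.array([x, y])) - rss) ** 2)
            if err < best_err:
                best, best_err = np.array([x, y]), err
    return best

true_pos = np.array([2.5, 3.0])
est = estimate_position(rss_model(true_pos))
error_cm = 100.0 * np.linalg.norm(est - true_pos)   # positioning error in cm
```

Looping this estimator over all sampling points and averaging `error_cm` yields exactly the average positioning error compared in the next paragraph.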
The location results of the BKA and BKAPI on all sampling points were obtained. First, we calculated the average positioning error for both algorithms through the collection and processing of experimental data. The results indicate that the average error of the BKAPI is 2.48148 cm, which is significantly better than the 8.10665 cm error for the BKA algorithm. This result shows that the position improvement mechanism introduced in the BKAPI effectively improves the positioning accuracy.
To provide a more intuitive demonstration of the comparison, a three-dimensional point cloud visualization was created. Figure 16 shows the distribution of sampling points, BKA estimation points, and BKAPI estimation points in three-dimensional space. As shown in the figures, the BKAPI estimation points (green asterisk) are closer to the actual sampling points (black dot) compared to the BKA estimation points (red circle), further confirming the superiority of the BKAPI.
In terms of error analysis, the positioning error curves of the two algorithms at all sampling points were drawn (Figure 17). The error curve (blue solid line) of the BKAPI is located below that of the BKA algorithm (black dashed line), indicating that the BKAPI exhibits better positioning accuracy at most sampling points. The errors of both algorithms in certain areas are relatively large, which may be related to signal propagation characteristics or environmental interference and thus deserve further research.
Finally, Figure 18 presents a bar chart comparing the average errors of the two algorithms; the error for the BKAPI is significantly lower than that for the BKA, and this result aligns with the previous analysis. Through this multi-dimensional performance evaluation and visual analysis, we not only verify the superiority of the BKAPI but also provide an important reference for subsequent algorithm improvements.
The experimental findings reveal that the BKAPI significantly outperforms the BKA in positioning accuracy, exhibiting a notably lower average positioning error. Through three-dimensional point cloud visualization and error analysis, the superiority and stability of BKAPI across different spatial locations are further confirmed.
In summary, while the BKAPI introduces additional computational complexity and requires careful hyperparameter tuning, its enhanced performance and versatility make it a promising approach for solving challenging optimization problems. Future work could focus on developing adaptive mechanisms to reduce hyperparameter sensitivity and further optimize computational efficiency.

6. Conclusions and Prospects

This article introduces an enhanced Black-Winged Kite Algorithm (BKA) integrated with Particle Swarm Optimization (PSO), designed to address functional optimization and engineering challenges. By combining the BKA’s search mechanisms with PSO’s global search capabilities and local exploitation potential and with a Random-Elite Difference Mutation strategy, the hybrid BKAPI achieves greater population diversity and prevents premature convergence. The BKAPI demonstrates reduced dependency on initial populations, adaptability to various optimization problem types, and versatility in solving continuous, discrete, and combinatorial optimization problems, thereby improving its robustness, adaptability, and broad applicability.
In benchmark testing for CEC 2017 and CEC 2022, the BKAPI demonstrates superior performance with regard to both the standard deviation and average results, highlighting its robust stability and global optimization capabilities. Convergence curve analysis shows that BKAPI significantly enhances global optimization ability and robustness, outperforming existing methods in engineering optimization problems. The experimental results confirm that the BKAPI improves both convergence speed and accuracy, making it highly suitable for practical engineering applications. These advancements position the hybrid algorithm as an effective solution for complex optimization tasks, such as indoor positioning and outdoor path planning, offering improved performance, greater solution efficiency, and enhanced robustness.
Future research directions include further optimization of the algorithm through additional channels, such as integrating it with deep learning models or testing it on large-scale industrial datasets. Building on our prior research in indoor positioning, the BKAPI can be applied to engineering applications like indoor positioning and robotic path planning. Specific recommendations for future work include the following:
  • Algorithm Enhancement: The BKAPI faces challenges related to computational cost, scalability, parameter sensitivity, and premature convergence. Future research should focus on adaptive parameter tuning, hybridization with machine learning, parallel computing, and theoretical analysis to address these limitations and enhance its performance.
  • Large-Scale Testing: The algorithm’s performance should be validated on large-scale industrial datasets to assess its scalability and efficiency in real-world scenarios.
  • Extension to Constrained Optimization: The BKAPI should be extended to handle constrained optimization problems, which are ubiquitous in real-world applications. This could involve incorporating constraint-handling techniques such as penalty functions, feasibility rules, or multi-archive strategies, as discussed in recent works [63,64].
By pursuing these directions, the BKAPI can be further refined and applied to a broader range of complex optimization challenges, solidifying its practical relevance and impact.

Author Contributions

Conceptualization, X.Z. and J.Z.; methodology, X.Z. and C.J.; software, X.Z. and Y.L.; validation, X.Z., J.Z. and C.J.; formal analysis, X.Z.; investigation, X.Z. and M.F.; resources, J.Z.; data curation, X.Z.; writing—original draft preparation, X.Z.; writing—review and editing, J.Z. and C.J.; visualization, Y.L.; supervision, M.F.; project administration, J.Z.; funding acquisition, M.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Anhui Provincial Department of Education: 2023AH010078, 2024AH051994, TCMADM-2024-07, Research Project of West Anhui University: WXZR202309, and High-level Talent Start-up Project of West Anhui University: WGKQ2024011, WGKQ2023003, WGKQ2021052, WGKO2020010005.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon request. There are no restrictions on data availability.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
BKA	Black-Winged Kite Algorithm
PSO	Particle Swarm Optimization
BKAPI	Hybrid Black-Winged Kite Algorithm with Particle Swarm Optimization
VLP	Visible Light Positioning

References

  1. Dong, Y.; Sun, C.; Han, Y.; Liu, Q. Intelligent optimization: A novel framework to automatize multi-objective optimization of building daylighting and energy performances. J. Build. Eng. 2021, 43, 102804. [Google Scholar] [CrossRef]
  2. Kamal, M.; Tariq, M.; Srivastava, G.; Malina, L. Optimized security algorithms for intelligent and autonomous vehicular transportation systems. IEEE Trans. Intell. Transp. Syst. 2021, 24, 2038–2044. [Google Scholar] [CrossRef]
  3. Li, W.; Wang, G.-G.; Gandomi, A.H. A survey of learning-based intelligent optimization algorithms. Arch. Comput. Methods Eng. 2021, 28, 3781–3799. [Google Scholar] [CrossRef]
  4. Liu, S.; Xiao, C. Application and comparative study of optimization algorithms in financial investment portfolio problems. Mob. Inf. Syst. 2021, 2021, 3462715. [Google Scholar] [CrossRef]
  5. Zhou, Y.; Rao, B.; Wang, W. UAV swarm intelligence: Recent advances and future trends. IEEE Access 2020, 8, 183856–183878. [Google Scholar] [CrossRef]
  6. Tang, J.; Duan, H.; Lao, S. Swarm intelligence algorithms for multiple unmanned aerial vehicles collaboration: A comprehensive review. Artif. Intell. Rev. 2023, 56, 4295–4327. [Google Scholar] [CrossRef]
  7. Zhou, M.C.; Cui, M.; Xu, D.; Zhu, S.; Zhao, Z.; Abusorrah, A. Evolutionary optimization methods for high-dimensional expensive problems: A survey. IEEE/CAA J. Autom. Sin. 2024, 11, 1092–1105. [Google Scholar] [CrossRef]
  8. Berahas, A.S.; Bollapragada, R.; Nocedal, J. An investigation of Newton-sketch and subsampled Newton methods. Optim. Methods Softw. 2020, 35, 661–680. [Google Scholar] [CrossRef]
  9. Li, M. Some New Descent Nonlinear Conjugate Gradient Methods for Unconstrained Optimization Problems with Global Convergence. Asia-Pac. J. Oper. Res. 2024, 41, 2350020. [Google Scholar] [CrossRef]
  10. Upadhyay, B.B.; Pandey, R.K.; Liao, S. Newton’s method for interval-valued multiobjective optimization problem. J. Ind. Manag. Optim. 2024, 20, 1633–1661. [Google Scholar] [CrossRef]
  11. Altay, E.V.; Alatas, B. Intelligent optimization algorithms for the problem of mining numerical association rules. Physica A 2020, 540, 123142. [Google Scholar] [CrossRef]
  12. Chandra, V.S.; Anand, H.S. Nature inspired meta heuristic algorithms for optimization problems. Computing 2022, 104, 251–269. [Google Scholar]
Figure 1. The flow chart of BKAPI.
Figure 2. The comparison chart of the ablation study based on the average rank for CEC 2017. (a) The radar chart. (b) The average rank chart.
Figure 3. The comparison chart of the ablation study based on the average rank for CEC 2022. (a) The radar chart. (b) The average rank chart.
Figure 4. Parts of the convergence analysis of the BKAPI and competing algorithms on selected functions of the CEC 2017 benchmark with dimensions of 10. (a) f1, (b) f5, (c) f6, (d) f9, (e) f10, (f) f20, (g) f23, (h) f29.
Figure 5. Parts of the convergence analysis of BKAPI and competing algorithms on selected functions of the CEC 2017 benchmark with dimensions of 100. (a) f1, (b) f5, (c) f6, (d) f9, (e) f21, (f) f30.
Figure 6. Parts of the boxplot comparing the proposed BKAPI and competitor algorithms on selected functions of the CEC 2017 benchmark with dimensions of 10 (Circles represent outliers). (a) f1, (b) f5, (c) f6, (d) f9, (e) f10, (f) f20, (g) f23, (h) f29.
Figure 7. Parts of the boxplot comparing the proposed BKAPI and competitor algorithms on selected functions of the CEC 2017 benchmark with dimensions of 100 (Circles represent outliers). (a) f1, (b) f5, (c) f6, (d) f9, (e) f21, (f) f30.
Figure 8. Sorting diagram of BKAPI’s performance compared to other algorithms on the CEC 2017 test set with dimensions of 10. (a) The radar chart. (b) The comparison chart of algorithm performance based on average rank.
Figure 9. Sorting diagram of BKAPI’s performance compared to other algorithms on the CEC 2017 test set with dimensions of 100. (a) The radar chart. (b) The comparison chart of algorithm performance based on average rank.
Figure 10. Convergence analysis of the BKAPI and competing algorithms on selected functions of the CEC 2022 benchmark with dimensions of 10. (a) f1, (b) f2, (c) f3, (d) f4, (e) f5, (f) f6, (g) f7, (h) f8, (i) f9, (j) f10, (k) f11, (l) f12.
Figure 11. Boxplot comparing the performance of the BKAPI and competitor algorithms on selected functions of the CEC 2022 benchmark with dimensions of 10 (Circles represent outliers). (a) f1, (b) f2, (c) f3, (d) f4, (e) f5, (f) f6, (g) f7, (h) f8, (i) f9, (j) f10, (k) f11, (l) f12.
Figure 12. Sorting diagram of the BKAPI’s performance compared to other algorithms on the CEC 2022 test set with dimensions of 10. (a) The radar chart. (b) The comparison chart of algorithm performance based on average rank.
Figure 13. Welded beam design project.
Figure 14. The average convergence curve of the welded beam project.
Figure 15. The average convergence curve of the Himmelblau function optimization.
Figure 16. Sample positions and their estimated positions for the BKA and BKAPI.
Figure 17. The positioning error curves.
Figure 18. The bar chart of the positioning mean error.
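Figures 16–18 evaluate BKAPI on the visible light positioning task. As a rough sketch of how such a positioning problem is typically set up (the room layout, LED parameters, and noise-free inversion below are illustrative assumptions, not the paper's exact experiment), each LED-to-receiver link can be modeled with the Lambertian line-of-sight channel, received power inverted to a distance estimate, and the 2D position recovered by least-squares trilateration:

```python
import math

# Illustrative setup (assumed): four ceiling LEDs in a 5 m x 5 m room.
LEDS = [(0.5, 0.5, 3.0), (0.5, 4.5, 3.0), (4.5, 0.5, 3.0), (4.5, 4.5, 3.0)]
PT = 1.0      # transmitted optical power (W)
AREA = 1e-4   # photodiode active area (m^2)
M = 1         # Lambertian order
RX_H = 0.85   # receiver height (m), facing straight up

def received_power(rx_xy, led):
    """Lambertian LOS channel: Pr = Pt*(m+1)*A / (2*pi*d^2) * cos^m(phi)*cos(psi).
    For a downward LED and upward receiver, cos(phi) = cos(psi) = h/d."""
    dx, dy = rx_xy[0] - led[0], rx_xy[1] - led[1]
    h = led[2] - RX_H
    d = math.sqrt(dx * dx + dy * dy + h * h)
    return PT * (M + 1) * AREA / (2 * math.pi * d * d) * (h / d) ** (M + 1)

def estimate_distance(pr, led):
    """Invert the channel model: Pr = C * h^(m+1) / d^(m+3), C = Pt*(m+1)*A/(2*pi)."""
    h = led[2] - RX_H
    c = PT * (M + 1) * AREA / (2 * math.pi)
    return (c * h ** (M + 1) / pr) ** (1.0 / (M + 3))

def trilaterate(dists):
    """Least-squares 2D position from >= 3 estimated distances (linearized system)."""
    x0, y0, z0 = LEDS[0]
    h = z0 - RX_H
    r2 = [d * d - h * h for d in dists]  # squared horizontal distances
    A, b = [], []
    for i in range(1, len(LEDS)):
        xi, yi, _ = LEDS[i]
        A.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(r2[0] - r2[i] + xi * xi - x0 * x0 + yi * yi - y0 * y0)
    # Solve the 2x2 normal equations A^T A p = A^T b by hand.
    ata = [[sum(row[j] * row[k] for row in A) for k in range(2)] for j in range(2)]
    atb = [sum(row[j] * bi for row, bi in zip(A, b)) for j in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    y = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det
    return x, y
```

In the paper's setting, an optimizer such as BKAPI would instead minimize the sum of squared residuals between measured and model-predicted powers; the noise-free inversion above only illustrates the geometry behind the positioning errors in Figures 17 and 18.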
Table 1. CEC 2017 test set standard deviation value results with dimensions of 10.

Func. | BKAPI | BKA | PSO | GA | COA | BWO | AO | SFOA
F1 | 5.0464 × 10^3 | 1.5145 × 10^8 | 1.2790 × 10^7 | 1.6664 × 10^7 | 3.8116 × 10^9 | 4.4097 × 10^9 | 1.1448 × 10^8 | 1.8762 × 10^5
F3 | 3.7433 × 10^−11 | 1.6003 × 10^3 | 3.9216 × 10^1 | 2.1379 × 10^4 | 2.8050 × 10^3 | 2.9759 × 10^3 | 1.3958 × 10^3 | 1.1623 × 10^1
F4 | 2.8080 × 10^1 | 7.6158 × 10^1 | 2.2099 × 10^1 | 5.5817 × 10^1 | 5.5150 × 10^2 | 6.1985 × 10^2 | 3.6989 × 10^1 | 9.4219
F5 | 7.3213 | 1.6916 × 10^1 | 1.3267 × 10^1 | 1.6580 × 10^1 | 2.3751 × 10^1 | 1.6905 × 10^1 | 9.1983 | 7.1405
F6 | 2.1839 × 10^−1 | 1.0964 × 10^1 | 7.7139 | 1.5626 × 10^1 | 8.7581 | 6.5523 | 6.5114 | 1.4954
F7 | 6.1033 | 2.1250 × 10^1 | 1.0046 × 10^1 | 3.8370 × 10^1 | 2.4738 × 10^1 | 1.4120 × 10^1 | 1.2983 × 10^1 | 1.4490 × 10^1
F8 | 7.3601 | 9.2051 | 1.0148 × 10^1 | 1.2611 × 10^1 | 8.6554 | 6.7476 | 8.9182 | 8.5151
F9 | 4.7548 | 1.2694 × 10^2 | 6.3308 × 10^1 | 2.2671 × 10^2 | 1.8608 × 10^2 | 2.1063 × 10^2 | 1.0356 × 10^2 | 2.6600 × 10^1
F10 | 3.2500 × 10^2 | 1.8461 × 10^2 | 3.1883 × 10^2 | 2.8035 × 10^2 | 1.7675 × 10^2 | 2.1546 × 10^2 | 3.3261 × 10^2 | 3.3115 × 10^2
F11 | 4.8805 × 10^1 | 4.2515 × 10^1 | 3.7330 × 10^1 | 8.3714 × 10^3 | 1.2837 × 10^3 | 5.9459 × 10^3 | 4.0757 × 10^2 | 6.3904
F12 | 1.7226 × 10^6 | 7.0995 × 10^5 | 1.4927 × 10^6 | 5.1580 × 10^6 | 2.0073 × 10^8 | 4.8231 × 10^8 | 4.2216 × 10^6 | 1.2198 × 10^4
F13 | 8.9421 × 10^3 | 1.3062 × 10^3 | 7.4830 × 10^3 | 1.0243 × 10^4 | 1.0985 × 10^4 | 3.5018 × 10^7 | 1.2377 × 10^4 | 2.3808 × 10^2
F14 | 1.3225 × 10^2 | 4.2050 × 10^1 | 4.3146 × 10^3 | 7.8841 × 10^3 | 2.9085 × 10^1 | 3.4428 × 10^2 | 1.6960 × 10^3 | 4.9984
F15 | 1.8812 × 10^3 | 8.3161 × 10^1 | 5.8859 × 10^3 | 8.8665 × 10^3 | 3.6689 × 10^3 | 2.3477 × 10^3 | 5.6589 × 10^3 | 2.6217 × 10^1
F16 | 1.1473 × 10^2 | 9.8940 × 10^1 | 1.4577 × 10^2 | 1.3478 × 10^2 | 1.4880 × 10^2 | 1.1463 × 10^2 | 1.3996 × 10^2 | 5.8275 × 10^1
F17 | 5.4768 × 10^1 | 2.1348 × 10^1 | 6.0177 × 10^1 | 3.5123 × 10^1 | 3.5999 × 10^1 | 3.4504 × 10^1 | 2.0671 × 10^1 | 1.5171 × 10^1
F18 | 1.6189 × 10^4 | 4.5761 × 10^3 | 1.3511 × 10^4 | 1.0191 × 10^4 | 1.3792 × 10^6 | 6.1287 × 10^8 | 1.1925 × 10^5 | 6.9965 × 10^1
F19 | 7.6337 × 10^3 | 2.6306 × 10^3 | 8.4864 × 10^3 | 1.1906 × 10^4 | 1.8340 × 10^4 | 1.0652 × 10^7 | 6.5601 × 10^4 | 5.5813
F20 | 4.6099 × 10^1 | 5.0520 × 10^1 | 9.0633 × 10^1 | 6.7195 × 10^1 | 5.8024 × 10^1 | 4.1409 × 10^1 | 5.7978 × 10^1 | 1.3454 × 10^1
F21 | 3.4899 × 10^1 | 7.1972 × 10^1 | 5.2179 × 10^1 | 3.2437 × 10^1 | 3.9614 × 10^1 | 6.9485 × 10^1 | 2.7548 × 10^1 | 6.5936 × 10^1
F22 | 2.3855 × 10^1 | 4.6529 × 10^1 | 2.1309 × 10^2 | 2.6350 × 10^2 | 4.3237 × 10^2 | 4.3150 × 10^2 | 8.2597 | 1.8371 × 10^1
F23 | 9.8477 | 1.8153 × 10^1 | 1.9001 × 10^1 | 2.3358 × 10^1 | 3.2453 × 10^1 | 3.3027 × 10^1 | 1.3969 × 10^1 | 7.9853
F24 | 8.9663 × 10^1 | 7.8188 × 10^1 | 1.0152 × 10^2 | 4.6834 × 10^1 | 6.1779 × 10^1 | 1.1091 × 10^2 | 7.5869 × 10^1 | 8.4826 × 10^1
F25 | 2.6453 × 10^1 | 5.8833 × 10^1 | 2.6868 × 10^1 | 5.9959 × 10^1 | 2.4807 × 10^2 | 3.0698 × 10^2 | 2.8520 × 10^1 | 2.4668 × 10^1
F26 | 3.0883 × 10^2 | 3.8933 × 10^2 | 4.1253 × 10^2 | 5.3401 × 10^2 | 3.7620 × 10^2 | 1.9171 × 10^2 | 1.6743 × 10^2 | 8.5319 × 10^1
F27 | 2.4819 × 10^1 | 2.3157 × 10^1 | 4.5813 × 10^1 | 5.1675 × 10^1 | 5.3861 × 10^1 | 5.7689 × 10^1 | 6.4917 | 1.1351 × 10^1
F28 | 1.5188 × 10^2 | 1.7391 × 10^2 | 1.1196 × 10^2 | 2.3199 × 10^2 | 1.3939 × 10^2 | 1.0777 × 10^2 | 9.8828 × 10^1 | 1.4011 × 10^2
F29 | 5.2242 × 10^1 | 5.6719 × 10^1 | 7.4825 × 10^1 | 6.9211 × 10^1 | 8.7135 × 10^1 | 9.4258 × 10^1 | 4.4585 × 10^1 | 4.7423 × 10^1
F30 | 1.4232 × 10^6 | 1.0522 × 10^6 | 4.9450 × 10^5 | 3.5079 × 10^6 | 5.7574 × 10^6 | 1.3829 × 10^7 | 2.0131 × 10^6 | 4.2247 × 10^5
Table 2. CEC 2017 test set average fitness value results with dimensions of 10.

Func. | BKAPI | BKA | PSO | GA | COA | BWO | AO | SFOA
F1 | 5.7144 × 10^3 | 2.7818 × 10^7 | 2.3374 × 10^6 | 8.4230 × 10^6 | 1.0105 × 10^10 | 1.6695 × 10^10 | 1.5082 × 10^8 | 2.2207 × 10^5
F3 | 3.0000 × 10^2 | 1.0821 × 10^3 | 3.0976 × 10^2 | 5.4669 × 10^4 | 1.2046 × 10^4 | 1.1962 × 10^4 | 4.5568 × 10^3 | 3.1345 × 10^2
F4 | 4.1266 × 10^2 | 4.3410 × 10^2 | 4.1483 × 10^2 | 4.8501 × 10^2 | 1.2231 × 10^3 | 1.8883 × 10^3 | 4.4643 × 10^2 | 4.0651 × 10^2
F5 | 5.1496 × 10^2 | 5.3467 × 10^2 | 5.3177 × 10^2 | 5.8888 × 10^2 | 5.8187 × 10^2 | 6.2033 × 10^2 | 5.3820 × 10^2 | 5.4020 × 10^2
F6 | 6.0008 × 10^2 | 6.2945 × 10^2 | 6.0824 × 10^2 | 6.6285 × 10^2 | 6.4253 × 10^2 | 6.6256 × 10^2 | 6.2424 × 10^2 | 6.0373 × 10^2
F7 | 7.2183 × 10^2 | 7.6051 × 10^2 | 7.3049 × 10^2 | 8.3445 × 10^2 | 8.0451 × 10^2 | 8.3673 × 10^2 | 7.6521 × 10^2 | 7.5763 × 10^2
F8 | 8.1329 × 10^2 | 8.2237 × 10^2 | 8.2086 × 10^2 | 8.8030 × 10^2 | 8.5356 × 10^2 | 8.6124 × 10^2 | 8.2507 × 10^2 | 8.4099 × 10^2
F9 | 9.0150 × 10^2 | 1.2102 × 10^3 | 9.3258 × 10^2 | 1.0153 × 10^3 | 1.4100 × 10^3 | 1.9390 × 10^3 | 1.1077 × 10^3 | 9.2572 × 10^2
F10 | 1.6796 × 10^3 | 1.8682 × 10^3 | 1.8948 × 10^3 | 2.5509 × 10^3 | 2.5722 × 10^3 | 2.6004 × 10^3 | 2.1646 × 10^3 | 2.1071 × 10^3
F11 | 1.1408 × 10^3 | 1.1551 × 10^3 | 1.1496 × 10^3 | 7.4309 × 10^3 | 2.3615 × 10^3 | 1.0362 × 10^4 | 1.4203 × 10^3 | 1.1190 × 10^3
F12 | 4.5173 × 10^5 | 2.0441 × 10^5 | 2.8908 × 10^5 | 5.2373 × 10^6 | 2.5052 × 10^8 | 6.8974 × 10^8 | 3.3312 × 10^6 | 8.8845 × 10^3
F13 | 9.7457 × 10^3 | 2.9486 × 10^3 | 9.3106 × 10^3 | 1.6640 × 10^4 | 1.5246 × 10^4 | 2.2063 × 10^7 | 2.2634 × 10^4 | 1.4287 × 10^3
F14 | 1.5448 × 10^3 | 1.4879 × 10^3 | 5.2661 × 10^3 | 8.7232 × 10^3 | 1.5219 × 10^3 | 1.8836 × 10^3 | 2.8204 × 10^3 | 1.4326 × 10^3
F15 | 2.5258 × 10^3 | 1.6583 × 10^3 | 6.0420 × 10^3 | 1.2770 × 10^4 | 5.8516 × 10^3 | 9.3150 × 10^3 | 9.3386 × 10^3 | 1.5379 × 10^3
F16 | 1.7225 × 10^3 | 1.7884 × 10^3 | 1.8611 × 10^3 | 1.9016 × 10^3 | 2.0390 × 10^3 | 2.2674 × 10^3 | 1.8753 × 10^3 | 1.6741 × 10^3
F17 | 1.7760 × 10^3 | 1.7665 × 10^3 | 1.7908 × 10^3 | 1.7836 × 10^3 | 1.8113 × 10^3 | 1.8570 × 10^3 | 1.7809 × 10^3 | 1.7668 × 10^3
F18 | 2.1646 × 10^4 | 4.5652 × 10^3 | 1.9866 × 10^4 | 1.6269 × 10^4 | 2.7169 × 10^5 | 5.6907 × 10^8 | 5.7860 × 10^4 | 1.8861 × 10^3
F19 | 6.3909 × 10^3 | 2.5461 × 10^3 | 9.1281 × 10^3 | 1.2543 × 10^4 | 1.2310 × 10^4 | 6.3083 × 10^6 | 4.6974 × 10^4 | 1.9095 × 10^3
F20 | 2.0629 × 10^3 | 2.1130 × 10^3 | 2.1265 × 10^3 | 2.2195 × 10^3 | 2.1859 × 10^3 | 2.2717 × 10^3 | 2.1485 × 10^3 | 2.0620 × 10^3
F21 | 2.3032 × 10^3 | 2.2793 × 10^3 | 2.3011 × 10^3 | 2.3849 × 10^3 | 2.3593 × 10^3 | 2.3560 × 10^3 | 2.3276 × 10^3 | 2.2963 × 10^3
F22 | 2.3006 × 10^3 | 2.3218 × 10^3 | 2.3363 × 10^3 | 2.5283 × 10^3 | 3.1260 × 10^3 | 3.1485 × 10^3 | 2.3212 × 10^3 | 2.3015 × 10^3
F23 | 2.6236 × 10^3 | 2.6356 × 10^3 | 2.6525 × 10^3 | 2.7017 × 10^3 | 2.7057 × 10^3 | 2.7371 × 10^3 | 2.6479 × 10^3 | 2.6287 × 10^3
F24 | 2.7236 × 10^3 | 2.7463 × 10^3 | 2.7341 × 10^3 | 2.8666 × 10^3 | 2.8774 × 10^3 | 2.9097 × 10^3 | 2.7520 × 10^3 | 2.7271 × 10^3
F25 | 2.9232 × 10^3 | 2.9500 × 10^3 | 2.9307 × 10^3 | 3.0512 × 10^3 | 3.4507 × 10^3 | 3.9118 × 10^3 | 2.9521 × 10^3 | 2.9216 × 10^3
F26 | 3.0634 × 10^3 | 3.1413 × 10^3 | 3.1233 × 10^3 | 3.8855 × 10^3 | 4.2061 × 10^3 | 3.8658 × 10^3 | 3.0937 × 10^3 | 2.9269 × 10^3
F27 | 3.1103 × 10^3 | 3.1082 × 10^3 | 3.1444 × 10^3 | 3.2222 × 10^3 | 3.1913 × 10^3 | 3.2002 × 10^3 | 3.1088 × 10^3 | 3.0951 × 10^3
F28 | 3.3710 × 10^3 | 3.2789 × 10^3 | 3.3290 × 10^3 | 3.5458 × 10^3 | 3.7234 × 10^3 | 3.8608 × 10^3 | 3.4782 × 10^3 | 3.2701 × 10^3
F29 | 3.2029 × 10^3 | 3.2439 × 10^3 | 3.2724 × 10^3 | 3.3550 × 10^3 | 3.3977 × 10^3 | 3.5537 × 10^3 | 3.2916 × 10^3 | 3.2194 × 10^3
F30 | 8.7553 × 10^5 | 6.9174 × 10^5 | 2.9351 × 10^5 | 3.3497 × 10^6 | 5.1961 × 10^6 | 1.5161 × 10^7 | 1.7869 × 10^6 | 2.4944 × 10^5
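The average ranks plotted in Figures 8 and 9 can be reproduced directly from tables such as Table 2: rank the eight algorithms on each function by mean fitness (1 = best) and average each algorithm's rank over all functions. A minimal sketch using only the F1 and F4 rows of Table 2 (a simplification: tied scores, which would normally receive averaged ranks, are not handled here):

```python
# Mean fitness values copied from Table 2 (rows F1 and F4); lower is better.
RESULTS = {
    "F1": {"BKAPI": 5.7144e3, "BKA": 2.7818e7, "PSO": 2.3374e6, "GA": 8.4230e6,
           "COA": 1.0105e10, "BWO": 1.6695e10, "AO": 1.5082e8, "SFOA": 2.2207e5},
    "F4": {"BKAPI": 412.66, "BKA": 434.10, "PSO": 414.83, "GA": 485.01,
           "COA": 1223.1, "BWO": 1888.3, "AO": 446.43, "SFOA": 406.51},
}

def average_ranks(results):
    """Rank the algorithms per function (1 = best) and average over functions."""
    algs = list(next(iter(results.values())))
    totals = {a: 0 for a in algs}
    for scores in results.values():
        for rank, alg in enumerate(sorted(algs, key=scores.get), start=1):
            totals[alg] += rank
    return {a: totals[a] / len(results) for a in algs}

ranks = average_ranks(RESULTS)
```

On these two functions, BKAPI and SFOA share the best average rank (1.5) and BWO the worst (8.0), consistent with the ordering shown in the full-table charts.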
Table 3. Wilcoxon rank-sum test results on the CEC 2017 test set with dimensions of 10.

Func. | BKA | PSO | GA | COA | BWO | AO | SFOA
F1 | 3.0198 × 10^−11 | 1.2467 × 10^−2 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 3.0198 × 10^−11
F3 | 2.9376 × 10^−11 | 2.9376 × 10^−11 | 2.9376 × 10^−11 | 2.9376 × 10^−11 | 2.9376 × 10^−11 | 2.9376 × 10^−11 | 2.9376 × 10^−11
F4 | 3.0058 × 10^−4 | 4.8066 × 10^−4 | 1.5580 × 10^−8 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 8.8410 × 10^−7 | 2.6805 × 10^−4
F5 | 9.4903 × 10^−7 | 3.5541 × 10^−6 | 2.9953 × 10^−11 | 2.9953 × 10^−11 | 2.9953 × 10^−11 | 1.0852 × 10^−10 | 6.6430 × 10^−11
F6 | 3.0198 × 10^−11 | 7.3890 × 10^−11 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 3.0198 × 10^−11
F7 | 4.9751 × 10^−11 | 9.2112 × 10^−5 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 3.6897 × 10^−11 | 6.6955 × 10^−11
F8 | 1.1683 × 10^−4 | 3.0906 × 10^−3 | 2.9728 × 10^−11 | 2.9728 × 10^−11 | 2.9728 × 10^−11 | 4.0862 × 10^−6 | 8.0313 × 10^−11
F9 | 2.4918 × 10^−11 | 9.1737 × 10^−5 | 3.0497 × 10^−11 | 2.4918 × 10^−11 | 2.4918 × 10^−11 | 2.4918 × 10^−11 | 4.2850 × 10^−10
F10 | 1.0762 × 10^−2 | 1.3271 × 10^−2 | 2.3714 × 10^−10 | 6.6955 × 10^−11 | 1.6132 × 10^−10 | 3.3241 × 10^−6 | 1.8681 × 10^−5
F11 | 5.8281 × 10^−3 | 2.9205 × 10^−2 | 1.9567 × 10^−10 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 8.4847 × 10^−9 | 9.7051 × 10^−1
F12 | 1.0314 × 10^−2 | 8.1874 × 10^−1 | 1.6947 × 10^−9 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 3.4971 × 10^−9 | 9.0687 × 10^−3
F13 | 3.1820 × 10^−4 | 9.9410 × 10^−1 | 4.6371 × 10^−3 | 1.0314 × 10^−2 | 3.0198 × 10^−11 | 8.8828 × 10^−6 | 1.4643 × 10^−10
F14 | 1.5797 × 10^−1 | 5.5999 × 10^−7 | 6.6955 × 10^−11 | 1.2235 × 10^−1 | 6.0103 × 10^−8 | 2.4386 × 10^−9 | 5.4940 × 10^−11
F15 | 2.8388 × 10^−4 | 5.9705 × 10^−5 | 4.9979 × 10^−9 | 1.1077 × 10^−6 | 3.8201 × 10^−10 | 3.8248 × 10^−9 | 3.8201 × 10^−10
F16 | 5.8281 × 10^−3 | 7.6972 × 10^−4 | 4.4204 × 10^−6 | 2.6694 × 10^−9 | 3.6897 × 10^−11 | 2.7725 × 10^−5 | 6.2040 × 10^−1
F17 | 3.4028 × 10^−1 | 8.4999 × 10^−2 | 5.0120 × 10^−2 | 2.1264 × 10^−4 | 7.5991 × 10^−7 | 3.5136 × 10^−2 | 2.3985 × 10^−1
F18 | 3.8052 × 10^−7 | 7.2826 × 10^−1 | 4.5529 × 10^−1 | 7.1718 × 10^−1 | 3.0198 × 10^−11 | 1.2732 × 10^−2 | 3.0198 × 10^−11
F19 | 3.0102 × 10^−7 | 2.9205 × 10^−2 | 4.0329 × 10^−3 | 6.6688 × 10^−3 | 3.6897 × 10^−11 | 2.8789 × 10^−6 | 3.0198 × 10^−11
F20 | 2.4327 × 10^−5 | 4.6371 × 10^−3 | 2.8715 × 10^−10 | 3.4971 × 10^−9 | 3.0198 × 10^−11 | 1.4733 × 10^−7 | 2.3243 × 10^−2
F21 | 4.8251 × 10^−1 | 3.3874 × 10^−2 | 1.1737 × 10^−9 | 3.8349 × 10^−6 | 2.4156 × 10^−2 | 4.3106 × 10^−8 | 9.8834 × 10^−3
F22 | 5.4620 × 10^−6 | 1.1198 × 10^−1 | 7.6949 × 10^−8 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 1.3111 × 10^−8 | 6.5486 × 10^−4
F23 | 4.6365 × 10^−3 | 1.2017 × 10^−8 | 3.0179 × 10^−11 | 3.0179 × 10^−11 | 3.0179 × 10^−11 | 5.9644 × 10^−9 | 1.2730 × 10^−2
F24 | 2.6069 × 10^−2 | 6.3743 × 10^−3 | 1.4608 × 10^−10 | 3.8116 × 10^−10 | 5.4511 × 10^−9 | 9.7829 × 10^−5 | 2.8119 × 10^−2
F25 | 2.5188 × 10^−1 | 2.2823 × 10^−1 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 1.5291 × 10^−5 | 8.7663 × 10^−1
F26 | 7.1715 × 10^−1 | 6.7345 × 10^−1 | 2.9068 × 10^−9 | 2.5950 × 10^−10 | 3.8059 × 10^−9 | 1.2719 × 10^−2 | 3.7755 × 10^−2
F27 | 6.7349 × 10^−1 | 1.0184 × 10^−5 | 2.6083 × 10^−10 | 3.8229 × 10^−9 | 2.4373 × 10^−9 | 2.2358 × 10^−2 | 2.3160 × 10^−6
F28 | 2.3065 × 10^−2 | 2.2964 × 10^−1 | 9.7882 × 10^−3 | 1.9388 × 10^−9 | 4.6986 × 10^−11 | 3.7430 × 10^−5 | 3.8025 × 10^−3
F29 | 2.1566 × 10^−3 | 6.2828 × 10^−6 | 3.1967 × 10^−9 | 6.1210 × 10^−10 | 4.5043 × 10^−11 | 4.1127 × 10^−7 | 4.8413 × 10^−2
F30 | 7.2825 × 10^−1 | 5.3948 × 10^−1 | 1.6809 × 10^−4 | 5.1819 × 10^−7 | 2.2261 × 10^−9 | 2.0520 × 10^−3 | 2.5300 × 10^−4
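The p-values in Tables 3 and 5 come from the Wilcoxon rank-sum test applied to the per-run results of BKAPI versus each competitor. A self-contained sketch of the test using the large-sample normal approximation (library implementations such as scipy.stats.ranksums add continuity and tie-variance corrections, so exact p-values may differ slightly):

```python
import math

def ranksum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    n1, n2 = len(x), len(y)
    combined = sorted((v, i) for i, v in enumerate(list(x) + list(y)))
    vals = [v for v, _ in combined]
    ranks = [0.0] * len(vals)
    i = 0
    while i < len(vals):                       # assign average ranks to ties
        j = i
        while j < len(vals) and vals[j] == vals[i]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + j + 1) / 2.0       # mean of 1-based ranks i+1 .. j
        i = j
    w = sum(r for r, (_, idx) in zip(ranks, combined) if idx < n1)
    mu = n1 * (n1 + n2 + 1) / 2.0              # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided tail probability
```

With 30 runs per algorithm, p < 0.05 is read as a significant difference between BKAPI and the competitor, while p-values near 1 (e.g., BKA on F30 in Table 3) indicate statistically indistinguishable results.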
Table 4. Standard deviation and average fitness value results of the CEC 2022 test set.

Func. | Type | BKAPI | BKA | PSO | GA | COA | BWO | AO | SFOA
F1 | std | 0.00 | 1377.83 | 5.35 | 14,486.18 | 1750.42 | 40,282.57 | 2857.41 | 18.73
F1 | avg | 300.00 | 710.61 | 301.60 | 36,199.35 | 8169.68 | 41,983.55 | 6257.44 | 316.96
F2 | std | 22.10 | 28.72 | 27.25 | 26.88 | 683.43 | 918.34 | 53.28 | 2.12
F2 | avg | 416.88 | 417.01 | 422.02 | 470.32 | 1426.23 | 2387.02 | 460.50 | 406.93
F3 | std | 0.44 | 10.59 | 6.00 | 13.92 | 10.10 | 8.34 | 8.52 | 2.50
F3 | avg | 600.19 | 632.08 | 605.22 | 664.35 | 644.91 | 663.13 | 624.78 | 603.93
F4 | std | 6.39 | 7.05 | 9.79 | 14.36 | 7.83 | 7.02 | 9.28 | 9.51
F4 | avg | 814.36 | 819.65 | 821.36 | 879.23 | 848.49 | 856.05 | 826.26 | 841.56
F5 | std | 1.48 | 115.74 | 41.83 | 32.73 | 158.33 | 172.12 | 145.76 | 75.53
F5 | avg | 900.78 | 1115.52 | 922.27 | 948.07 | 1383.78 | 1785.92 | 1129.73 | 951.38
F6 | std | 2451.28 | 1525.48 | 2164.75 | 3962.44 | 5,256,162.17 | 983,998,623 | 153,581.98 | 1116.90
F6 | avg | 5129.50 | 3110.97 | 3638.71 | 5141.80 | 2,669,099.83 | 1,214,747,876.11 | 30,314.12 | 2166.58
F7 | std | 5.67 | 28.93 | 17.20 | 30.20 | 14.77 | 19.88 | 20.75 | 7.28
F7 | avg | 2021.87 | 2052.37 | 2036.44 | 2099.17 | 2088.07 | 2132.00 | 2063.63 | 2034.02
F8 | std | 5.33 | 36.48 | 51.46 | 42.72 | 5.20 | 25.72 | 5.23 | 3.74
F8 | avg | 2219.80 | 2239.15 | 2249.11 | 2264.19 | 2232.27 | 2273.82 | 2232.02 | 2227.02
F9 | std | 24.80 | 35.30 | 44.65 | 53.86 | 35.11 | 43.29 | 36.44 | 0.00
F9 | avg | 2535.82 | 2544.67 | 2545.49 | 2691.61 | 2731.18 | 2787.40 | 2632.55 | 2529.28
F10 | std | 61.34 | 87.38 | 102.63 | 364.52 | 187.92 | 156.79 | 65.17 | 45.79
F10 | avg | 2556.42 | 2571.02 | 2590.35 | 2698.17 | 2722.48 | 2749.22 | 2569.72 | 2518.29
F11 | std | 33.47 | 256.42 | 57.02 | 474.68 | 488.69 | 477.34 | 75.51 | 99.96
F11 | avg | 2898.77 | 2866.19 | 2912.42 | 3595.09 | 3926.81 | 3565.78 | 2774.76 | 2712.29
F12 | std | 13.24 | 15.90 | 11.97 | 59.66 | 44.06 | 80.73 | 6.56 | 1.65
F12 | avg | 2872.03 | 2872.60 | 2875.13 | 2984.69 | 2961.55 | 2985.14 | 2871.84 | 2861.14
Table 5. Wilcoxon rank-sum test results on the CEC 2022 test set with dimensions of 10.

Func. | BKA | PSO | GA | COA | BWO | AO | SFOA
F1 | 2.8358 × 10^−11 | 2.8358 × 10^−11 | 2.8358 × 10^−11 | 2.8358 × 10^−11 | 2.8358 × 10^−11 | 2.8358 × 10^−11 | 2.8358 × 10^−11
F2 | 1.8264 × 10^−2 | 6.5526 × 10^−1 | 1.5130 × 10^−8 | 2.9008 × 10^−11 | 2.9008 × 10^−11 | 2.1422 × 10^−7 | 4.5489 × 10^−1
F3 | 3.0198 × 10^−11 | 4.5725 × 10^−9 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 5.4940 × 10^−11
F4 | 7.6868 × 10^−4 | 2.7447 × 10^−3 | 3.0047 × 10^−11 | 3.3217 × 10^−11 | 3.0047 × 10^−11 | 9.5068 × 10^−7 | 1.7686 × 10^−10
F5 | 2.9247 × 10^−11 | 2.1075 × 10^−6 | 2.9247 × 10^−11 | 2.9247 × 10^−11 | 2.9247 × 10^−11 | 2.9247 × 10^−11 | 4.3648 × 10^−11
F6 | 1.1142 × 10^−3 | 1.8367 × 10^−2 | 6.3087 × 10^−1 | 2.0152 × 10^−8 | 3.0198 × 10^−11 | 2.6098 × 10^−10 | 1.6979 × 10^−8
F7 | 2.1947 × 10^−8 | 2.9589 × 10^−5 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 3.6897 × 10^−11 | 2.6694 × 10^−9
F8 | 1.4733 × 10^−7 | 6.7868 × 10^−2 | 3.0198 × 10^−11 | 4.0771 × 10^−11 | 3.0198 × 10^−11 | 3.0198 × 10^−11 | 1.0104 × 10^−8
F9 | 4.3711 × 10^−9 | 5.9482 × 10^−2 | 3.9348 × 10^−12 | 3.1578 × 10^−12 | 3.1578 × 10^−12 | 3.3715 × 10^−11 | 2.4772 × 10^−8
F10 | 3.5136 × 10^−2 | 4.5143 × 10^−2 | 6.6688 × 10^−3 | 1.1674 × 10^−5 | 1.8608 × 10^−6 | 3.0339 × 10^−3 | 8.0727 × 10^−1
F11 | 7.8782 × 10^−2 | 2.1155 × 10^−9 | 4.5329 × 10^−7 | 8.7486 × 10^−12 | 8.7486 × 10^−12 | 1.8630 × 10^−7 | 5.7769 × 10^−8
F12 | 4.3760 × 10^−1 | 1.6271 × 10^−2 | 3.6782 × 10^−11 | 1.2014 × 10^−10 | 1.2014 × 10^−10 | 1.7142 × 10^−1 | 4.1880 × 10^−10
Table 6. Welded beam project index statistics.

Algorithm | Optimal Value | Worst Value | Standard Deviation | Average Value | Median Value | Average Time
BKAPI | 1.6702 | 1.7817 | 0.0537 | 1.6883 | 1.6751 | 2.9511
BKA | 1.6709 | 1.7012 | 0.6598 | 1.6781 | 1.6739 | 0.6954
PSO | 1.6728 | 2.5874 | 0.2812 | 1.8164 | 1.7052 | 0.3050
GA | 1.8322 | 2.5977 | 0.2272 | 2.0912 | 2.0320 | 0.5071
COA | 1.8455 | 2.2295 | 0.1180 | 2.1363 | 2.1934 | 0.7834
BWO | 2.3636 | 4.0189 | 0.6031 | 3.2940 | 3.4059 | 2.8143
AO | 1.8143 | 2.4377 | 0.2363 | 2.0917 | 2.0713 | 0.6454
SFOA | 1.6706 | 1.6742 | 0.0711 | 1.6722 | 1.6719 | 0.2899
Table 7. The welded beam design problem's best outcomes from the various algorithms.

Algorithm | h | l | t | b | Optimal Cost
BKAPI | 0.1988 | 3.3374 | 9.1920 | 0.1988 | 1.6702
BKA | 0.1983 | 3.3487 | 9.1920 | 0.1988 | 1.6709
PSO | 0.1985 | 3.3436 | 9.2032 | 0.1989 | 1.6728
GA | 0.1592 | 4.6256 | 9.0043 | 0.2110 | 1.8322
COA | 0.1473 | 5.1905 | 9.4258 | 0.1978 | 1.8455
BWO | 0.3657 | 2.4001 | 6.6664 | 0.3819 | 2.3636
AO | 0.1669 | 4.2151 | 8.9001 | 0.2160 | 1.8143
SFOA | 0.1988 | 3.3390 | 9.1942 | 0.1988 | 1.6706
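The costs in Tables 6 and 7 can be cross-checked against the standard welded beam cost function, f(h, l, t, b) = 1.10471·h²·l + 0.04811·t·b·(14 + l), where h is the weld thickness, l the weld length, t the bar height, and b the bar thickness. A sketch of the objective only (the problem's constraints on shear stress, bending stress, buckling load, and end deflection are omitted):

```python
def welded_beam_cost(h, l, t, b):
    """Fabrication cost of the classical welded beam design problem."""
    return 1.10471 * h * h * l + 0.04811 * t * b * (14.0 + l)
```

Plugging in BKAPI's variables from Table 7 (h = 0.1988, l = 3.3374, t = 9.1920, b = 0.1988) reproduces the reported optimum of about 1.6702.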
Table 8. Himmelblau function's best outcomes from the various algorithms.

Algorithm | x1 | x2 | x3 | x4 | x5 | Optimal Cost
BKAPI | 78.0000 | 33.0000 | 29.9953 | 45.0000 | 36.7758 | −30,665
BKA | 78.0000 | 33.0000 | 29.9953 | 45.0000 | 36.7759 | −30,665
PSO | 78.0000 | 33.0000 | 29.9953 | 45.0000 | 36.7758 | −30,665
GA | 80.8815 | 35.6805 | 32.0422 | 37.5918 | 34.4031 | −29,802
COA | 78.0000 | 33.0000 | 30.0108 | 44.9387 | 36.7616 | −30,623
BWO | 78.0000 | 33.0000 | 31.7258 | 42.1734 | 35.1803 | −30,266
AO | 78.5773 | 33.2768 | 30.4269 | 44.2749 | 35.9693 | −30,544
SFOA | 78.0000 | 33.0000 | 29.9953 | 45.0000 | 36.7757 | −30,665
Table 9. Himmelblau function optimization index statistics.

Algorithm | Optimal Value | Worst Value | Standard Deviation | Average Value | Median Value | Average Time
BKAPI | −30,665 | −30,186 | 151 | −30,617 | −30,665 | 3.68
BKA | −30,665 | −30,186 | 150 | −30,615 | −30,662 | 0.68
PSO | −30,665 | −30,662 | 1 | −30,665 | −30,665 | 0.29
GA | −29,802 | −28,895 | 259 | −29,445 | −29,421 | 0.53
COA | −30,623 | −29,690 | 323 | −30,160 | −30,142 | 0.75
BWO | −30,266 | −29,594 | 187 | −30,028 | −30,069 | 3.91
AO | −30,544 | −29,753 | 235 | −30,240 | −30,258 | 0.69
SFOA | −30,665 | −30,424 | 76 | −30,640 | −30,663 | 0.31
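The Himmelblau problem in Tables 8 and 9 is the classical five-variable constrained benchmark from Applied Nonlinear Programming. Its objective depends only on x1, x3, and x5 (x2 and x4 enter through the three nonlinear constraint functions, which are omitted in this sketch):

```python
def himmelblau_objective(x):
    """Objective of Himmelblau's constrained problem:
    f(x) = 5.3578547*x3^2 + 0.8356891*x1*x5 + 37.293239*x1 - 40792.141."""
    x1, _, x3, _, x5 = x
    return 5.3578547 * x3 ** 2 + 0.8356891 * x1 * x5 + 37.293239 * x1 - 40792.141
```

Evaluating it at BKAPI's solution from Table 8 gives roughly −30,665.5, matching the rounded −30,665 reported in Tables 8 and 9 and the known optimum of about −30,665.539.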