Article

FBCA: Flexible Besiege and Conquer Algorithm for Multi-Layer Perceptron Optimization Problems

1 Center for Artificial Intelligence, Jilin University of Finance and Economics, Changchun 130117, China
2 Jilin Province Key Laboratory of Fintech, Jilin University of Finance and Economics, Changchun 130117, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Biomimetics 2025, 10(11), 787; https://doi.org/10.3390/biomimetics10110787
Submission received: 29 October 2025 / Revised: 15 November 2025 / Accepted: 17 November 2025 / Published: 19 November 2025
(This article belongs to the Special Issue Exploration of Bio-Inspired Computing: 2nd Edition)

Abstract

A Multi-Layer Perceptron (MLP), as the basic structure of neural networks, is an important component of various deep learning models such as CNNs, RNNs, and Transformers. Nevertheless, MLP training faces significant challenges: its non-convex optimization landscape contains a large number of saddle points and local minima, which can easily lead to vanishing gradients and premature convergence. Compared with traditional heuristic algorithms that rely on a population-based parallel search, such as GA, GWO, and DE, the Besiege and Conquer Algorithm (BCA) employs a one-spot update strategy that provides a certain level of global optimization capability but exhibits clear limitations in search flexibility. Specifically, it lacks fast detection, fast adaptation, and fast convergence. First, the fixed sinusoidal amplitude limits the accuracy of fast detection in complex regions. Second, the combination of a random location and a fixed perturbation range limits the fast adaptation of global convergence. Finally, the lack of hierarchical adjustment under a single control parameter (BCB) hinders the dynamic transition from exploration to exploitation, resulting in slow convergence. To address these limitations, this paper proposes a Flexible Besiege and Conquer Algorithm (FBCA), which improves search flexibility and convergence capability through three new mechanisms: (1) a sine-guided soft asymmetric Gaussian perturbation mechanism enhances local micro-exploration, achieving a fast detection response near the global optimum; (2) an exponentially modulated spiral perturbation mechanism adopts an exponential spiral factor for fast adaptation of global convergence; and (3) a nonlinear cognitive coefficient-driven velocity update mechanism improves convergence performance, realizing a more balanced exploration–exploitation process. On the IEEE CEC 2017 benchmark functions, FBCA ranked first in a comprehensive comparison with 12 state-of-the-art algorithms, with a win rate of 62% over BCA on 100-dimensional problems. It also achieved the best performance on six MLP optimization problems, showing excellent convergence accuracy and robustness, demonstrating strong global optimization ability for complex nonlinear MLP training, and confirming its application value and potential in optimizing neural networks and deep learning models.

1. Introduction

The Multi-Layer Perceptron (MLP) [1], as an early representative of deep neural networks (DNNs) [2], holds a crucial position in the development of neural networks. The success of MLP has inspired the development of numerous subsequent deep learning models, such as Convolutional Neural Networks (CNNs) [3], Transformers [4], YOLO [5], and DeepSeek [6], significantly advancing the application of artificial intelligence across various fields. Despite its strong performance in tasks such as pattern recognition, classification, and regression, MLP faces several challenges during training. These include the non-convexity and high-dimensional nonlinearity of its weight-bias space, which often causes the model to get stuck in saddle points or local minima [7,8], limiting its generalization ability and performance improvement.
Against this backdrop, Metaheuristic Algorithms (MAs), as global optimization strategies, have gradually become effective tools for addressing these issues [9]. In recent years, many metaheuristic algorithms have been incorporated into MLP training with positive results. For instance, the Grey Wolf Optimizer (GWO) from swarm intelligence has excelled in MLP classification tasks [10], while Particle Swarm Optimization (PSO) has outperformed Stochastic Gradient Descent (SGD) in problems such as estimating the vertical dispersion coefficient of a natural flow, demonstrating the effectiveness of metaheuristic methods in MLP weight-bias optimization [11]. Among evolutionary algorithms, Genetic Algorithms (GAs) have been used to optimize MLP hyperparameters, achieving a 100% key recovery rate in AES side-channel attacks, significantly better than random and Bayesian baselines [12]. In addition, algorithms such as the Slime Mould Algorithm (SMA) [13], Black-winged Kite Algorithm (BKA) [14], and Harris Hawks Optimization (HHO) [15] have shown good optimization performance in different MLP training tasks. Some studies also construct hybrid or multi-objective optimization models, such as MLP-PSODE, which fuses PSO and Differential Evolution (DE) for suspended sediment load estimation [16], and the MLP-MOSSA model based on the multi-objective Salp Swarm Algorithm for water evaporation prediction [17]. These studies fully show that combining MLPs with metaheuristic algorithms has become an important research direction in the field of optimization [18,19].
However, although many optimization algorithms have provided solutions for MLP training, most rely on parallel search structures. While these offer some search efficiency, they may not fully leverage strategies that combine exploration around the current optimum with random-position exploration when seeking the global optimum. This consideration was fully incorporated into the design of the original Besiege and Conquer Algorithm (BCA) [20,21] and turned into an advantage in the optimization process. BCA balances global optimization ability with local exploitation, and although it has achieved some success in MLP optimization problems, it still has certain limitations: its search dynamics can become rigid, it can easily get stuck in local minima, and the balance between global exploration and local exploitation is insufficient. These shortcomings limit the flexibility of its global optimization capability and constrain its performance on complex nonlinear problems such as MLP weight-bias optimization. To address them, this paper proposes an improved Flexible Besiege and Conquer Algorithm (FBCA), whose key research motivations and contributions are summarized below.

1.1. Motivation

Although the original BCA demonstrates a certain optimization capability, its inherent limitations restrict global search efficiency and convergence performance. To enhance its adaptability and robustness in complex optimization scenarios, this study proposes the Flexible Besiege and Conquer Algorithm (FBCA), inspired by four main research motivations.
Motivation 1: BCA possesses structural advantages and special mechanisms over traditional MAs such as GA [22] and DE [23]. First, regarding particle generation, BCA introduces a more detailed hierarchical structure, population–army (sub-population)–soldier (particle), whereas traditional algorithms typically employ a simple population–particle structure. Second, BCA controls exploration and exploitation through binary gates, using sine and cosine factors to guide single-point besieges and random perturbations, unlike GA and DE, which use uniform crossover/mutation-rate parameters such as F and CR. This design makes BCA easier to understand, implement, and improve, and allows flexible application to optimization scenarios such as MLP training.
Motivation 2: The exploitation phase of BCA relies primarily on a sine-based factor to guide the search direction. Although this mechanism can converge toward better solutions to a certain extent, the limited search direction makes the population prone to falling into local optima and deprives it of the ability to explore the solution space in finer detail. It is therefore necessary to introduce perturbative, flexible exploration mechanisms into the exploitation stage to enhance the accuracy of local search and the refinement of solutions.
Motivation 3: The global search process in BCA depends on sine and cosine factors within a fixed interval [−1, 1]. Although this periodic driving mechanism promotes early-stage diversity, premature convergence may occur if exploitation begins before reaching promising regions. To strengthen global exploration, an adaptive and dynamically regulated position-update mechanism is needed to achieve a more flexible and effective global search.
Motivation 4: In BCA, the transition between exploration and exploitation is governed by a single binary control gate BCB. This fixed control structure restricts the algorithm’s flexibility during phase transitions. Once a branch is selected, soldiers can only follow fixed update rules, preventing dynamic balance at the mechanism level. Hence, a hierarchical and self-adaptive control structure is proposed, refining the new mechanisms within both BCB branches and introducing additional binary gates to achieve a more flexible exploration–exploitation balance in FBCA.

1.2. Contributions

The main contributions of this paper are summarized as follows:
  • Sine-Guided Soft Asymmetric Gaussian Perturbation Mechanism: an optimization mechanism that integrates flexible Gaussian micro-perturbations under sine-factor guidance, enhancing the ability to quickly detect high-precision solutions and reducing the risk of local stagnation.
  • Exponentially Modulated Spiral Perturbation Mechanism: a position update mechanism that applies exponential modulation through an adaptive spiral factor to improve population diversity and ensure fast-adaptive global convergence.
  • Nonlinear Cognitive Coefficient-Driven Velocity Update Mechanism: drawing on the PSO’s velocity-based update mechanism, the nonlinear cognitive coefficient dynamically regulates the soldier position update, thereby improving fast-convergent performance and achieving a balanced exploration–exploitation trade-off.
  • Validation on IEEE CEC 2017 and Six MLP Problems: extensive experiments on the IEEE CEC 2017 benchmark set (30D, 50D, and 100D) demonstrate FBCA’s excellence in numerical accuracy, convergence behavior, stability, and the Wilcoxon rank sum test. Notably, FBCA shows outstanding performance in high-dimensional composite function optimization. Moreover, in six MLP optimization problems, FBCA achieves an order-of-magnitude lead in the mean results of MSE on XOR and Heart datasets, surpassing the original BCA and other state-of-the-art algorithms such as SMA in function approximation problems.

2. Related Work

2.1. BCA: Besiege and Conquer Algorithm

This section reviews the BCA [21], which explicitly divides the optimization process into exploration and exploitation, controlled by the parameter BCB. The exploitation phase focuses on generating new soldiers around the best army using a sine-based factor, while the exploration phase updates soldier positions around random armies through a cosine-based factor. The BCA population is randomly initialized within the defined upper and lower bounds. It then alternates between exploration and exploitation based on the BCB parameter to estimate the optimal solution for continuous optimization problems. The detailed pseudocode is presented in Algorithm 1.
Algorithm 1 The pseudocode of the BCA
 1: Input: Population Size: N, Problem Dimension: D, Max Iteration: Max_Gen, The Number Of Soldiers: nSoldiers
 2: Output: Obtained best solution
 3: Initialize the BCB parameters
 4: Initialize the solutions' positions randomly
 5: while t ← 1 to Max_Gen do
 6:   for i ← 1 to nArmies do
 7:     for j ← 1 to nSoldiers do
 8:       for d ← 1 to D do
 9:         if rand < BCB then
10:           Update the position by Equation (1)
11:           Update the position that exceeds the search boundaries by Equation (3)
12:         else
13:           Update the position by Equation (2)
14:           Update the position that exceeds the search boundaries by Equation (4)
15:         end if
16:       end for
17:     end for
18:   end for
19:   Update gBest and gBestPos
20: end while
When $rand < BCB$, the algorithm performs BCA exploitation, and when $rand \ge BCB$ it performs BCA exploration. The BCA divides the entire population into multiple armies, each consisting of a fixed number of soldiers, which can be regarded as the descendants of their respective armies. When a soldier achieves a better fitness value than its current army, it becomes the leader of that army. During the exploitation phase, the update rule for soldiers within each army is defined by Equation (1), while in the exploration phase it follows Equation (2). The parameters $\alpha$ and $\beta$ are random values within the range $[0, 2\pi]$.

$$S_{j,d}^{t+1} = B_d^t + \left| A_{r,d}^t - A_{i,d}^t \right| \times \sin(\alpha), \quad rand < BCB \tag{1}$$

$$S_{j,d}^{t+1} = A_{r,d}^t + \left| A_{r,d}^t - A_{i,d}^t \right| \times \cos(\beta), \quad rand \ge BCB \tag{2}$$
Meanwhile, BCA also accounts for soldier position updates that exceed the search boundaries during exploration and exploitation. If a soldier crosses the boundary during exploitation, it is corrected by Equation (3); if it crosses the boundary during exploration, it is corrected by Equation (4). Here, $S_{j,d}^{t+1}$ is the $j$th soldier in the $d$th dimension at iteration $t+1$, $B_d^t$ is the best army at the current iteration $t$, $A_{i,d}^t$ is the $i$th army in the $d$th dimension at iteration $t$, $A_{r,d}^t$ is a random army in the $d$th dimension at iteration $t$, and BCB is set to a fixed value of 0.8.

$$S_{j,d}^{t+1} = BCB_d^t \times B_d^t + \left(1 - BCB_d^t\right) \times A_{i,d}^t \tag{3}$$

$$S_{j,d}^{t+1} = lb + (ub - lb) \times rand \tag{4}$$
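To make these update rules concrete, the following minimal Python sketch implements Equations (1)–(4) for a single soldier; the array layout, function name, and scalar bounds are illustrative assumptions rather than the authors' released code.

```python
import numpy as np

def bca_soldier_update(armies, best, i, lb, ub, BCB=0.8):
    """Generate one soldier for army i; armies is an (nArmies, D) matrix."""
    n_armies, dim = armies.shape
    soldier = np.empty(dim)
    for d in range(dim):
        r = np.random.randint(n_armies)            # index of a random army
        gap = abs(armies[r, d] - armies[i, d])     # besiege distance |A_r - A_i|
        if np.random.rand() < BCB:                 # exploitation, Equation (1)
            soldier[d] = best[d] + gap * np.sin(np.random.uniform(0, 2 * np.pi))
            if soldier[d] < lb or soldier[d] > ub:     # boundary fix, Equation (3)
                soldier[d] = BCB * best[d] + (1 - BCB) * armies[i, d]
        else:                                      # exploration, Equation (2)
            soldier[d] = armies[r, d] + gap * np.cos(np.random.uniform(0, 2 * np.pi))
            if soldier[d] < lb or soldier[d] > ub:     # boundary fix, Equation (4)
                soldier[d] = lb + (ub - lb) * np.random.rand()
    return soldier
```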

2.2. PSO: Particle Swarm Optimization

The particle swarm optimization (PSO) algorithm [24], proposed by Kennedy and Eberhart in 1995, is a swarm intelligence optimization algorithm that simulates the collaborative behavior of bird or fish populations in a search space. The optimization process is as follows: firstly, the initial particle swarm is randomly generated in the search range according to the upper and lower bounds of the variables, and each particle is assigned a position and velocity; subsequently, the particle state is updated through iteration, and the fitness function is used to determine the quality of the solution and guide the search direction.
$$x_i(t+1) = x_i(t) + v_i(t+1) \tag{5}$$

$$v_i(t+1) = w \times v_i(t) + c_1 \times r_1 \times \left(p_i(t) - x_i(t)\right) + c_2 \times r_2 \times \left(g(t) - x_i(t)\right) \tag{6}$$

For the $i$th particle, the position and velocity update equations are defined in Equations (5) and (6), where $x_i(t)$ and $v_i(t)$ denote the current position and velocity of the particle at the $t$th iteration, respectively; $w$ is the inertia weight that regulates motion continuity; $c_1$ and $c_2$ represent the cognitive and social coefficients, respectively, balancing individual learning and group learning; $r_1, r_2 \in [0, 1]$ are random numbers; $p_i(t)$ denotes the particle's own historical optimal position; and $g(t)$ is the global optimal position of the group.

$$v_i(t+1) = \underbrace{w \times v_i(t)}_{\text{inertia part}} + \underbrace{c_1 \times r_1 \times (p_i(t) - x_i(t))}_{\text{cognitive part}} + \underbrace{c_2 \times r_2 \times (g(t) - x_i(t))}_{\text{social part}} \tag{7}$$
As shown in Equation (7), the particle’s velocity update process can be decomposed into three parts. The inertial part maintains the particle motion trend to expand the search range; the cognitive part guides the particle back to its individual optimal region; and the social part drives the particle to approach the global optimal of the group. The interaction of the three components enables the PSO to achieve a dynamic balance between global exploration and local exploitation, which results in a better global optimization capability and convergence nature.
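As a concrete reference, the following minimal Python sketch performs one update of Equations (5)–(7) for a whole swarm at once; the parameter defaults for w, c1, and c2 are common PSO settings assumed here for illustration.

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update for all particles; x, v: (N, D) arrays."""
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    inertia   = w * v                      # keeps the current motion trend
    cognitive = c1 * r1 * (p_best - x)     # pulls back toward personal bests
    social    = c2 * r2 * (g_best - x)     # pulls toward the global best
    v_new = inertia + cognitive + social   # Equations (6)/(7)
    return x + v_new, v_new                # Equation (5)
```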

2.3. MLP: Multi-Layer Perceptron

The MLP is a typical feed-forward artificial neural network model [25] consisting of at least three layers of nodes: an input layer, a hidden layer, and an output layer. An example model with one hidden layer is shown in Figure 1. The orange nodes denote the input layer neurons, denoted as $X_1$ to $X_i$, whose number depends on the dimension of the input features; the purple nodes denote the hidden layer neurons, denoted as $H_1$ to $H_j$. The number of hidden layers and the number of nodes per hidden layer are not fixed and can be adjusted according to the specific task requirements and experimental settings. The blue nodes denote the output layer neurons, denoted as $O_1$ to $O_k$, whose number is usually determined by the type of training task.
In the training process of an MLP, the Mean Squared Error (MSE) is often used as the performance evaluation metric. It is calculated as shown in Equation (8), where $m$ denotes the number of output units, $d_i^k$ is the target output value of the $i$th output unit for the $k$th training sample, $o_i^k$ is the corresponding actual output value, and $s$ denotes the number of training samples in the dataset.

$$\overline{MSE} = \frac{\sum_{k=1}^{s} \sum_{i=1}^{m} \left(o_i^k - d_i^k\right)^2}{s} \tag{8}$$
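The metric of Equation (8) takes only a few lines to compute; the sketch below assumes the actual and target outputs are stacked into (s, m) arrays.

```python
import numpy as np

def mlp_mse(outputs, targets):
    """Equation (8): per-sample sum of squared output errors, averaged over s samples.

    outputs, targets: (s, m) arrays of actual and desired output values."""
    s = outputs.shape[0]
    return np.sum((outputs - targets) ** 2) / s
```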

3. Methods

3.1. Sine-Guided Soft Asymmetric Gaussian Perturbation Mechanism

The exploitation phase of the original BCA is driven only by a sine factor, and this single mechanism can easily cause the population to fall into a local optimum, limiting solution accuracy. Therefore, this section proposes the Sine-Guided Soft Asymmetric Gaussian Perturbation Mechanism. On top of the original sine factor, a small Gaussian perturbation [26] is added as a correction term that breaks the original symmetric structure without changing the main search direction, preventing the algorithm from stagnating just short of precise values and improving the probability that the population escapes local optima.
$$S_{j,d}^{t+1} = B_d^t + \left| A_{r,d}^t - A_{i,d}^t \right| \times \sin(\alpha) + N\left(0, 0.1^2\right) \tag{9}$$
As shown in Equation (9), compared with the original BCA exploitation mechanism, this mechanism adds a Gaussian perturbation term $N(0, 0.1^2)$. The resulting position change in the search space is shown in Figure 2, where the soldiers generated with and without the Gaussian perturbation correspond to the sine-guided soft asymmetric Gaussian perturbation mechanism and the original BCA exploitation mechanism, respectively. The original BCA relies only on positions generated under sine guidance, which may cause the algorithm to miss more refined solutions. By introducing a small Gaussian perturbation, the mechanism adds slight oscillations while maintaining stability in the original direction, allowing the population to explore potential optimal solutions in adjacent areas, thereby effectively preventing the population from falling into local optima and further improving overall convergence accuracy.
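A minimal sketch of Equation (9) follows, assuming scalar per-dimension inputs; the helper name and argument layout are illustrative.

```python
import numpy as np

def sine_gaussian_update(best_d, army_i_d, army_r_d, sigma=0.1):
    """Equation (9): sine-guided exploitation plus a small Gaussian correction."""
    alpha = np.random.uniform(0, 2 * np.pi)     # sine guidance angle
    gap = abs(army_r_d - army_i_d)              # besiege distance
    noise = np.random.normal(0.0, sigma)        # symmetry-breaking micro-perturbation
    return best_d + gap * np.sin(alpha) + noise
```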

3.2. Exponentially Modulated Spiral Perturbation Mechanism

The exploitation behavior of the original BCA is rigidly fixed, controlled by the single BCB binary gate. If the population enters the exploitation phase without sufficiently completing preliminary exploration, convergence often stalls, limiting further improvement in algorithm performance. Therefore, this section details the Exponentially Modulated Spiral Perturbation Mechanism. This mechanism adopts an adaptive spiral perturbation factor [27] to moderately enhance the population's exploration capability, increasing its distribution diversity in the search space and ensuring that the population can escape local optima and avoid premature convergence.
$$Spiral\_Factor = e^{b \cdot l} \times \cos(2\pi l) \tag{10}$$
As shown in Equation (11), the main improvement over the original BCA mechanism is that the original cosine coefficient is replaced by the spiral perturbation factor $Spiral\_Factor$, whose mathematical definition is given in Equation (10). This factor is not a linear spiral but an improvement based on the logarithmic spiral. Unlike traditional algorithms that modulate only the exponential parameter $l$ [28], the spiral factor proposed in this paper dynamically updates both parameters $b$ and $l$ simultaneously during the exponential modulation stage to enhance the flexibility and global exploration capability of the search process. The specific update rules for $b$ and $l$ are given in Equation (12).
$$S_{j,d}^{t+1} = A_{i,d}^t + Spiral\_Factor \times \left| A_{r,d}^t - A_{i,d}^t \right| \tag{11}$$
As shown in Figure 3, we compare the changing trends of two spiral factors: one is a fixed spiral factor with $b$ = 1 [28], and the other is a spiral factor with $b$ dynamically updated according to Equation (12). The green dashed box in the figure shows that the spiral factor with dynamic $b$ exhibits a higher peak amplitude, meaning it provides a wider exploration space for the population during the search process. Furthermore, the variation curve over iterations 200 to 250 in the red dashed box shows that the spiral factor with dynamic $b$ can jump rapidly from a value close to 0 to a peak of approximately 3 within a very short iteration interval. This fully demonstrates that the proposed exponentially modulated spiral factor has a stronger ability to change instantaneously and escape local extrema, thereby significantly improving the global search performance of the algorithm.
Figure 4 is a schematic diagram of the mechanism, showing the values of $Spiral\_Factor$ over 500 iterations and the position distribution of soldiers under the spiral perturbation. The value of $Spiral\_Factor$ is mainly concentrated in the range [−2, 3], while the range of $\sin(\alpha)$ and $\cos(\beta)$ used in the original BCA is [−1, 1]. By comparison, the value range of $Spiral\_Factor$ is roughly twice that of the original BCA sine and cosine factors, or more.
$$l = 1 - \left(\frac{t}{Max\_Gen} + 2\right) \times rand, \qquad b = 1 + 0.5 \times rand \times \left(1 - \left(\frac{t}{Max\_Gen}\right)^2\right) \tag{12}$$
In fact, the core of the improved spiral perturbation mechanism lies in its exponentially modulated adaptive spiral perturbation factor. This factor exhibits exponential variation, allowing $Spiral\_Factor$ to rapidly increase from a small value near 0 to 2 or even 3, thereby prompting the population to conduct large-step global exploration and preventing premature convergence. The iterative update rules for the parameters $b$ and $l$ in $Spiral\_Factor$ are given in Equation (12), where $b$ lies in the range [1, 1.5] and $l$ is a linearly decreasing variable with a range of [−2, 1].
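A minimal sketch of Equations (10)–(12) is given below; the function name is illustrative, and the forms of b and l follow Equation (12), consistent with the stated ranges b ∈ [1, 1.5] and l ∈ [−2, 1].

```python
import numpy as np

def spiral_factor(t, max_gen):
    """Equations (10) and (12): exponentially modulated adaptive spiral factor."""
    l = 1 - (t / max_gen + 2) * np.random.rand()               # l in [-2, 1]
    b = 1 + 0.5 * np.random.rand() * (1 - (t / max_gen) ** 2)  # b in [1, 1.5]
    return np.exp(b * l) * np.cos(2 * np.pi * l)               # Equation (10)

# The factor then scales the besiege distance, as in Equation (11):
#   S = A_i + spiral_factor(t, max_gen) * abs(A_r - A_i)
```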

3.3. Nonlinear Cognitive Coefficient-Driven Velocity Update Mechanism

The exploration mechanism of the original BCA is driven by the cosine factor, but long-term use of fixed cosine exploration may cause the population to miss the optimal solution, making it difficult for the original BCA to achieve a good balance between exploration and exploitation. To this end, this section adopts the Nonlinear Cognitive Coefficient-Driven Velocity Update Mechanism. By introducing and improving the concept of velocity from PSO, this mechanism constructs an update rule that references a random army position [29,30] and combines it with a dynamically changing cognitive coefficient to drive soldier position updates, so that the algorithm can better balance the exploration and exploitation process.
$$S_{j,d}^{t+1} = A_{i,d}^t + v_{i,d}^{t+1} \tag{13}$$

As shown in Equation (13), compared with the original BCA, this mechanism removes the position update term driven by the cosine factor and instead drives the position update with the velocity term $v_{i,d}^{t+1}$. The velocity is calculated as shown in Equation (14). The velocity formula in Equation (14) draws on the velocity update concept of PSO but is not identical: it mainly improves the cognitive part of PSO while retaining the inertia and social components.

$$v_{i,d}^{t+1} = w \times v_{i,d}^t + c_1 \times r_1 \times \left(A_{r,d}^t - A_{i,d}^t\right) + c_2 \times r_2 \times \left(B_d^t - A_{i,d}^t\right) \tag{14}$$
The inertia and social components are retained because they provide a directional basis for the search and maintain global convergence while accounting for the global optimal solution. The improvement to the cognitive part of PSO is mainly reflected in the design of the cognitive coefficient $c_1$ and its reference position. Specifically, the mechanism replaces the individual optimal position $p_i(t)$ of traditional PSO with a random army position, thereby enhancing the exploration ability of the population. The setting of the cognitive coefficient $c_1$ is shown in Equation (15); its value exhibits a nonlinear decreasing trend, ranging from 0.2 to 0.3. The change pattern of $c_1$ can be clearly seen in Figure 5.
$$c_1 = 0.2 + 0.1 \times rand \times \left(1 - \left(\frac{t}{Max\_Gen}\right)^2\right)^{\frac{t}{Max\_Gen}} \tag{15}$$
The entire Nonlinear Cognitive Coefficient-Driven Velocity Update Mechanism is triggered by a binary gate probability, whose expression is shown in Equation (16) and whose value ranges over [0.1, 0.15]. This binary gate is designed to apply the PSO-style velocity update with small probability, so that when the algorithm enters the exploration branch it can maintain the overall exploration capability while generating "fine search seeds" at specific moments. This ensures that when the population shows an early convergence trend, some individuals can still quickly approach the global optimum through the velocity accumulation effect of PSO. The introduction of this mechanism significantly enhances the exploration–exploitation balance of the algorithm, making the convergence process more stable and efficient.
$$p_2 = 0.1 + 0.05 \times rand \times \left(1 - \frac{t}{Max\_Gen}\right)^{\frac{t}{10 \times Max\_Gen}} \tag{16}$$
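A minimal sketch of Equations (13)–(16) follows; the inertia weight w and social coefficient c2 are not specified in the text and are assumed constants here, and the helper names are illustrative.

```python
import numpy as np

def velocity_update(v, army_i, army_r, best, t, max_gen, w=0.5, c2=1.0):
    """Equations (13)-(15): PSO-style velocity whose cognitive part
    references a random army instead of a personal best."""
    frac = t / max_gen
    c1 = 0.2 + 0.1 * np.random.rand() * (1 - frac ** 2) ** frac  # Equation (15)
    r1, r2 = np.random.rand(), np.random.rand()
    v_new = (w * v
             + c1 * r1 * (army_r - army_i)    # cognitive part -> random army
             + c2 * r2 * (best - army_i))     # social part -> best army
    return army_i + v_new, v_new              # position via Equation (13)

def gate_p2(t, max_gen):
    """Equation (16): small trigger probability in [0.1, 0.15]."""
    frac = t / max_gen
    return 0.1 + 0.05 * np.random.rand() * (1 - frac) ** (frac / 10)
```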

3.4. FBCA: Flexible Besiege and Conquer Algorithm

To achieve superior algorithmic performance, the Flexible Besiege and Conquer Algorithm (FBCA) is proposed by combining the three aforementioned innovative mechanisms. Using a flowchart and algorithm pseudocode, this section details how the three new mechanisms are integrated to form FBCA and how FBCA's flexibility manifests. Furthermore, the comprehensive improvements of FBCA in escaping local optima, improving convergence capability, and balancing exploration and exploitation are illustrated.
When $r_1 < BCB$:
$$S_{j,d}^{t+1} = \begin{cases} B_d^t + \left| A_{r,d}^t - A_{i,d}^t \right| \times \sin(\alpha) + N\left(0, 0.1^2\right), & \text{if } r_2 < p_1 \\ A_{i,d}^t + Spiral\_Factor \times \left| A_{r,d}^t - A_{i,d}^t \right|, & \text{otherwise} \end{cases} \tag{17}$$

When $r_1 \ge BCB$:
$$S_{j,d}^{t+1} = \begin{cases} A_{i,d}^t + v_{i,d}^{t+1}, & \text{if } r_3 < p_2 \\ A_{r,d}^t + \left| A_{r,d}^t - A_{i,d}^t \right| \times \cos(\beta), & \text{otherwise} \end{cases} \tag{18}$$
First, the overall mathematical expression of FBCA is given in Equations (17) and (18), where $r_1$, $r_2$, and $r_3$ are independent random values in [0, 1]. The main improvement lies in further refining the two branches controlled by BCB in the original BCA, introducing probability $p_1$ in the original exploitation branch and probability $p_2$ in the original exploration branch. When $r_1 < BCB$, the $r_2 < p_1$ branch may be triggered; when $r_1 \ge BCB$, the $r_3 < p_2$ branch may be triggered. For handling soldier positions that exceed the search boundaries, FBCA follows the original BCA: if a position exceeds the bounds when $r_1 < BCB$, it is updated according to Equation (3); if it exceeds the bounds when $r_1 \ge BCB$, it is updated according to Equation (4).
The control relationship between the two probabilities and the specific mechanisms is shown in Figure 6, whose flowchart presents the operating logic of FBCA. Here, $p_1$ is the alternating control probability between the Gaussian perturbation mechanism and the spiral perturbation mechanism, and its value is fixed at 0.5. When $r_2 < p_1$, the algorithm enters the Gaussian perturbation stage: the Gaussian perturbation term $N(0, 0.1^2)$ is first initialized, and then, under the guidance of the sine factor, the term $N$ is introduced as a correction into the soldier position update. When $r_2 \ge p_1$, the spiral perturbation branch is entered: the exponential parameters $b$ and $l$ of the spiral factor are first updated according to Equation (12), the spiral factor is then calculated using Equation (10), and finally it is substituted into Equation (11) to complete the soldier position update.
With the participation of probability $p_2$, the original exploration branch is divided into two stages: one retains the cosine-driven exploration mechanism of the original BCA, and the other introduces the cognitive coefficient-driven velocity update mechanism. Before judging whether $r_3$ is less than $p_2$, the value of $p_2$ is updated according to Equation (16). If $r_3 < p_2$, the cognitive coefficient-driven velocity update mechanism is triggered: the velocity term with nonlinear coefficient $c_1$ is updated through Equation (14) and substituted into Equation (13) to update the soldier position. If $r_3 \ge p_2$, the cosine-driven exploration mechanism updates the soldier position.
The pseudocode of FBCA is shown in Algorithm 2. As the algorithm flow shows, in each iteration FBCA probabilistically triggers the three new mechanisms. From the perspective of Creation 1, the adopted Gaussian perturbation term $N$ is a very small asymmetric perturbation, giving FBCA slight flexibility and local exploration ability while maintaining sine-guided exploitation. From the perspective of Creation 2, the exponentially modulated spiral perturbation factor is an adaptive mechanism whose exponential characteristics drive the population not only to carry out local fine exploitation but also to realize large-step global exploration, effectively avoiding local optima. From the perspective of Creation 3, the nonlinear cognitive coefficient-driven velocity update mechanism dynamically generates soldier positions so that the algorithm can converge quickly when excellent individuals appear, thereby improving global convergence performance. In summary, compared with the original BCA, FBCA has adaptive regulation characteristics and can flexibly switch between exploration and exploitation mechanisms at different iteration stages to achieve a more balanced and efficient global search.
Algorithm 2 The pseudocode of the FBCA
 1: Input: Population Size: N, Problem Dimension: D, Max Iteration: Max_Gen, The Number Of Soldiers: nSoldiers
 2: Output: Obtained best solution
 3: Initialize the solutions' positions randomly
 4: Initialize the velocities and BCB parameters
 5: while t ← 1 to Max_Gen do
 6:   for i ← 1 to nArmies do
 7:     for j ← 1 to nSoldiers do
 8:       for d ← 1 to D do
 9:         if rand < BCB then
10:           if rand < p1 then
11:             Creation1: Sine-Guided Soft Asymmetric Gaussian Perturbation
12:               Initialize Gaussian perturbation item N
13:               Update the soldier position by Equation (9)
14:             End Creation1
15:           else
16:             Creation2: Exponentially Modulated Spiral Perturbation
17:               Update Spiral_Factor by Equation (10)
18:               Update the soldier position by Equation (11)
19:             End Creation2
20:           end if
21:           Update the soldier position that exceeds the search boundaries by Equation (3)
22:         else
23:           Update p2 by Equation (16)
24:           if rand < p2 then
25:             Creation3: Nonlinear Cognitive Velocity Update
26:               Update v_{i,d} by Equation (14)
27:               Update the position by Equation (13)
28:             End Creation3
29:           else
30:             Update the position by Equation (2)
31:           end if
32:           Update the soldier position that exceeds the search boundaries by Equation (4)
33:         end if
34:       end for
35:     end for
36:   end for
37:   Update gBest and gBestPos
38: end while
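Putting the pieces together, the following condensed Python sketch mirrors one per-dimension soldier update of Algorithm 2 and Equations (17) and (18), reusing the helper functions sketched in the previous sections; the wiring and names are illustrative assumptions, not the released implementation.

```python
import numpy as np

def fbca_update_dim(d, armies, best, v, i, t, max_gen, lb, ub,
                    BCB=0.8, p1=0.5):
    """One per-dimension FBCA soldier update, Equations (17)-(18)."""
    r = np.random.randint(armies.shape[0])           # random army index
    A_i, A_r = armies[i, d], armies[r, d]
    gap = abs(A_r - A_i)
    if np.random.rand() < BCB:                       # besiege branch, Eq. (17)
        if np.random.rand() < p1:                    # Creation 1, Equation (9)
            s = sine_gaussian_update(best[d], A_i, A_r)
        else:                                        # Creation 2, Equation (11)
            s = A_i + spiral_factor(t, max_gen) * gap
        if s < lb or s > ub:                         # boundary fix, Equation (3)
            s = BCB * best[d] + (1 - BCB) * A_i
    else:                                            # conquer branch, Eq. (18)
        if np.random.rand() < gate_p2(t, max_gen):   # Creation 3, Equation (13)
            s, v[i, d] = velocity_update(v[i, d], A_i, A_r, best[d],
                                         t, max_gen)
        else:                                        # original BCA, Equation (2)
            s = A_r + gap * np.cos(np.random.uniform(0, 2 * np.pi))
        if s < lb or s > ub:                         # boundary fix, Equation (4)
            s = lb + (ub - lb) * np.random.rand()
    return s
```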

3.5. Analyzing the Computational Complexity of FBCA

To evaluate the computational complexity of FBCA, we define the following parameters: number of iterations $T$, problem dimension $D$, population size $N$, number of soldiers in each army $nSoldiers$, and function evaluation cost $c$. The overall computational complexity of FBCA mainly consists of the following parts: problem definition, initialization, soldier position update, and population evaluation.
First, the computational complexity of the problem definition phase is O(1). In the initialization phase, the algorithm initializes positions for $N/nSoldiers$ armies in each dimension, so the complexity is O($\frac{N}{nSoldiers} \times D$). In subsequent iterations, the main computational overhead is concentrated in the soldier update and population evaluation phases. The impact of the three new mechanisms on the complexity of a single iteration is as follows:
  • Sine-Guided Soft Asymmetric Gaussian Perturbation Mechanism: each dimension performs one sine operation and one Gaussian perturbation correction $N(0, 0.1^2)$. Each operation costs O(1), so the per-dimension complexity for a single soldier is O(1).
  • Exponentially Modulated Spiral Perturbation Mechanism: including the evaluation of the exponential function (exp), the cosine function (cos), and the parameters $b$ and $l$, the per-dimension complexity is also O(1).
  • Nonlinear Cognitive Coefficient-Driven Velocity Update Mechanism: involving the calculation of the nonlinear cognitive coefficient $c_1$ and the evaluation of the velocity update formula, the per-dimension complexity is also O(1).
Because each generated soldier executes only one of the three mechanisms, each of which has O(1) per-dimension cost, the generation complexity of a single soldier is O(D). Considering all soldiers and the number of iterations, the overall complexity of the soldier update stage is O($nSoldiers \times T \times D$). In the evaluation stage, the algorithm evaluates the fitness of all armies and selects the best army in each iteration. At initialization, a total of $N/nSoldiers$ armies are generated, and the evaluation cost of each army is $c$; therefore, the overall evaluation complexity is O($T \times \frac{N}{nSoldiers} \times c$).
$$\begin{aligned} O(FBCA) &= O(\text{problem definition}) + O(\text{initialization}) + O(\text{soldier update}) + O(\text{population evaluation}) \\ &= O(1) + O\!\left(\frac{N}{nSoldiers} \times D\right) + O(nSoldiers \times T \times D) + O\!\left(T \times \frac{N}{nSoldiers} \times c\right) \end{aligned} \tag{19}$$
Therefore, the overall complexity metric of FBCA is consistent with the baseline algorithm BCA. The three new mechanisms only introduce constant-level additional computations during the soldier update stage, without increasing the overall complexity cost. This indicates that FBCA successfully improves algorithm performance while maintaining computational efficiency. In summary, the overall computational complexity of FBCA is shown in Equation (19).

4. Experiments and Analysis

In this section, we evaluate the performance of FBCA using the IEEE CEC 2017 benchmark functions. First, Section 4.1 describes the experimental environment, dataset, and experimental parameter settings. Subsequently, Sections 4.2 to 4.5 systematically compare FBCA with 12 other state-of-the-art algorithms and provide a comprehensive performance evaluation of the test results: Section 4.2 conducts qualitative analysis, Section 4.3 quantitative evaluation, Section 4.4 statistical testing, and Section 4.5 stability analysis. The compared algorithms include the original BCA; the classical algorithms PSO, GA, and SCA; the emerging algorithms HOA, COA, PO, and HHO; and the high-performance algorithms SMA, CPO, and BKA. Section 4.6 performs parameter sensitivity analysis, ablation experiments, and comparison with state-of-the-art (SOTA) algorithms.

4.1. Experiment Settings

Environment: The hardware used in this study is a computer with an Intel(R) Core(TM) i7-11800H CPU @ 2.30 GHz. All code was written in MATLAB R2024a under the Windows 10 operating system.
Datasets: We used the IEEE CEC 2017 benchmark function set to verify the effectiveness of our algorithm. This function set covers unimodal functions (F1–F2), multimodal functions (F3–F9), hybrid functions (F10–F19), and composite functions (F20–F29). Its global optimal values are known, making it a standard tool for evaluating algorithm performance.
Experiment parameters: To ensure fair comparison between algorithms, we uniformly set the population size to 30 and the maximum number of iterations to 500 in the benchmark function experiments with different dimensions (30D, 50D, and 100D). This parameter setting is intended to ensure the reliability and validity of the experimental results. The detailed parameter configurations of the compared algorithms are shown in Table 1.

4.2. Qualitative Analysis

In order to validate and evaluate the performance of FBCA, this section selects the unimodal function F1 to examine the exploitation ability of the algorithm, and the multimodal functions F10, F13, and F17 to evaluate the performance of the algorithm in terms of the exploration–exploitation balance [41]. The search space shape is shown in the first column of Figure 7.
In the search history diagram, the blue points represent the distribution of particles in the population, while contour lines are the projections of the three-dimensional search space onto a two-dimensional plane, reflecting different values of the objective function. The color of the contour lines changes from yellow to green, then to blue and purple, corresponding to the improvement process of fitness; the darker the color, the higher the quality of the solution. Blue points represent the locations searched by particles, while red points represent the current location of the global optimum. Observing the population distribution reveals that, in most cases, blue points are concentrated near red points, indicating that the population is gradually approaching the optimum, fully demonstrating the fine-grained search capability of FBCA during the exploitation phase, as shown in Figure 7a. At the same time, apart from the dense area near the red points, a certain degree of clustering can still be observed in other locations, indicating that FBCA can still maintain good population diversity, as shown in Figure 7c.
Both the average fitness graph and the convergence curve reflect the fitness changes of the algorithm during the search process. The average fitness represents the mean fitness value of all particles in the population, while the convergence curve tracks the fitness of the best particle. As shown in Figure 7, the algorithm follows a search pattern of exploration followed by exploitation on all four test functions, but the two curves converge differently. The average fitness, as a macroscopic measure of overall fitness, drops quickly in the initial stage because of the large differences between particles at initialization; as iterations progress, FBCA mainly performs local searches around the current best soldier, so the overall fitness stabilizes after a sharp decrease. The convergence curve, in contrast, gives a more detailed view of the best particle's fitness and exhibits a monotonically decreasing trend, indicating that FBCA continuously improves solution quality. In summary, combined with the search history plot, FBCA not only accurately locates the optimal solution but also demonstrates good global convergence characteristics.
The trajectory history graph shows the trajectory of the optimal solution for the particle during the iteration process. The horizontal axis represents the number of iterations, and the vertical axis represents the change in the value of the optimal particle in the first dimension. The graph reflects the dynamic changes in the optimal solution during 500 iterations. The large fluctuations in the initial stage of FBCA indicate its ability to quickly explore the global region; as iterations progress, the algorithm exhibits differentiated search and convergence characteristics on different functions. For example, in Figure 7b, the algorithm completes extensive exploration and quickly locates the center of the multimodal region within about 150 iterations, and then approaches the optimal solution through fine exploitation; Figure 7d shows that exploration is the main activity during iterations 0 to 200, continuous exploitation is the stage from 200 to 400, and finally, it converges to the global optimum during iterations 400 to 500. This demonstrates that FBCA can adaptively and dynamically adjust the exploration and exploitation mechanisms to achieve an effective balance of exploration–exploitation.

4.3. Quantitative Analysis

This section shows the convergence curves in different dimensions, as shown in Figure 8, Figure 9 and Figure 10. Table 2 summarizes the comprehensive rankings of each algorithm in different dimensions. As shown in Table 3, we rank the algorithms on each test function one by one, their detailed mean and standard deviation data are listed in Table A1, Table A2, Table A3, and the best results of each function are marked with a gray background. At the same time, Figure 11 visualizes and compares the algorithm performance through radar maps. Finally, the experimental results show that FBCA has excellent performance in terms of convergence accuracy, avoidance of premature convergence, convergence speed, and exploration–exploitation balance ability.
In terms of convergence speed and accuracy, Figure 8, Figure 9 and Figure 10 show that FBCA finds a better solution in almost every iteration from 0 to 500, demonstrating excellent convergence speed. Comparing the final results at the 500th iteration, as shown in Figure 9d, FBCA's solution is optimal, and its convergence accuracy is significantly improved compared to BCA. Figure 9a,b also show that the optimal solution found is significantly superior to those of the other algorithms, demonstrating that FBCA has excellent convergence accuracy.
In terms of avoiding premature convergence, as shown in Figure 8e and Figure 9e, the curve slope is generally small during the period from 0 to 100 iterations, that is, the downward trend is not obvious, but the curve suddenly drops rapidly when it tends to equilibrium, and a better solution is found. This shows that FBCA’s excellent exploration ability can get rid of the extreme value trap in time when the algorithm falls into the local optimum, thereby effectively avoiding premature convergence.
The exploration–exploitation balance is a crucial metric for evaluating an algorithm's overall performance. From this perspective, the detailed and comprehensive rankings in Table 2 and Table 3 can be combined to assess FBCA's exploration–exploitation capabilities. Composite functions are among the most complex function types in the IEEE CEC 2017 test suite, and their multimodal extreme-value traps are well suited for evaluating FBCA's exploration–exploitation balance. Table 3 shows that the number of first-place finishes achieved by FBCA gradually increases across the three dimensions; on the 100-dimensional problems, FBCA ranks first on 6 of the 10 composite functions, which fully indicates that its exploration–exploitation capability has been significantly improved.
Further combined with Table 2, it can be seen that FBCA ranks first in the comprehensive ranking of all three dimensions. The radar map in Figure 11 also visually shows that FBCA is in the lead, with the smallest closed graphic area. This demonstrates FBCA’s excellent combined ability to balance exploration and exploitation and its excellent performance on the IEEE CEC 2017 test suite.

4.4. Statistical Testing

In this section, the Wilcoxon rank sum test was employed to evaluate the statistical significance of the algorithmic performance [42]. Specifically, for each benchmark function and dimensional setting, FBCA and the comparative algorithms were each executed 30 independent times, and the differences in their errors were ranked by sign. When the sum of positive ranks was significantly greater than that of the negative ranks (p < 0.05), the result was marked as “+”, indicating that FBCA performed significantly better than the compared algorithm. Conversely, a “–” sign indicated that FBCA performed significantly worse, while “=” denoted no statistically significant difference. The “(+/=/−)” column in Table 4 reports the win–tie–loss outcomes of this test under 30D, 50D, and 100D dimensions.
Experimental results demonstrate that FBCA significantly outperforms most compared algorithms. Across three dimensions, FBCA achieves a net win of more than 80 compared with most algorithms, demonstrating its robust performance across a wide range of problem sizes. Further comparisons with the original BCA and the high-performance algorithms SMA and CPO reveal that FBCA’s win rate steadily increases with increasing dimensionality. In particular, at 100 dimensions, FBCA achieves a significant win rate of 62% and 68% compared to BCA and CPO, respectively, demonstrating its significantly enhanced optimization capabilities for complex, high-dimensional problems.

4.5. Stability Analysis

Figure 12, Figure 13 and Figure 14 show the boxplot results for all algorithms across various dimensions. The horizontal axis represents the compared algorithms, and the vertical axis represents the evaluation results of each algorithm on the test function. Each algorithm’s corresponding boxplot includes statistical features such as the median, quartiles, maximum, minimum, and outliers, comprehensively reflecting the distribution characteristics of the algorithm’s performance [43]. As can be seen from the figures, FBCA demonstrates excellent performance in both result stability and robustness.
From the perspective of robustness, as shown in Figure 12, Figure 13 and Figure 14, the overall results of FBCA are generally lower than those of the other algorithms on the three test functions F1, F15, and F24, and the spacing between the upper and lower boundaries of its boxplots is small, indicating small fluctuations and high stability. On the F1 function in particular, the results are almost stable around 0, as shown in Figure 12a and Figure 13a, which further verifies that FBCA is robust across different dimensions. In addition, FBCA achieves excellent results in all three dimensions and across the four function types within each dimension, showing that the algorithm possesses not only cross-dimensional robustness but also good cross-function robustness.
From the perspective of stability, as dimensionality increases, the optimization results of FBCA become more pronounced on the more diverse and complex hybrid and composite functions, as shown in Figure 13e and Figure 14f. This shows that FBCA is able to jump out of local optima and find better global solutions while still maintaining high stability on these complex and diverse functions, indicating that the algorithm is not only stable but also that the exploration and exploitation strategies proposed in this paper take effect, improving FBCA's balance between exploration and exploitation.

4.6. Parameter Sensitivity and Mechanism Validation

In order to verify the rationality of FBCA, this section presents the parameter sensitivity analysis, ablation experiments, and performance comparisons with SOTA algorithms. As is well known, the IEEE CEC 2017 test suites become more difficult to optimize as the dimensions increase. Therefore, this paper selects 29 functions of dimension 100 from these test suites as evaluation benchmarks. In terms of experimental design, Section 4.6.1 conducts sensitivity analysis on changes in the probability parameter p 1 ; Section 4.6.2 examines the impact of different Gaussian perturbation terms N on algorithm performance; Section 4.6.3 conducts ablation experiments by replacing the spiral perturbation mechanism; Section 4.6.4 compares the performance differences between the original PSO velocity update method and the cognitively driven velocity update strategy in FBCA. Finally, Section 4.6.5 provides a comprehensive performance comparison with SaDE [44], L-SHADE [45], and L-SHADE_EpSin [46] algorithms.

4.6.1. Influence of the Probability Parameter p 1

To verify the rationality of the parameter $p_1$ setting, we conducted a sensitivity analysis over different values of $p_1$. The compared algorithms were BCA, FBCA ($p_1$ = 0.2), FBCA ($p_1$ = 0.5), and FBCA ($p_1$ = 0.8). As shown in Table 5, the algorithm performs best across the 29 test functions when $p_1$ = 0.5, ranking first overall. Further analysis reveals that FBCA with $p_1$ = 0.2 performs exceptionally well on composite functions, achieving first place on 6 of the 10 such functions, while FBCA with $p_1$ = 0.8 exhibits better convergence on unimodal functions. Comparatively, FBCA with $p_1$ = 0.5 demonstrates stronger versatility and adaptability: it achieves optimal results on three multimodal functions, two hybrid functions, and four composite functions. Figure 15 shows the convergence curves on the multimodal function F3, the hybrid function F11, and the composite functions F24 and F27. Overall, setting $p_1$ = 0.5 gives FBCA excellent comprehensive adaptability across the various types of optimization functions.

4.6.2. Effect of Gaussian Perturbation Variance

To further verify the rationality of the Gaussian perturbation parameter, we compared FBCA versions using $N(0, 0.05^2)$, $N(0, 0.1^2)$, and $N(0, 0.2^2)$. As shown in Table 6, the $N(0, 0.1^2)$ configuration exhibits the best overall performance, especially on hybrid and composite functions containing numerous local extremum traps. As shown in Figure 16b, FBCA with the $N(0, 0.1^2)$ configuration converges significantly faster and with higher accuracy. Although FBCA($N(0, 0.05^2)$) and FBCA($N(0, 0.2^2)$) also achieved some results on individual functions, they did not show clearly optimal characteristics on any function type. In summary, when the Gaussian perturbation term is set to $N(0, 0.1^2)$, FBCA not only achieves the best overall optimization performance but also demonstrates good adaptability to complex functions.

4.6.3. Comparison of Spiral Perturbation Mechanisms

To verify the effectiveness of the proposed exponentially modulated spiral perturbation mechanism, we designed ablation experiments over four spiral mechanisms, named $Spiral_1$ to $Spiral_4$. $Spiral_1$ and $Spiral_2$ are two variants of the logarithmic spiral: $Spiral_1$ corresponds to the proposed mechanism with a dynamic exponential parameter $b$, while $Spiral_2$ uses a fixed $b$ = 1; $Spiral_3$ is the Archimedean spiral, and $Spiral_4$ is the rose spiral.
$$\psi = rand \cdot 4\pi, \qquad linear\_term = c + d_{arch} \cdot \psi, \qquad Arch\_factor = linear\_term \cdot \cos(\psi) \tag{20}$$
The Archimedean spiral is defined in Equation (20), where $c$ = 0.5 and $d_{arch}$ = 0.3. The rose spiral is defined in Equation (21), where $e_{rose}$ = 1.2 and $n$ = 3. In Equations (20) and (21), $rand$ is a random value in the range [0, 1].
$$k(t) = 1 + 0.3 \cdot \frac{t}{Max\_Gen}, \qquad \xi = rand \cdot 2\pi \cdot k(t), \qquad Rose\_factor = e_{rose} \cdot \cos(n \cdot \xi) \cdot \cos(\xi) \tag{21}$$
As shown in Table 7, the proposed $Spiral_1$ delivers the best optimization performance, with an average ranking of 2.28, leading the second-place mechanism by 0.31. As shown in Figure 17b,c, FBCA with the exponentially modulated spiral perturbation (red curve) significantly outperforms the other mechanisms in global convergence. In detail, the Archimedean spiral has a certain advantage on composite functions, while the rose spiral exhibits relatively stable performance without obvious strengths. Comparing the two logarithmic spirals shows that although $Spiral_1$ does not achieve the optimal solution on unimodal functions, it outperforms $Spiral_2$ on the other three function types, showing particularly outstanding optimization ability on hybrid functions.

4.6.4. Validation of the Velocity Update Mechanism

To verify the difference between the cognitive-driven velocity update mechanism and the original PSO velocity update, we tested FBCA under both mechanisms, named FBCA ($pso\text{-}velocity$) and FBCA ($new\text{-}velocity$), respectively. In Table 8, $velocity_1$ is FBCA ($new\text{-}velocity$) and $velocity_2$ is FBCA ($pso\text{-}velocity$). Experimental results show that the improved velocity update strategy significantly enhances solution quality. As shown in Figure 18, FBCA ($new\text{-}velocity$) achieves better results on functions F1, F10, F15, and F24, demonstrating stronger robustness. In particular, on the F1 function in Figure 18a, the convergence curve of FBCA ($new\text{-}velocity$) is significantly better than that of FBCA ($pso\text{-}velocity$). Overall, FBCA ($new\text{-}velocity$) ranked first on 15 of the 29 test functions, while FBCA ($pso\text{-}velocity$) performed best on only seven, validating the effectiveness of the improved mechanism.

4.6.5. Comparison with SOTA Optimizers

To verify the leading performance of the proposed FBCA, we compared it with the cutting-edge algorithms SaDE, L-SHADE, and L-SHADE_EpSin. Across the four function types, SaDE showed no significant advantage, whereas L-SHADE_EpSin excelled at optimizing hybrid functions, achieving a strong 12 first-place results. However, L-SHADE also has some limitations. While FBCA achieved only seven first-place results, further analysis reveals its relative superiority. For example, in Figure 19a, L-SHADE exhibits significantly weaker optimization capability on F4, while FBCA easily outperforms L-SHADE, SaDE, and L-SHADE_EpSin, taking first place. Furthermore, although FBCA did not achieve first place on F13, F16, and F22, its second-place rankings still clearly surpass the fourth-place L-SHADE. Therefore, as shown in Table 9, FBCA ultimately achieves the overall first-place ranking.

5. MLP Optimization Problems

In this section, we evaluate and validate the feasibility of the FBCA algorithm on six MLP optimization problems, comprising three classification problems and three function approximation problems. Section 5.1 first introduces how an MLP is trained using FBCA. Section 5.2, Section 5.3 and Section 5.4 then present experimental results and analysis for the three classification problems: MLP_XOR, MLP_Iris, and MLP_Heart. Section 5.5, Section 5.6 and Section 5.7 demonstrate the optimization performance on the three function approximation problems: MLP_Sigmoid, MLP_Cosine, and MLP_Sine. Section 5.8 compares the experimental results of FBCA and gradient-based optimizers, and Section 5.9 discusses the performance of FBCA on MLP optimization problems, its limitations, and future work.

5.1. Training MLPs Using FBCA

Figure 20 shows an optimization model that combines FBCA with MLP, referred to as FBCA-MLP. In this model, FBCA is used to optimize the weight and bias parameters of the MLP to minimize the mean squared error (MSE) on the training dataset.
Specifically, the training process begins by preparing training and test samples. The training samples are then fed into the FBCA-MLP model, where iterative training searches for the optimal weight and bias parameters. Once obtained, the optimal parameters are applied to the MLP and evaluated on the test samples. Depending on the problem type, the model's accuracy or test error is ultimately output.
Before applying FBCA to the MLP optimization problems, we define in detail the parameters required for training, as shown in Table 10 and Table 11. The tables list the number of training samples, the number of test samples, the MLP structure, and the problem dimension for the two problem types.
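To make this encoding concrete, the following minimal sketch shows how a flat candidate vector can be decoded into the weights and biases of a one-hidden-layer MLP and scored by its training MSE; the layer sizes, sigmoid activation, and helper names are illustrative assumptions rather than the exact experimental configuration:

```python
import numpy as np

def unpack(theta, n_in, n_hidden, n_out):
    """Slice a flat candidate vector into the MLP's weights and biases."""
    i = 0
    W1 = theta[i:i + n_in * n_hidden].reshape(n_in, n_hidden)
    i += n_in * n_hidden
    b1 = theta[i:i + n_hidden]
    i += n_hidden
    W2 = theta[i:i + n_hidden * n_out].reshape(n_hidden, n_out)
    i += n_hidden * n_out
    b2 = theta[i:i + n_out]
    return W1, b1, W2, b2

def mse_fitness(theta, X, y, n_in, n_hidden, n_out):
    """Fitness of one candidate: training-set MSE of the decoded MLP."""
    W1, b1, W2, b2 = unpack(theta, n_in, n_hidden, n_out)
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # sigmoid hidden layer
    out = h @ W2 + b2
    return np.mean((out - y) ** 2)
```

Each candidate is then simply a vector of length n_in·n_hidden + n_hidden + n_hidden·n_out + n_out, and the best vector found by FBCA is decoded back into the MLP for evaluation on the test samples.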
According to the No Free Lunch (NFL) theorem [47], no single algorithm can perform best on all problems, which has become an academic consensus. Accordingly, this paper compares the experimental results of eight cutting-edge algorithms, including FBCA, SMA, the Guided Learning Strategy (GLS) [48], and the Osprey Optimization Algorithm (OOA) [49], on three MLP classification problems and three function approximation problems, to verify their respective areas of strength [50].

5.2. MLP_XOR Problem

This study uses a three-bit XOR dataset [51], whose input features are three-bit binary numbers and whose outputs are the parity values of the three bits. The comparative results in Table 12 show that FBCA not only reached 100% classification accuracy but also achieved a mean MSE on the order of 10^{-6}, while the compared algorithms only reached 10^{-1} to 10^{-3}; the optimization performance is thus improved by about three to five orders of magnitude. This indicates that FBCA demonstrates strong exploration capabilities in MLP optimization.
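For reference, the full dataset can be enumerated in two lines; this is an illustrative sketch, with the experimental train/test split following Table 10:

```python
import numpy as np

# All eight 3-bit inputs; the label is the parity (XOR) of the three bits.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)])
y = X.sum(axis=1) % 2   # 1 when an odd number of bits is set
```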

5.3. MLP_Iris Problem

The Iris dataset is a three-class classification problem [52,53]; its four input features are sepal length, sepal width, petal length, and petal width, and the outputs correspond to the three iris flower categories. As shown in Table 13, FBCA achieves the best mean MSE. Meanwhile, the MLP model optimized by FBCA also ranks first in best prediction accuracy, indicating that FBCA has superior global convergence ability compared to the other algorithms.

5.4. MLP_Heart Problem

Compared with the previous two classification problems, the dimension of the Heart dataset [54,55] increases from the order of 10 to the order of 1000, a growth of about 10^2, and the number of input features rises from 3 or 4 to 22, so FBCA faces a higher-dimensional and more complex optimization challenge. The results in Table 14 show that the mean MSE of the other algorithms is on the order of 10^{-1}, while that of FBCA reaches 10^{-2}, an order-of-magnitude improvement. At the same time, its accuracy also ranks first, indicating that FBCA can balance exploration and exploitation to find a better solution.

5.5. MLP_Sigmoid Problem

The Sigmoid dataset is the simplest function approximation problem in this article. Its specific expression is listed in Table 11, along with the mathematical expressions of the other function approximation problems [50]. Unlike classification problems, function approximation problems are evaluated not by accuracy but by test error. Table 15 shows that FBCA achieves the best performance in both mean MSE and test error. In particular, FBCA reduces the test error by 0.8 compared to BCA, demonstrating a significant improvement in FBCA's optimization performance on the Sigmoid problem.

5.6. MLP_Cosine Problem

The optimization difficulty for the Cosine dataset is higher than that for the Sigmoid problem. As shown in Table 16, FBCA maintains the best performance among the compared algorithms, demonstrating its ability to achieve higher-precision solutions during exploitation. Furthermore, FBCA achieves the lowest test error, surpassing not only BCA but also GLS, SMA, and HHO, all of which have lower test errors than BCA. This result further highlights the significant improvements and advantages of FBCA in function approximation.

5.7. MLP_Sine Problem

The Sine dataset is the most complex function approximation problem in this paper. Nevertheless, the FBCA-based MLP optimization model maintains stable performance and delivers a significant improvement. As shown in Table 17, FBCA achieves the best mean MSE and the lowest test error, demonstrating that FBCA effectively avoids falling into local optima and finds more suitable weight and bias parameters for MLP training.

5.8. Comparison with Gradient-Based Optimizers

In the experiments of Sections 5.2–5.7, we employed various metaheuristic algorithms as optimizers to adjust the weights and biases of the MLP. To further verify the effectiveness and persuasiveness of this optimization approach, this section selects one representative dataset from the classification problems and one from the function approximation problems and compares the results with four traditional gradient-based optimization algorithms [56]: SGD [57], Adam [58], RMSprop [59], and Adagrad [60].
For the classification problems, we chose the MLP_Heart dataset: its dimension is high and its MLP structure is relatively complex, making it a challenging optimization task. For the function approximation problems, we selected MLP_Sine, one of the hardest problems of its kind to converge on.
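For reference, a gradient-based baseline of this kind can be sketched in a few lines of PyTorch; the layer sizes, learning rate, epoch budget, and synthetic stand-in data below are illustrative assumptions, not the exact experimental configuration, and Adam is shown as one of the four baselines:

```python
import torch
import torch.nn as nn

# Synthetic stand-in for Heart-like data: 80 samples, 22 features,
# 2 one-hot classes (all sizes are assumptions for illustration).
X_train = torch.randn(80, 22)
y_train = nn.functional.one_hot(torch.randint(0, 2, (80,)), 2).float()

model = nn.Sequential(nn.Linear(22, 10), nn.Sigmoid(), nn.Linear(10, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                     # same MSE objective as FBCA-MLP

for _ in range(500):                       # fixed epoch budget for the sketch
    optimizer.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    optimizer.step()
```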
As shown in Table 18, the proposed FBCA adjusts weights and biases more effectively during MLP training, enabling the model to achieve better generalization performance. Compared with the four traditional gradient optimizers, FBCA achieves the highest classification accuracy and the smallest test error on both problems, verifying its significant role in improving MLP performance within a non-gradient optimization framework.

5.9. Discussion

To address the issue of nonlinear MLPs easily becoming trapped in saddle points and local optima during nonconvex optimization, we systematically evaluated the optimization capability of FBCA on the IEEE CEC 2017 benchmark functions and further validated it on six MLP optimization problems. FBCA's hierarchical structure, perturbation updates, and velocity-driven search fully leverage its global optimization capability, enabling more stable convergence and lower errors in small-to-medium-scale MLP tasks and demonstrating its effectiveness in nonconvex optimization of nonlinear MLPs. Through parameter sensitivity analysis, ablation experiments, and complexity comparison, this study systematically verifies the role of the improved FBCA mechanisms in enhancing search balance and convergence performance. These results collectively support FBCA's cutting-edge advantage on the six MLP optimization problems. Moreover, FBCA not only ranked first among the compared metaheuristic algorithms but also showed a representative lead over traditional metaheuristic algorithms and mainstream gradient optimizers, performing outstandingly on the most complex problems, MLP_Heart and MLP_Sine: it outperformed Adam and RMSprop by more than 10% in MLP_Heart accuracy and reduced the test error by more than 10 compared to SGD and Adagrad on MLP_Sine.
While this paper notes the potential applicability of FBCA to deep networks such as CNNs and Transformers, it does not provide corresponding empirical analysis or experimental verification. This limitation stems primarily from the study's focus on the weight and bias optimization of MLPs, without directly comparing optimization performance on other deep learning architectures. However, as a global optimization framework, FBCA possesses the scalability to adapt to different network depths, owing to its hierarchical population–army–soldier particle generation and global optimization mechanism. For CNNs, FBCA could model the convolutional kernel structure through block-level parameter mapping and weight-sharing constraints. Specifically, each convolutional kernel can be regarded as an army, with multiple soldiers searching local parameter subspaces; during the search, the soldiers' updated sub-blocks act synchronously on the shared weight structure, achieving overall optimization of the kernel. For Transformer architectures, FBCA could be combined with the multi-head self-attention mechanism to design a parallel evolutionary process: each attention head can be regarded as a relatively independent optimization subspace, corresponding to an independent army in FBCA, and the overall fitness is determined by the joint performance of all attention heads, ensuring global coordination and consistency in the distribution of multi-head attention. In summary, FBCA provides a new theoretical perspective and research direction for generalizing metaheuristic algorithms to complex deep networks such as CNNs and Transformers, laying the foundation for future optimization methods in structured parameter spaces.
Meanwhile, due to computational limitations, this study did not test large-scale datasets or deep MLP architectures; its performance and scalability in these scenarios therefore remain to be verified. Future work will focus on the behavior of FBCA in high-dimensional neural network parameter spaces and evaluate hybrid optimization strategies that combine it with gradient methods, further exploring its application potential in complex deep learning models.

6. Conclusions and Future Work

The proposed FBCA integrates soft asymmetric Gaussian perturbation, an adaptively modulated spiral perturbation factor, and a dynamically decreasing nonlinear cognitive coefficient, enabling the algorithm to achieve fast detection, fast adaptation, and fast convergence and demonstrating its flexibility and dynamic control during the search process. Its outstanding performance on the IEEE CEC 2017 benchmark suite, particularly on high-dimensional and complex composition optimization problems, shows that FBCA not only maintains stable convergence speed and high optimization accuracy but also achieves notable improvements on six MLP optimization problems. These results validate the algorithm's robustness and generalization ability in high-dimensional nonconvex optimization scenarios. More importantly, FBCA flexibly controls the proposed mechanisms through refined binary gates, achieving an adaptive exploration–exploitation balance and demonstrating a novel design approach for swarm intelligence algorithms that emphasizes structural flexibility and search balance.
Although FBCA demonstrates strong performance across multiple problems, several promising directions remain for future research. Theoretically, developing a multi-objective extension of FBCA could enhance its adaptability to problems involving multiple or conflicting constraints. Additionally, integrating strategic military decision-making models may inspire interdisciplinary advances in swarm-based intelligent optimization. The population initialization strategy also warrants further exploration: approaches such as Latin hypercube sampling, good-point sets, or customized initialization schemes could improve search-space coverage and convergence behavior. At the algorithmic structural level, future work could focus on novel exploration–exploitation balance mechanisms to address the global convergence limitations of the original BCA. Finally, from an application perspective, FBCA's performance in MLP optimization lays the foundation for its broader application in the neural network field, and its potential in complex intelligent tasks such as medical image analysis [61], financial time-series prediction [62], and natural language emotion recognition [63] can be further explored in the future.
Finally, we have made BCA-related research materials publicly available at www.jianhuajiang.com (accessed on 1 January 2025), and we welcome researchers interested in exploring the theoretical innovations and transfer applications of BCA and FBCA.

Author Contributions

Conceptualization, S.G. and C.G.; methodology, C.G. and J.J.; software, C.G.; validation, C.G.; resources, S.G. and J.J.; writing—original draft preparation, C.G.; writing—review and editing, S.G. and J.J.; visualization, C.G.; supervision, S.G. and J.J. All authors have read and agreed to the published version of the manuscript.

Funding

The authors gratefully acknowledge financial support from the Scientific Research Project of the Jilin Provincial Department of Education (No. JJKH20240171SK).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets analyzed during the current research can be obtained from the University of California, Irvine (UCI) Machine Learning Repository (https://archive.ics.uci.edu/, accessed on 1 January 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

IEEE CEC 2017 test suite detailed experimental data.
Table A1. Comparison of FBCA with other algorithms on IEEE CEC 2017 benchmark functions with D = 30.
F | Item | FBCA | BCA | SCA | GA | RSA | PSO | DBO | BKA | HHO | HOA | COA | CPO | PO | SMA
F1Std6.44 × 10 3 5.91 × 10 3 3.47 × 10 9 2.06 × 10 10 7.83 × 10 9 2.37 × 10 9 2.32 × 10 8 6.8 × 10 9 2.39 × 10 8 8.68 × 10 9 5.96 × 10 9 8.43 × 10 5 1.31 × 10 9 6.95 × 10 4
Mean1.13 × 10 4 6.64 × 10 3 2.07 × 10 10 5.30 × 10 10 4.80 × 10 10 4.22 × 10 9 2.75 × 10 8 9.35 × 10 9 4.72 × 10 8 3.71 × 10 10 5.70 × 10 10 7.47 × 10 5 3.23 × 10 9 1.33 × 10 5
F2Std2.69 × 10 4 3.64 × 10 4 1.70 × 10 4 6.02 × 10 4 5.99 × 10 3 7.69 × 10 4 2.64 × 10 4 1.88 × 10 4 6.81 × 10 3 6.74 × 10 3 5.87 × 10 3 1.10 × 10 4 8.57 × 10 3 1.35 × 10 4
Mean7.88 × 10 4 1.40 × 10 5 8.19 × 10 4 2.59 × 10 5 8.10 × 10 4 1.76 × 10 5 8.97 × 10 4 3.74 × 10 4 5.76 × 10 4 7.33 × 10 4 8.49 × 10 4 6.18 × 10 4 6.81 × 10 4 3.42 × 10 4
F3Std3.89 × 10 1 2.64 × 10 1 8.15 × 10 2 4.64 × 10 3 3.61 × 10 3 4.42 × 10 2 1.18 × 10 2 7.73 × 10 2 1.02 × 10 2 1.74 × 10 3 2.24 × 10 3 1.67 × 10 1 1.08 × 10 2 2.38 × 10 1
Mean4.93 × 10 2 5.06 × 10 2 2.91 × 10 3 9.24 × 10 3 1.09 × 10 4 1.24 × 10 3 6.44 × 10 2 1.27 × 10 3 7.23 × 10 2 7.95 × 10 3 1.57 × 10 4 5.24 × 10 2 7.51 × 10 2 5.12 × 10 2
F4Std3.23 × 10 1 7.64 × 10 1 2.73 × 10 1 7.15 × 10 1 2.85 × 10 1 4.11 × 10 1 5.37 × 10 1 4.88 × 10 1 3.78 × 10 1 3.15 × 10 1 4.00 × 10 1 1.58 × 10 1 3.79 × 10 1 3.94 × 10 1
Mean6.31 × 10 2 6.56 × 10 2 8.29 × 10 2 9.94 × 10 2 9.22 × 10 2 8.07 × 10 2 7.73 × 10 2 7.52 × 10 2 7.75 × 10 2 7.99 × 10 2 9.10 × 10 2 6.91 × 10 2 7.88 × 10 2 6.43 × 10 2
F5Std9.77 × 10 0 8.97 × 10 0 5.03 × 10 0 1.40 × 10 1 4.68 × 10 0 1.55 × 10 1 1.23 × 10 1 7.04 × 10 0 6.25 × 10 0 7.89 × 10 0 6.08 × 10 0 7.25 × 10 0 7.62 × 10 0 9.20 × 10 0
Mean6.18 × 10 2 6.01 × 10 2 6.65 × 10 2 7.20 × 10 2 6.92 × 10 2 6.58 × 10 2 6.49 × 10 2 6.60 × 10 2 6.67 × 10 2 6.66 × 10 2 6.88 × 10 2 6.02 × 10 2 6.69 × 10 2 6.18 × 10 2
F6Std1.71 × 10 2 7.99 × 10 1 6.91 × 10 1 3.17 × 10 2 3.99 × 10 1 6.58 × 10 1 7.36 × 10 1 8.16 × 10 1 5.76 × 10 1 5.32 × 10 1 5.29 × 10 1 2.02 × 10 1 8.33 × 10 1 4.88 × 10 1
Mean1.02 × 10 3 9.49 × 10 2 1.24 × 10 3 2.07 × 10 3 1.38 × 10 3 1.14 × 10 3 1.01 × 10 3 1.23 × 10 3 1.30 × 10 3 1.26 × 10 3 1.42 × 10 3 9.39 × 10 2 1.23 × 10 3 8.92 × 10 2
F7Std5.31 × 10 1 6.03 × 10 1 2.15 × 10 1 8.58 × 10 1 1.85 × 10 1 4.10 × 10 1 5.77 × 10 1 4.94 × 10 1 2.63 × 10 1 2.63 × 10 1 2.36 × 10 1 1.44 × 10 1 3.80 × 10 1 3.86 × 10 1
Mean9.32 × 10 2 9.82 × 10 2 1.09 × 10 3 1.24 × 10 3 1.14 × 10 3 1.07 × 10 3 1.03 × 10 3 9.87 × 10 2 9.89 × 10 2 1.06 × 10 3 1.15 × 10 3 9.87 × 10 2 1.03 × 10 3 9.39 × 10 2
F8Std3.50 × 10 3 9.05 × 10 2 1.75 × 10 3 2.73 × 10 3 1.01 × 10 3 3.02 × 10 3 1.72 × 10 3 1.41 × 10 3 1.32 × 10 3 1.00 × 10 3 1.44 × 10 3 4.27 × 10 2 1.47 × 10 3 1.31 × 10 3
Mean4.59 × 10 3 1.53 × 10 3 9.14 × 10 3 1.05 × 10 4 1.14 × 10 4 7.75 × 10 3 6.19 × 10 3 5.73 × 10 3 8.69 × 10 3 6.74 × 10 3 1.11 × 10 4 1.35 × 10 3 7.64 × 10 3 4.57 × 10 3
F9Std1.31 × 10 3 1.13 × 10 3 3.30 × 10 2 6.14 × 10 2 4.49 × 10 2 6.63 × 10 2 1.03 × 10 3 1.28 × 10 3 8.04 × 10 2 6.67 × 10 2 4.53 × 10 2 2.33 × 10 2 7.52 × 10 2 6.18 × 10 2
Mean4.52 × 10 3 8.83 × 10 3 8.86 × 10 3 8.52 × 10 3 8.49 × 10 3 7.67 × 10 3 6.42 × 10 3 5.86 × 10 3 6.05 × 10 3 7.56 × 10 3 8.89 × 10 3 7.63 × 10 3 7.10 × 10 3 4.65 × 10 3
F10Std4.05 × 10 1 1.04 × 10 2 1.42 × 10 3 1.29 × 10 4 2.53 × 10 3 3.08 × 10 3 1.28 × 10 3 1.21 × 10 3 2.83 × 10 2 1.47 × 10 3 2.21 × 10 3 2.78 × 10 1 6.46 × 10 2 5.58 × 10 1
Mean1.21 × 10 3 1.26 × 10 3 4.11 × 10 3 2.44 × 10 4 9.06 × 10 3 4.41 × 10 3 1.97 × 10 3 1.87 × 10 3 1.63 × 10 3 5.99 × 10 3 9.17 × 10 3 1.28 × 10 3 2.57 × 10 3 1.29 × 10 3
F11Std8.96 × 10 5 1.25 × 10 6 7.78 × 10 8 4.71 × 10 9 3.25 × 10 9 6.09 × 10 8 1.16 × 10 8 1.42 × 10 9 9.04 × 10 7 1.94 × 10 9 3.95 × 10 9 8.74 × 10 5 2.76 × 10 8 3.98 × 10 6
Mean1.05 × 10 6 1.29 × 10 6 2.64 × 10 9 7.15 × 10 9 1.49 × 10 10 5.58 × 10 8 8.64 × 10 7 3.64 × 10 8 8.74 × 10 7 7.03 × 10 9 1.32 × 10 10 1.32 × 10 6 3.22 × 10 8 5.21 × 10 6
F12Std2.05 × 10 4 3.14 × 10 4 4.04 × 10 8 4.79 × 10 9 7.28 × 10 9 1.94 × 10 9 1.19 × 10 7 4.20 × 10 8 1.18 × 10 6 1.27 × 10 9 4.48 × 10 9 1.23 × 10 4 2.66 × 10 7 5.73 × 10 4
Mean1.57 × 10 4 1.92 × 10 4 1.11 × 10 9 4.69 × 10 9 1.26 × 10 10 8.49 × 10 8 4.70 × 10 6 1.41 × 10 8 1.20 × 10 6 2.27 × 10 9 1.03 × 10 10 2.27 × 10 4 1.14 × 10 7 7.87 × 10 4
F13Std2.04 × 10 5 3.89 × 10 5 1.03 × 10 6 1.10 × 10 7 6.65 × 10 6 3.04 × 10 6 7.61 × 10 5 1.33 × 10 5 1.82 × 10 6 8.72 × 10 5 2.85 × 10 6 8.48 × 10 2 6.30 × 10 5 1.64 × 10 5
Mean1.70 × 10 5 2.95 × 10 5 1.17 × 10 6 7.17 × 10 6 7.99 × 10 6 1.55 × 10 6 4.78 × 10 5 3.72 × 10 4 1.82 × 10 6 1.42 × 10 6 3.68 × 10 6 2.20 × 10 3 6.89 × 10 5 2.01 × 10 5
F14Std1.06 × 10 4 8.57 × 10 3 4.13 × 10 7 4.57 × 10 8 1.07 × 10 8 4.60 × 10 8 6.10 × 10 4 1.73 × 10 5 7.60 × 10 4 1.30 × 10 8 5.49 × 10 8 2.75 × 10 3 1.42 × 10 6 1.58 × 10 4
Mean1.28 × 10 4 8.97 × 10 3 5.91 × 10 7 3.77 × 10 8 5.48 × 10 8 1.23 × 10 8 8.34 × 10 4 9.08 × 10 4 1.33 × 10 5 1.35 × 10 8 6.68 × 10 8 4.65 × 10 3 9.81 × 10 5 2.93 × 10 4
F15Std3.68 × 10 2 6.71 × 10 2 2.19 × 10 2 7.08 × 10 2 6.62 × 10 2 5.32 × 10 2 4.92 × 10 2 4.53 × 10 2 4.25 × 10 2 6.54 × 10 2 1.06 × 10 3 2.17 × 10 2 2.96 × 10 2 2.66 × 10 2
Mean2.65 × 10 3 3.21 × 10 3 4.13 × 10 3 4.58 × 10 3 5.39 × 10 3 3.93 × 10 3 3.47 × 10 3 3.13 × 10 3 3.82 × 10 3 4.76 × 10 3 6.28 × 10 3 3.10 × 10 3 3.76 × 10 3 2.68 × 10 3
F16Std2.85 × 10 2 2.53 × 10 2 1.88 × 10 2 3.27 × 10 2 5.30 × 10 3 2.94 × 10 2 2.48 × 10 2 2.98 × 10 2 3.10 × 10 2 5.07 × 10 2 3.22 × 10 3 1.32 × 10 2 2.69 × 10 2 2.43 × 10 2
Mean2.28 × 10 3 2.04 × 10 3 2.77 × 10 3 3.15 × 10 3 7.29 × 10 3 2.62 × 10 3 2.66 × 10 3 2.44 × 10 3 2.63 × 10 3 3.06 × 10 3 5.50 × 10 3 2.04 × 10 3 2.68 × 10 3 2.40 × 10 3
F17Std1.63 × 10 6 3.87 × 10 6 5.88 × 10 6 6.92 × 10 7 2.53 × 10 7 3.54 × 10 7 6.54 × 10 6 1.94 × 10 5 4.35 × 10 6 1.49 × 10 7 4.72 × 10 7 7.34 × 10 4 9.27 × 10 6 3.52 × 10 6
Mean1.50 × 10 6 3.46 × 10 6 1.17 × 10 7 4.56 × 10 7 3.74 × 10 7 1.53 × 10 7 3.90 × 10 6 1.98 × 10 5 3.90 × 10 6 1.53 × 10 7 5.31 × 10 7 1.44 × 10 5 7.66 × 10 6 2.82 × 10 6
F18Std1.65 × 10 4 1.41 × 10 4 5.44 × 10 7 2.63 × 10 8 3.44 × 10 8 5.00 × 10 7 2.46 × 10 6 2.17 × 10 7 1.21 × 10 6 4.64 × 10 7 4.96 × 10 8 5.46 × 10 3 4.94 × 10 6 2.23 × 10 4
Mean1.64 × 10 4 1.16 × 10 4 9.55 × 10 7 2.62 × 10 8 6.87 × 10 8 2.89 × 10 7 1.58 × 10 6 4.62 × 10 6 1.47 × 10 6 4.10 × 10 7 7.67 × 10 8 6.09 × 10 3 6.42 × 10 6 2.72 × 10 4
F19Std2.33 × 10 2 3.72 × 10 2 1.52 × 10 2 2.47 × 10 2 1.34 × 10 2 1.66 × 10 2 2.31 × 10 2 1.91 × 10 2 2.21 × 10 2 1.65 × 10 2 2.20 × 10 2 1.45 × 10 2 1.94 × 10 2 1.84 × 10 2
Mean2.54 × 10 3 2.62 × 10 3 2.98 × 10 3 3.25 × 10 3 3.10 × 10 3 2.85 × 10 3 2.80 × 10 3 2.65 × 10 3 2.83 × 10 3 2.70 × 10 3 3.02 × 10 3 2.50 × 10 3 2.72 × 10 3 2.60 × 10 3
F20Std4.36 × 10 1 7.50 × 10 1 1.80 × 10 1 7.05 × 10 1 4.56 × 10 1 5.14 × 10 1 5.85 × 10 1 5.60 × 10 1 5.76 × 10 1 4.03 × 10 1 4.34 × 10 1 1.73 × 10 1 7.01 × 10 1 4.03 × 10 1
Mean2.42 × 10 3 2.47 × 10 3 2.61 × 10 3 2.84 × 10 3 2.73 × 10 3 2.60 × 10 3 2.56 × 10 3 2.56 × 10 3 2.60 × 10 3 2.60 × 10 3 2.76 × 10 3 2.48 × 10 3 2.55 × 10 3 2.42 × 10 3
F21Std2.46 × 10 3 3.97 × 10 3 2.02 × 10 3 1.31 × 10 3 9.86 × 10 2 2.84 × 10 3 2.51 × 10 3 1.55 × 10 3 1.47 × 10 3 1.30 × 10 3 6.22 × 10 2 3.79 × 10 0 2.35 × 10 3 1.53 × 10 3
Mean5.56 × 10 3 5.75 × 10 3 9.32 × 10 3 9.80 × 10 3 8.90 × 10 3 7.62 × 10 3 4.93 × 10 3 6.70 × 10 3 7.13 × 10 3 7.68 × 10 3 9.66 × 10 3 2.31 × 10 3 4.55 × 10 3 5.81 × 10 3
F22Std8.14 × 10 1 8.33 × 10 1 3.82 × 10 1 1.78 × 10 2 6.94 × 10 1 1.82 × 10 2 8.68 × 10 1 1.11 × 10 2 1.47 × 10 2 1.58 × 10 2 1.59 × 10 2 1.67 × 10 1 7.36 × 10 1 3.76 × 10 1
Mean2.85 × 10 3 2.80 × 10 3 3.08 × 10 3 3.65 × 10 3 3.33 × 10 3 3.30 × 10 3 3.03 × 10 3 3.12 × 10 3 3.24 × 10 3 3.48 × 10 3 3.66 × 10 3 2.85 × 10 3 3.06 × 10 3 2.77 × 10 3
F23Std8.13 × 10 1 8.82 × 10 1 3.51 × 10 1 2.62 × 10 2 7.87 × 10 1 1.93 × 10 2 6.78 × 10 1 1.77 × 10 2 1.60 × 10 2 1.76 × 10 2 1.77 × 10 2 2.15 × 10 1 8.33 × 10 1 4.06 × 10 1
Mean3.02 × 10 3 3.01 × 10 3 3.25 × 10 3 3.93 × 10 3 3.47 × 10 3 3.65 × 10 3 3.19 × 10 3 3.35 × 10 3 3.55 × 10 3 3.83 × 10 3 3.82 × 10 3 3.02 × 10 3 3.20 × 10 3 2.95 × 10 3
F24Std2.18 × 10 1 1.53 × 10 1 2.26 × 10 2 1.20 × 10 3 7.07 × 10 2 9.83 × 10 1 5.34 × 10 1 3.13 × 10 2 3.62 × 10 1 2.04 × 10 2 4.70 × 10 2 1.76 × 10 1 7.78 × 10 1 1.38 × 10 1
Mean2.90 × 10 3 2.90 × 10 3 3.60 × 10 3 6.18 × 10 3 4.84 × 10 3 3.27 × 10 3 2.99 × 10 3 3.14 × 10 3 3.02 × 10 3 3.94 × 10 3 5.18 × 10 3 2.92 × 10 3 3.11 × 10 3 2.90 × 10 3
F25Std1.25 × 10 3 8.60 × 10 2 5.22 × 10 2 1.08 × 10 3 9.74 × 10 2 1.35 × 10 3 6.58 × 10 2 1.63 × 10 3 1.37 × 10 3 9.09 × 10 2 7.97 × 10 2 9.97 × 10 2 1.38 × 10 3 4.61 × 10 2
Mean6.15 × 10 3 4.99 × 10 3 7.79 × 10 3 1.08 × 10 4 1.05 × 10 4 7.30 × 10 3 6.86 × 10 3 7.89 × 10 3 7.84 × 10 3 9.51 × 10 3 1.18 × 10 4 5.36 × 10 3 7.57 × 10 3 5.06 × 10 3
F26Std2.79 × 10 1 1.79 × 10 1 9.42 × 10 1 5.80 × 10 2 3.79 × 10 2 2.72 × 10 2 4.52 × 10 1 8.93 × 10 1 3.29 × 10 2 2.55 × 10 2 4.67 × 10 2 1.42 × 10 1 9.24 × 10 1 2.06 × 10 1
Mean3.28 × 10 3 3.23 × 10 3 3.58 × 10 3 4.49 × 10 3 3.95 × 10 3 3.65 × 10 3 3.32 × 10 3 3.39 × 10 3 3.66 × 10 3 4.25 × 10 3 4.51 × 10 3 3.28 × 10 3 3.41 × 10 3 3.23 × 10 3
F27Std4.65 × 10 1 2.13 × 10 1 4.52 × 10 2 1.39 × 10 3 7.00 × 10 2 1.18 × 10 3 7.50 × 10 2 7.72 × 10 2 9.13 × 10 1 6.74 × 10 2 6.58 × 10 2 2.85 × 10 1 9.08 × 10 1 4.43 × 10 1
Mean2.75 × 10 2 2.47 × 10 2 4.47 × 10 3 7.75 × 10 3 6.37 × 10 3 4.06 × 10 3 3.71 × 10 3 3.82 × 10 3 3.49 × 10 3 5.68 × 10 3 7.57 × 10 3 1.69 × 10 2 3.57 × 10 3 2.11 × 10 2
F28Std3.90 × 10 3 3.77 × 10 3 3.71 × 10 2 1.40 × 10 3 2.35 × 10 3 5.70 × 10 2 4.03 × 10 2 6.91 × 10 2 3.76 × 10 2 7.75 × 10 2 2.00 × 10 3 4.05 × 10 3 4.79 × 10 2 4.03 × 10 3
Mean4.07 × 10 3 3.79 × 10 3 5.29 × 10 3 6.68 × 10 3 7.28 × 10 3 4.81 × 10 3 4.46 × 10 3 4.90 × 10 3 4.97 × 10 3 6.13 × 10 3 8.59 × 10 3 4.00 × 10 3 5.01 × 10 3 4.03 × 10 3
F29Std9.59 × 10 3 5.73 × 10 3 8.34 × 10 7 3.68 × 10 8 1.14 × 10 9 5.13 × 10 7 5.02 × 10 6 4.98 × 10 7 9.39 × 10 6 4.74 × 10 8 9.10 × 10 8 7.81 × 10 4 4.22 × 10 7 8.46 × 10 4
Mean1.70 × 10 4 1.33 × 10 4 2.03 × 10 8 2.98 × 10 8 3.00 × 10 9 2.99 × 10 7 2.90 × 10 6 1.90 × 10 7 1.13 × 10 7 5.87 × 10 8 1.52 × 10 9 1.35 × 10 5 6.12 × 10 7 1.17 × 10 5
Table A2. Comparison of FBCA with other algorithms on IEEE CEC 2017 benchmark functions with D = 50.
F | Item | FBCA | BCA | SCA | GA | RSA | PSO | DBO | BKA | HHO | HOA | COA | CPO | PO | SMA
F1Std2.73 × 10 5 8.68 × 10 6 8.35 × 10 9 2.54 × 10 10 7.82 × 10 9 5.19 × 10 9 1.95 × 10 10 2.11 × 10 10 1.81 × 10 9 8.39 × 10 9 6.1 × 10 9 1.43 × 10 8 4.08 × 10 9 2.26 × 10 6
Mean4.95 × 10 5 1.06 × 10 7 6.58 × 10 10 1.69 × 10 11 10.00 × 10 10 1.90 × 10 10 1.02 × 10 10 4.44 × 10 10 5.25 × 10 9 8.54 × 10 10 1.15 × 10 11 1.88 × 10 8 1.58 × 10 10 6.55 × 10 6
F2Std6.26 × 10 4 7.47 × 10 4 3.19 × 10 4 9.81 × 10 4 1.57 × 10 4 1.27 × 10 5 6.75 × 10 4 2.84 × 10 4 1.96 × 10 4 1.39 × 10 4 2.07 × 10 4 1.99 × 10 4 2.94 × 10 4 6.25 × 10 4
Mean2.34 × 10 5 3.26 × 10 5 2.20 × 10 5 4.68 × 10 5 1.77 × 10 5 3.96 × 10 5 2.69 × 10 5 9.58 × 10 4 1.72 × 10 5 1.59 × 10 5 2.01 × 10 5 1.78 × 10 5 1.94 × 10 5 1.82 × 10 5
F3Std6.55 × 10 1 5.72 × 10 1 3.50 × 10 3 1.57 × 10 4 5.72 × 10 3 2.43 × 10 3 7.37 × 10 2 6.84 × 10 3 6.01 × 10 2 5.01 × 10 3 5.17 × 10 3 4.94 × 10 1 9.82 × 10 2 6.74 × 10 1
Mean6.08 × 10 2 6.29 × 10 2 1.45 × 10 4 4.46 × 10 4 2.92 × 10 4 3.60 × 10 3 1.43 × 10 3 7.39 × 10 3 2.00 × 10 3 2.38 × 10 4 3.81 × 10 4 7.01 × 10 2 2.67 × 10 3 6.31 × 10 2
F4Std8.03 × 10 1 1.57 × 10 2 3.34 × 10 1 9.83 × 10 1 3.08 × 10 1 5.03 × 10 1 1.05 × 10 2 7.48 × 10 1 3.30 × 10 1 4.20 × 10 1 3.11 × 10 1 2.17 × 10 1 4.94 × 10 1 5.01 × 10 1
Mean7.88 × 10 2 8.48 × 10 2 1.14 × 10 3 1.51 × 10 3 1.17 × 10 3 1.11 × 10 3 9.88 × 10 2 9.07 × 10 2 9.43 × 10 2 1.08 × 10 3 1.19 × 10 3 9.29 × 10 2 1.02 × 10 3 8.16 × 10 2
F5Std6.69 × 10 0 2.75 × 10 0 6.21 × 10 0 1.35 × 10 1 4.90 × 10 0 1.55 × 10 1 1.22 × 10 1 9.83 × 10 0 5.00 × 10 0 6.97 × 10 0 2.35 × 10 0 3.19 × 10 0 7.41 × 10 0 1.18 × 10 1
Mean6.26 × 10 2 6.07 × 10 2 6.84 × 10 2 7.39 × 10 2 7.04 × 10 2 6.79 × 10 2 6.66 × 10 2 6.71 × 10 2 6.80 × 10 2 6.85 × 10 2 7.04 × 10 2 6.11 × 10 2 6.83 × 10 2 6.47 × 10 2
F6Std1.78 × 10 2 9.88 × 10 1 1.07 × 10 2 6.29 × 10 2 5.82 × 10 1 1.04 × 10 2 1.47 × 10 2 9.43 × 10 1 9.91 × 10 1 7.00 × 10 1 4.47 × 10 1 4.36 × 10 1 1.03 × 10 2 1.01 × 10 2
Mean1.35 × 10 3 1.28 × 10 3 1.91 × 10 3 4.50 × 10 3 1.96 × 10 3 1.73 × 10 3 1.41 × 10 3 1.76 × 10 3 1.88 × 10 3 1.85 × 10 3 2.06 × 10 3 1.26 × 10 3 1.79 × 10 3 1.17 × 10 3
F7Std6.34 × 10 1 1.34 × 10 2 3.22 × 10 1 1.01 × 10 2 2.15 × 10 1 6.37 × 10 1 9.63 × 10 1 1.09 × 10 2 3.82 × 10 1 4.27 × 10 1 2.97 × 10 1 2.25 × 10 1 5.95 × 10 1 5.27 × 10 1
Mean1.08 × 10 3 1.20 × 10 3 1.44 × 10 3 1.79 × 10 3 1.51 × 10 3 1.41 × 10 3 1.31 × 10 3 1.26 × 10 3 1.24 × 10 3 1.43 × 10 3 1.50 × 10 3 1.22 × 10 3 1.34 × 10 3 1.10 × 10 3
F8Std1.01 × 10 4 3.92 × 10 3 4.25 × 10 3 6.13 × 10 3 2.18 × 10 3 8.53 × 10 3 7.80 × 10 3 4.59 × 10 3 2.69 × 10 3 3.55 × 10 3 2.75 × 10 3 2.57 × 10 3 4.89 × 10 3 3.46 × 10 3
Mean1.99 × 10 4 6.73 × 10 3 3.38 × 10 4 4.84 × 10 4 3.91 × 10 4 2.93 × 10 4 2.96 × 10 4 1.74 × 10 4 3.23 × 10 4 2.82 × 10 4 3.77 × 10 4 7.83 × 10 3 2.74 × 10 4 1.70 × 10 4
F9Std3.40 × 10 3 9.01 × 10 2 3.83 × 10 2 9.06 × 10 2 4.83 × 10 2 1.00 × 10 3 2.18 × 10 3 1.61 × 10 3 1.16 × 10 3 7.75 × 10 2 3.76 × 10 2 5.37 × 10 2 1.02 × 10 3 1.01 × 10 3
Mean8.88 × 10 3 1.57 × 10 4 1.54 × 10 4 1.52 × 10 4 1.52 × 10 4 1.42 × 10 4 1.17 × 10 4 9.47 × 10 3 1.06 × 10 4 1.31 × 10 4 1.55 × 10 4 1.36 × 10 4 1.24 × 10 4 8.30 × 10 3
F10Std1.59 × 10 3 1.12 × 10 3 3.11 × 10 3 2.10 × 10 4 2.25 × 10 3 1.26 × 10 4 3.09 × 10 3 4.58 × 10 3 7.43 × 10 2 2.27 × 10 3 2.44 × 10 3 2.77 × 10 2 2.60 × 10 3 7.39 × 10 1
Mean2.06 × 10 3 2.31 × 10 3 1.26 × 10 4 6.26 × 10 4 2.18 × 10 4 1.93 × 10 4 4.19 × 10 3 5.63 × 10 3 3.10 × 10 3 1.89 × 10 4 2.67 × 10 4 1.84 × 10 3 7.94 × 10 3 1.44 × 10 3
F11Std7.59 × 10 6 5.38 × 10 6 6.77 × 10 9 2.59 × 10 10 1.95 × 10 10 7.50 × 10 9 5.78 × 10 8 1.56 × 10 10 6.82 × 10 8 1.09 × 10 10 1.36 × 10 10 1.05 × 10 7 1.19 × 10 9 1.92 × 10 7
Mean1.29 × 10 7 8.83 × 10 6 2.24 × 10 10 6.73 × 10 10 7.46 × 10 10 9.63 × 10 9 8.85 × 10 8 1.25 × 10 10 1.03 × 10 9 5.24 × 10 10 9.30 × 10 10 2.01 × 10 7 2.54 × 10 9 3.84 × 10 7
F12Std8.06 × 10 3 7.78 × 10 3 3.15 × 10 9 1.56 × 10 10 1.50 × 10 10 6.04 × 10 9 1.37 × 10 8 5.49 × 10 9 6.32 × 10 7 8.06 × 10 9 1.29 × 10 10 3.46 × 10 4 2.13 × 10 8 8.08 × 10 4
Mean1.44 × 10 4 9.44 × 10 3 7.19 × 10 9 2.91 × 10 10 4.80 × 10 10 5.83 × 10 9 9.34 × 10 7 1.71 × 10 9 4.08 × 10 7 2.24 × 10 10 5.38 × 10 10 2.58 × 10 4 3.47 × 10 8 1.32 × 10 5
F13Std8.22 × 10 5 6.68 × 10 5 5.86 × 10 6 1.30 × 10 8 5.40 × 10 7 4.76 × 10 7 3.85 × 10 6 1.04 × 10 5 7.33 × 10 6 2.99 × 10 7 9.57 × 10 7 1.66 × 10 5 2.87 × 10 6 7.45 × 10 5
Mean6.52 × 10 5 7.46 × 10 5 8.64 × 10 6 9.82 × 10 7 7.07 × 10 7 2.13 × 10 7 3.62 × 10 6 1.35 × 10 5 6.22 × 10 6 4.69 × 10 7 1.18 × 10 8 1.31 × 10 5 4.94 × 10 6 9.28 × 10 5
F14Std8.19 × 10 3 6.77 × 10 3 4.72 × 10 8 4.41 × 10 9 2.46 × 10 9 8.04 × 10 7 7.61 × 10 7 8.93 × 10 8 5.69 × 10 6 2.35 × 10 9 4.17 × 10 9 1.22 × 10 4 2.39 × 10 7 1.94 × 10 4
Mean1.08 × 10 4 8.76 × 10 3 1.15 × 10 9 7.82 × 10 9 5.96 × 10 9 7.06 × 10 7 2.03 × 10 7 2.17 × 10 8 3.32 × 10 6 4.43 × 10 9 1.03 × 10 10 1.44 × 10 4 1.83 × 10 7 4.98 × 10 4
F15Std7.32 × 10 2 1.13 × 10 3 3.58 × 10 2 1.29 × 10 3 1.58 × 10 3 9.60 × 10 2 6.64 × 10 2 1.10 × 10 3 8.06 × 10 2 9.78 × 10 2 1.39 × 10 3 2.69 × 10 2 7.42 × 10 2 4.05 × 10 2
Mean3.68 × 10 3 4.86 × 10 3 6.26 × 10 3 7.85 × 10 3 8.33 × 10 3 5.84 × 10 3 4.93 × 10 3 4.46 × 10 3 4.95 × 10 3 7.08 × 10 3 1.06 × 10 4 4.53 × 10 3 5.56 × 10 3 3.70 × 10 3
F16Std3.43 × 10 2 4.92 × 10 2 3.13 × 10 2 4.79 × 10 4 4.73 × 10 3 4.46 × 10 2 4.82 × 10 2 7.39 × 10 2 2.98 × 10 2 5.67 × 10 2 6.02 × 10 3 2.36 × 10 2 4.22 × 10 2 2.92 × 10 2
Mean3.17 × 10 3 4.17 × 10 3 5.05 × 10 3 2.32 × 10 4 1.34 × 10 4 4.58 × 10 3 4.40 × 10 3 3.82 × 10 3 3.90 × 10 3 4.84 × 10 3 1.16 × 10 4 3.47 × 10 3 4.22 × 10 3 3.35 × 10 3
F17Std2.30 × 10 6 1.32 × 10 7 3.07 × 10 7 1.48 × 10 8 8.23 × 10 7 3.99 × 10 7 1.61 × 10 7 1.14 × 10 7 1.40 × 10 7 4.56 × 10 7 1.00 × 10 8 9.88 × 10 5 2.63 × 10 7 5.40 × 10 6
Mean3.61 × 10 6 1.09 × 10 7 6.09 × 10 7 1.59 × 10 8 1.91 × 10 8 2.66 × 10 7 1.37 × 10 7 5.01 × 10 6 1.13 × 10 7 1.00 × 10 8 2.02 × 10 8 1.7 × 10 6 3.66 × 10 7 7.09 × 10 6
F18Std1.45 × 10 4 1.30 × 10 4 3.91 × 10 8 2.21 × 10 9 1.44 × 10 9 3.37 × 10 8 1.28 × 10 7 8.63 × 10 7 2.50 × 10 6 7.18 × 10 8 1.51 × 10 9 7.71 × 10 3 1.60 × 10 7 1.56 × 10 4
Mean2.60 × 10 4 1.62 × 10 4 7.85 × 10 8 3.44 × 10 9 4.68 × 10 9 1.33 × 10 8 9.62 × 10 6 1.69 × 10 7 2.27 × 10 6 1.00 × 10 9 3.19 × 10 9 2.21 × 10 4 2.21 × 10 7 2.30 × 10 4
F19Std5.27 × 10 2 5.22 × 10 2 2.21 × 10 2 3.47 × 10 2 1.69 × 10 2 3.12 × 10 2 3.76 × 10 2 3.40 × 10 2 2.74 × 10 2 2.85 × 10 2 1.85 × 10 2 2.34 × 10 2 3.76 × 10 2 2.24 × 10 2
Mean3.28 × 10 3 4.21 × 10 3 4.36 × 10 3 4.79 × 10 3 4.25 × 10 3 4.14 × 10 3 3.82 × 10 3 3.29 × 10 3 3.65 × 10 3 3.75 × 10 3 4.15 × 10 3 3.58 × 10 3 3.70 × 10 3 3.32 × 10 3
F20Std7.19 × 10 1 1.49 × 10 2 3.84 × 10 1 1.31 × 10 2 7.90 × 10 1 7.72 × 10 1 9.81 × 10 1 1.18 × 10 2 9.74 × 10 1 6.23 × 10 1 8.97 × 10 1 2.67 × 10 1 7.90 × 10 1 6.36 × 10 1
Mean2.56 × 10 3 2.65 × 10 3 2.96 × 10 3 3.38 × 10 3 3.12 × 10 3 2.93 × 10 3 2.85 × 10 3 2.88 × 10 3 2.98 × 10 3 2.98 × 10 3 3.31 × 10 3 2.70 × 10 3 2.90 × 10 3 2.58 × 10 3
F21Std3.32 × 10 3 2.88 × 10 3 4.53 × 10 2 7.90 × 10 2 5.24 × 10 2 2.17 × 10 3 2.27 × 10 3 1.22 × 10 3 1.09 × 10 3 8.03 × 10 2 6.09 × 10 2 5.62 × 10 3 9.99 × 10 2 8.72 × 10 2
Mean1.17 × 10 4 1.67 × 10 4 1.73 × 10 4 1.75 × 10 4 1.74 × 10 4 1.51 × 10 4 1.29 × 10 4 1.11 × 10 4 1.25 × 10 4 1.52 × 10 4 1.69 × 10 4 1.19 × 10 4 1.42 × 10 4 9.77 × 10 3
F22Std1.23 × 10 2 1.68 × 10 2 9.17 × 10 1 3.20 × 10 2 1.90 × 10 2 3.54 × 10 2 1.58 × 10 2 1.67 × 10 2 2.32 × 10 2 2.67 × 10 2 2.23 × 10 2 3.02 × 10 1 2.03 × 10 2 6.20 × 10 1
Mean3.13 × 10 3 3.09 × 10 3 3.73 × 10 3 4.65 × 10 3 4.08 × 10 3 4.14 × 10 3 3.61 × 10 3 3.78 × 10 3 4.04 × 10 3 4.52 × 10 3 4.60 × 10 3 3.18 × 10 3 3.67 × 10 3 3.03 × 10 3
F23Std1.22 × 10 2 1.21 × 10 2 9.37 × 10 1 3.67 × 10 2 6.44 × 10 2 3.01 × 10 2 1.56 × 10 2 2.26 × 10 2 2.47 × 10 2 2.61 × 10 2 2.27 × 10 2 3.12 × 10 1 1.17 × 10 2 1.03 × 10 2
Mean3.31 × 10 3 3.36 × 10 3 3.88 × 10 3 5.16 × 10 3 4.42 × 10 3 4.63 × 10 3 3.71 × 10 3 3.90 × 10 3 4.47 × 10 3 5.00 × 10 3 4.88 × 10 3 3.35 × 10 3 3.80 × 10 3 3.19 × 10 3
F24Std3.38 × 10 1 3.63 × 10 1 1.20 × 10 3 7.21 × 10 3 1.42 × 10 3 1.00 × 10 3 1.70 × 10 3 2.40 × 10 3 2.45 × 10 2 9.93 × 10 2 1.07 × 10 3 4.87 × 10 1 4.58 × 10 2 4.22 × 10 1
Mean3.08 × 10 3 3.11 × 10 3 9.68 × 10 3 2.99 × 10 4 1.34 × 10 4 5.85 × 10 3 3.91 × 10 3 6.13 × 10 3 3.83 × 10 3 1.19 × 10 4 1.56 × 10 4 3.23 × 10 3 4.51 × 10 3 3.10 × 10 3
F25Std1.65 × 10 3 1.61 × 10 3 7.94 × 10 2 2.33 × 10 3 7.26 × 10 2 2.19 × 10 3 1.60 × 10 3 2.20 × 10 3 1.40 × 10 3 7.39 × 10 2 6.25 × 10 2 1.74 × 10 3 2.34 × 10 3 2.28 × 10 3
Mean8.06 × 10 3 7.33 × 10 3 1.41 × 10 4 2.13 × 10 4 1.65 × 10 4 1.27 × 10 4 1.13 × 10 4 1.25 × 10 4 1.19 × 10 4 1.55 × 10 4 1.76 × 10 4 7.99 × 10 3 1.14 × 10 4 4.87 × 10 3
F26Std1.39 × 10 2 8.58 × 10 1 2.52 × 10 2 7.19 × 10 2 1.13 × 10 3 8.53 × 10 2 3.16 × 10 2 5.00 × 10 2 5.65 × 10 2 5.97 × 10 2 8.51 × 10 2 7.25 × 10 1 2.90 × 10 2 8.15 × 10 1
Mean3.65 × 10 3 3.47 × 10 3 4.97 × 10 3 6.98 × 10 3 6.14 × 10 3 5.12 × 10 3 4.04 × 10 3 4.35 × 10 3 5.21 × 10 3 6.98 × 10 3 7.14 × 10 3 3.74 × 10 3 4.25 × 10 3 3.55 × 10 3
F27Std4.18 × 10 1 8.34 × 10 1 1.10 × 10 3 3.23 × 10 3 1.28 × 10 3 1.60 × 10 3 2.38 × 10 3 2.26 × 10 3 3.99 × 10 2 8.84 × 10 2 1.63 × 10 3 1.24 × 10 2 4.19 × 10 2 4.88 × 10 1
Mean3.39 × 10 3 3.45 × 10 3 8.69 × 10 3 1.60 × 10 4 1.17 × 10 4 6.50 × 10 3 6.32 × 10 3 6.82 × 10 3 4.81 × 10 3 1.07 × 10 4 1.37 × 10 4 3.70 × 10 3 5.32 × 10 3 3.38 × 10 3
F28Std3.67 × 10 2 8.21 × 10 2 9.39 × 10 2 6.29 × 10 4 8.47 × 10 4 9.26 × 10 3 8.47 × 10 2 5.28 × 10 3 8.38 × 10 2 2.20 × 10 4 1.24 × 10 5 2.50 × 10 2 1.69 × 10 3 4.67 × 10 2
Mean4.52 × 10 3 4.65 × 10 3 8.86 × 10 3 3.31 × 10 4 6.00 × 10 4 9.51 × 10 3 6.45 × 10 3 8.27 × 10 3 6.99 × 10 3 2.26 × 10 4 1.33 × 10 5 5.16 × 10 3 8.16 × 10 3 4.88 × 10 3
F29Std4.56 × 10 5 4.39 × 10 5 5.00 × 10 8 2.99 × 10 9 2.40 × 10 9 6.39 × 10 8 4.60 × 10 7 6.15 × 10 7 3.83 × 10 7 1.21 × 10 9 2.92 × 10 9 3.10 × 10 6 1.27 × 10 8 4.43 × 10 6
Mean1.75 × 10 6 1.2 × 10 6 1.42 × 10 9 5.64 × 10 9 7.92 × 10 9 5.30 × 10 8 5.18 × 10 7 7.25 × 10 7 1.34 × 10 8 3.02 × 10 9 8.81 × 10 9 9.53 × 10 6 3.12 × 10 8 1.28 × 10 7
Table A3. Comparison of FBCA with other algorithms on IEEE CEC 2017 benchmark functions with D = 100.
F | Item | FBCA | BCA | SCA | GA | RSA | PSO | DBO | BKA | HHO | HOA | COA | CPO | PO | SMA
F1Std1.87 × 10 8 7.9 × 10 9 1.31 × 10 10 7.1 × 10 10 6.9 × 10 9 1.88 × 10 10 7.29 × 10 10 4.57 × 10 10 7.91 × 10 9 1.27 × 10 10 1.21 × 10 10 3.26 × 10 9 1.09 × 10 10 9.06 × 10 7
Mean2.00 × 10 8 1.08 × 10 10 2.19 × 10 11 5.55 × 10 11 2.46 × 10 11 1.16 × 10 11 8.25 × 10 10 1.53 × 10 11 5.15 × 10 10 2.25 × 10 11 2.72 × 10 11 1.40 × 10 10 8.96 × 10 10 4.70 × 10 8
F2Std9.09 × 10 4 2.65 × 10 5 9.83 × 10 4 1.47 × 10 5 1.82 × 10 4 2.72 × 10 5 2.23 × 10 5 4.75 × 10 4 2.03 × 10 5 1.07 × 10 4 1.37 × 10 4 6.25 × 10 4 6.67 × 10 4 3.19 × 10 5
Mean6.97 × 10 5 8.59 × 10 5 6.20 × 10 5 9.21 × 10 5 3.48 × 10 5 1.02 × 10 6 6.16 × 10 5 2.72 × 10 5 4.12 × 10 5 3.28 × 10 5 3.55 × 10 5 4.60 × 10 5 3.91 × 10 5 7.78 × 10 5
F3Std1.01 × 10 2 4.73 × 10 2 8.51 × 10 3 4.82 × 10 4 1.31 × 10 4 5.16 × 10 3 1.46 × 10 4 1.48 × 10 4 1.98 × 10 3 1.07 × 10 4 1.15 × 10 4 4.95 × 10 2 2.15 × 10 3 1.03 × 10 2
Mean1.01 × 10 3 1.85 × 10 3 5.16 × 10 4 2.01 × 10 5 8.57 × 10 4 1.86 × 10 4 1.63 × 10 4 2.43 × 10 4 9.53 × 10 3 6.93 × 10 4 1.10 × 10 5 2.58 × 10 3 1.25 × 10 4 1.02 × 10 3
F4Std1.86 × 10 2 2.46 × 10 2 5.83 × 10 1 1.54 × 10 2 4.03 × 10 1 9.36 × 10 1 2.18 × 10 2 1.56 × 10 2 6.31 × 10 1 6.33 × 10 1 4.40 × 10 1 6.34 × 10 1 6.78 × 10 1 1.06 × 10 2
Mean1.26 × 10 3 1.57 × 10 3 2.04 × 10 3 2.92 × 10 3 2.06 × 10 3 2.02 × 10 3 1.71 × 10 3 1.55 × 10 3 1.67 × 10 3 1.92 × 10 3 2.13 × 10 3 1.64 × 10 3 1.81 × 10 3 1.38 × 10 3
F5Std6.18 × 10 0 1.18 × 10 1 4.61 × 10 0 6.83 × 10 0 4.13 × 10 0 9.24 × 10 0 1.32 × 10 1 6.33 × 10 0 4.23 × 10 0 5.47 × 10 0 4.20 × 10 0 5.87 × 10 0 4.68 × 10 0 5.81 × 10 0
Mean6.40 × 10 2 6.33 × 10 2 7.05 × 10 2 7.61 × 10 2 7.13 × 10 2 7.03 × 10 2 6.79 × 10 2 6.78 × 10 2 6.90 × 10 2 6.97 × 10 2 7.13 × 10 2 6.41 × 10 2 6.96 × 10 2 6.65 × 10 2
F6Std3.34 × 10 2 2.11 × 10 2 3.06 × 10 2 9.98 × 10 2 8.14 × 10 1 2.06 × 10 2 2.64 × 10 2 1.90 × 10 2 1.06 × 10 2 1.25 × 10 2 7.80 × 10 1 1.16 × 10 2 1.26 × 10 2 2.69 × 10 2
Mean2.63 × 10 3 2.54 × 10 3 4.09 × 10 3 1.17 × 10 4 3.84 × 10 3 3.68 × 10 3 2.98 × 10 3 3.42 × 10 3 3.76 × 10 3 3.69 × 10 3 4.04 × 10 3 2.35 × 10 3 3.66 × 10 3 2.37 × 10 3
F7Std1.86 × 10 2 1.82 × 10 2 7.82 × 10 1 1.79 × 10 2 4.08 × 10 1 1.14 × 10 2 2.21 × 10 2 1.19 × 10 2 5.78 × 10 1 8.18 × 10 1 5.16 × 10 1 5.98 × 10 1 7.18 × 10 1 1.08 × 10 2
Mean1.53 × 10 3 1.94 × 10 3 2.41 × 10 3 3.33 × 10 3 2.54 × 10 3 2.40 × 10 3 2.15 × 10 3 2.00 × 10 3 2.12 × 10 3 2.36 × 10 3 2.59 × 10 3 1.98 × 10 3 2.27 × 10 3 1.68 × 10 3
F8Std1.79 × 10 4 1.49 × 10 4 1.09 × 10 4 2.55 × 10 4 3.31 × 10 3 1.28 × 10 4 1.19 × 10 4 1.07 × 10 4 4.99 × 10 3 6.22 × 10 3 3.98 × 10 3 6.60 × 10 3 5.54 × 10 3 3.51 × 10 3
Mean7.05 × 10 4 4.42 × 10 4 9.48 × 10 4 1.54 × 10 5 8.27 × 10 4 9.02 × 10 4 7.99 × 10 4 3.85 × 10 4 6.93 × 10 4 6.80 × 10 4 7.87 × 10 4 4.87 × 10 4 6.45 × 10 4 3.78 × 10 4
F9Std2.64 × 10 3 6.73 × 10 2 6.39 × 10 2 8.89 × 10 2 9.94 × 10 2 1.30 × 10 3 4.65 × 10 3 1.44 × 10 3 1.78 × 10 3 1.35 × 10 3 5.20 × 10 2 6.21 × 10 2 1.68 × 10 3 1.13 × 10 3
Mean3.12 × 10 4 3.38 × 10 4 3.30 × 10 4 3.40 × 10 4 3.21 × 10 4 3.15 × 10 4 2.93 × 10 4 1.98 × 10 4 2.45 × 10 4 2.93 × 10 4 3.29 × 10 4 3.07 × 10 4 2.73 × 10 4 1.90 × 10 4
F10Std1.68 × 10 4 3.85 × 10 4 2.66 × 10 4 1.58 × 10 5 3.97 × 10 4 4.92 × 10 5 5.37 × 10 4 3.46 × 10 4 3.88 × 10 4 3.60 × 10 4 5.00 × 10 4 1.56 × 10 4 2.16 × 10 4 8.64 × 10 3
Mean5.05 × 10 4 1.93 × 10 5 1.75 × 10 5 5.36 × 10 5 2.13 × 10 5 5.29 × 10 5 2.18 × 10 5 7.35 × 10 4 1.52 × 10 5 1.86 × 10 5 2.66 × 10 5 9.05 × 10 4 1.62 × 10 5 2.39 × 10 4
F11Std4.98 × 10 7 4.91 × 10 8 1.27 × 10 10 6.92 × 10 10 1.62 × 10 10 1.20 × 10 10 2.75 × 10 9 3.56 × 10 10 3.72 × 10 9 1.74 × 10 10 1.49 × 10 10 3.02 × 10 8 3.94 × 10 9 2.31 × 10 8
Mean1.02 × 10 8 5.75 × 10 8 9.45 × 10 10 2.91 × 10 11 1.85 × 10 11 3.00 × 10 10 7.94 × 10 9 6.44 × 10 10 1.15 × 10 10 1.52 × 10 11 2.06 × 10 11 8.36 × 10 8 1.82 × 10 10 4.46 × 10 8
F12Std1.13 × 10 5 7.19 × 10 3 2.87 × 10 9 1.39 × 10 10 5.43 × 10 9 2.13 × 10 9 2.48 × 10 8 6.26 × 10 9 1.42 × 10 8 3.91 × 10 9 5.92 × 10 9 1.01 × 10 5 1.15 × 10 9 5.31 × 10 6
Mean5.69 × 10 4 1.38 × 10 4 1.77 × 10 10 6.86 × 10 10 4.60 × 10 10 3.37 × 10 9 3.42 × 10 8 6.98 × 10 9 2.73 × 10 8 3.12 × 10 10 4.76 × 10 10 1.85 × 10 5 1.76 × 10 9 1.71 × 10 6
F13Std1.41 × 10 6 6.59 × 10 6 2.20 × 10 7 1.66 × 10 8 4.40 × 10 7 2.28 × 10 7 9.15 × 10 6 6.41 × 10 6 3.83 × 10 6 2.21 × 10 7 5.02 × 10 7 1.77 × 10 6 6.73 × 10 6 3.54 × 10 6
Mean3.02 × 10 6 7.05 × 10 6 5.71 × 10 7 2.29 × 10 8 8.72 × 10 7 2.98 × 10 7 1.54 × 10 7 3.18 × 10 6 1.16 × 10 7 4.31 × 10 7 1.14 × 10 8 4.39 × 10 6 1.72 × 10 7 5.40 × 10 6
F14Std1.95 × 10 4 6.59 × 10 3 1.50 × 10 9 1.09 × 10 10 4.40 × 10 9 1.79 × 10 9 9.64 × 10 7 3.29 × 10 9 1.35 × 10 7 3.60 × 10 9 4.48 × 10 9 3.88 × 10 3 2.20 × 10 8 2.41 × 10 6
Mean1.65 × 10 4 7.37 × 10 3 5.62 × 10 9 3.01 × 10 10 2.22 × 10 10 1.65 × 10 9 6.39 × 10 7 1.28 × 10 9 1.86 × 10 7 1.76 × 10 10 2.63 × 10 10 1.01 × 10 4 2.50 × 10 8 9.25 × 10 5
F15Std1.83 × 10 3 1.46 × 10 3 1.12 × 10 3 5.51 × 10 3 2.64 × 10 3 1.25 × 10 3 1.55 × 10 3 3.67 × 10 3 1.29 × 10 3 1.97 × 10 3 2.31 × 10 3 5.54 × 10 2 1.64 × 10 3 5.85 × 10 2
Mean6.16 × 10 3 1.13 × 10 4 1.51 × 10 4 3.00 × 10 4 2.20 × 10 4 1.23 × 10 4 9.41 × 10 3 1.13 × 10 4 1.07 × 10 4 1.76 × 10 4 2.51 × 10 4 1.04 × 10 4 1.33 × 10 4 6.50 × 10 3
F16Std6.63 × 10 2 5.52 × 10 2 4.27 × 10 4 1.25 × 10 7 8.43 × 10 6 6.52 × 10 4 1.29 × 10 3 1.45 × 10 6 2.40 × 10 3 2.20 × 10 6 1.54 × 10 7 4.29 × 10 2 1.15 × 10 4 7.25 × 10 2
Mean4.77 × 10 3 8.46 × 10 3 5.76 × 10 4 7.37 × 10 6 8.79 × 10 6 4.32 × 10 4 9.63 × 10 3 3.90 × 10 5 8.85 × 10 3 2.26 × 10 6 1.63 × 10 7 7.02 × 10 3 1.50 × 10 4 5.65 × 10 3
F17Std3.79 × 10 6 1.27 × 10 7 6.34 × 10 7 3.49 × 10 8 9.00 × 10 7 3.11 × 10 7 1.33 × 10 7 1.61 × 10 7 4.27 × 10 6 3.14 × 10 7 1.54 × 10 8 2.45 × 10 6 9.85 × 10 6 4.15 × 10 6
Mean6.74 × 10 6 1.74 × 10 7 1.34 × 10 8 4.72 × 10 8 1.74 × 10 8 5.06 × 10 7 2.56 × 10 7 6.58 × 10 6 8.87 × 10 6 7.15 × 10 7 3.24 × 10 8 5.45 × 10 6 2.22 × 10 7 1.00 × 10 7
F18Std7.09 × 10 3 6.12 × 10 3 1.49 × 10 9 9.60 × 10 9 4.56 × 10 9 4.99 × 10 8 1.65 × 10 8 4.50 × 10 9 5.13 × 10 7 3.29 × 10 9 5.20 × 10 9 9.26 × 10 3 2.23 × 10 8 4.39 × 10 5
Mean1.12 × 10 4 7.31 × 10 3 5.33 × 10 9 3.02 × 10 10 2.50 × 10 10 1.18 × 10 9 1.37 × 10 8 2.27 × 10 9 4.91 × 10 7 1.53 × 10 10 2.64 × 10 10 1.19 × 10 4 2.77 × 10 8 6.90 × 10 5
F19Std1.37 × 10 3 3.20 × 10 2 3.30 × 10 2 3.29 × 10 2 2.07 × 10 2 3.56 × 10 2 7.95 × 10 2 9.30 × 10 2 4.99 × 10 2 5.12 × 10 2 2.40 × 10 2 2.80 × 10 2 4.98 × 10 2 6.70 × 10 2
Mean6.33 × 10 3 8.31 × 10 3 8.11 × 10 3 8.83 × 10 3 7.90 × 10 3 7.91 × 10 3 7.21 × 10 3 6.06 × 10 3 6.25 × 10 3 6.95 × 10 3 8.00 × 10 3 7.36 × 10 3 6.70 × 10 3 5.80 × 10 3
F20Std2.15 × 10 2 2.36 × 10 2 1.01 × 10 2 2.43 × 10 2 2.28 × 10 2 2.31 × 10 2 2.12 × 10 2 2.96 × 10 2 2.44 × 10 2 1.82 × 10 2 2.44 × 10 2 4.70 × 10 1 1.27 × 10 2 1.46 × 10 2
Mean3.10 × 10 3 3.42 × 10 3 4.18 × 10 3 5.34 × 10 3 4.61 × 10 3 4.29 × 10 3 4.07 × 10 3 4.15 × 10 3 4.44 × 10 3 4.51 × 10 3 5.05 × 10 3 3.40 × 10 3 4.20 × 10 3 3.21 × 10 3
F21Std5.64 × 10 3 8.54 × 10 2 7.43 × 10 2 1.47 × 10 3 8.58 × 10 2 1.81 × 10 3 4.62 × 10 3 2.73 × 10 3 1.93 × 10 3 8.77 × 10 2 7.06 × 10 2 7.02 × 10 2 1.49 × 10 3 1.21 × 10 3
Mean2.99 × 10 4 3.61 × 10 4 3.52 × 10 4 3.65 × 10 4 3.48 × 10 4 3.27 × 10 4 3.04 × 10 4 2.39 × 10 4 2.73 × 10 4 3.22 × 10 4 3.54 × 10 4 3.31 × 10 4 3.04 × 10 4 2.08 × 10 4
F22Std1.25 × 10 2 2.00 × 10 2 1.43 × 10 2 6.14 × 10 2 1.80 × 10 2 4.56 × 10 2 2.41 × 10 2 3.10 × 10 2 5.48 × 10 2 4.85 × 10 2 3.16 × 10 2 5.22 × 10 1 2.18 × 10 2 1.25 × 10 2
Mean3.58 × 10 3 3.59 × 10 3 5.26 × 10 3 8.03 × 10 3 5.72 × 10 3 6.32 × 10 3 4.82 × 10 3 5.24 × 10 3 5.89 × 10 3 7.33 × 10 3 6.74 × 10 3 4.00 × 10 3 5.09 × 10 3 3.59 × 10 3
F23Std2.35 × 10 2 2.62 × 10 2 2.63 × 10 2 1.54 × 10 3 2.65 × 10 3 9.98 × 10 2 5.28 × 10 2 8.76 × 10 2 7.31 × 10 2 7.44 × 10 2 8.58 × 10 2 6.81 × 10 1 3.85 × 10 2 1.73 × 10 2
Mean4.36 × 10 3 4.34 × 10 3 7.43 × 10 3 1.31 × 10 4 9.65 × 10 3 1.02 × 10 4 6.19 × 10 3 7.00 × 10 3 8.47 × 10 3 1.20 × 10 4 1.04 × 10 4 4.56 × 10 3 6.60 × 10 3 4.26 × 10 3
F24Std1.44 × 10 2 2.89 × 10 2 2.94 × 10 3 1.46 × 10 4 2.35 × 10 3 2.27 × 10 3 5.56 × 10 3 4.47 × 10 3 5.11 × 10 2 2.08 × 10 3 1.22 × 10 3 3.87 × 10 2 9.58 × 10 2 7.28 × 10 1
Mean3.70 × 10 3 4.46 × 10 3 2.26 × 10 4 9.62 × 10 4 2.59 × 10 4 1.44 × 10 4 9.79 × 10 3 1.34 × 10 4 6.81 × 10 3 2.41 × 10 4 3.03 × 10 4 5.01 × 10 3 9.01 × 10 3 3.70 × 10 3
F25Std2.19 × 10 3 2.47 × 10 3 3.38 × 10 3 9.66 × 10 3 2.20 × 10 3 1.52 × 10 4 3.97 × 10 3 7.23 × 10 3 2.65 × 10 3 3.14 × 10 3 1.85 × 10 3 1.49 × 10 3 3.52 × 10 3 2.64 × 10 3
Mean1.59 × 10 4 1.63 × 10 4 4.22 × 10 4 7.74 × 10 4 5.02 × 10 4 3.85 × 10 4 2.69 × 10 4 3.67 × 10 4 3.23 × 10 4 4.80 × 10 4 5.34 × 10 4 2.11 × 10 4 3.35 × 10 4 1.60 × 10 4
F26Std8.15 × 10 1 1.41 × 10 2 5.68 × 10 2 1.79 × 10 3 2.31 × 10 3 1.92 × 10 3 4.64 × 10 2 1.48 × 10 3 1.13 × 10 3 1.13 × 10 3 1.57 × 10 3 9.73 × 10 1 4.87 × 10 2 1.05 × 10 2
Mean3.78 × 10 3 3.79 × 10 3 8.70 × 10 3 1.53 × 10 4 1.16 × 10 4 7.87 × 10 3 4.62 × 10 3 5.95 × 10 3 7.02 × 10 3 1.32 × 10 4 1.50 × 10 4 4.23 × 10 3 5.31 × 10 3 3.85 × 10 3
F27Std3.48 × 10 2 1.27 × 10 3 3.65 × 10 3 8.84 × 10 3 2.32 × 10 3 2.40 × 10 3 7.08 × 10 3 6.83 × 10 3 8.50 × 10 2 2.19 × 10 3 1.62 × 10 3 4.81 × 10 2 1.63 × 10 3 7.13 × 10 1
Mean4.07 × 10 3 6.29 × 10 3 2.76 × 10 4 6.31 × 10 4 2.89 × 10 4 1.82 × 10 4 1.70 × 10 4 2.00 × 10 4 9.49 × 10 3 3.13 × 10 4 3.02 × 10 4 5.98 × 10 3 1.19 × 10 4 3.73 × 10 3
F28Std6.90 × 10 2 1.91 × 10 3 1.10 × 10 4 1.46 × 10 6 4.84 × 10 5 2.16 × 10 3 2.30 × 10 3 9.79 × 10 4 1.83 × 10 3 1.48 × 10 5 5.99 × 10 5 4.70 × 10 2 2.95 × 10 3 5.35 × 10 2
Mean6.98 × 10 3 9.31 × 10 3 3.20 × 10 4 1.26 × 10 6 6.66 × 10 5 1.61 × 10 4 1.23 × 10 4 5.46 × 10 4 1.30 × 10 4 1.67 × 10 5 8.15 × 10 5 1.03 × 10 4 1.80 × 10 4 8.28 × 10 3
F29Std5.40 × 10 5 5.33 × 10 5 2.37 × 10 9 1.32 × 10 10 4.79 × 10 9 3.63 × 10 9 1.23 × 10 8 6.08 × 10 9 3.69 × 10 8 6.38 × 10 9 6.77 × 10 9 3.42 × 10 6 1.12 × 10 9 8.65 × 10 6
Mean8.27 × 10 5 7.05 × 10 5 1.21 × 10 10 5.49 × 10 10 4.26 × 10 10 4.03 × 10 9 2.68 × 10 8 5.75 × 10 9 8.27 × 10 8 2.76 × 10 10 4.21 × 10 10 7.67 × 10 6 2.13 × 10 9 1.64 × 10 7

References

  1. Zhang, W.; Shen, X.; Zhang, H.; Yin, Z.; Sun, J.; Zhang, X.; Zou, L. Feature importance measure of a multilayer perceptron based on the presingle-connection layer. Knowl. Inf. Syst. 2024, 66, 511–533. [Google Scholar] [CrossRef]
  2. Chan, K.Y.; Abu-Salih, B.; Qaddoura, R.; Al-Zoubi, A.; Palade, V.; Pham, D.S.; Del Ser, J.; Muhammad, K. Deep neural networks in the cloud: Review, applications, challenges and research directions. Neurocomputing 2023, 545, 126327. [Google Scholar] [CrossRef]
  3. Zhao, X.; Wang, L.; Zhang, Y.; Han, X.; Deveci, M.; Parmar, M. A review of convolutional neural networks in computer vision. Artif. Intell. Rev. 2024, 57, 99. [Google Scholar] [CrossRef]
  4. Sajun, A.R.; Zualkernan, I.; Sankalpa, D. A historical survey of advances in transformer architectures. Appl. Sci. 2024, 14, 4316. [Google Scholar] [CrossRef]
  5. Terven, J.; Córdova-Esparza, D.M.; Romero-González, J.A. A comprehensive review of yolo architectures in computer vision: From yolov1 to yolov8 and yolo-nas. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
  6. Deng, Z.; Ma, W.; Han, Q.L.; Zhou, W.; Zhu, X.; Wen, S.; Xiang, Y. Exploring DeepSeek: A Survey on Advances, Applications, Challenges and Future Directions. IEEE/CAA J. Autom. Sin. 2025, 12, 872–893. [Google Scholar] [CrossRef]
  7. Jin, C.; Netrapalli, P.; Ge, R.; Kakade, S.M.; Jordan, M.I. On nonconvex optimization for machine learning: Gradients, stochasticity, and saddle points. J. ACM (JACM) 2021, 68, 1–29. [Google Scholar] [CrossRef]
  8. Swenson, B.; Murray, R.; Poor, H.V.; Kar, S. Distributed stochastic gradient descent: Nonconvexity, nonsmoothness, and convergence to local minima. J. Mach. Learn. Res. 2022, 23, 1–62. [Google Scholar]
  9. Van Thieu, N.; Mirjalili, S.; Garg, H.; Hoang, N.T. MetaPerceptron: A standardized framework for metaheuristic-driven multi-layer perceptron optimization. Comput. Stand. Interfaces 2025, 93, 103977. [Google Scholar] [CrossRef]
  10. Mirjalili, S. How effective is the Grey Wolf optimizer in training multi-layer perceptrons. Appl. Intell. 2015, 43, 150–161. [Google Scholar] [CrossRef]
  11. Hai, T.; Li, H.; Band, S.S.; Shadkani, S.; Samadianfard, S.; Hashemi, S.; Chau, K.W.; Mousavi, A. Comparison of the efficacy of particle swarm optimization and stochastic gradient descent algorithms on multi-layer perceptron model to estimate longitudinal dispersion coefficients in natural streams. Eng. Appl. Comput. Fluid Mech. 2022, 16, 2207–2221. [Google Scholar] [CrossRef]
  12. Hameed, F.; Alkhzaimi, H. Hybrid genetic algorithm and deep learning techniques for advanced side-channel attacks. Sci. Rep. 2025, 15, 25728. [Google Scholar] [CrossRef]
  13. Gurgenc, E.; Altay, O.; Altay, E.V. AOSMA-MLP: A novel method for hybrid metaheuristics artificial neural networks and a new approach for prediction of geothermal reservoir temperature. Appl. Sci. 2024, 14, 3534. [Google Scholar] [CrossRef]
  14. Lu, Y.; Zhao, H. Research on Slope Stability Prediction Based on MC-BKA-MLP Mixed Model. Appl. Sci. 2025, 15, 3158. [Google Scholar] [CrossRef]
  15. Abu-Doush, I.; Ahmed, B.; Awadallah, M.A.; Al-Betar, M.A.; Rababaah, A.R. Enhancing multilayer perceptron neural network using archive-based harris hawks optimizer to predict gold prices. J. King Saud Univ.-Comput. Inf. Sci. 2023, 35, 101557. [Google Scholar] [CrossRef]
  16. Mohammadi, B.; Guan, Y.; Moazenzadeh, R.; Safari, M.J.S. Implementation of hybrid particle swarm optimization-differential evolution algorithms coupled with multi-layer perceptron for suspended sediment load estimation. Catena 2021, 198, 105024. [Google Scholar] [CrossRef]
  17. Ehteram, M.; Panahi, F.; Ahmed, A.N.; Huang, Y.F.; Kumar, P.; Elshafie, A. Predicting evaporation with optimized artificial neural network using multi-objective salp swarm algorithm. Environ. Sci. Pollut. Res. 2022, 29, 10675–10701. [Google Scholar] [CrossRef]
  18. Yang, Z.; Jiang, Y.; Yeh, W.C. Self-learning salp swarm algorithm for global optimization and its application in multi-layer perceptron model training. Sci. Rep. 2024, 14, 27401. [Google Scholar] [CrossRef] [PubMed]
  19. Liu, Z.; Cui, Z.; Wang, M.; Liu, B.; Tian, W. A machine learning proxy based multi-objective optimization method for low-carbon hydrogen production. J. Clean. Prod. 2024, 445, 141377. [Google Scholar] [CrossRef]
  20. Jiang, J.; Wu, J.; Luo, J.; Yang, X.; Huang, Z. MOBCA: Multi-Objective Besiege and Conquer Algorithm. Biomimetics 2024, 9, 316. [Google Scholar] [CrossRef]
  21. Jiang, J.; Meng, X.; Wu, J.; Tian, J.; Xu, G.; Li, W. BCA: Besiege and Conquer Algorithm. Symmetry 2025, 17, 217. [Google Scholar] [CrossRef]
  22. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  23. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  24. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; IEEE: New York, NY, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  25. Qamar, R.; Zardari, B.A. Artificial Neural Networks: An Overview. Mesopotamian J. Comput. Sci. 2023, 2023, 124–133. [Google Scholar] [CrossRef] [PubMed]
  26. Xi, F. Stability for a random evolution equation with Gaussian perturbation. J. Math. Anal. Appl. 2002, 272, 458–472. [Google Scholar] [CrossRef]
  27. Omar, M.B.; Bingi, K.; Prusty, B.R.; Ibrahim, R. Recent advances and applications of spiral dynamics optimization algorithm: A review. Fractal Fract. 2022, 6, 27. [Google Scholar] [CrossRef]
  28. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  29. Yang, Q.; Hua, L.; Gao, X.; Xu, D.; Lu, Z.; Jeon, S.W.; Zhang, J. Stochastic cognitive dominance leading particle swarm optimization for multimodal problems. Mathematics 2022, 10, 761. [Google Scholar] [CrossRef]
  30. Yang, Q.; Jing, Y.; Gao, X.; Xu, D.; Lu, Z.; Jeon, S.W.; Zhang, J. Predominant cognitive learning particle swarm optimization for global numerical optimization. Mathematics 2022, 10, 1620. [Google Scholar] [CrossRef]
  31. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  32. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  33. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  34. Wang, J.; Wang, W.C.; Hu, X.X.; Qiu, L.; Zang, H.F. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 57, 98. [Google Scholar] [CrossRef]
  35. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  36. Oladejo, S.O.; Ekwe, S.O.; Mirjalili, S. The Hiking Optimization Algorithm: A novel human-based metaheuristic approach. Knowl.-Based Syst. 2024, 296, 111880. [Google Scholar] [CrossRef]
  37. Dehghani, M.; Montazeri, Z.; Trojovská, E.; Trojovskỳ, P. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst. 2023, 259, 110011. [Google Scholar] [CrossRef]
  38. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. Crested Porcupine Optimizer: A new nature-inspired metaheuristic. Knowl.-Based Syst. 2024, 284, 111257. [Google Scholar] [CrossRef]
  39. Lian, J.; Hui, G.; Ma, L.; Zhu, T.; Wu, X.; Heidari, A.A.; Chen, Y.; Chen, H. Parrot optimizer: Algorithm and applications to medical problems. Comput. Biol. Med. 2024, 172, 108064. [Google Scholar] [CrossRef] [PubMed]
  40. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  41. Zheng, B.; Chen, Y.; Wang, C.; Heidari, A.A.; Liu, L.; Chen, H. The moss growth optimization (MGO): Concepts and performance. J. Comput. Des. Eng. 2024, 11, 184–221. [Google Scholar] [CrossRef]
  42. Cai, X.; Zhang, C. An Innovative Differentiated Creative Search Based on Collaborative Development and Population Evaluation. Biomimetics 2025, 10, 260. [Google Scholar] [CrossRef]
  43. Potter, K.; Hagen, H.; Kerren, A.; Dannenmann, P. Methods for presenting statistical information: The box plot. In Proceedings of the VLUDS, Seoul, Republic of Korea, 11 September 2006; pp. 97–106. Available online: https://api.semanticscholar.org/CorpusID:1344717 (accessed on 28 October 2025).
  44. Brest, J.; Zumer, V.; Maucec, M.S. Self-adaptive differential evolution algorithm in constrained real-parameter optimization. In Proceedings of the 2006 IEEE International Conference on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; IEEE: New York, NY, USA, 2006; pp. 215–222. [Google Scholar] [CrossRef]
  45. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; IEEE: New York, NY, USA, 2014; pp. 1658–1665. [Google Scholar] [CrossRef]
  46. Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastian, Spain, 5–8 June 2017; IEEE: New York, NY, USA, 2017; pp. 372–379. [Google Scholar] [CrossRef]
  47. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  48. Jia, H.; Lu, C. Guided learning strategy: A novel update mechanism for metaheuristic algorithms design and improvement. Knowl.-Based Syst. 2024, 286, 111402. [Google Scholar] [CrossRef]
  49. Dehghani, M.; Trojovskỳ, P. Osprey optimization algorithm: A new bio-inspired metaheuristic algorithm for solving engineering optimization problems. Front. Mech. Eng. 2023, 8, 1126450. [Google Scholar] [CrossRef]
  50. Meng, X.; Jiang, J.; Wang, H. AGWO: Advanced GWO in multi-layer perception optimization. Expert Syst. Appl. 2021, 173, 114676. [Google Scholar] [CrossRef]
  51. McGarry, K.J.; Wermter, S.; MacIntyre, J. Knowledge extraction from radial basis function networks and multilayer perceptrons. In Proceedings of the IJCNN’99. International Joint Conference on Neural Networks. Proceedings (Cat. No. 99CH36339), Washington, DC, USA, 10–16 July 1999; IEEE: New York, NY, USA, 1999; Volume 4, pp. 2494–2497. [Google Scholar] [CrossRef]
  52. Daniel, W.B.; Yeung, E. A constructive approach for one-shot training of neural networks using hypercube-based topological coverings. arXiv 2019, arXiv:1901.02878. [Google Scholar] [CrossRef]
  53. Fisher, R.A. Iris. UCI Machine Learning Repository. 1936. Available online: https://archive.ics.uci.edu/dataset/53/iris (accessed on 28 October 2025).
  54. Ay, Ş.; Ekinci, E.; Garip, Z. A comparative analysis of meta-heuristic optimization algorithms for feature selection on ML-based classification of heart-related diseases. J. Supercomput. 2023, 79, 11797. [Google Scholar] [CrossRef]
  55. Janosi, A.; Steinbrunn, W.; Pfisterer, M.; Detrano, R. Heart Disease. UCI Machine Learning Repository. 1989. Available online: https://archive.ics.uci.edu/dataset/45/heart+disease (accessed on 28 October 2025).
  56. Ruder, S. An overview of gradient descent optimization algorithms. arXiv 2016, arXiv:1609.04747. [Google Scholar] [CrossRef]
  57. Robbins, H.; Monro, S. A stochastic approximation method. Ann. Math. Stat. 1951, 22, 400–407. [Google Scholar] [CrossRef]
  58. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. Available online: https://api.semanticscholar.org/CorpusID:6628106 (accessed on 28 October 2025).
  59. Cao, J.; Qian, C.; Huang, Y.; Chen, D.; Gao, Y.; Dong, J.; Guo, D.; Qu, X. A Dynamics Theory of RMSProp-Based Implicit Regularization in Deep Low-Rank Matrix Factorization. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 18750–18764. [Google Scholar] [CrossRef] [PubMed]
  60. Ward, R.; Wu, X.; Bottou, L. Adagrad stepsizes: Sharp convergence over nonconvex landscapes. J. Mach. Learn. Res. 2020, 21, 1–30. [Google Scholar]
  61. Oyelade, O.N.; Aminu, E.F.; Wang, H.; Rafferty, K. An adaptation of hybrid binary optimization algorithms for medical image feature selection in neural network for classification of breast cancer. Neurocomputing 2025, 617, 129018. [Google Scholar] [CrossRef]
  62. Fu, Y. Research on Financial Time Series Prediction Model Based on Multifractal Trend Cross Correlation Removal and Deep Learning. Procedia Comput. Sci. 2025, 261, 217–226. [Google Scholar] [CrossRef]
  63. Wafa, A.A.; Eldefrawi, M.M.; Farhan, M.S. Advancing multimodal emotion recognition in big data through prompt engineering and deep adaptive learning. J. Big Data 2025, 12, 210. [Google Scholar] [CrossRef]
Figure 1. A simple MLP neural network model.
Figure 2. Schematic of the soldier position update mechanism with soft Gaussian perturbation.
Figure 3. Comparison of spiral factors with constant b and dynamic b.
Figure 4. Variation in Spiral_Factor and the soldier position updating with spiral perturbation.
Figure 5. Schematic of the nonlinear cognitive coefficient-driven velocity update mechanism.
Figure 6. Flow chart of the FBCA.
Figure 7. Qualitative analysis experiment of FBCA.
Figure 8. Convergence curves of the FBCA and its comparative algorithms with 30D.
Figure 9. Convergence curves of the FBCA and its comparative algorithms with 50D.
Figure 10. Convergence curves of the FBCA and its comparative algorithms with 100D.
Figure 11. Radar chart of FBCA and other algorithms’ rankings.
Figure 12. Boxplots of the FBCA and its comparative algorithms with 30D.
Figure 13. Boxplots of the FBCA and its comparative algorithms with 50D.
Figure 14. Boxplots of the FBCA and its comparative algorithms with 100D.
Figure 15. Convergence curves of the BCA, FBCA (p1 = 0.2), FBCA (p1 = 0.5), and FBCA (p1 = 0.8).
Figure 16. Convergence curves of the FBCA with variations in the Gaussian term N.
Figure 17. Convergence curves of the FBCA with various spiral mechanisms.
Figure 18. Convergence curves of the BCA, FBCA (new-velocity), and FBCA (pso-velocity).
Figure 19. Convergence curves of the FBCA and other SOTA optimizers.
Figure 20. The FBCA-MLP training model.
Table 1. Parameter settings for each algorithm.
Algorithms | Parameters | Values | Reference
SCA | a | 2 | [31]
SCA | r | Linearly decreased from a to 0 |
GA | CrossPercent | 70% | [22]
GA | MutatPercent | 20% |
GA | ElitPercent | 10% |
RSA | Evolutionary sense | Randomly decreasing values between 2 and −2 | [32]
RSA | Sensitive parameter β | 0.005 |
RSA | Sensitive parameter α | 0.1 |
PSO | Cognitive component | 2 | [24]
PSO | Social component | 2 |
DBO | k and λ | 0.1 | [33]
DBO | b | 0.3 |
DBO | S | 0.5 |
BKA | p | 0.9 | [34]
BKA | r | Range from [0, 1] |
HHO | E0 | Range from [−1, 1] | [35]
HHO | β | 1.5 |
HOA | Angle of inclination of the trail | Range from [0, 50°] | [36]
HOA | Sweep Factor (SF) of the hiker | Range from [1, 3] |
COA | I | 1 or 2 | [37]
CPO | α | 0.1 | [38]
CPO | Tf | 0.5 |
CPO | T | 2 |
PO | p | [0, 1] | [39]
SMA | vc | Linearly decreased from 1 to 0 | [40]
Table 2. The overall ranking results of FBCA and other algorithms.
Index | FBCA | BCA | SCA | GA | RSA | PSO | DBO | BKA | HHO | HOA | COA | CPO | PO | SMA
D = 30, Average ranking | 2.72 | 3.10 | 9.72 | 12.93 | 12.28 | 8.90 | 5.79 | 6.24 | 7.34 | 10.10 | 13.10 | 2.79 | 7.14 | 2.83
D = 30, Total ranking | 1 | 4 | 10 | 13 | 12 | 9 | 5 | 6 | 8 | 11 | 14 | 2 | 7 | 3
D = 50, Average ranking | 2.45 | 3.86 | 9.97 | 13.31 | 11.86 | 8.93 | 6.17 | 6 | 6.52 | 10.10 | 12.79 | 3.41 | 7.07 | 2.55
D = 50, Total ranking | 1 | 4 | 10 | 14 | 12 | 9 | 6 | 5 | 7 | 11 | 13 | 3 | 8 | 2
D = 100, Average ranking | 2.52 | 4.55 | 10.10 | 13.90 | 11.17 | 9.24 | 6.24 | 6.03 | 5.93 | 9.79 | 12.17 | 4 | 6.69 | 2.66
D = 100, Total ranking | 1 | 4 | 11 | 14 | 12 | 9 | 7 | 6 | 5 | 10 | 13 | 3 | 8 | 2
The bold part indicates the optimal result.
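The average-ranking rows above follow the usual procedure for multi-algorithm comparisons: rank all fourteen algorithms on each of the 29 benchmark functions (1 = best), then average each algorithm's ranks across the functions. A minimal sketch of that computation, assuming a matrix of mean errors per (function, algorithm); the helper name average_ranking and the toy numbers below are illustrative, not the paper's data.

import numpy as np
from scipy.stats import rankdata

def average_ranking(results: np.ndarray) -> np.ndarray:
    """Rank algorithms per function (1 = best; ties share the mean rank),
    then average the ranks over all functions."""
    per_function_ranks = np.vstack([rankdata(row) for row in results])
    return per_function_ranks.mean(axis=0)

# Toy usage: 3 functions x 4 algorithms of mean errors (illustrative only).
errors = np.array([
    [1.2e3, 3.4e3, 2.2e3, 1.9e3],
    [8.1e2, 9.0e2, 7.7e2, 8.5e2],
    [5.5e5, 4.1e5, 6.0e5, 4.8e5],
])
print(average_ranking(errors))  # lower average rank = better overall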
Table 3. The detailed ranking results of all algorithms on CEC2017 test functions.
F | FBCA | BCA | SCA | GA | RSA | PSO | DBO | BKA | HHO | HOA | COA | CPO | PO | SMA (each algorithm column lists its ranks for D = 30, 50, and 100, in that order)
F1211133101010131414121212888566999655111111141313444777322
F2710101212129991414138431313141111821133662210844575751611
F3111223101010121414131212888557999665111111141313444776332
F4111334111111141414131212101010677543766999121313455888222
F5432111810111414141312136710556765108791191213122231198344
F6544333911131414141212116684558761110101099131312221787112
F7111333111111141414121312109107774656569109131213544888222
F8458213111113121414141311981269105421010777613129124865331
F9128121413131212111114101010999556332443775141311887664211
F10132249997141414121210101113761167355511108131312324886411
F11121213101010121214141312988555899666111111131413334777442
F12122211101010121214141312998666889555111111131413333777444
F13331545891113131414121210109667222118691110121413113778454
F14323211910101213141312121089576698755111111141413132867444
F15111557101010111214131312998663436875121111141413344789222
F16311264101191214121413136988865410755111011131214133977422
F17323556910111312141213121089678232764111110141413111897445
F18342211111010121314131412998666779555101111141213123887434
F19214411131113121414141312910910887522953676121011148765331
F20111334119714141412121210896557669111081011131313443578222
F21434510131212111414141113109883657228531097131112149276611
F22431222778131414111091011115558879910121213141312344666113
F23323242778141414991011111155588710109131313121212434666111
F24211133101010141414121212989567898655111111131313444776322
F25441123810101314141212126995551088976111111141313334767212
F26431112881013121411111199955567710108121312141413344766223
F27422334101010141414121211988777899555111113131312143666211
F28411123109912121413131261075557810866111111141413244978332
F29222111101010111214141313898555769676121111131412433987344
Table 4. Results of the Wilcoxon rank sum test of FBCA and other algorithms.
Algorithms | D = 30 (+/=/−) | D = 50 (+/=/−) | D = 100 (+/=/−) | Total (+/=/−)
FBCA vs. BCA | 7/16/6 | 13/8/8 | 18/6/5 | 38/30/19
FBCA vs. SCA | 28/1/0 | 28/1/0 | 28/0/1 | 84/2/1
FBCA vs. GA | 29/0/0 | 29/0/0 | 29/0/0 | 87/0/0
FBCA vs. RSA | 28/1/0 | 28/0/1 | 27/1/1 | 83/2/2
FBCA vs. PSO | 29/0/0 | 29/0/0 | 27/2/0 | 85/2/0
FBCA vs. DBO | 25/3/1 | 27/2/0 | 26/2/1 | 78/7/2
FBCA vs. BKA | 26/0/3 | 23/3/3 | 22/1/6 | 71/4/12
FBCA vs. HHO | 28/0/1 | 28/0/1 | 24/2/3 | 80/2/5
FBCA vs. HOA | 28/0/1 | 28/0/1 | 24/3/2 | 80/3/4
FBCA vs. COA | 28/1/0 | 28/0/1 | 28/0/1 | 84/1/2
FBCA vs. CPO | 12/6/11 | 17/6/6 | 20/5/4 | 49/17/21
FBCA vs. PO | 27/1/1 | 28/0/1 | 24/3/2 | 79/4/4
FBCA vs. SMA | 11/12/6 | 10/13/6 | 15/7/7 | 36/32/19
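A (+/=/−) tally of this kind is conventionally produced by applying the rank-sum test to the per-run final errors on each function: a function counts as "+" when FBCA is significantly better, "=" when the difference is not significant, and "−" otherwise. A minimal sketch under those assumptions (the 0.05 significance level, the helper name tally, and the use of SciPy are ours, not stated by the table):

import numpy as np
from scipy.stats import ranksums

def tally(fbca_runs: np.ndarray, other_runs: np.ndarray, alpha: float = 0.05):
    """fbca_runs, other_runs: shape (n_functions, n_runs) of final errors.
    Returns (wins, ties, losses) from FBCA's point of view (minimization)."""
    wins = ties = losses = 0
    for f, o in zip(fbca_runs, other_runs):
        _, p = ranksums(f, o)
        if p >= alpha:                      # no significant difference
            ties += 1
        elif np.median(f) < np.median(o):   # significantly better
            wins += 1
        else:                               # significantly worse
            losses += 1
    return wins, ties, losses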
Table 5. Results of FBCA with various p1 values.
F | BCA | FBCA (p1 = 0.2) | FBCA (p1 = 0.5) | FBCA (p1 = 0.8)
F1 | 8.63 × 10^9 | 1.21 × 10^9 | 2.39 × 10^8 | 1.92 × 10^8
F2 | 8.22 × 10^5 | 6.76 × 10^5 | 7.20 × 10^5 | 6.63 × 10^5
F3 | 1.76 × 10^3 | 1.16 × 10^3 | 1.04 × 10^3 | 1.07 × 10^3
F4 | 1.63 × 10^3 | 1.30 × 10^3 | 1.29 × 10^3 | 1.32 × 10^3
F5 | 6.33 × 10^2 | 6.37 × 10^2 | 6.40 × 10^2 | 6.48 × 10^2
F6 | 2.55 × 10^3 | 2.67 × 10^3 | 2.75 × 10^3 | 3.31 × 10^3
F7 | 1.83 × 10^3 | 1.56 × 10^3 | 1.53 × 10^3 | 1.62 × 10^3
F8 | 4.69 × 10^4 | 6.88 × 10^4 | 7.71 × 10^4 | 8.82 × 10^4
F9 | 3.36 × 10^4 | 3.03 × 10^4 | 3.05 × 10^4 | 2.50 × 10^4
F10 | 1.94 × 10^5 | 6.24 × 10^4 | 5.70 × 10^4 | 4.83 × 10^4
F11 | 5.23 × 10^8 | 1.09 × 10^8 | 9.65 × 10^7 | 1.38 × 10^8
F12 | 1.24 × 10^4 | 3.18 × 10^4 | 5.12 × 10^4 | 1.89 × 10^5
F13 | 5.34 × 10^6 | 3.88 × 10^6 | 3.03 × 10^6 | 3.35 × 10^6
F14 | 1.80 × 10^4 | 6.65 × 10^3 | 2.25 × 10^4 | 2.66 × 10^4
F15 | 1.13 × 10^4 | 7.83 × 10^3 | 6.44 × 10^3 | 6.23 × 10^3
F16 | 8.20 × 10^3 | 6.15 × 10^3 | 5.05 × 10^3 | 5.12 × 10^3
F17 | 1.59 × 10^7 | 9.53 × 10^6 | 8.04 × 10^6 | 4.62 × 10^6
F18 | 7.62 × 10^3 | 1.16 × 10^4 | 1.21 × 10^4 | 4.32 × 10^4
F19 | 8.28 × 10^3 | 7.16 × 10^3 | 7.04 × 10^3 | 6.93 × 10^3
F20 | 3.43 × 10^3 | 3.12 × 10^3 | 3.11 × 10^3 | 3.17 × 10^3
F21 | 3.60 × 10^4 | 3.31 × 10^4 | 3.01 × 10^4 | 3.06 × 10^4
F22 | 3.64 × 10^3 | 3.45 × 10^3 | 3.55 × 10^3 | 3.78 × 10^3
F23 | 4.43 × 10^3 | 4.14 × 10^3 | 4.28 × 10^3 | 4.80 × 10^3
F24 | 4.74 × 10^3 | 3.85 × 10^3 | 3.71 × 10^3 | 3.79 × 10^3
F25 | 1.80 × 10^4 | 1.54 × 10^4 | 1.77 × 10^4 | 2.17 × 10^4
F26 | 3.78 × 10^3 | 3.76 × 10^3 | 3.84 × 10^3 | 4.06 × 10^3
F27 | 5.77 × 10^3 | 4.46 × 10^3 | 4.17 × 10^3 | 4.77 × 10^3
F28 | 1.02 × 10^4 | 6.81 × 10^3 | 6.87 × 10^3 | 7.46 × 10^3
F29 | 8.33 × 10^5 | 4.23 × 10^5 | 8.30 × 10^5 | 2.52 × 10^6
Average ranking | 3.21 | 2.10 | 1.97 | 2.72
Total ranking | 4 | 2 | 1 | 3
The bold part indicates the optimal result.
Table 6. Results of FBCA with variations in the Gaussian term N.
F | BCA | FBCA (N(0, 0.05^2)) | FBCA (N(0, 0.1^2)) | FBCA (N(0, 0.2^2))
F1 | 9.34 × 10^9 | 4.95 × 10^8 | 3.25 × 10^8 | 2.63 × 10^8
F2 | 1.22 × 10^6 | 6.82 × 10^5 | 6.36 × 10^5 | 6.87 × 10^5
F3 | 1.72 × 10^3 | 1.06 × 10^3 | 1.03 × 10^3 | 9.62 × 10^2
F4 | 1.62 × 10^3 | 1.25 × 10^3 | 1.25 × 10^3 | 1.24 × 10^3
F5 | 6.31 × 10^2 | 6.39 × 10^2 | 6.40 × 10^2 | 6.42 × 10^2
F6 | 2.54 × 10^3 | 2.77 × 10^3 | 2.63 × 10^3 | 2.78 × 10^3
F7 | 1.84 × 10^3 | 1.61 × 10^3 | 1.56 × 10^3 | 1.57 × 10^3
F8 | 4.00 × 10^4 | 6.84 × 10^4 | 7.08 × 10^4 | 6.45 × 10^4
F9 | 3.37 × 10^4 | 2.80 × 10^4 | 2.93 × 10^4 | 2.84 × 10^4
F10 | 1.97 × 10^5 | 5.31 × 10^4 | 5.89 × 10^4 | 5.35 × 10^4
F11 | 5.26 × 10^8 | 9.98 × 10^7 | 1.18 × 10^8 | 1.27 × 10^8
F12 | 1.60 × 10^4 | 4.10 × 10^4 | 4.69 × 10^4 | 6.32 × 10^4
F13 | 5.30 × 10^6 | 3.71 × 10^6 | 2.85 × 10^6 | 2.26 × 10^6
F14 | 5.63 × 10^3 | 2.07 × 10^4 | 1.65 × 10^4 | 1.30 × 10^4
F15 | 1.16 × 10^4 | 7.53 × 10^3 | 5.70 × 10^3 | 6.57 × 10^3
F16 | 8.38 × 10^3 | 5.42 × 10^3 | 5.41 × 10^3 | 5.85 × 10^3
F17 | 2.33 × 10^7 | 8.70 × 10^6 | 7.47 × 10^6 | 7.92 × 10^6
F18 | 6.30 × 10^3 | 8.28 × 10^3 | 1.23 × 10^4 | 1.24 × 10^4
F19 | 8.27 × 10^3 | 6.81 × 10^3 | 6.79 × 10^3 | 7.15 × 10^3
F20 | 3.41 × 10^3 | 3.11 × 10^3 | 3.10 × 10^3 | 3.14 × 10^3
F21 | 3.61 × 10^4 | 3.16 × 10^4 | 3.02 × 10^4 | 3.11 × 10^4
F22 | 3.60 × 10^3 | 3.53 × 10^3 | 3.60 × 10^3 | 3.54 × 10^3
F23 | 4.45 × 10^3 | 4.38 × 10^3 | 4.34 × 10^3 | 4.31 × 10^3
F24 | 4.64 × 10^3 | 3.86 × 10^3 | 3.72 × 10^3 | 3.67 × 10^3
F25 | 1.74 × 10^4 | 1.72 × 10^4 | 1.74 × 10^4 | 1.63 × 10^4
F26 | 3.77 × 10^3 | 3.83 × 10^3 | 3.79 × 10^3 | 3.80 × 10^3
F27 | 5.98 × 10^3 | 4.35 × 10^3 | 4.01 × 10^3 | 4.03 × 10^3
F28 | 9.16 × 10^3 | 6.98 × 10^3 | 7.00 × 10^3 | 7.04 × 10^3
F29 | 8.00 × 10^5 | 5.21 × 10^5 | 9.02 × 10^5 | 1.26 × 10^6
Average ranking | 3.14 | 2.38 | 2.14 | 2.34
Total ranking | 4 | 3 | 1 | 2
The bold part indicates the optimal result.
Table 7. Results of FBCA with various spiral mechanisms.
F | BCA | FBCA (Spiral 1) | FBCA (Spiral 2) | FBCA (Spiral 3) | FBCA (Spiral 4)
F1 | 9.41 × 10^9 | 3.34 × 10^8 | 2.82 × 10^8 | 2.53 × 10^9 | 7.29 × 10^8
F2 | 1.72 × 10^6 | 6.79 × 10^5 | 6.91 × 10^5 | 6.62 × 10^5 | 6.61 × 10^5
F3 | 1.74 × 10^3 | 9.98 × 10^2 | 1.02 × 10^3 | 1.43 × 10^3 | 1.11 × 10^3
F4 | 1.59 × 10^3 | 1.23 × 10^3 | 1.27 × 10^3 | 1.39 × 10^3 | 1.28 × 10^3
F5 | 6.32 × 10^2 | 6.39 × 10^2 | 6.41 × 10^2 | 6.38 × 10^2 | 6.41 × 10^2
F6 | 2.50 × 10^3 | 2.63 × 10^3 | 2.68 × 10^3 | 2.61 × 10^3 | 3.15 × 10^3
F7 | 1.80 × 10^3 | 1.59 × 10^3 | 1.55 × 10^3 | 1.69 × 10^3 | 1.57 × 10^3
F8 | 3.86 × 10^4 | 7.30 × 10^4 | 7.51 × 10^4 | 6.76 × 10^4 | 7.88 × 10^4
F9 | 3.37 × 10^4 | 3.00 × 10^4 | 2.92 × 10^4 | 3.08 × 10^4 | 2.79 × 10^4
F10 | 2.14 × 10^5 | 5.01 × 10^4 | 5.09 × 10^4 | 6.90 × 10^4 | 5.34 × 10^4
F11 | 4.22 × 10^8 | 1.09 × 10^8 | 1.11 × 10^8 | 2.11 × 10^8 | 2.00 × 10^8
F12 | 1.47 × 10^4 | 7.00 × 10^4 | 1.31 × 10^5 | 1.22 × 10^5 | 7.91 × 10^6
F13 | 4.32 × 10^6 | 2.93 × 10^6 | 3.90 × 10^6 | 2.75 × 10^6 | 3.74 × 10^6
F14 | 8.42 × 10^3 | 2.09 × 10^4 | 1.13 × 10^4 | 2.12 × 10^4 | 1.75 × 10^4
F15 | 1.13 × 10^4 | 6.01 × 10^3 | 5.80 × 10^3 | 7.56 × 10^3 | 6.87 × 10^3
F16 | 8.33 × 10^3 | 4.94 × 10^3 | 5.48 × 10^3 | 5.60 × 10^3 | 5.63 × 10^3
F17 | 2.38 × 10^7 | 6.50 × 10^6 | 5.60 × 10^6 | 7.75 × 10^6 | 6.48 × 10^6
F18 | 7.42 × 10^3 | 1.15 × 10^4 | 1.92 × 10^4 | 1.05 × 10^4 | 2.49 × 10^4
F19 | 8.27 × 10^3 | 6.80 × 10^3 | 6.43 × 10^3 | 6.99 × 10^3 | 6.32 × 10^3
F20 | 3.39 × 10^3 | 3.11 × 10^3 | 3.13 × 10^3 | 3.20 × 10^3 | 3.12 × 10^3
F21 | 3.62 × 10^4 | 2.99 × 10^4 | 3.14 × 10^4 | 3.32 × 10^4 | 2.72 × 10^4
F22 | 3.64 × 10^3 | 3.58 × 10^3 | 3.62 × 10^3 | 3.51 × 10^3 | 3.67 × 10^3
F23 | 4.40 × 10^3 | 4.32 × 10^3 | 4.49 × 10^3 | 4.20 × 10^3 | 4.63 × 10^3
F24 | 4.68 × 10^3 | 3.75 × 10^3 | 3.70 × 10^3 | 4.10 × 10^3 | 3.79 × 10^3
F25 | 1.67 × 10^4 | 1.69 × 10^4 | 1.77 × 10^4 | 1.54 × 10^4 | 2.08 × 10^4
F26 | 3.82 × 10^3 | 3.82 × 10^3 | 3.86 × 10^3 | 3.79 × 10^3 | 4.04 × 10^3
F27 | 5.86 × 10^3 | 4.23 × 10^3 | 4.31 × 10^3 | 4.68 × 10^3 | 4.68 × 10^3
F28 | 9.04 × 10^3 | 7.23 × 10^3 | 7.04 × 10^3 | 7.19 × 10^3 | 7.57 × 10^3
F29 | 6.45 × 10^5 | 9.12 × 10^5 | 7.95 × 10^5 | 8.41 × 10^5 | 2.11 × 10^6
Average ranking | 3.76 | 2.28 | 2.59 | 2.97 | 3.41
Total ranking | 5 | 1 | 2 | 3 | 4
The bold part indicates the optimal result.
Table 8. Results of FBCA with various velocity update mechanisms.
F | BCA | FBCA (velocity 1) | FBCA (velocity 2)
F1 | 9.11 × 10^9 | 2.11 × 10^8 | 4.05 × 10^8
F2 | 9.05 × 10^5 | 6.76 × 10^5 | 6.42 × 10^5
F3 | 1.92 × 10^3 | 9.74 × 10^2 | 1.01 × 10^3
F4 | 1.62 × 10^3 | 1.22 × 10^3 | 1.23 × 10^3
F5 | 6.32 × 10^2 | 6.41 × 10^2 | 6.43 × 10^2
F6 | 2.58 × 10^3 | 2.78 × 10^3 | 2.71 × 10^3
F7 | 1.89 × 10^3 | 1.54 × 10^3 | 1.60 × 10^3
F8 | 4.24 × 10^4 | 7.03 × 10^4 | 7.28 × 10^4
F9 | 3.34 × 10^4 | 2.79 × 10^4 | 2.39 × 10^4
F10 | 1.95 × 10^5 | 5.39 × 10^4 | 5.86 × 10^4
F11 | 6.07 × 10^8 | 9.92 × 10^7 | 1.09 × 10^8
F12 | 2.82 × 10^5 | 4.22 × 10^4 | 5.38 × 10^4
F13 | 6.83 × 10^6 | 3.12 × 10^6 | 2.36 × 10^6
F14 | 7.07 × 10^3 | 1.50 × 10^4 | 1.17 × 10^4
F15 | 1.15 × 10^4 | 6.53 × 10^3 | 7.03 × 10^3
F16 | 8.24 × 10^3 | 5.67 × 10^3 | 5.04 × 10^3
F17 | 1.90 × 10^7 | 6.91 × 10^6 | 5.39 × 10^6
F18 | 6.01 × 10^3 | 1.35 × 10^4 | 1.17 × 10^4
F19 | 8.30 × 10^3 | 6.40 × 10^3 | 7.07 × 10^3
F20 | 3.42 × 10^3 | 3.09 × 10^3 | 3.17 × 10^3
F21 | 3.57 × 10^4 | 3.03 × 10^4 | 2.61 × 10^4
F22 | 3.65 × 10^3 | 3.57 × 10^3 | 3.59 × 10^3
F23 | 4.35 × 10^3 | 4.29 × 10^3 | 4.34 × 10^3
F24 | 4.76 × 10^3 | 3.73 × 10^3 | 3.74 × 10^3
F25 | 1.61 × 10^4 | 1.77 × 10^4 | 1.75 × 10^4
F26 | 3.80 × 10^3 | 3.81 × 10^3 | 3.79 × 10^3
F27 | 5.85 × 10^3 | 4.13 × 10^3 | 4.14 × 10^3
F28 | 9.33 × 10^3 | 6.92 × 10^3 | 7.28 × 10^3
F29 | 9.00 × 10^5 | 9.07 × 10^5 | 9.53 × 10^5
Average ranking | 2.48 | 1.66 | 1.86
Total ranking | 3 | 1 | 2
The bold part indicates the optimal result.
Table 9. Results of FBCA versus L-SHADE and other SOTA optimizers.
F | FBCA | BCA | SaDE | L-SHADE | L-SHADE_EpSin
F1 | 3.29 × 10^8 | 8.02 × 10^9 | 9.24 × 10^9 | 2.82 × 10^8 | 5.07 × 10^10
F2 | 6.90 × 10^5 | 1.21 × 10^6 | 3.54 × 10^5 | 5.85 × 10^5 | 3.85 × 10^5
F3 | 9.97 × 10^2 | 1.88 × 10^3 | 2.11 × 10^3 | 9.61 × 10^2 | 6.22 × 10^3
F4 | 1.21 × 10^3 | 1.55 × 10^3 | 1.37 × 10^3 | 1.51 × 10^3 | 1.23 × 10^3
F5 | 6.40 × 10^2 | 6.31 × 10^2 | 6.49 × 10^2 | 6.14 × 10^2 | 6.46 × 10^2
F6 | 2.66 × 10^3 | 2.47 × 10^3 | 2.53 × 10^3 | 2.05 × 10^3 | 2.56 × 10^3
F7 | 1.56 × 10^3 | 1.85 × 10^3 | 1.69 × 10^3 | 1.81 × 10^3 | 1.57 × 10^3
F8 | 7.07 × 10^4 | 4.35 × 10^4 | 5.61 × 10^4 | 1.66 × 10^4 | 2.69 × 10^4
F9 | 2.73 × 10^4 | 3.34 × 10^4 | 3.26 × 10^4 | 3.20 × 10^4 | 2.22 × 10^4
F10 | 6.00 × 10^4 | 2.15 × 10^5 | 6.63 × 10^4 | 1.20 × 10^5 | 7.87 × 10^4
F11 | 1.09 × 10^8 | 3.92 × 10^8 | 5.11 × 10^8 | 4.75 × 10^7 | 4.72 × 10^9
F12 | 4.09 × 10^4 | 1.37 × 10^4 | 9.59 × 10^4 | 1.46 × 10^4 | 8.09 × 10^7
F13 | 3.04 × 10^6 | 5.61 × 10^6 | 5.40 × 10^5 | 4.53 × 10^6 | 3.30 × 10^6
F14 | 1.24 × 10^4 | 7.58 × 10^3 | 2.14 × 10^5 | 5.07 × 10^3 | 3.60 × 10^5
F15 | 6.89 × 10^3 | 1.16 × 10^4 | 6.91 × 10^3 | 9.73 × 10^3 | 6.31 × 10^3
F16 | 5.21 × 10^3 | 8.13 × 10^3 | 6.70 × 10^3 | 7.18 × 10^3 | 5.09 × 10^3
F17 | 6.75 × 10^6 | 1.97 × 10^7 | 1.52 × 10^6 | 7.74 × 10^6 | 4.08 × 10^6
F18 | 1.25 × 10^4 | 1.10 × 10^4 | 4.10 × 10^5 | 4.66 × 10^3 | 7.06 × 10^6
F19 | 7.01 × 10^3 | 8.27 × 10^3 | 7.63 × 10^3 | 7.49 × 10^3 | 5.61 × 10^3
F20 | 3.07 × 10^3 | 3.47 × 10^3 | 3.30 × 10^3 | 3.36 × 10^3 | 3.15 × 10^3
F21 | 3.09 × 10^4 | 3.61 × 10^4 | 3.44 × 10^4 | 3.42 × 10^4 | 2.44 × 10^4
F22 | 3.61 × 10^3 | 3.60 × 10^3 | 3.69 × 10^3 | 3.83 × 10^3 | 3.99 × 10^3
F23 | 4.30 × 10^3 | 4.39 × 10^3 | 4.48 × 10^3 | 4.40 × 10^3 | 5.18 × 10^3
F24 | 3.68 × 10^3 | 4.54 × 10^3 | 4.82 × 10^3 | 3.66 × 10^3 | 7.38 × 10^3
F25 | 1.70 × 10^4 | 1.73 × 10^4 | 1.73 × 10^4 | 1.72 × 10^4 | 2.41 × 10^4
F26 | 3.83 × 10^3 | 3.81 × 10^3 | 3.87 × 10^3 | 3.60 × 10^3 | 4.82 × 10^3
F27 | 4.27 × 10^3 | 5.95 × 10^3 | 5.95 × 10^3 | 4.03 × 10^3 | 1.10 × 10^4
F28 | 6.93 × 10^3 | 9.42 × 10^3 | 8.30 × 10^3 | 9.42 × 10^3 | 9.72 × 10^3
F29 | 7.57 × 10^5 | 7.75 × 10^5 | 5.53 × 10^6 | 1.14 × 10^5 | 1.72 × 10^8
Average ranking | 2.24 | 3.51 | 3.34 | 2.41 | 3.48
Total ranking | 1 | 5 | 3 | 2 | 4
The bold part indicates the optimal result.
Table 10. Classification problems.
Datasets | Feature Numbers | Training Samples | Test Samples | Number of Classes | MLP Structure | Dimension
XOR | 3 | 8 | 8 | 2 | 3-7-1 | 36
Iris | 4 | 150 | 150 | 3 | 4-9-3 | 75
Heart | 22 | 80 | 80 | 2 | 22-45-1 | 1081
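The Dimension column is the number of trainable parameters of the fully connected structure: each layer transition from n_in to n_out units contributes n_in × n_out weights plus n_out biases. A minimal sketch that reproduces the values in Tables 10 and 11 (the helper name mlp_dimension is illustrative):

def mlp_dimension(layers):
    """layers like (22, 45, 1); each transition adds weights plus biases."""
    return sum((n_in + 1) * n_out for n_in, n_out in zip(layers, layers[1:]))

assert mlp_dimension((3, 7, 1)) == 36      # XOR
assert mlp_dimension((4, 9, 3)) == 75      # Iris
assert mlp_dimension((22, 45, 1)) == 1081  # Heart
assert mlp_dimension((1, 15, 1)) == 46     # function approximation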
Table 11. Function-approximation problems.
Datasets | Training Samples | Test Samples | MLP Structure | Dimension
Sigmoid: y = 1/(1 + e^(−x)) | 61: x in [−3:0.1:3] | 121: x in [−3:0.05:3] | 1-15-1 | 46
Cosine: y = cos(xπ/2)^7 | 31: x in [1.25:0.05:2.75] | 38: x in [1.25:0.04:2.75] | 1-15-1 | 46
Sine: y = sin(2x) | 126: x in [−2π:0.1:2π] | 252: x in [−2π:0.05:2π] | 1-15-1 | 46
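In these function-approximation tasks, each candidate solution is a flat 46-dimensional vector that the fitness function decodes into the 1-15-1 network before measuring the mean squared error on the training points. A minimal sketch of such an evaluation for the Sine task, assuming sigmoid hidden units, a linear output, and one particular decoding order (all assumptions on our part, not the paper's exact layout):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_mse(vec, x, y, hidden=15):
    """Decode a flat 46-dim vector into 1-15-1 weights/biases; return train MSE.
    Layout (assumed): hidden weights, hidden biases, output weights, output bias."""
    w1 = vec[:hidden].reshape(hidden, 1)
    b1 = vec[hidden:2 * hidden].reshape(hidden, 1)
    w2 = vec[2 * hidden:3 * hidden].reshape(1, hidden)
    b2 = vec[3 * hidden]
    h = sigmoid(w1 @ x.reshape(1, -1) + b1)   # hidden activations, shape (15, N)
    y_hat = (w2 @ h + b2).ravel()             # linear output layer
    return np.mean((y_hat - y) ** 2)

# Training set for the Sine task in Table 11: y = sin(2x), x in [-2*pi : 0.1 : 2*pi]
x = np.arange(-2 * np.pi, 2 * np.pi + 1e-9, 0.1)   # 126 points
y = np.sin(2 * x)
print(mlp_mse(np.random.default_rng(0).uniform(-1, 1, 46), x, y))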
Table 12. Comparison optimization results of MLP_XOR problem.
 | FBCA | BCA | GA | SMA | HHO | OOA | COA | GLS | HOA | RSA
Mean | 9.589 × 10^−6 | 2.93 × 10^−3 | 6.863 × 10^−2 | 2.003 × 10^−1 | 8.166 × 10^−5 | 1.779 × 10^−1 | 1.663 × 10^−1 | 3.567 × 10^−2 | 1.149 × 10^−1 | 1.555 × 10^−1
Std | 1.573 × 10^−5 | 8.72 × 10^−3 | 6.513 × 10^−2 | 4.472 × 10^−2 | 1.244 × 10^−4 | 4.815 × 10^−2 | 6.867 × 10^−2 | 4.549 × 10^−2 | 5.087 × 10^−2 | 3.516 × 10^−2
Accuracy | 100% | 100% | 62.5% | 12.5% | 100% | 37.5% | 37.5% | 100% | 25% | 12.5%
The bold part indicates the optimal result.
Table 13. Comparison optimization results of MLP_Iris problem.
 | FBCA | BCA | GA | SMA | HHO | OOA | COA | GLS | HOA | RSA
Mean | 2.672 × 10^−2 | 2.89 × 10^−2 | 2.946 × 10^−1 | 6.596 × 10^−2 | 6.572 × 10^−2 | 4.342 × 10^−1 | 4.221 × 10^−1 | 1.224 × 10^−1 | 2.507 × 10^−1 | 3.044 × 10^−1
Std | 6.321 × 10^−3 | 1.316 × 10^−2 | 1.878 × 10^−1 | 4.507 × 10^−2 | 1.044 × 10^−1 | 1.009 × 10^−1 | 7.079 × 10^−2 | 4.89 × 10^−2 | 5.174 × 10^−2 | 4.194 × 10^−2
Accuracy | 88.67% | 86% | 25.33% | 43.33% | 74% | 6.67% | 14% | 54% | 5.33% | 7.33%
The bold part indicates the optimal result.
Table 14. Comparison optimization results of MLP_Heart problem.
 | FBCA | BCA | GA | SMA | HHO | OOA | COA | GLS | HOA | RSA
Mean | 9.779 × 10^−2 | 1.101 × 10^−1 | 2.826 × 10^−1 | 1.681 × 10^−1 | 1.257 × 10^−1 | 1.76 × 10^−1 | 1.718 × 10^−1 | 1.474 × 10^−1 | 1.242 × 10^−1 | 1.626 × 10^−1
Std | 1.179 × 10^−2 | 3.974 × 10^−2 | 3.916 × 10^−2 | 6.858 × 10^−3 | 8.339 × 10^−3 | 6.495 × 10^−3 | 7.723 × 10^−3 | 2.258 × 10^−2 | 1.114 × 10^−2 | 1.167 × 10^−2
Accuracy | 83.75% | 82.5% | 52.5% | 73.75% | 73.75% | 32.5% | 36.25% | 78.75% | 67.5% | 48.75%
The bold part indicates the optimal result.
Table 15. Comparison optimization results of MLP_Sigmoid problem.
 | FBCA | BCA | GA | SMA | HHO | OOA | COA | GLS | HOA | RSA
Mean | 2.466 × 10^−1 | 2.482 × 10^−1 | 2.486 × 10^−1 | 2.468 × 10^−1 | 2.467 × 10^−1 | 2.496 × 10^−1 | 2.486 × 10^−1 | 2.469 × 10^−1 | 2.477 × 10^−1 | 2.471 × 10^−1
Std | 1.711 × 10^−4 | 1.759 × 10^−3 | 1.909 × 10^−3 | 2.321 × 10^−4 | 1.503 × 10^−4 | 1.807 × 10^−3 | 1.835 × 10^−3 | 4.225 × 10^−4 | 7.978 × 10^−4 | 3.776 × 10^−4
Error | 17.5564 | 18.3290 | 19.4690 | 17.8225 | 17.5827 | 20.5183 | 17.7487 | 18.1118 | 17.8106 | 18.1837
The bold part indicates the optimal result.
Table 16. Comparison optimization results of MLP_Cosine problem.
 | FBCA | BCA | GA | SMA | HHO | OOA | COA | GLS | HOA | RSA
Mean | 1.772 × 10^−1 | 1.826 × 10^−1 | 1.98 × 10^−1 | 1.816 × 10^−1 | 1.774 × 10^−1 | 2.756 × 10^−1 | 2.244 × 10^−1 | 1.79 × 10^−1 | 1.85 × 10^−1 | 2.001 × 10^−1
Std | 4.262 × 10^−6 | 4.262 × 10^−6 | 4.262 × 10^−6 | 4.262 × 10^−6 | 4.262 × 10^−6 | 4.262 × 10^−6 | 4.262 × 10^−6 | 4.262 × 10^−6 | 4.262 × 10^−6 | 4.262 × 10^−6
Error | 4.6792 | 5.2449 | 8.9720 | 4.7839 | 4.7741 | 6.0326 | 7.4608 | 5.0299 | 5.3183 | 6.0737
The bold part indicates the optimal result.
Table 17. Comparison optimization results of MLP_Sine problem.
 | FBCA | BCA | GA | SMA | HHO | OOA | COA | GLS | HOA | RSA
Mean | 4.453 × 10^−1 | 4.514 × 10^−1 | 4.655 × 10^−1 | 4.523 × 10^−1 | 4.462 × 10^−1 | 4.649 × 10^−1 | 4.523 × 10^−1 | 4.495 × 10^−1 | 4.611 × 10^−1 | 4.677 × 10^−1
Std | 8.941 × 10^−3 | 4.996 × 10^−3 | 9.217 × 10^−3 | 9.393 × 10^−3 | 2.225 × 10^−3 | 4.332 × 10^−3 | 1.138 × 10^−2 | 9.507 × 10^−3 | 3.623 × 10^−3 | 7.564 × 10^−3
Error | 146.5873 | 147.1405 | 157.9612 | 148.8101 | 147.4074 | 14.9740 | 146.9557 | 148.6581 | 151.0190 | 153.4001
The bold part indicates the optimal result.
Table 18. Comparison results of FBCA and the gradient-based optimizers.
Datasets | Item | FBCA | BCA | SGD | Adam | RMSprop | Adagrad
MLP_Heart | Mean | 9.779 × 10^−2 | 1.101 × 10^−1 | 9.212 × 10^−2 | 5.580 × 10^−2 | 4.331 × 10^−2 | 5.821 × 10^−2
MLP_Heart | Accuracy | 83.75% | 82.5% | 31.25% | 71.25% | 72.5% | 76.25%
MLP_Sine | Mean | 4.453 × 10^−1 | 4.514 × 10^−1 | 4.983 × 10^−1 | 4.453 × 10^−1 | 4.453 × 10^−1 | 4.896 × 10^−1
MLP_Sine | Error | 146.5873 | 147.1405 | 159.7947 | 148.4509 | 146.7834 | 157.5333
The bold part indicates the optimal result.
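For reference, the gradient-based baselines in Table 18 can be reproduced in outline by full-batch training of the same 1-15-1 network on the Sine task. The PyTorch snippet below is a sketch only; the learning rate and epoch count are illustrative assumptions, not the paper's settings.

import math
import torch

def make_data():
    # Sine training set from Table 11: y = sin(2x), x in [-2*pi : 0.1 : 2*pi]
    x = torch.arange(-2 * math.pi, 2 * math.pi + 1e-9, 0.1).unsqueeze(1)
    return x, torch.sin(2 * x)

def train(optimizer_name, epochs=2000, lr=0.01):
    torch.manual_seed(0)  # comparable initial weights across optimizers
    x, y = make_data()
    net = torch.nn.Sequential(
        torch.nn.Linear(1, 15), torch.nn.Sigmoid(), torch.nn.Linear(15, 1)
    )
    opt_cls = {"SGD": torch.optim.SGD, "Adam": torch.optim.Adam,
               "RMSprop": torch.optim.RMSprop, "Adagrad": torch.optim.Adagrad}[optimizer_name]
    opt = opt_cls(net.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    loss = None
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(x), y)  # full-batch MSE, as in the tables' Mean rows
        loss.backward()
        opt.step()
    return loss.item()

for name in ("SGD", "Adam", "RMSprop", "Adagrad"):
    print(name, train(name))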