Article

Multi-Strategy Enhanced Connected Banking System Optimizer for Global Optimization and Corporate Bankruptcy Forecasting

1 Adam Smith Business School, University of Glasgow, Gilmorehill, Glasgow G12 8QQ, Scotland, UK
2 Department of Biosystems & Agricultural Engineering, College of Engineering, Michigan State University, 524 S Shaw Ln, East Lansing, MI 48824, USA
* Author to whom correspondence should be addressed.
Mathematics 2026, 14(4), 618; https://doi.org/10.3390/math14040618
Submission received: 8 January 2026 / Revised: 29 January 2026 / Accepted: 6 February 2026 / Published: 10 February 2026
(This article belongs to the Special Issue Advances in Metaheuristic Optimization Algorithms)

Abstract

Metaheuristic optimization algorithms are widely employed to address complex nonlinear and multimodal optimization problems due to their flexibility and strong global search capability. However, the original Connected Banking System Optimizer (CBSO) still exhibits several inherent limitations when handling high-dimensional and highly complex search spaces, including excessive dependence on single global-best guidance, rapid loss of population diversity, weak exploitation ability in later iterations, and inefficient boundary handling. These deficiencies often lead to premature convergence and unstable optimization performance. To overcome these drawbacks, this paper proposes a Multi-Strategy Enhanced Connected Banking System Optimizer (MSECBSO) by systematically enhancing the CBSO framework through multiple complementary mechanisms. First, a multi-elite cooperative guidance strategy is introduced to aggregate information from several high-quality individuals, thereby mitigating search-direction bias and improving population diversity. Second, an embedded differential evolution search strategy is incorporated to strengthen local exploitation accuracy and enhance the ability to escape from local optima. Third, a soft boundary rebound mechanism is designed to replace rigid boundary truncation, improving search stability and preventing boundary aggregation. The proposed MSECBSO is extensively evaluated on the CEC2017 and CEC2022 benchmark suites under different dimensional settings and is statistically compared with nine state-of-the-art metaheuristic algorithms. Experimental results demonstrate that MSECBSO achieves superior convergence accuracy, robustness, and stability across unimodal, multimodal, hybrid, and composition functions. In terms of computational complexity, MSECBSO retains the same order of time complexity as the original CBSO, namely O(N×D×T), while introducing only a marginal increase in constant computational overhead. 
The space complexity remains O(N×D), indicating good scalability for high-dimensional optimization problems. Furthermore, MSECBSO is applied to corporate bankruptcy forecasting by optimizing the hyperparameters of a K-nearest neighbors (KNN) classifier. The resulting MSECBSO-KNN model achieves higher prediction accuracy and stronger stability than competing optimization-based KNN models, confirming the effectiveness and practical applicability of the proposed algorithm in real-world classification tasks.

1. Introduction

In the era of big data and artificial intelligence, complex optimization problems have arisen in numerous fields, including engineering design, financial risk management, resource scheduling, and machine learning hyperparameter tuning [1,2]. These problems are typically characterized by high dimensionality, strong nonlinearity, multimodality, and complex constraints, which make it difficult for traditional deterministic optimization methods to obtain satisfactory solutions [3,4,5]. As a result, metaheuristic algorithms, which are stochastic search techniques inspired by natural phenomena, biological behaviors, or physical processes, have attracted extensive attention due to their problem-independence, strong global search capability, and ease of implementation [6,7].
Representative metaheuristic algorithms include Particle Swarm Optimization (PSO), inspired by bird flocking behavior [8]; Differential Evolution (DE), based on population-based mutation and recombination strategies [9]; Ant Colony Optimization (ACO), motivated by the foraging behavior of ant colonies [10]; Gray Wolf Optimizer (GWO), simulating the hunting mechanism of gray wolves [11]; and the Whale Optimization Algorithm (WOA), inspired by humpback whales’ hunting behaviors such as encircling prey, bubble-net attacking, and spiral updating [12]. By imitating collective intelligence and cooperative mechanisms observed in natural or social systems, these algorithms are able to achieve a balance between exploration and exploitation [13,14]. However, according to the No Free Lunch (NFL) theorem [15], no single optimization algorithm can consistently outperform others across all optimization problems. Consequently, existing metaheuristic algorithms still commonly encounter challenges such as premature convergence, degraded search efficiency, and insufficient robustness when solving complex, high-dimensional, and multimodal optimization problems [16,17,18].
Driven by these limitations, a large number of novel metaheuristic algorithms have been proposed in recent years. Examples include the Chameleon Swarm Algorithm (CSA), inspired by the dynamic foraging behavior of chameleons in different environments [19]; the Marine Predators Algorithm (MPA), motivated by marine predators’ foraging strategies [20]; the Dung Beetle Optimizer (DBO), simulating dung beetles’ rolling, dancing, foraging, stealing, and breeding behaviors [21]; the Secretary Bird Optimization Algorithm (SBOA), based on secretary birds’ hunting patterns [22]; the Golden Jackal Optimization (GJO), inspired by cooperative hunting strategies of golden jackals [23]; the Red-billed Blue Magpie Optimizer (RBMO), modeling search, pursuit, attack, and food storage behaviors [24]; the Cuckoo Catfish Optimizer (CCO), inspired by searching, predation, and parasitic behaviors observed in cichlid fish [25]; and the Supercell Thunderstorm Algorithm (STA), motivated by the dynamic behavior of supercell thunderstorms [26]. Although these algorithms have achieved promising results in various optimization tasks, their limitations have become increasingly evident as real-world problems have grown more complex [27,28,29]. Specifically, some algorithms tend to lose population diversity rapidly and fall into premature convergence, others lack effective exploitation mechanisms and thus converge slowly in later stages, while certain algorithms exhibit unstable performance when dealing with high-dimensional or constrained search spaces. Table 1 summarizes the inspirations, advantages, and disadvantages of some algorithms.
In the field of corporate financial risk management, corporate bankruptcy forecasting has become an important research topic with significant implications for investors, creditors, and regulatory authorities. Accurate bankruptcy prediction enables early identification of potential financial distress, thereby reducing economic losses and enhancing financial market stability. To this end, various machine learning techniques, such as K-nearest neighbors (KNN) [30], Support Vector Machines (SVM) [31], and Artificial Neural Networks (ANN) [32], have been widely employed. Among them, KNN has gained popularity due to its simplicity, interpretability, and relatively low computational complexity. However, the predictive performance of KNN is highly sensitive to the selection of hyperparameters, particularly the number of neighbors and distance metrics. Improper parameter settings may significantly degrade prediction accuracy and stability [33]. Therefore, utilizing efficient optimization algorithms to automatically search for optimal hyperparameter configurations has become an effective approach for improving bankruptcy prediction performance.
The Connected Banking System Optimizer (CBSO) [34] is a recently proposed metaheuristic algorithm inspired by capital flows, systemic risk propagation, and dynamic interactions among financial institutions in interconnected banking systems. By simulating transaction path selection, system collapse probability, and information encryption mechanisms, CBSO guides search agents to evolve within the solution space and has demonstrated promising performance in certain engineering optimization problems. However, as problem dimensionality and complexity increase, standard CBSO exhibits several limitations. First, it relies heavily on single global-best guidance and fails to sufficiently exploit useful information contained in suboptimal individuals, leading to biased search directions and accelerated loss of population diversity. Second, in the later search stages, CBSO mainly depends on random perturbations and Lévy flights for local exploitation, lacking a clearly directed fine-grained search mechanism, which may result in slow convergence or stagnation. Furthermore, the hard boundary truncation strategy adopted by CBSO often causes individuals to cluster near the boundaries, thereby reducing search efficiency and stability in complex constrained spaces.
To address these shortcomings, this paper proposes a Multi-Strategy Enhanced Connected Banking System Optimizer (MSECBSO) by systematically improving the original CBSO framework through several complementary mechanisms. First, a multi-elite cooperative guidance strategy is developed to construct a dynamic guiding vector by integrating information from multiple high-quality individuals, thereby enhancing search diversity and directional stability. Second, an embedded differential evolution search strategy is incorporated into the CBSO framework to provide a directionally guided local exploitation capability, improving convergence accuracy and enhancing the ability to escape from local optima. Third, a soft boundary rebound mechanism is introduced to replace conventional hard boundary handling, allowing out-of-bound individuals to re-enter the feasible region in a stochastic and adaptive manner, effectively preventing boundary aggregation and maintaining population diversity.
The balance between exploration (searching new regions of the solution space) and exploitation (refining solutions in promising regions) is critical to the performance of metaheuristic algorithms, and most such algorithms realize this balance through adaptive mechanisms. PSO adjusts the inertia weight w, favoring global exploration when w is large and local exploitation when w is small; DE adopts mutation and crossover operations for exploration and exploitation, respectively, with the scaling factor F regulating the mutation step size; and GWO decreases the parameter a to gradually narrow the search range, realizing a smooth transition from exploration to exploitation. By contrast, MSECBSO further enhances the exploration-exploitation balance through three synergistic mechanisms: the multi-elite cooperative guidance, which integrates global and local elite information to expand the exploration range while preserving search directionality; the embedded DE strategy, whose dynamically adjusted scaling factor F(t) is set to a large value in early iterations for sufficient exploration and a small value in later stages for in-depth exploitation, achieving an adaptive exploration-exploitation transition; and the soft boundary rebound mechanism, which prevents population aggregation and thus maintains the algorithm's exploration capability throughout the entire iteration process.
Compared with existing optimization algorithms, the multi-elite cooperative guidance strategy adopted in MSECBSO effectively avoids the bias induced by reliance on a single global best solution. This strategy not only preserves population diversity but also enhances the stability of the search direction. Moreover, the embedded DE strategy in MSECBSO employs a dynamically adjusted scaling factor F(t), which provides a more effective balance between global exploration and local exploitation than the fixed scaling factor used in conventional DE. As a result, both the convergence speed and solution accuracy of the algorithm are significantly improved. In contrast to the original CBSO algorithm, MSECBSO integrates three complementary improvement strategies to address the key limitations of CBSO, including search bias, insufficient local exploitation capability, and improper boundary handling. These enhancements enable MSECBSO to achieve consistently and significantly superior performance across all benchmark test functions as well as in practical optimization tasks.
The main contributions of this study can be summarized as follows:
(1)
A multi-strategy optimization framework is developed by integrating multi-elite cooperative guidance, embedded differential evolution search, and a soft boundary rebound mechanism into CBSO, resulting in the proposed MSECBSO algorithm.
(2)
Extensive experiments on the CEC2017 and CEC2022 benchmark suites, across multiple dimensions and function types, demonstrate that MSECBSO significantly outperforms nine state-of-the-art metaheuristic algorithms in terms of convergence accuracy, speed, and robustness.
(3)
An MSECBSO-KNN corporate bankruptcy prediction model is constructed by optimizing key KNN hyperparameters, achieving superior multi-class classification performance on the Wieslaw and JPNBDS datasets and validating the practical applicability of MSECBSO.
The remainder of this paper is organized as follows. Section 2 introduces the fundamental principles of the Connected Banking System Optimizer (CBSO) and presents the detailed design of the proposed Multi-Strategy Enhanced CBSO (MSECBSO). Section 3 reports comprehensive numerical experiments and comparative analyses conducted on the CEC2017 and CEC2022 benchmark suites to evaluate the optimization performance of the proposed algorithm. Section 4 applies MSECBSO to corporate bankruptcy forecasting by optimizing the hyperparameters of a K-nearest neighbors (KNN) classifier and analyzes the corresponding experimental results. Section 5 provides an in-depth discussion of the experimental findings, emphasizing the implications of the proposed method for large-scale financial optimization problems. Finally, Section 6 concludes the paper and outlines potential directions for future research.

2. Connected Banking System Optimizer and the Proposed Method

2.1. Connected Banking System Optimizer (CBSO)

The Connected Banking System Optimizer (CBSO) [34] is a recently proposed metaheuristic optimization algorithm inspired by the interconnection mechanisms of banking systems, including capital flows, systemic risk propagation, and dynamic interactions among financial institutions. By mimicking transaction processes, path selection strategies, system collapse responses, and the influence diffusion of core nodes in interconnected financial networks, CBSO aims to solve complex continuous and discrete optimization problems through an analogy to the operational principles of banking systems.
Similarly to other population-based metaheuristic algorithms, the initial positions of search agents are randomly generated within the predefined lower and upper bounds of the decision variables. Let the search agents (candidate solutions) be denoted by X_i. The initialization of the population is defined as follows [34]:
X_i = LB + rand · (UB − LB)        (1)

where X_i represents the initial position of the i-th candidate solution, LB and UB denote the lower and upper bounds of the decision variables, respectively, and rand is a random number uniformly distributed in the interval (0, 1).
In the context of CBSO and optimization problems, "decision variables" refer to the parameters to be optimized in the target problem, which determine the quality of a candidate solution. For continuous optimization problems (e.g., the benchmark functions in Section 3), decision variables are continuous values within the predefined LB and UB. For discrete or mixed-integer optimization problems, decision variables can be mapped to discrete values through discretization techniques.
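As a minimal illustration, the uniform initialization of Equation (1) can be sketched in Python; the function and parameter names here are illustrative, not from the original paper:

```python
import numpy as np

def initialize_population(n, dim, lb, ub, rng=None):
    """Eq. (1): X_i = LB + rand * (UB - LB), with rand ~ U(0, 1)."""
    rng = np.random.default_rng(rng)
    # Each row is one candidate solution X_i; scalar or per-dimension
    # bounds both work through NumPy broadcasting.
    return lb + rng.random((n, dim)) * (ub - lb)
```
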
The CBSO algorithm mainly consists of three sequential search phases, which can be mathematically expressed as follows [34].
In the first stage, when the current iteration number is less than 20% of the maximum number of iterations, the solution is updated as follows:
X_i^new = X_i + RN_1 · (R_1 · BB − X_i),   i = 1, …, n        (2)

where BB denotes the best solution found so far, R_1 is a random number uniformly distributed in the interval [0, 1], and RN_1 is a random vector following a normal distribution.
When the search progress exceeds 20% but does not exceed 40% of the maximum number of iterations, the algorithm enters the second stage. In this stage, the population is divided into two subgroups, and different update rules are applied:
X_i^new = { BB + rand · (1 − t/T) · RN_2 · (R_2 · BB − X_S1,i),   if i > N/2
          { X_i + LD · (BB − LD · X_S2,i),                         otherwise        (3)

where LD is a Lévy-distributed random vector, t and T denote the current iteration number and the maximum number of iterations, respectively, X_S1,i and X_S2,i are randomly selected search agents, R_2 is a random number in [0, 1], and RN_2 is a normally distributed random vector.
In the third stage, the CBSO algorithm focuses on performing local search around the promising regions identified so far in order to obtain better solutions. The update rule is expressed as
X_i^new = BB + rand · (1 − t/T) · LD · (LD · X_S1,i − X_S2,i),   i = 1, …, n        (4)
To further improve the solution quality of the CBSO algorithm when solving optimization problems, an additional complementary update strategy is introduced besides the above three-stage update mechanism, aiming to simulate the collapse and information encryption processes in a banking system. It is assumed that the probability of system collapse is 20%. If no collapse occurs, potential network attacks are considered, and the current solution is altered through an encoding mechanism. The corresponding mathematical formulation is given by
UPX_i = { X_i^new + rand · rand · (1 − t/T) · (LB + rand · (UB − LB)),   if RU < 0.2
        { X_i^new + (rand − rand) · (X_S1,i − X_S2,i),                    otherwise        (5)

where LB and UB denote the lower and upper bounds of the decision variables, respectively, RU is a random number uniformly distributed in [0, 1], and UPX_i represents the final updated solution after considering the system collapse and encryption mechanisms.
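The stage thresholds described above (20% and 40% of the iteration budget) can be captured in a small helper; this sketches the schedule only, with an illustrative function name:

```python
def cbso_stage(t, T):
    """Return which CBSO update rule applies at iteration t out of T."""
    if t < 0.2 * T:      # first stage: exploration, Eq. (2)
        return 1
    elif t <= 0.4 * T:   # second stage: split-population update, Eq. (3)
        return 2
    return 3             # third stage: local search, Eq. (4)
```
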

2.2. Proposed Multi-Strategy Enhanced CBSO (MSECBSO)

Although CBSO demonstrates promising potential in solving certain optimization problems, it still exhibits several shortcomings when dealing with high-dimensional, nonlinear, and multimodal optimization tasks. On the one hand, the algorithm primarily relies on a single global-best individual for guidance, which fails to fully utilize valuable information contained in suboptimal individuals and may lead to biased search directions and rapid loss of population diversity. On the other hand, in the later search stages, CBSO mainly depends on random perturbations and Lévy flights for exploitation, lacking a directionally guided fine-grained local search mechanism. In addition, the conventional hard boundary truncation strategy may cause individuals to accumulate near the search boundaries, thereby reducing overall search efficiency and stability.
To address these issues, this paper proposes a Multi-Strategy Enhanced Connected Banking System Optimizer (MSECBSO) by introducing three complementary mechanisms: multi-elite cooperative guidance, embedded differential evolution search, and a soft boundary rebound constraint mechanism. Through the synergistic integration of these strategies, MSECBSO significantly enhances global exploration capability, local exploitation accuracy, and convergence stability while maintaining population diversity.
The search-direction bias in CBSO mainly arises from its reliance on a single global best individual, which tends to trap the search in local regions. To overcome this limitation, a multi-elite cooperative guidance strategy is introduced, where information from multiple high-quality individuals (X_α, X_β, and X_δ) is fused to construct a composite guidance vector E, thereby balancing the search direction and improving stability. To alleviate the rapid loss of population diversity caused by the single-guidance mode and hard boundary handling, random weight vectors and a soft boundary rebound mechanism are incorporated, enabling adaptive population adjustment and diversity preservation.
Furthermore, an embedded differential evolution (DE) search is integrated into the CBSO framework to enhance local exploitation. The DE mutation operator V_i introduces population difference information into the search process, while the dynamically adjusted scaling factor F(t) and the exponentially decaying disturbance coefficient σ(t) jointly realize a dynamic balance between global exploration and local exploitation, leading to improved convergence speed and solution accuracy.

2.2.1. Multi-Elite Cooperative Guidance Strategy

In standard CBSO, population updates are mainly guided by a single global-best individual, which may result in a narrow search direction and accelerated loss of diversity. To overcome this limitation, a multi-elite cooperative guidance strategy is proposed, which constructs a dynamic guiding vector by integrating information from multiple elite individuals.
At the t-th iteration, all individuals are ranked according to their fitness values, and the top three elite individuals are selected and denoted as X_α, X_β, and X_δ, representing the best, second-best, and third-best solutions, respectively.
To avoid rigid search behavior caused by fixed weights, a random weight vector is introduced:
w = [w_1, w_2, w_3],   w_1 + w_2 + w_3 = 1        (6)

where the w_k are randomly generated and normalized weight coefficients used to dynamically adjust the contribution of each elite individual.
Meanwhile, to enhance the exploration capability in the early stage of the search process and to strengthen the convergence performance in the later stage, a Gaussian perturbation term with an exponential decay over the number of iterations is introduced. The variation curve of this term is illustrated in Figure 1, and its mathematical expression is given by
σ(t) = exp(−20 · t / T)        (7)

where t and T denote the current and maximum iteration numbers, respectively.
By integrating the above components, the multi-elite cooperative guiding vector is defined as
E = w_1 · X_α + w_2 · X_β + w_3 · X_δ + σ(t) · N(0, 1)        (8)

where N(0, 1) denotes a standard Gaussian random vector.
This strategy allows each individual to be influenced not only by the global-best solution but also by suboptimal elites, effectively reducing search bias and improving robustness when handling complex multimodal optimization problems.
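Assuming the population is stored as an N × D array and lower fitness is better (minimization), Equations (6)–(8) can be sketched as follows; the function name and array layout are assumptions of this sketch:

```python
import numpy as np

def guiding_vector(pop, fitness, t, T, rng=None):
    """Eqs. (6)-(8): fuse the three fittest individuals with random
    normalized weights plus a decaying Gaussian perturbation."""
    rng = np.random.default_rng(rng)
    elite = pop[np.argsort(fitness)[:3]]     # X_alpha, X_beta, X_delta
    w = rng.random(3)
    w /= w.sum()                             # Eq. (6): w_1 + w_2 + w_3 = 1
    sigma = np.exp(-20.0 * t / T)            # Eq. (7): decaying disturbance
    return w @ elite + sigma * rng.standard_normal(pop.shape[1])   # Eq. (8)
```
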

2.2.2. Embedded Differential Evolution Search Strategy

Although the multi-elite cooperative guidance strategy enhances overall search directionality, local exploitation capability may still be insufficient in highly complex landscapes. Therefore, a classical Differential Evolution (DE) operator is embedded into the CBSO framework to further improve local search accuracy and the ability to escape from local optima.
At iteration t, for the i-th individual, two mutually distinct individuals X_r1 and X_r2 (with r_1 ≠ r_2 ≠ i) are randomly selected from the population. The mutation vector is constructed as follows:
V_i = E + F(t) · (X_r1 − X_r2)        (9)
where F(t) denotes a scaling factor dynamically adjusted with the iteration number. As illustrated in Figure 2, F(t) takes a relatively large value (close to 0.8) in the early iterations to enhance global exploration and gradually decreases (approaching 0.2) as the iterations proceed to focus on local exploitation. In this way, a dynamic balance between exploration and exploitation is achieved. The scaling factor is defined as
F(t) = F_min + (F_max − F_min) / (1 + exp(0.02 · (t − T/2)))        (10)

where F_min and F_max denote the minimum and maximum scaling factors, set to 0.2 and 0.8, respectively.
Subsequently, a binomial crossover strategy is applied to generate the trial vector U_i:

U_{i,j} = { V_{i,j},   if rand_{i,j} ≤ CR or j = j_rand
          { X_{i,j},   otherwise        (11)

where CR = 0.8 denotes the crossover probability, and j_rand is a randomly selected dimension index that guarantees at least one component is inherited from the mutation vector.
Finally, a greedy selection mechanism is employed:

X_i = { U_i,   if f(U_i) < f(X_i)
      { X_i,   otherwise        (12)
This embedded DE strategy provides a directionally guided local exploitation mechanism, significantly improving convergence precision and enhancing the algorithm’s ability to escape local optima.
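Assuming the guiding vector E from Section 2.2.1 is available, Equations (9)–(12), together with the schedule of Equation (10), can be sketched as follows; the function names and the array-based population layout are assumptions of this sketch:

```python
import numpy as np

def scaling_factor(t, T, f_min=0.2, f_max=0.8):
    """Eq. (10): sigmoid schedule, near f_max early and decaying to f_min."""
    return f_min + (f_max - f_min) / (1.0 + np.exp(0.02 * (t - T / 2)))

def de_step(pop, fitness, E, t, T, objective, cr=0.8, rng=None):
    """One embedded DE pass: mutation around E (Eq. 9), binomial
    crossover (Eq. 11), and greedy selection (Eq. 12)."""
    rng = np.random.default_rng(rng)
    n, dim = pop.shape
    F = scaling_factor(t, T)
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(n):
        r1, r2 = rng.choice([k for k in range(n) if k != i], 2, replace=False)
        v = E + F * (pop[r1] - pop[r2])        # Eq. (9): mutation vector
        j_rand = rng.integers(dim)
        mask = rng.random(dim) <= cr
        mask[j_rand] = True                    # at least one mutated component
        u = np.where(mask, v, pop[i])          # Eq. (11): trial vector
        fu = objective(u)
        if fu < fitness[i]:                    # Eq. (12): greedy selection
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit
```

Because of the greedy selection, the fitness of every individual is non-increasing across such a pass.
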

2.2.3. Soft Boundary Rebound Mechanism

Boundary constraint handling plays a crucial role in high-dimensional optimization problems. Standard CBSO adopts a hard boundary truncation strategy, which directly clips out-of-bound solutions to boundary values. This approach often leads to population aggregation near boundaries, thereby reducing diversity and search efficiency.
To overcome this issue, a soft boundary rebound mechanism is introduced. When an individual violates the boundary constraints, it is randomly rebounded back into the feasible region based on its previous position.
Let X_{i,j}^new denote the current position of the i-th individual in the j-th dimension, with lower and upper bounds LB_j and UB_j, respectively, and let X_{i,j}^old denote its previous position. The boundary correction is defined as

X_{i,j} = { X_{i,j}^old + rand · (UB_j − X_{i,j}^old),   if X_{i,j}^new > UB_j
          { X_{i,j}^old + rand · (LB_j − X_{i,j}^old),   if X_{i,j}^new < LB_j        (13)

where rand ∈ (0, 1) is a uniformly distributed random number.
This mechanism enables out-of-bound individuals to re-enter the feasible region with adaptive step sizes, effectively preventing boundary congestion while preserving population diversity and enhancing robustness in constrained optimization spaces.
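A sketch of Equation (13) for scalar bounds; the previous position x_old is assumed to lie inside the feasible region, so every corrected component lands back in bounds (function name is illustrative):

```python
import numpy as np

def soft_rebound(x_new, x_old, lb, ub, rng=None):
    """Eq. (13): out-of-bound components rebound from the previous
    in-bounds position toward the violated bound by a random fraction."""
    rng = np.random.default_rng(rng)
    r = rng.random(x_new.shape)
    x = x_new.copy()
    over, under = x_new > ub, x_new < lb
    x[over] = (x_old + r * (ub - x_old))[over]    # rebound toward UB
    x[under] = (x_old + r * (lb - x_old))[under]  # rebound toward LB
    return x
```

Unlike hard clipping, the corrected component is spread over the whole interval between the old position and the violated bound, which is what prevents boundary aggregation.
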
Based on the preceding analysis, Algorithm 1 outlines the pseudocode of the proposed MSECBSO.
Algorithm 1. Pseudo-code of MSECBSO
1: Initialize population size N, bounds ub and lb, maximum iteration number T, and problem dimension dim.
2: Initialize population X_i by Equation (1).
3: Evaluate fitness f(X_i) for all individuals and determine the initial global best X_best.
4: for t = 1 : T do
5:   Sort the population by fitness and select elite individuals X_α, X_β, X_δ.
6:   Generate the random weight vector w, disturbance factor σ(t), and dynamically adjusted scaling factor F(t).
7:   Construct the multi-elite guiding vector E using Equation (8).
8:   for i = 1 : N do
9:     if t < 0.2T then
10:      Update position using the exploration strategy (Equation (2)).
11:    else if 0.2T ≤ t ≤ 0.4T then
12:      Update position using the transition strategy (Equation (3)).
13:    else
14:      Update position using the quadratic interpolation strategy (Equation (4)).
15:    end if
16:    Apply the soft boundary rebound mechanism (Equation (13)).
17:    Perform random global or local perturbation (Equation (5)).
18:    Apply the embedded differential evolution search strategy (Equations (9)–(12)).
19:    Apply the soft boundary rebound mechanism (Equation (13)).
20:    Apply greedy selection and update the population.
21:  end for
22:  Update the global best solution X_best.
23: end for
24: Return X_best.
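For illustration, the overall loop of Algorithm 1 can be condensed into a short, self-contained skeleton. The three CBSO stage updates and the collapse/encryption step (Equations (2)–(5)) are deliberately folded into the elite-guided DE move, so this is a simplified sketch rather than the authors' full implementation:

```python
import numpy as np

def msecbso(objective, dim, lb, ub, n=30, T=200, seed=0):
    """Simplified skeleton of Algorithm 1 (minimization)."""
    rng = np.random.default_rng(seed)
    pop = lb + rng.random((n, dim)) * (ub - lb)            # Eq. (1)
    fit = np.array([objective(x) for x in pop])
    for t in range(1, T + 1):
        elite = pop[np.argsort(fit)[:3]]                   # X_alpha, X_beta, X_delta
        w = rng.random(3)
        w /= w.sum()                                       # Eq. (6)
        sigma = np.exp(-20.0 * t / T)                      # Eq. (7)
        E = w @ elite + sigma * rng.standard_normal(dim)   # Eq. (8)
        F = 0.2 + 0.6 / (1.0 + np.exp(0.02 * (t - T / 2))) # Eq. (10)
        for i in range(n):
            r1, r2 = rng.choice([k for k in range(n) if k != i], 2, replace=False)
            v = E + F * (pop[r1] - pop[r2])                # Eq. (9): mutation
            j_rand = rng.integers(dim)
            mask = rng.random(dim) <= 0.8                  # CR = 0.8
            mask[j_rand] = True
            u = np.where(mask, v, pop[i])                  # Eq. (11): crossover
            # Eq. (13): soft rebound from the previous in-bounds position
            u = np.where(u > ub, pop[i] + rng.random(dim) * (ub - pop[i]), u)
            u = np.where(u < lb, pop[i] + rng.random(dim) * (lb - pop[i]), u)
            fu = objective(u)
            if fu < fit[i]:                                # Eq. (12): greedy
                pop[i], fit[i] = u, fu
    best = int(fit.argmin())
    return pop[best], float(fit[best])
```

The time complexity of this loop matches the O(N × D × T) figure quoted in the abstract, with only constant-factor overhead from the elite fusion and rebound steps.
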
In summary, the framework of the improvement strategies introduced in MSECBSO compared with the standard CBSO is illustrated in Figure 3. The visualization clearly shows that, while preserving the core structure of CBSO, MSECBSO incorporates three mutually complementary improvement mechanisms, which effectively enhance the algorithm’s exploration capability, exploitation capability, and boundary-handling ability.

3. Numerical Experiments

3.1. Competitor Algorithms and Parameter Settings

This section investigates the effectiveness of the proposed MSECBSO by testing it on two widely recognized and highly demanding numerical optimization benchmark sets, namely CEC2017 [35] and CEC2022 [36]. In the CEC2017 and CEC2022 benchmark suites, the decision variables of each test function are continuous values within the specified UB and LB. Its performance is systematically compared against a range of well-established optimization methods, including Particle Swarm Optimization (PSO) [8], Differential Evolution (DE) [9], Gray Wolf Optimizer (GWO) [11], Salp Swarm Algorithm (SSA) [37], Snake Optimizer (SO) [38], Kangaroo Escape Optimization (KEO) [39], Beaver Behavior Optimizer (BBO) [40], Hannibal Barca optimizer (HBO) [41], and Connected Banking System Optimizer (CBSO) [34]. The parameter settings and experimental configurations adopted for all compared algorithms are summarized in Table 2.

3.2. Ablation Study of Improvement Strategies

To verify the synergistic effect of the three proposed improvement mechanisms (multi-elite cooperative guidance, embedded differential evolution search, and soft boundary rebound), three CBSO variants are constructed by retaining partial strategies: CBSO-S1—CBSO with only a multi-elite cooperative guidance strategy; CBSO-S2—CBSO with only an embedded differential evolution search strategy; CBSO-S3—CBSO with only a soft boundary rebound mechanism. The experimental results are shown in Figure 4 and Figure 5.
Figure 4 illustrates the convergence curves of the CBSO variants with different improvement strategies and MSECBSO on representative benchmark functions. All variants incorporating a single improvement strategy exhibit superior convergence performance compared with the original CBSO, confirming the effectiveness of each proposed mechanism. On the multimodal function F9, after 500 iterations, MSECBSO achieves a fitness value of only 9.5790 × 10², whereas CBSO-S1 (multi-elite cooperative guidance) attains 2.0199 × 10³, CBSO-S2 (embedded differential evolution search) 2.5168 × 10³, CBSO-S3 (soft boundary rebound mechanism) 2.3946 × 10³, and the original CBSO reaches as high as 9.4254 × 10³. Compared with the original CBSO, MSECBSO reduces the fitness value by 90.06%. On the unimodal function F3, the final fitness value of MSECBSO is 5.4829 × 10³, which is markedly lower than those of CBSO-S1 (4.8829 × 10⁴), CBSO-S2 (7.1202 × 10⁴), CBSO-S3 (6.1239 × 10⁴), and the original CBSO (1.3549 × 10⁵), demonstrating a substantially stronger local exploitation capability. For the boundary-sensitive function F21, MSECBSO converges to a stable fitness value of 2.3676 × 10³, outperforming CBSO-S1 (2.4287 × 10³), CBSO-S2 (2.3954 × 10³), CBSO-S3 (2.4056 × 10³), and the original CBSO (2.5319 × 10³), which verifies the advantage of the soft boundary rebound mechanism in maintaining population diversity. After integrating all three strategies, MSECBSO consistently yields the lowest convergence curves across all test functions with the fastest descending rates, indicating that the synergistic interaction of multiple strategies achieves a dynamic balance between global exploration and local exploitation.
Figure 5 presents the average ranking values of all algorithms on the CEC2017 test suite, where a lower ranking indicates better overall performance. The original CBSO records the highest average ranking of 4.87, reflecting its pronounced limitations when addressing complex optimization problems. The three single-strategy variants all achieve reduced rankings: CBSO-S1 (multi-elite cooperative guidance) obtains an average ranking of 2.97, CBSO-S2 (embedded differential evolution search) 1.80, and CBSO-S3 (soft boundary rebound mechanism) 3.90. These results demonstrate that each individual improvement strategy can effectively enhance algorithmic performance, with the embedded differential evolution search showing the most prominent standalone optimization capability. Notably, MSECBSO achieves an average ranking of only 1.47, which is significantly lower than those of all variants and the original algorithm. This further confirms that the synergistic effect of the three improvement mechanisms is not a simple superposition, but rather a complementary optimization process that yields a stronger global optimization ability, enabling the algorithm to maintain stable and superior performance across different categories of functions.
In summary, the ablation experimental results indicate that multi-elite cooperative guidance, embedded differential evolution search, and the soft boundary rebound mechanism each effectively address key deficiencies of CBSO. Their organic integration constitutes the critical factor enabling the performance breakthrough of MSECBSO, thereby laying a solid foundation for its application to higher-dimensional problems and real-world optimization tasks.
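As a concrete illustration of two of the ablated mechanisms, the sketch below shows a textbook DE/rand/1/bin step (the standard form of an embedded differential evolution search) and a damped reflective boundary rebound. The function names, the damping factor, and the reflection rule are illustrative assumptions, not the paper's exact update equations, which are defined in Section 2.

```python
import random

def de_trial(pop, i, F=0.5, CR=0.9):
    """DE/rand/1/bin trial vector for individual i: standard differential
    evolution mutation plus binomial crossover (illustrative of the embedded
    DE search; the paper's exact variant may differ)."""
    a, b, c = random.sample([j for j in range(len(pop)) if j != i], 3)
    d = len(pop[i])
    jrand = random.randrange(d)  # guarantees at least one mutated coordinate
    return [pop[a][j] + F * (pop[b][j] - pop[c][j])
            if (random.random() < CR or j == jrand) else pop[i][j]
            for j in range(d)]

def soft_rebound(x, lb, ub, damping=0.5):
    """Soft boundary rebound: reflect an out-of-range coordinate back inside
    the bounds with a random damping factor, instead of clamping it onto the
    boundary (hard truncation causes boundary aggregation)."""
    out = []
    for xi in x:
        if xi > ub:
            xi = ub - damping * random.random() * min(xi - ub, ub - lb)
        elif xi < lb:
            xi = lb + damping * random.random() * min(lb - xi, ub - lb)
        out.append(xi)
    return out
```

Unlike hard truncation, which maps every violating coordinate to exactly `lb` or `ub`, the rebound keeps violating individuals strictly inside the feasible region at randomized depths, preserving diversity near the bounds.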

3.3. Performance Evaluation and Analysis on the CEC2017 and CEC2022 Benchmark Suites

This subsection presents a comprehensive comparative evaluation of the proposed MSECBSO against several representative benchmark algorithms using the CEC2017 and CEC2022 test suites. These benchmark sets comprise four categories of objective functions: unimodal, multimodal, hybrid, and composition. Multimodal functions, characterized by numerous local optima, are mainly utilized to examine the exploration capability and global search effectiveness of optimization algorithms. In contrast, unimodal functions feature a single global optimum and are therefore employed to assess solution precision and exploitation performance. Furthermore, hybrid and composition functions introduce increased complexity by combining different landscapes, thereby providing a rigorous test of algorithm robustness and resistance to premature convergence.
To ensure fairness and reproducibility, all algorithms were evaluated under the same experimental conditions. The population size was set to 30, and the maximum number of iterations was limited to 500. Each algorithm was executed independently 30 times to mitigate the impact of randomness inherent in stochastic optimization processes. Performance outcomes are reported in terms of the mean and standard deviation, with the best results emphasized in bold. All experiments were implemented in MATLAB 2024b and conducted on a Windows 11 platform running on an AMD Ryzen 7 9700X processor (3.80 GHz) with 48 GB of RAM.
Table 3, Table 4, Table 5 and Table 6 systematically present the performance of MSECBSO in comparison with nine benchmark algorithms on the CEC2017 (30-dimensional and 100-dimensional) and CEC2022 (10-dimensional and 20-dimensional) test suites. These benchmarks cover four categories of test functions, namely unimodal, multimodal, hybrid, and composition functions, thereby providing a comprehensive evaluation of the proposed algorithm's optimization capability across problems with different dimensionalities and levels of complexity. In the tables, bold values indicate the best result among the compared algorithms for each test function.
In the 30-dimensional CEC2017 test suite, MSECBSO demonstrates outstanding convergence accuracy on unimodal functions (e.g., F1, F3, and F6). For the F1 function, the average fitness value obtained by MSECBSO is 3.1465 × 10³, which is significantly better than those of PSO (2.3227 × 10⁹), DE (2.6671 × 10⁷), and the original CBSO (3.3791 × 10⁹), and is only about 49% of that achieved by SSA (6.4350 × 10³). For the F3 function, MSECBSO achieves a low average fitness value of 5.4829 × 10³, which is markedly lower than those of BBO (1.9300 × 10⁴) and KEO (4.8829 × 10⁴), indicating its strong local exploitation capability.
On multimodal functions (e.g., F9, F14, and F15), the superiority of MSECBSO is more pronounced. For the F9 function, the average fitness value of MSECBSO is 9.5790 × 10², which is only 10.2% of that of CBSO (9.4254 × 10³) and 18.2% of that of HBO (5.2512 × 10³). In addition, the corresponding standard deviation is as low as 5.4403 × 10¹, which is significantly smaller than those of the other algorithms, demonstrating excellent robustness in complex multimodal search spaces. For the F14 and F15 functions, MSECBSO achieves average fitness values of 1.4452 × 10³ and 1.5588 × 10³, respectively, representing reductions of more than 99% compared with CBSO (2.9666 × 10⁵ and 1.6480 × 10⁵). These results clearly confirm that the multi-elite cooperative guidance strategy effectively enhances global search capability.
For hybrid and composition functions (e.g., F12, F18, and F30), MSECBSO also exhibits competitive performance. Specifically, for the F12 function, the average fitness value of MSECBSO is 7.9176 × 10⁵, which is only 0.79% of that of PSO (1.0047 × 10⁸) and 0.73% of that of GWO (1.0845 × 10⁸), demonstrating its strong ability to solve complex hybrid optimization problems.
When the dimensionality increases to 100, the advantage of MSECBSO in high-dimensional optimization becomes more evident. For the F2 function, MSECBSO achieves an average fitness value of 1.0690 × 10¹⁰⁵, which is much lower than those of DE (1.0401 × 10¹⁴⁶) and CBSO (5.1814 × 10¹⁵⁴), with performance differences exceeding 10⁴⁰ in magnitude. This indicates that MSECBSO effectively alleviates the tendency of algorithms to fall into local optima in high-dimensional search spaces. For the F7 function, the average fitness value of MSECBSO is 1.5928 × 10³, which is reduced by 20.5% compared with BBO (2.0032 × 10³) and by 29.3% compared with SO (2.2528 × 10³). For the F30 function, MSECBSO obtains an average fitness value of 6.6981 × 10⁴, which is only 0.0064% of that of PSO (1.0479 × 10⁹), further verifying its stability in high-dimensional complex constrained optimization problems.
On the CEC2022 test suite, MSECBSO continues to exhibit superior performance. In the 10-dimensional case, the average fitness value for the F6 function is 1.8005 × 10³, which is only 13.1% of that of PSO (1.3687 × 10⁴) and 29.4% of that of GWO (6.1221 × 10³). Meanwhile, the standard deviation is as low as 4.5815 × 10⁻¹, indicating extremely high convergence precision. For the F11 function, MSECBSO achieves an average fitness value of 2.6000 × 10³, which is lower than those of BBO (2.6784 × 10³) and SO (2.6853 × 10³), with a near-zero standard deviation, demonstrating excellent stability on low-dimensional complex functions.
In the 20-dimensional scenario, the average fitness value of MSECBSO for the F5 function is 9.0444 × 10², which is significantly lower than those of SSA (2.3845 × 10³) and HBO (2.3944 × 10³), and represents a reduction of 73.6% compared with CBSO (3.4295 × 10³). For the F6 function, MSECBSO achieves an average fitness value of 1.9209 × 10³, which is markedly lower than those of BBO (3.5115 × 10³) and KEO (6.4145 × 10³), demonstrating its strong capability in solving dynamically complex optimization problems.
The convergence curves illustrated in Figure 6 clearly demonstrate the fast convergence behavior of MSECBSO under different test suites and dimensional settings. For the CEC2017 30-dimensional F5 function, the convergence curve shows that the fitness value of MSECBSO drops below 700 within the first 50 iterations, whereas PSO, GWO, and other algorithms remain above 900. After 100 iterations, the fitness value of MSECBSO stabilizes at around 600, which is approximately 25% lower than that of CBSO (above 800). For the F14 function, MSECBSO rapidly approaches the optimal solution after about 150 iterations, while CBSO, HBO, and other algorithms still maintain fitness values on the order of 10⁶ even after 500 iterations, indicating a substantial performance gap.
In the convergence curves of the CEC2017 100-dimensional F1 function, the fitness value of MSECBSO continuously decreases at a fast rate as the iteration proceeds and stabilizes below the 10⁸ level after 200 iterations. In contrast, PSO, GWO, and other algorithms consistently remain above the 10¹⁰ level, a gap of roughly two orders of magnitude in attained fitness. For the F18 function, MSECBSO reduces the fitness value below 10⁷ at around 300 iterations, reaching a stable convergence state approximately 100 iterations earlier than CBSO, whose fitness value remains on the order of 10⁸.
The convergence curves on the CEC2022 test suite further validate the superiority of MSECBSO. For the 10-dimensional F6 function, MSECBSO rapidly separates from the competing algorithms in the early iterations and reduces the fitness value below 2000 at 100 iterations, whereas PSO, DE, and other algorithms remain above 3000. For the 20-dimensional F1 function, the convergence curve of MSECBSO enters a rapid descending phase after 50 iterations and stabilizes at around 500 by 200 iterations. Although this value is slightly higher than that of BBO (4.3117 × 10²), it is significantly lower than those of PSO (6.4860 × 10³) and CBSO (2.6423 × 10⁴).
The boxplot analysis shown in Figure 7 statistically confirms the robustness and stability of MSECBSO. For the CEC2017 30-dimensional F9 function, the boxplot of MSECBSO shows that the fitness values are concentrated in the range of 900–1100, with a narrow interquartile range and no outliers. In contrast, the boxplot of CBSO spans a wide range from 5000 to 10,000, exhibiting high dispersion. For the F18 function, the median fitness value of MSECBSO is clearly lower than those of the other algorithms, and its interquartile range is on the order of 10⁴, which is much smaller than those of PSO and SSA (both on the order of 10⁶).
For the CEC2017 100-dimensional F14 function, the boxplot indicates that the fitness values of MSECBSO are distributed within a very narrow range, with the box height being less than 10⁷. In contrast, the box heights of CBSO, HBO, and other algorithms exceed 10⁸, demonstrating that MSECBSO achieves superior stability in high-dimensional search spaces. Similarly, for the CEC2022 10-dimensional F11 function and the 20-dimensional F6 function, MSECBSO exhibits the narrowest box height and the lowest median among all compared algorithms, further confirming its consistent and stable optimization performance across different dimensions and function types. This robustness can be attributed to the soft boundary rebound mechanism, which effectively maintains population diversity, and the embedded differential evolution strategy, which enhances fine-grained local search capability.

3.4. Statistical Analysis

Statistical evaluation is an essential component of optimization research, as it provides a rigorous and systematic framework for objectively assessing and contrasting the performance of different algorithms. Such analyses enable researchers to draw reliable, data-driven conclusions when identifying the most appropriate method for a given application. In this subsection, the effectiveness of the proposed MSECBSO is examined through two widely adopted nonparametric statistical tests: the Wilcoxon rank-sum test and the Friedman test. A comprehensive description of the testing methodology, along with an interpretation of the obtained results, is presented.

3.4.1. Effectiveness Analysis

In this section, the overall effectiveness (OE) of MSECBSO and the competing algorithms is evaluated using Equation (14). The parameters involved are defined as follows: $M$ denotes the total number of test functions, $W_i$ represents the number of test functions on which MSECBSO outperforms the competing algorithm, $T_i$ denotes the number of test functions on which both algorithms achieve comparable performance, and $L_i$ indicates the number of test functions on which MSECBSO performs worse than the competing algorithm [42].
The overall effectiveness is calculated as
$$OE_i\% = \frac{M - L_i - T_i}{M} \times 100\% = \frac{W_i}{M} \times 100\%,$$
where a higher $OE_i$ value corresponds to stronger overall performance.
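Equation (14) reduces to a simple tally over the win–tie–loss counts against each competitor; a minimal sketch (the function name is ours):

```python
def overall_effectiveness(wins, ties, losses):
    """Overall effectiveness (OE) of Equation (14) from a win/tie/loss tally
    against one competitor: OE = W_i / M * 100, with M = W_i + T_i + L_i
    test functions."""
    m = wins + ties + losses
    return wins / m * 100.0
```

For example, a 29-win, 0-tie, 1-loss tally over 30 functions (as in several of the 30-dimensional CEC2017 comparisons described below) yields an OE of roughly 96.7% for that single pairing; the values reported in Table 7 aggregate such tallies over all competitors.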
By statistically summarizing the “win–tie–loss” distributions under different benchmark suites and dimensions, Table 7 provides a global and quantitative assessment of algorithmic effectiveness, offering an intuitive yet rigorous statistical basis for performance evaluation.
In the 30-dimensional scenario of the CEC2017 test suite, MSECBSO demonstrates a strong and highly generalizable performance advantage. When compared with eight algorithms including PSO, DE, and KEO, MSECBSO performs worse on only one test function, while outperforming the competitors on the remaining 29 functions. In comparison with HBO and CBSO, MSECBSO maintains superiority on all 30 test functions, with no ties or losses observed. Under this setting, the overall effectiveness of MSECBSO reaches 95.19%, which clearly confirms its strong adaptability to various function types in low- and medium-dimensional complex optimization problems.
When the dimensionality increases to 100, the effectiveness of MSECBSO further improves to 95.53%. Specifically, MSECBSO achieves full superiority on all 30 test functions when compared with DE, HBO, and CBSO. In comparisons with GWO and KEO, MSECBSO performs worse on only one test function, while in comparisons with PSO, SSA, and SO, it underperforms on only two test functions. These results indicate that, as the dimensionality of the optimization problem increases, the multi-strategy cooperative mechanism of MSECBSO—particularly the multi-elite cooperative guidance and the soft boundary rebound strategy—effectively alleviates the loss of population diversity and search direction bias in high-dimensional search spaces, leading to increasingly pronounced performance advantages.
On the CEC2022 test suite under the 10-dimensional setting, MSECBSO achieves an overall effectiveness of 97.22%, which is the highest value among all test scenarios. In comparisons with five algorithms including PSO, GWO, KEO, HBO, and CBSO, MSECBSO achieves full superiority on all 12 test functions. When compared with DE, only one test function results in a tie. In the comparison with SO, MSECBSO outperforms on nine test functions and underperforms on three, while in the comparison with SSA, the results are evenly split at six wins and six losses. This is the only comparison scenario in which MSECBSO does not achieve a majority advantage; nevertheless, it still demonstrates competitive adaptability across a subset of functions.
When the dimensionality increases to 20, the overall effectiveness of MSECBSO slightly decreases to 93.52%, yet it remains dominant. MSECBSO continues to achieve full superiority on all 12 test functions when compared with PSO, DE, KEO, HBO, and CBSO. In the comparison with GWO, MSECBSO outperforms on 11 test functions and underperforms on one. Against SO, it outperforms on 10 test functions and underperforms on two, while in comparison with BBO, it outperforms on eight test functions and underperforms on four. This variation suggests that the function construction in the CEC2022 test suite is more challenging, particularly under the 20-dimensional setting, where higher demands are placed on the dynamic adaptability of optimization algorithms. Nevertheless, MSECBSO maintains superiority on the vast majority of test functions, validating its robust performance on this new benchmark suite.
Overall, these results convincingly demonstrate that the multi-strategy cooperative framework of MSECBSO effectively compensates for the limitations of standard CBSO in terms of cooperative guidance, local exploitation accuracy, and boundary handling. As a result, MSECBSO exhibits strong stability and general applicability, consistently maintaining significant performance advantages across different dimensions and diverse categories of benchmark functions.

3.4.2. Friedman Mean Rank Test

This subsection employs the Friedman test to determine the overall ranking performance of the proposed MSECBSO in comparison with the competing algorithms. The Friedman test is a nonparametric statistical approach designed to detect significant differences in median performance among three or more related samples. It is particularly suitable for experimental scenarios involving repeated evaluations or block designs, and it is commonly used as an alternative to one-way ANOVA when the assumption of normal data distribution is violated.
The Friedman test statistic is computed according to Equation (15) [43]:
$$Q = \frac{12}{nk(k+1)} \sum_{j=1}^{k} R_j^2 - 3n(k+1),$$
where $n$ denotes the number of blocks, $k$ represents the number of compared algorithms, and $R_j$ corresponds to the total rank assigned to the $j$-th algorithm. For sufficiently large values of $n$ and $k$, the statistic $Q$ approximately follows a chi-square ($\chi^2$) distribution with $k-1$ degrees of freedom.
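Equation (15) can be evaluated directly from a table of per-function ranks; a minimal sketch (the helper name and the rank-table layout are ours):

```python
def friedman_statistic(ranks):
    """Friedman test statistic Q of Equation (15).
    ranks[i][j] is the rank of algorithm j on block (test function) i;
    n = number of blocks, k = number of algorithms, R_j = total rank."""
    n = len(ranks)
    k = len(ranks[0])
    totals = [sum(row[j] for row in ranks) for j in range(k)]  # R_j
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in totals) - 3.0 * n * (k + 1)
```

When every block ranks the algorithms identically, Q attains its maximum n(k−1); when all total ranks are equal, Q is zero and no difference is detected.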
The algorithm ranking distribution shown in Figure 8, together with the Friedman mean ranking test results reported in Table 8, further validates the comprehensive optimization superiority of MSECBSO from the perspective of statistical significance. By employing both the mean rank (M.R) and the total rank (T.R) as dual evaluation indicators, the performance hierarchy of the compared algorithms across different test scenarios is intuitively and quantitatively characterized.
According to the statistical results reported in Table 8, MSECBSO consistently ranks first in terms of total rank (T.R) across all test suites and dimensional settings, while its mean rank (M.R) is significantly lower than those of the competing algorithms, exhibiting strong consistency and stability. In the 30-dimensional scenario of the CEC2017 test suite, the mean rank of MSECBSO is only 1.43, which is substantially lower than that of the second-ranked BBO (2.63) and the third-ranked KEO (4.27). In contrast, the original CBSO attains a mean rank as high as 8.93, ranking tenth overall, indicating a remarkably large performance gap. When the dimensionality increases to 100, the mean rank of MSECBSO slightly rises to 1.90 but still remains the best among all algorithms. Meanwhile, BBO ranks second with a mean rank of 2.10, whereas CBSO continues to rank last with a mean rank of 9.17. These results indicate that the optimization advantage of MSECBSO does not deteriorate in high-dimensional search spaces; instead, the performance gap between MSECBSO and the original algorithm becomes even more pronounced.
On the CEC2022 test suite, the leading position of MSECBSO remains equally robust. In the 10-dimensional scenario, MSECBSO achieves an extremely low mean rank of 1.08, which is close to the theoretical optimum. The second-ranked BBO (3.58) and the third-ranked KEO (4.92) lag behind by more than two ranking units. In the 20-dimensional scenario, the mean rank of MSECBSO is 1.25, still securing the first position, followed by BBO (2.67) and SO (4.75) in second and third place, respectively. In contrast, the mean rank of CBSO drops to 7.67, ranking tenth. These results demonstrate that even when facing the more challenging and dynamically complex functions in the CEC2022 test suite, the multi-strategy cooperative mechanism of MSECBSO remains highly effective, enabling it to maintain optimal overall performance.
The function-wise ranking distribution shown in Figure 8 further reveals the generalization capability of MSECBSO across different types of benchmark functions. In the CEC2017 100-dimensional scenario, the ranking distribution indicates that MSECBSO consistently ranks first on all functions from F1 to F30, with no ranking degradation observed. In contrast, algorithms such as CBSO and PSO are predominantly ranked between seven and ten on most functions, showing only slight improvements on a few relatively simple cases. Similarly, in the CEC2022 10-dimensional scenario, MSECBSO maintains the first rank on all functions from F1 to F12, whereas algorithms such as SSA and SO only occasionally enter the top five and exhibit large performance fluctuations. This full-function coverage in ranking superiority clearly demonstrates that MSECBSO not only excels in overall performance but also maintains stable optimal behavior across unimodal, multimodal, hybrid, and composition functions, with significantly stronger generality and reliability than the competing algorithms.
Overall, the statistical results of the Friedman mean ranking test are highly consistent with the function-wise ranking distributions, jointly validating the comprehensive superiority of MSECBSO in terms of optimization accuracy, convergence speed, and robustness from both quantitative and qualitative perspectives. This advantage primarily stems from the precise control of search directions provided by the multi-elite cooperative guidance strategy, the enhanced local exploitation enabled by the embedded differential evolution mechanism, and the effective maintenance of population diversity through the soft boundary rebound strategy. The synergistic integration of these components allows MSECBSO to adapt effectively to optimization problems of varying dimensionalities and function characteristics, exhibiting strong environmental adaptability and performance stability.

3.5. Computational Efficiency Analysis

To evaluate the computational overhead of MSECBSO, the average single-run execution time of MSECBSO and the original CBSO was recorded on the CEC2017 benchmark suite with 30 dimensions. The experimental configuration included a population size of 30 and a maximum of 500 iterations. All experiments were conducted on a hardware platform equipped with an AMD Ryzen 7 9700X processor and 48 GB of RAM, using MATLAB 2024b as the software environment. The corresponding results are presented in Figure 9.
As can be observed from Figure 9, the execution times of both MSECBSO and the original CBSO vary systematically with the complexity of the test functions, while the difference in time consumption between the two algorithms remains within a reasonable range. For simple unimodal functions (e.g., F6 and F10), the average runtime of MSECBSO ranges from 0.172 to 0.181 s, compared with 0.175 to 0.188 s for the original CBSO, with a difference of less than 0.02 s. On moderately complex multimodal functions (e.g., F9 and F14), the runtime of MSECBSO lies between 0.221 and 0.307 s, whereas that of the original CBSO ranges from 0.216 to 0.287 s, corresponding to a time increase of approximately 5–7%. Even for highly complex hybrid and composition functions (e.g., F18 and F30), the maximum average runtime of MSECBSO reaches 0.904 s, compared with 0.805 s for the original CBSO, and the increase does not exceed 12%.
The slight increase in computational cost is mainly attributed to the additional operations introduced in MSECBSO, including the construction of multi-elite guidance vectors, differential evolution-based mutation and crossover, and the soft boundary rebound checking mechanism. However, these extra computations lead to substantial performance improvements, as demonstrated in Section 3.2, Section 3.3 and Section 3.4. From a cost–performance perspective, MSECBSO achieves comprehensive enhancements in convergence accuracy, robustness, and stability at the expense of no more than a 12% increase in computational cost. This trade-off is particularly advantageous for high-dimensional and complex optimization problems and fully satisfies the efficiency requirements of engineering optimization and practical applications.
Furthermore, a comparison of the average runtime across all 30 test functions shows that the overall mean execution time of MSECBSO is 0.312 s, while that of the original CBSO is 0.286 s, corresponding to a global time increase of only 9.1%. This result further confirms that MSECBSO maintains good computational efficiency while incorporating multiple improvement strategies, without introducing excessive algorithmic complexity.

4. Corporate Bankruptcy Forecasting

4.1. Mathematical Model of KNN

The K-Nearest Neighbors (KNN) algorithm is a classical instance-based and non-parametric classification method. Its core idea is as follows: for a given query sample, the distances between the query and all samples in the training set are computed, the K nearest neighbors are selected, and the class label of the query sample is determined by majority voting among these neighbors [44]. Since KNN does not require explicit training of complex model parameters, it features simplicity of implementation and strong interpretability, making it particularly suitable for classification and prediction tasks on small- to medium-sized datasets.
Let the training dataset be defined as
$$D = \{(x_i, y_i)\}_{i=1}^{N}, \qquad x_i \in \mathbb{R}^d, \; y_i \in \{0, 1\},$$
where $y_i = 1$ denotes a bankrupt enterprise and $y_i = 0$ denotes a non-bankrupt enterprise. For an arbitrary query sample $x$, the distance between $x$ and each training sample $x_i$ is computed. Commonly used distance measures include the Euclidean distance, Manhattan distance, and the more general Minkowski distance:
$$D_p(x, x_i) = \left( \sum_{j=1}^{d} |x_j - x_{ij}|^p \right)^{1/p},$$
where $p = 2$ corresponds to the Euclidean distance and $p = 1$ corresponds to the Manhattan distance.
After sorting the distances in ascending order, the set of the $K$ nearest neighbors, denoted $N_K(x)$, is selected. The basic voting rule of KNN is defined as
$$\hat{y} = \arg\max_{c \in \{0, 1\}} \sum_{x_i \in N_K(x)} I(y_i = c),$$
where $I(\cdot)$ is the indicator function.
To improve robustness, a distance-weighted voting strategy can also be employed, in which closer neighbors are assigned larger weights:
$$\hat{y} = \arg\max_{c \in \{0, 1\}} \sum_{x_i \in N_K(x)} w_i \, I(y_i = c), \qquad w_i = \frac{1}{D(x, x_i) + \varepsilon},$$
where $\varepsilon$ is a small positive constant introduced to avoid division by zero.
The predictive performance of the KNN algorithm is highly sensitive to several key hyperparameters. Among them, the number of neighbors K plays a critical role: a small K value makes the model susceptible to noise, whereas a large K value may lead to excessive smoothing and the loss of local discriminative information. In addition, the choice of distance metric (e.g., the parameter p in the Minkowski distance, or the direct use of Euclidean or Manhattan distance) and the voting scheme (equal-weight voting or distance-weighted voting) can also significantly affect classification performance [44,45]. Therefore, employing efficient and intelligent optimization algorithms to adaptively tune the hyperparameters of KNN is an effective approach to improving the accuracy and stability of bankruptcy prediction models.
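The Minkowski distance and the distance-weighted vote defined above can be sketched in a few lines of Python (the function names and the toy data are ours, not from the paper):

```python
def minkowski(x, xi, p=2):
    """Minkowski distance D_p; p=2 gives Euclidean, p=1 Manhattan."""
    return sum(abs(a - b) ** p for a, b in zip(x, xi)) ** (1.0 / p)

def knn_predict(x, data, k=3, p=2, weighted=True, eps=1e-9):
    """KNN vote over binary labels {0, 1}: select the k nearest training
    samples, then vote with weight 1/(D + eps) (or equal weights)."""
    neighbors = sorted(data, key=lambda s: minkowski(x, s[0], p))[:k]
    votes = {0: 0.0, 1: 0.0}
    for xi, yi in neighbors:
        w = 1.0 / (minkowski(x, xi, p) + eps) if weighted else 1.0
        votes[yi] += w
    return max(votes, key=votes.get)
```

With `data = [([0, 0], 0), ([0, 1], 0), ([5, 5], 1), ([5, 6], 1)]`, a query near the first cluster is assigned label 0 and a query near the second cluster label 1; changing `k`, `p`, or `weighted` shifts these decisions, which is exactly the hyperparameter sensitivity the following subsection addresses.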

4.2. Construction of the MSECBSO-KNN Bankruptcy Prediction Model

As described in Section 4.1, KNN measures the similarity between samples and makes classification decisions based on the class information of the nearest neighbors in the training set. Owing to its simple structure and the absence of a complex training process, it has been widely applied to small- and medium-scale datasets, particularly in financial risk and bankruptcy prediction tasks [44]. However, the classification performance of KNN is highly sensitive to the selection of model hyperparameters, especially the number of neighbors K and the choice of distance metric. Inappropriate parameter settings may lead to degraded prediction accuracy or insufficient model stability.
To address these issues, this study introduces the MSECBSO to perform global optimization of the key hyperparameters of the KNN model, thereby constructing an algorithm-optimized bankruptcy prediction framework referred to as MSECBSO-KNN. In this framework, MSECBSO is employed to search for the optimal combination of the neighbor number K and the distance metric parameter within a predefined parameter space, with the objective of improving the classification accuracy and generalization capability of KNN in bankruptcy prediction tasks.
Specifically, each individual position vector in MSECBSO represents a candidate KNN parameter configuration, consisting of the neighbor number K and the distance order p . By iteratively updating the positions of population individuals, MSECBSO continuously evaluates the classification performance of different parameter combinations on the training dataset and progressively converges toward the optimal solution. Once the maximum number of iterations is reached or a convergence criterion is satisfied, the algorithm outputs the optimal KNN parameter configuration. Subsequently, a KNN classifier is constructed using these optimal parameters on the training set and is applied to bankruptcy prediction.
By integrating an intelligent optimization algorithm with the KNN classifier, the proposed MSECBSO-KNN model effectively overcomes the reliance of traditional KNN on empirically selected parameters and enables adaptive hyperparameter optimization. As a result, the model provides a more robust and high-precision classification framework for bankruptcy prediction. The search ranges of the KNN hyperparameters, including the neighbor number K and the distance order p , are summarized in Table 9.
The search ranges of the KNN hyperparameters were determined based on commonly adopted settings reported in the related literature, together with empirical considerations regarding the characteristics of the dataset. Specifically, the range of the number of nearest neighbors was selected to be sufficiently broad to capture different neighborhood structures while avoiding excessively large values that may lead to over-smoothing and increased computational cost. In addition, preliminary experiments were conducted to verify that the chosen ranges provide stable and reasonable performance. Overall, these settings ensure a good balance between search effectiveness, computational feasibility, and reproducibility.
Moreover, the average classification error of KNN obtained from the inner cross-validation is adopted as the optimization objective of the MSECBSO algorithm, which is defined as
$$\min f(x) = \frac{1}{K_{cv}} \sum_{k=1}^{K_{cv}} \frac{E_k}{N_k},$$
where $K_{cv}$ denotes the number of folds in the inner cross-validation (e.g., 5-fold), $E_k$ represents the number of misclassified samples in the validation set of the $k$-th fold, and $N_k$ is the total number of samples in that fold.
During the optimization process, MSECBSO iteratively updates the population individual positions $X$ to continuously reduce the above objective function, thereby obtaining the optimal KNN hyperparameter configuration $X_{best}$.
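The inner-CV objective that MSECBSO minimizes for each candidate (K, p) pair can be sketched as follows; the function signature and the callback-style classifier interface are illustrative assumptions, not the paper's implementation:

```python
def cv_error_objective(params, folds, classify):
    """Inner cross-validation objective: mean misclassification rate
    (1/K_cv) * sum_k E_k / N_k over the folds.
    `params` is a candidate hyperparameter tuple, e.g. (K, p), decoded from
    an MSECBSO position vector; `folds` is a list of (train_set, valid_set)
    pairs; `classify(train_set, params)` returns a predictor function."""
    total = 0.0
    for train, valid in folds:
        predict = classify(train, params)
        errors = sum(1 for x, y in valid if predict(x) != y)  # E_k
        total += errors / len(valid)                          # E_k / N_k
    return total / len(folds)
```

Each MSECBSO individual is evaluated by this function during the search, and the position with the lowest mean validation error is returned as the optimal KNN configuration.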
To ensure the reliability and unbiasedness of the case study results, an outer 10-fold cross-validation is employed to evaluate the classification performance, while the inner cross-validation is used for optimizing the key hyperparameters of the classifier. Meanwhile, stratified sampling is adopted to ensure that the proportions of bankrupt and non-bankrupt enterprises remain consistent across all folds. In addition, to alleviate the randomness introduced by data partitioning, the entire procedure is repeated multiple independent times, and the averaged results are reported as the final performance metrics.
Accordingly, the overall workflow of the proposed MSECBSO-KNN classification and prediction model is illustrated in Figure 10.

4.3. Experiments and Results Analysis

To verify the effectiveness of the proposed MSECBSO-KNN bankruptcy prediction model, experiments are conducted on the Wieslaw and JPNBDS financial datasets. Both datasets contain multiple key financial indicators that reflect corporate operating conditions and financial risk, making them suitable for evaluating the performance of bankruptcy prediction models.
During the experimental process, the original financial data are first normalized to eliminate the influence of different scales and units on model training and prediction results. Subsequently, a 10-fold cross-validation strategy is employed to partition the dataset, where each fold is used in turn as the test set and the remaining samples are used as the training set, thereby ensuring the reliability and stability of the experimental results. In each training round, MSECBSO is applied to optimize the parameters of the KNN model on the training set. Based on the optimal parameter configuration, a KNN classifier is constructed and used to perform bankruptcy prediction on the corresponding test set. Finally, the prediction results obtained from the 10 folds are aggregated to derive the overall predictive performance of the model on the entire dataset.
To comprehensively evaluate the classification capability of the proposed model, multiple performance metrics are adopted for quantitative analysis, including classification accuracy (Accuracy, ACC), Matthews correlation coefficient (MCC), recall (Recall), and F1-score (F1). These metrics assess the model performance from different perspectives, reflecting its overall effectiveness in identifying bankrupt enterprises and distinguishing non-bankrupt enterprises, thus enabling an objective evaluation of prediction performance.
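For reference, the four metrics can be computed directly from the binary confusion matrix (treating the bankrupt class as the positive class, label 1). The following is a minimal self-contained sketch of those standard definitions:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """ACC, MCC, Recall and F1 from the binary confusion matrix (1 = bankrupt)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return float(acc), float(mcc), float(recall), float(f1)
```

Unlike ACC, the MCC term balances all four confusion-matrix cells, which is why it is informative under the class imbalance typical of bankruptcy data.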
In addition, to reduce the impact of random factors on the experimental results, the entire experimental procedure is repeated for 20 independent runs under the same parameter settings, and the average results along with the corresponding statistical characteristics are reported. By comparing the proposed model with KNN classifiers optimized by other optimization algorithms, the advantages of MSECBSO in KNN hyperparameter optimization can be systematically evaluated. The experimental results are illustrated in Figure 11, Figure 12, Figure 13 and Figure 14.
Figure 11 illustrates the variation curves of the fitness value (1 − ACC) obtained by different optimization algorithms on the Wieslaw dataset. MSECBSO exhibits the fastest decrease in fitness, dropping below 0.13 after approximately 40 iterations and stabilizing around 0.125 after 100 iterations. In contrast, the fitness values of the original CBSO, PSO, GWO, and the other algorithms eventually stabilize within the range of 0.14–0.17, with CBSO reaching a final fitness value of 0.158. Compared with CBSO, MSECBSO achieves a 21% reduction in the final fitness value, indicating faster convergence and higher optimization accuracy during KNN hyperparameter tuning.
The boxplots in Figure 12 quantitatively compare the classification performance metrics of different models. In terms of accuracy (ACC), MSECBSO-KNN achieves an average ACC of 0.872, which is significantly higher than that of the original KNN (0.843), CBSO-KNN (0.855), and other optimized models (0.845–0.863). Moreover, the distribution of the boxplot is highly concentrated, with a standard deviation of only 0.008, demonstrating excellent stability. For the Matthews correlation coefficient (MCC), MSECBSO-KNN attains an average value of 0.743, representing an improvement of 2.8% over the second-ranked BBO-KNN (0.721) and 3.9% over CBSO-KNN (0.715), which reflects its advantage in handling class imbalance between positive and negative samples. In terms of recall, MSECBSO-KNN ranks first with an average value of 0.886, achieving a 6.5% improvement over the original KNN (0.832) and ensuring strong identification capability for bankrupt enterprises. Regarding the F1-score, MSECBSO-KNN reaches an average value of 0.862, outperforming all comparison models and further confirming its superior overall classification performance.
The JPNBDS dataset is more complex and imposes stricter requirements on model generalization ability. The fitness value curves in Figure 13 show that MSECBSO quickly gains a leading advantage in the early iterations, with the fitness value dropping below 0.18 after approximately 60 iterations and finally stabilizing at 0.176. In contrast, the final fitness values of CBSO, PSO, DE, and other algorithms remain within the range of 0.19–0.21. Compared with CBSO, MSECBSO improves the optimization accuracy by 12.6%, highlighting its stronger adaptability to complex scenarios.
The boxplots of classification metrics in Figure 14 further demonstrate that MSECBSO-KNN maintains its leading performance on this dataset. The average ACC reaches 0.824, which is 5.2% higher than that of CBSO-KNN (0.783) and 11.2% higher than that of the original KNN (0.741). The average MCC value increases to 0.638, representing a 5.5% improvement over the second-ranked KEO-KNN (0.605), effectively mitigating classification bias caused by data imbalance. In terms of recall, MSECBSO-KNN again ranks first with an average value of 0.872, ensuring effective identification of high-risk enterprises. The F1-score reaches 0.831, which is 4.5% higher than that of CBSO-KNN (0.795), verifying its stable performance on complex financial data.
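The percentage comparisons reported above follow the usual relative-change convention, which can be made explicit as two small helpers (lower-is-better for the fitness value, higher-is-better for ACC/MCC/Recall/F1):

```python
def rel_reduction(baseline, value):
    """Relative reduction (%) of a lower-is-better quantity, e.g. fitness 1 - ACC."""
    return 100.0 * (baseline - value) / baseline

def rel_gain(baseline, value):
    """Relative gain (%) of a higher-is-better quantity, e.g. ACC or MCC."""
    return 100.0 * (value - baseline) / baseline
```

For example, the 21% fitness reduction on the Wieslaw dataset corresponds to `rel_reduction(0.158, 0.125)`, and the 5.2% ACC gain over CBSO-KNN on JPNBDS corresponds to `rel_gain(0.783, 0.824)`.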
Overall, the experimental results on both datasets demonstrate that the MSECBSO-KNN model consistently exhibits superior bankruptcy prediction performance across financial datasets with different levels of complexity. On the one hand, the multi-strategy synergistic optimization mechanism of MSECBSO enables rapid identification of optimal KNN hyperparameter combinations (i.e., the number of neighbors K and the distance metric parameter p), thereby simultaneously enhancing classification accuracy and stability. On the other hand, compared with other optimization algorithms, MSECBSO effectively avoids local optima and achieves stronger generalization capability on complex datasets. Whether evaluated by the core ACC metric or by balance-sensitive indicators such as MCC, Recall, and F1-score, MSECBSO-KNN consistently outperforms CBSO-KNN and other competing models, fully demonstrating the effectiveness and practical applicability of MSECBSO in real-world financial risk prediction tasks.

5. Discussion

The experimental results presented in this study demonstrate that the proposed MSECBSO exhibits strong optimization accuracy, robustness, and scalability across a wide range of benchmark functions and real-world classification tasks. Beyond numerical performance improvements, these findings carry important implications for large-scale financial optimization problems, which are typically characterized by high dimensionality, complex constraints, noisy data, and dynamically changing environments.
First, the multi-elite cooperative guidance strategy plays a crucial role in mitigating premature convergence, which is a common challenge in large-scale financial optimization scenarios such as portfolio allocation, risk exposure control, and systemic stress testing. In such problems, reliance on a single global-best solution may lead to unstable decision-making and sensitivity to local optima. By integrating information from multiple elite solutions, MSECBSO enhances decision robustness and promotes diversified search behavior, which is particularly beneficial for financial systems with multiple competing objectives and uncertainty sources.
Second, the embedded differential evolution search mechanism significantly strengthens local exploitation capability, enabling fine-grained parameter refinement in high-dimensional search spaces. This characteristic is highly relevant to financial optimization tasks involving large numbers of correlated variables, such as asset pricing models, credit risk parameter calibration, and hyperparameter tuning of financial machine learning models. The adaptive exploration–exploitation balance realized by the dynamic scaling factor allows MSECBSO to maintain efficiency even as problem dimensionality increases.
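The embedded DE step described above can be sketched as a DE/best/1/bin trial-vector construction with a time-varying scaling factor. The linear F schedule between F_min and F_max and the clipping-based feasibility repair are illustrative assumptions for this sketch; the paper's exact dynamic scaling law and boundary rule may differ.

```python
import numpy as np

def de_trial(pop, best, i, t, T, lb, ub, F_min=0.2, F_max=0.8, CR=0.8, rng=None):
    """Build one DE/best/1/bin trial vector with a time-varying scaling factor F."""
    rng = np.random.default_rng() if rng is None else rng
    N, D = pop.shape
    F = F_max - (F_max - F_min) * t / T           # large steps early, fine steps late
    r1, r2 = rng.choice([j for j in range(N) if j != i], size=2, replace=False)
    v = best + F * (pop[r1] - pop[r2])            # mutation around the best solution
    mask = rng.random(D) < CR                     # binomial crossover mask
    mask[rng.integers(D)] = True                  # ensure at least one mutated gene
    u = np.where(mask, v, pop[i])                 # trial vector U
    return np.clip(u, lb, ub)                     # simple feasibility repair (assumed)
```

The trial vector would then replace the current individual only if it improves the fitness, following the usual greedy DE selection.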
Third, the soft boundary rebound mechanism provides a flexible and realistic constraint-handling approach that aligns well with financial optimization contexts. Financial decision variables often operate within strict regulatory, budgetary, or risk-control boundaries, yet hard truncation may distort the underlying optimization dynamics. The proposed soft boundary strategy preserves feasibility while preventing boundary aggregation, thereby improving solution stability and interpretability in constrained financial environments.
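A minimal sketch of such a soft rebound is shown below: each out-of-bound component bounces a random fraction of its overshoot back into the feasible region instead of being truncated onto the bound. The random damping factor is an illustrative assumption; the paper's exact rebound rule may differ.

```python
import numpy as np

def soft_rebound(x, lb, ub, rng=None):
    """Reflect out-of-bound components back inside with a random damping factor."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float).copy()
    low, high = x < lb, x > ub                 # locate boundary violations first
    r = rng.random(x.shape)
    x[low] = lb + r[low] * (lb - x[low])       # rebound just above the lower bound
    x[high] = ub - r[high] * (x[high] - ub)    # rebound just below the upper bound
    return np.clip(x, lb, ub)                  # guard against extreme overshoots
```

Because rebounded components land at varying depths inside the feasible region, the population avoids the boundary aggregation that hard truncation produces.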
Finally, the successful application of MSECBSO to corporate bankruptcy forecasting demonstrates its capability to bridge theoretical optimization design and practical financial analytics. This indicates that MSECBSO is not only suitable for abstract benchmark problems but also has strong potential for broader financial optimization tasks, including credit scoring, financial distress early warning systems, and intelligent decision-support frameworks in large-scale financial systems.

6. Conclusions and Future Work

This paper proposed a Multi-Strategy Enhanced Connected Banking System Optimizer (MSECBSO) to address the inherent limitations of the original CBSO algorithm when solving complex, high-dimensional optimization problems. The main algorithmic novelty of MSECBSO lies in the synergistic integration of three complementary mechanisms: a multi-elite cooperative guidance strategy to alleviate search bias and maintain population diversity, an embedded differential evolution search mechanism to enhance local exploitation accuracy, and a soft boundary rebound strategy to improve constraint handling and convergence stability. These innovations collectively enable MSECBSO to achieve a more effective and balanced exploration–exploitation process than the standard CBSO framework.
Extensive experiments on the CEC2017 and CEC2022 benchmark suites demonstrate that MSECBSO consistently outperforms several state-of-the-art metaheuristic algorithms in terms of convergence accuracy, robustness, and statistical stability across different dimensional settings and function categories. In addition, the application novelty of this study is reflected in the successful deployment of MSECBSO for corporate bankruptcy forecasting, where it is used to optimize key hyperparameters of a KNN classifier. The resulting MSECBSO-KNN model achieves superior predictive accuracy, robustness, and generalization capability compared with competing optimization-based KNN models, confirming the practical value of the proposed algorithm in real-world financial risk prediction tasks.
Despite these promising results, several directions for future research merit further investigation. First, extending MSECBSO to multi-objective financial optimization problems, such as risk–return trade-offs in portfolio optimization or cost–performance balancing in financial decision systems, represents a natural and meaningful progression. Second, adapting MSECBSO to dynamic and time-varying financial environments, where market conditions and constraints evolve over time, would further enhance its applicability to real-world financial systems. Third, integrating MSECBSO with advanced learning models, including deep neural networks and ensemble financial prediction frameworks, may offer additional performance gains for large-scale and data-intensive financial applications.
Overall, the proposed MSECBSO provides a robust and scalable optimization framework that effectively bridges methodological innovation and financial application, offering a promising foundation for future research in intelligent financial optimization and decision-making.

Author Contributions

Conceptualization, Y.Z.; methodology, Y.Z.; software, Y.Z.; validation, Y.Z.; formal analysis, Y.Z.; investigation, Y.Z.; resources, Y.Z.; data curation, Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, X.Y.; visualization, Y.Z. and X.Y.; supervision, X.Y.; funding acquisition, X.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to express their sincere gratitude to all those who contributed to the completion of this work.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

The nomenclature (symbol index annotation table) for this paper is as follows:
Symbol | Definition
N | population size
D | problem dimension
T | maximum number of iterations
t | iteration index
X_i | position (candidate solution) of the i-th individual
f(·) | objective (fitness) function
X_best | global best solution found so far
E | elite set (top-performing individuals)
G | multi-elite guiding vector
F | mutation scaling factor
CR | crossover rate
V | mutation vector
U | trial vector
LB, UB | lower and upper bounds of decision variables
K | number of neighbors in KNN
p | Minkowski distance order (p = 1: Manhattan; p = 2: Euclidean)
ACC/MCC/Recall/F1 | classification metrics used in bankruptcy prediction

References

  1. Dannina, K.; Nandan, D.; Meenakshi, K.; Mahajan, A. A multi-class SVM based CBIR system using Forest Optimization algorithm and Firefly algorithm. Eng. Res. Express 2025, 7, 035239. [Google Scholar] [CrossRef]
  2. Habib, M.; Vicente-Palacios, V.; García-Sánchez, P. Bio-inspired optimization of feature selection and SVM tuning for voice disorders detection. Knowl.-Based Syst. 2025, 310, 112950. [Google Scholar] [CrossRef]
  3. Ji, Y.; Lu, C.; Liu, L.; Heidari, A.A.; Wu, C.; Chen, H. Advancing bankruptcy prediction: A study on an improved rime optimization algorithm and its application in feature selection. Int. J. Mach. Learn. Cybern. 2025, 16, 3461–3499. [Google Scholar] [CrossRef]
  4. Wu, X.; Zhang, Y.; Shi, M.; Li, P.; Li, R.; Xiong, N.N. An adaptive federated learning scheme with differential privacy preserving. Future Gener. Comput. Syst. 2022, 127, 362–372. [Google Scholar] [CrossRef]
  5. Wang, H.; Zhang, X.; Xia, Y.; Wu, X. An intelligent blockchain-based access control framework with federated learning for genome-wide association studies. Comput. Stand. Interfaces 2022, 84, 103694. [Google Scholar] [CrossRef]
  6. Wu, X.; Dong, J.; Bao, W.; Zou, B.; Wang, L.; Wang, H. Augmented Intelligence of Things for Emergency Vehicle Secure Trajectory Prediction and Task Offloading. IEEE Internet Things J. 2024, 11, 36030–36043. [Google Scholar] [CrossRef]
  7. Ma, R.-H.; Luo, X.-B.; Liu, Y.P.; Li, Q. Clonorchis sinensis infection are associated with calcium phosphate gallbladder stones: A single-center retrospective observational study in China. Medicine 2025, 104, e45739. [Google Scholar] [CrossRef]
  8. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  9. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  10. Dorigo, M.; Birattari, M.; Stützle, T. Ant Colony Optimization. In Computational Intelligence Magazine; IEEE: Piscataway, NJ, USA, 2006; Volume 1, pp. 28–39. [Google Scholar]
  11. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  12. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  13. Qi, H. Location-Robust Cost-Preserving Blended Pricing for Multi-Campus AI Data Centers. arXiv 2025, arXiv:2512.14197. [Google Scholar]
  14. Qi, H.; Chunyu, Q. Modular Landfill Remediation for AI Grid Resilience. arXiv 2025, arXiv:2512.19202. [Google Scholar] [CrossRef]
  15. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  16. Fan, D.; Zhu, X.; Xiang, Z.; Lu, Y.; Quan, L. Dimension-Reduction Many-Objective Optimization Design of Multimode Double-Stator Permanent Magnet Motor. IEEE Trans. Transp. Electrif. 2024, 11, 1984–1994. [Google Scholar] [CrossRef]
  17. Fan, D.; Miao, D.; Shan, W.; Xiang, Z.; Zhu, X. Short-Circuit Fault Demagnetization Assessment and Optimization of Double-Electrical-Port Vernier Permanent Magnet Motor. IEEE Trans. Ind. Appl. 2025, 1–10. [Google Scholar] [CrossRef]
  18. Qi, H. A Unified Metric Architecture for AI Infrastructure: A Cross-Layer Taxonomy Integrating Performance, Efficiency, and Cost. arXiv 2025, arXiv:2511.21772. [Google Scholar]
  19. Braik, M.S. Chameleon Swarm Algorithm: A bio-inspired optimizer for solving engineering design problems. Expert Syst. Appl. 2021, 174, 114685. [Google Scholar] [CrossRef]
  20. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  21. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2022, 79, 7305–7336. [Google Scholar] [CrossRef]
  22. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123. [Google Scholar] [CrossRef]
  23. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  24. Fu, S.; Li, K.; Huang, H.; Ma, C.; Fan, Q.; Zhu, Y. Red-billed blue magpie optimizer: A novel metaheuristic algorithm for 2D/3D UAV path planning and engineering design problems. Artif. Intell. Rev. 2024, 57, 134. [Google Scholar] [CrossRef]
  25. Wang, T.-L.; Gu, S.-W.; Liu, R.-J.; Chen, L.-Q.; Wang, Z.; Zeng, Z.-Q. Cuckoo catfish optimizer: A new meta-heuristic optimization algorithm. Artif. Intell. Rev. 2025, 58, 326. [Google Scholar] [CrossRef]
  26. Hassan, M.H.; Kamel, S. Supercell thunderstorm algorithm (STA): A nature-inspired metaheuristic algorithm for engineering optimization. Neural Comput. Appl. 2025, 37, 7207–7260. [Google Scholar] [CrossRef]
  27. Qi, H.; Chunyu, Q. Waste-to-Energy-Coupled AI Data Centers: Cooling Efficiency and Grid Resilience. arXiv 2025, arXiv:2512.24683. [Google Scholar]
  28. Yan, J.; Cheng, Y.; Zhang, F.; Li, M.; Zhou, N.; Jin, B.; Wang, H.; Yang, H.; Zhang, W. Research on multimodal techniques for arc detection in railway systems with limited data. Struct. Health Monit. 2025. [Google Scholar] [CrossRef]
  29. Wang, H.; Song, Y.; Yang, H.; Liu, Z. Generalized Koopman Neural Operator for Data-Driven Modeling of Electric Railway Pantograph–Catenary Systems. IEEE Trans. Transp. Electrif. 2025, 11, 14100–14112. [Google Scholar] [CrossRef]
  30. Song, J.; Xu, H.; Li, J.; Zhang, S. Demand-driven kNN classification. Knowl.-Based Syst. 2025, 7, 15. [Google Scholar] [CrossRef]
  31. Veganzones, D.; Séverin, E.; Ben Jabeur, S. Forecasting corporate bankruptcy in imbalanced datasets using a new hybrid machine learning approach. Res. Int. Bus. Financ. 2025, 81, 103200. [Google Scholar] [CrossRef]
  32. Fedorova, E.; Gilenko, E.; Dovzhenko, S. Bankruptcy prediction for Russian companies: Application of combined classifiers. Expert Syst. Appl. 2013, 40, 7285–7293. [Google Scholar] [CrossRef]
  33. Murad, S.H.; Tayfor, N.B.; Mahmood, N.H.; Arman, L. Hybrid genetic algorithms-driven optimization of machine learning models for heart disease prediction. MethodsX 2025, 15, 103510. [Google Scholar] [CrossRef]
  34. Nemati, M.; Zandi, Y.; Sabouri, J. Application of a novel metaheuristic algorithm inspired by connected banking system in truss size and layout optimum design problems and optimization problems. Sci. Rep. 2024, 14, 27345. [Google Scholar] [CrossRef]
  35. Wu, G.; Mallipeddi, R.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition and Special Session on Constrained Single Objective Real-Parameter Optimization; Technical Report; Nanyang Technological University: Singapore, 2016. [Google Scholar]
  36. Luo, W.; Lin, X.; Li, C.; Yang, S.; Shi, Y. Benchmark functions for CEC 2022 competition on seeking multiple optima in dynamic environments. arXiv 2022, arXiv:2201.00523. [Google Scholar] [CrossRef]
  37. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  38. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  39. Almutairi, S.Z.; Shaheen, A.M. A novel kangaroo escape optimizer for parameter estimation of solar photovoltaic cells/modules via one, two and three-diode equivalent circuit modeling. Sci. Rep. 2025, 15, 32669. [Google Scholar] [CrossRef] [PubMed]
  40. Ouyang, K.; Wei, D.; Sha, X.; Yu, J.; Zhao, Y.; Qiu, M.; Fu, S.; Heidari, A.A.; Chen, H. Beaver behavior optimizer: A novel metaheuristic algorithm for solar PV parameter identification and engineering problems. J. Adv. Res. 2025, in press. [Google Scholar] [CrossRef]
  41. Ouertani, M.W.; Manita, G.; Korbaa, O. Hannibal Barca optimizer: The power of the pincer movement for global optimization and multilevel image thresholding. Clust. Comput. 2025, 28, 482. [Google Scholar] [CrossRef]
  42. Fu, S.; Huang, H.; Ma, C.; Wei, J.; Li, Y.; Fu, Y. Improved dwarf mongoose optimization algorithm using novel nonlinear control and exploration strategies. Expert Syst. Appl. 2023, 233, 120904. [Google Scholar] [CrossRef]
  43. Cao, L.; Wei, Q. SZOA: An Improved Synergistic Zebra Optimization Algorithm for Microgrid Scheduling and Management. Biomimetics 2025, 10, 664. [Google Scholar] [CrossRef]
  44. Zhang, S. Challenges in KNN Classification. IEEE Trans. Knowl. Data Eng. 2021, 34, 4663–4675. [Google Scholar] [CrossRef]
  45. Zhang, S. Cost-sensitive KNN classification. Neurocomputing 2020, 391, 234–242. [Google Scholar] [CrossRef]
Figure 1. Variation curve of exponential attenuation factor.
Figure 2. Dynamically adjusted scaling factor curve.
Figure 3. Visual interpretation of MSECBSO’s improvement mechanisms compared with CBSO.
Figure 4. Convergence curves of the CBSO variant improved by different strategies.
Figure 5. The average ranking values of CBSO variants improved by different strategies.
Figure 6. Comparative analysis of convergence rates among the evaluated algorithms.
Figure 7. Distributional characteristics of algorithm outcomes on the benchmark problems.
Figure 8. Ranking of different algorithms across various functions.
Figure 9. Average running time (s) of compared MSECBSO and CBSO.
Figure 10. Flowchart of the MSECBSO-KNN classification prediction model.
Figure 11. Fitness value variation curves of different algorithms on Wieslaw dataset.
Figure 12. Box plots of classification metrics for KNN classifiers optimized by different algorithms on Wieslaw dataset.
Figure 13. Fitness value variation curves of different algorithms on JPNBDS dataset.
Figure 14. Box plots of classification metrics for KNN classifiers optimized by different algorithms on JPNBDS dataset.
Table 1. Representative metaheuristic algorithms, inspirations, typical strengths, and common limitations in complex optimization.
Algorithm | Inspiration | Strengths (Exploration/Exploitation) | Typical Balance Mechanism | Common Limitations in Complex Problems
PSO [8] | Bird flocking | Fast convergence, simple update | Inertia + cognitive/social terms | Premature convergence; diversity loss
DE [9] | Evolutionary mutation/recombination | Strong local refinement; robust | Mutation + crossover + selection | Sensitive to control parameters; may stagnate
GWO [11] | Grey wolf hunting | Good exploration early | Hierarchical leadership | Reduced exploitation precision in late stages
ACO [10] | Ant foraging behavior | Good exploration early | Pheromone deposition and evaporation | Low efficiency in continuous optimization; easy to fall into local optima
WOA [12] | Bubble-net hunting | Effective global search | Encircling + spiral | Susceptible to local optima on high-dimensional functions
GJO [23] | Golden jackal cooperative hunting | Exploration capability | Search, pursuit, attack, food storage | Slow convergence; insufficient robustness for multimodal functions
DBO [21] | Dung beetle rolling/foraging/stealing behaviors | Effective global search | Rolling, dancing, breeding, stealing mechanisms | Redundant search steps; low efficiency in high-dimensional problems
SBOA [22] | Secretary bird hunting patterns | Good exploration early | Stomping, soaring, descending attacks | Biased search direction; poor boundary handling
Table 2. Experimental parameter settings for all comparison algorithms.
Algorithm | Name of the Parameter | Value of the Parameter
PSO | V_max, w_Max, w_Min, c_1, c_2 | 6, 0.9, 0.6, 2, 2
DE | p_cr, F | 0.8, 0.8
GWO | a | [0, 2]
SSA | c_1, c_2 | [0, 2], [0, 1]
SO | Th_1, Th_2, C_1, C_2, C_3 | 0.25, 0.6, 0.5, 0.05, 2
KEO | Energy threshold, β | 0.5, 0.5
BBO | / | /
HBO | Coef | 0.94
CBSO | RU | [0, 1]
MSECBSO | RU, CR, F_min, F_max | [0, 1], 0.8, 0.2, 0.8
Table 3. Summary of comparative outcomes on CEC2017 benchmark problems (dim = 30).
Function | Metric | PSO | DE | GWO | SSA | SO | KEO | BBO | HBO | CBSO | MSECBSO
F1 | Mean | 2.3227E+09 | 2.6671E+07 | 2.6409E+09 | 6.4350E+03 | 1.0160E+07 | 9.6271E+04 | 1.2848E+05 | 1.4199E+09 | 3.3791E+09 | 3.1465E+03
F1 | Std | 2.3097E+09 | 7.9026E+06 | 1.6890E+09 | 6.8692E+03 | 1.3077E+07 | 7.6760E+04 | 9.6557E+04 | 8.0071E+08 | 1.4994E+09 | 3.0600E+03
F2 | Mean | 1.4600E+30 | 1.0204E+28 | 5.7244E+32 | 2.7861E+23 | 6.2415E+25 | 3.2391E+22 | 4.8029E+14 | 1.7314E+30 | 5.2841E+31 | 9.2678E+14
F2 | Std | 7.8536E+30 | 1.8695E+28 | 2.8152E+33 | 1.2102E+24 | 2.1684E+26 | 1.7599E+23 | 1.5664E+15 | 6.1260E+30 | 1.5863E+32 | 4.9626E+15
F3 | Mean | 6.6591E+04 | 1.3241E+05 | 6.1239E+04 | 7.3547E+04 | 7.1202E+04 | 4.8829E+04 | 1.9300E+04 | 5.8615E+04 | 1.3549E+05 | 5.4829E+03
F3 | Std | 2.4833E+04 | 2.1435E+04 | 1.1437E+04 | 2.5737E+04 | 1.0364E+04 | 1.6456E+04 | 6.2777E+03 | 1.0015E+04 | 3.6823E+04 | 2.6052E+03
F4 | Mean | 6.3170E+02 | 6.3291E+02 | 6.2694E+02 | 5.2573E+02 | 5.5340E+02 | 5.3610E+02 | 5.0628E+02 | 6.8509E+02 | 7.2932E+02 | 5.0382E+02
F4 | Std | 1.0559E+02 | 2.2620E+01 | 9.7088E+01 | 2.7889E+01 | 3.8588E+01 | 3.1983E+01 | 1.9904E+01 | 7.6893E+01 | 9.0071E+01 | 1.5362E+01
F5 | Mean | 7.2154E+02 | 6.8799E+02 | 6.2804E+02 | 6.9350E+02 | 5.9631E+02 | 6.4166E+02 | 5.9816E+02 | 6.8583E+02 | 7.4328E+02 | 5.7919E+02
F5 | Std | 2.9936E+01 | 1.3908E+01 | 4.8296E+01 | 5.4393E+01 | 1.9742E+01 | 2.6510E+01 | 3.1397E+01 | 3.6151E+01 | 5.8293E+01 | 4.5317E+01
F6 | Mean | 6.2207E+02 | 6.0389E+02 | 6.1415E+02 | 6.5605E+02 | 6.1825E+02 | 6.2721E+02 | 6.1507E+02 | 6.2655E+02 | 6.5484E+02 | 6.0039E+02
F6 | Std | 8.1062E+00 | 5.9694E-01 | 4.9069E+00 | 1.1855E+01 | 6.7061E+00 | 1.2453E+01 | 7.1889E+00 | 8.4510E+00 | 1.0702E+01 | 4.9994E-01
F7 | Mean | 9.9505E+02 | 9.5322E+02 | 9.1629E+02 | 9.4647E+02 | 9.1205E+02 | 9.8913E+02 | 8.6338E+02 | 1.0360E+03 | 1.2172E+03 | 8.2677E+02
F7 | Std | 2.4594E+01 | 1.0778E+01 | 4.5850E+01 | 6.2119E+01 | 4.0292E+01 | 6.8332E+01 | 3.0304E+01 | 5.7275E+01 | 8.4337E+01 | 4.5188E+01
F8 | Mean | 1.0082E+03 | 9.8633E+02 | 9.0331E+02 | 9.5655E+02 | 8.9354E+02 | 9.2742E+02 | 8.8636E+02 | 9.6166E+02 | 1.0447E+03 | 9.0905E+02
F8 | Std | 2.7660E+01 | 1.1408E+01 | 2.5959E+01 | 4.4072E+01 | 1.7992E+01 | 3.2574E+01 | 2.2738E+01 | 2.8089E+01 | 3.9674E+01 | 4.3834E+01
F9 | Mean | 1.8696E+03 | 3.6141E+03 | 2.3946E+03 | 5.9587E+03 | 2.5168E+03 | 2.9699E+03 | 2.0199E+03 | 5.2512E+03 | 9.4254E+03 | 9.5790E+02
F9 | Std | 8.8453E+02 | 6.4037E+02 | 9.4446E+02 | 1.9683E+03 | 9.8858E+02 | 8.2886E+02 | 6.0266E+02 | 1.4061E+03 | 3.6799E+03 | 5.4403E+01
F10 | Mean | 7.4597E+03 | 6.7869E+03 | 5.4830E+03 | 5.4111E+03 | 4.3538E+03 | 5.1165E+03 | 4.2415E+03 | 5.1714E+03 | 6.5385E+03 | 7.0211E+03
F10 | Std | 5.4856E+02 | 4.1789E+02 | 1.7510E+03 | 7.6158E+02 | 8.5920E+02 | 6.3281E+02 | 5.7211E+02 | 6.3518E+02 | 6.1645E+02 | 4.3069E+02
F11 | Mean | 1.4712E+03 | 2.4263E+03 | 2.4723E+03 | 1.4249E+03 | 1.4458E+03 | 1.3035E+03 | 1.2619E+03 | 1.7558E+03 | 2.0421E+03 | 1.1692E+03
F11 | Std | 9.7653E+01 | 6.7754E+02 | 1.1635E+03 | 8.3638E+01 | 1.0863E+02 | 8.4409E+01 | 6.0114E+01 | 3.4681E+02 | 3.7641E+02 | 3.1986E+01
F12 | Mean | 1.0047E+08 | 3.2613E+07 | 1.0845E+08 | 4.1702E+07 | 4.8631E+06 | 2.8441E+06 | 3.1489E+06 | 4.0825E+07 | 1.3194E+08 | 7.9176E+05
F12 | Std | 1.2197E+08 | 9.9222E+06 | 1.2034E+08 | 4.4829E+07 | 4.4411E+06 | 1.8708E+06 | 1.9115E+06 | 3.8682E+07 | 8.5016E+07 | 6.8845E+05
F13 | Mean | 7.4310E+06 | 6.1193E+06 | 4.7215E+07 | 1.2924E+05 | 5.4085E+04 | 4.9881E+04 | 5.9114E+04 | 3.0637E+05 | 1.8235E+06 | 8.5484E+03
F13 | Std | 1.2711E+07 | 2.4516E+06 | 9.8364E+07 | 8.3816E+04 | 3.8134E+04 | 2.7077E+04 | 3.8052E+04 | 5.3541E+05 | 1.2217E+06 | 7.3887E+03
F14 | Mean | 9.7221E+04 | 4.4143E+05 | 6.4169E+05 | 1.7417E+05 | 5.3723E+04 | 1.8658E+04 | 3.8801E+04 | 6.5971E+05 | 2.9666E+05 | 1.4452E+03
F14 | Std | 8.2919E+04 | 2.5949E+05 | 7.4586E+05 | 2.0325E+05 | 6.1587E+04 | 2.0320E+04 | 2.5878E+04 | 9.9749E+05 | 5.3889E+05 | 1.0155E+01
F15 | Mean | 1.8285E+05 | 1.6031E+06 | 9.1747E+05 | 6.0277E+04 | 1.9965E+04 | 1.1114E+04 | 2.3469E+04 | 1.3499E+04 | 1.6480E+05 | 1.5588E+03
F15 | Std | 1.4162E+05 | 1.1917E+06 | 1.5507E+06 | 5.0081E+04 | 1.7377E+04 | 7.9579E+03 | 1.2922E+04 | 1.4286E+04 | 9.2719E+04 | 4.6310E+01
F16 | Mean | 2.9888E+03 | 2.9512E+03 | 2.6903E+03 | 3.1184E+03 | 2.5520E+03 | 2.7856E+03 | 2.5135E+03 | 2.8709E+03 | 3.2697E+03 | 2.5822E+03
F16 | Std | 3.3014E+02 | 2.5090E+02 | 3.4167E+02 | 3.0109E+02 | 2.1372E+02 | 3.3063E+02 | 2.7951E+02 | 2.9179E+02 | 3.1698E+02 | 3.2361E+02
F17 | Mean | 2.1803E+03 | 2.2084E+03 | 2.1123E+03 | 2.3224E+03 | 2.2848E+03 | 2.2994E+03 | 2.0520E+03 | 2.3618E+03 | 2.4515E+03 | 1.8383E+03
F17 | Std | 2.2394E+02 | 1.1750E+02 | 1.6872E+02 | 2.1656E+02 | 1.7917E+02 | 2.7495E+02 | 1.9000E+02 | 2.3188E+02 | 2.4639E+02 | 9.1080E+01
F18 | Mean | 2.5965E+06 | 1.9949E+06 | 1.9451E+06 | 2.6544E+06 | 1.0521E+06 | 2.8610E+05 | 5.8083E+05 | 3.5026E+06 | 2.7755E+06 | 1.0392E+04
F18 | Std | 2.2241E+06 | 1.0686E+06 | 2.9498E+06 | 2.7879E+06 | 1.0154E+06 | 3.3788E+05 | 7.3086E+05 | 3.5846E+06 | 2.6550E+06 | 1.1774E+04
F19 | Mean | 2.9448E+06 | 1.3464E+06 | 1.3490E+06 | 4.8373E+06 | 2.4526E+04 | 1.0947E+04 | 2.1571E+04 | 8.9584E+04 | 1.3635E+06 | 1.9337E+03
F19 | Std | 1.3422E+07 | 7.1486E+05 | 1.4531E+06 | 3.9101E+06 | 4.8738E+04 | 1.2737E+04 | 2.0033E+04 | 2.8186E+05 | 1.5515E+06 | 2.4146E+01
F20 | Mean | 2.5450E+03 | 2.5055E+03 | 2.5486E+03 | 2.6714E+03 | 2.4563E+03 | 2.7679E+03 | 2.5204E+03 | 2.5789E+03 | 2.6273E+03 | 2.2190E+03
F20 | Std | 2.1892E+02 | 1.0782E+02 | 2.0066E+02 | 2.3465E+02 | 1.7789E+02 | 2.2333E+02 | 1.7192E+02 | 1.8392E+02 | 1.3454E+02 | 1.2826E+02
F21 | Mean | 2.5074E+03 | 2.4845E+03 | 2.4056E+03 | 2.4468E+03 | 2.3954E+03 | 2.4287E+03 | 2.3892E+03 | 2.4704E+03 | 2.5319E+03 | 2.3676E+03
F21 | Std | 2.4602E+01 | 1.3396E+01 | 3.4774E+01 | 6.3547E+01 | 2.3517E+01 | 4.3294E+01 | 2.7647E+01 | 3.3621E+01 | 3.6040E+01 | 4.2962E+01
F22 | Mean | 5.9109E+03 | 3.9293E+03 | 5.5119E+03 | 4.4664E+03 | 4.2612E+03 | 5.1922E+03 | 2.8428E+03 | 4.4786E+03 | 6.0348E+03 | 2.3009E+03
F22 | Std | 3.1529E+03 | 1.3101E+03 | 1.4492E+03 | 2.2486E+03 | 1.9912E+03 | 2.1510E+03 | 1.4280E+03 | 2.2298E+03 | 2.5469E+03 | 1.5201E+00
F23 | Mean | 2.9219E+03 | 2.8396E+03 | 2.7820E+03 | 2.8094E+03 | 2.8007E+03 | 2.7933E+03 | 2.7604E+03 | 2.8417E+03 | 2.9220E+03 | 2.7049E+03
F23 | Std | 6.8575E+01 | 1.4598E+01 | 4.3608E+01 | 4.2907E+01 | 3.6158E+01 | 4.0058E+01 | 2.7320E+01 | 6.0369E+01 | 4.2152E+01 | 3.3849E+01
F24 | Mean | 3.1185E+03 | 3.0500E+03 | 2.9708E+03 | 2.9469E+03 | 2.9507E+03 | 2.9572E+03 | 2.9139E+03 | 3.0205E+03 | 3.0746E+03 | 2.8838E+03
F24 | Std | 9.2673E+01 | 1.6200E+01 | 6.9385E+01 | 2.9939E+01 | 2.9053E+01 | 5.5196E+01 | 2.3829E+01 | 4.8346E+01 | 5.4512E+01 | 3.6257E+01
F25 | Mean | 2.9824E+03 | 2.9804E+03 | 3.0393E+03 | 2.9444E+03 | 2.9395E+03 | 2.9206E+03 | 2.9061E+03 | 3.0230E+03 | 3.0976E+03 | 2.8894E+03
F25 | Std | 4.8976E+01 | 1.8956E+01 | 7.3992E+01 | 3.0843E+01 | 3.1751E+01 | 2.5090E+01 | 2.3963E+01 | 4.9276E+01 | 9.6522E+01 | 2.5519E+00
F26Mean4.9664 × 1035.2766 × 1034.9813 × 1035.2327 × 1035.5962 × 1035.4022 × 1034.5794 × 1035.4156 × 1036.4902 × 1034.0509 × 103
Std1.2736 × 1035.2820 × 1024.8381 × 1021.1164 × 1035.1704 × 1028.2291 × 1021.2297 × 1038.4147 × 1023.9346 × 1022.9148 × 102
F27Mean3.2768 × 1033.2715 × 1033.2709 × 1033.2776 × 1033.2971 × 1033.2582 × 1033.2363 × 1033.2608 × 1033.2481 × 1033.2183 × 103
Std5.7368 × 1019.2803 × 1003.1174 × 1013.6412 × 1013.1014 × 1013.1355 × 1011.4010 × 1012.3705 × 1012.2414 × 1011.2060 × 101
F28Mean3.4065 × 1033.3896 × 1033.4769 × 1033.3178 × 1033.3580 × 1033.3027 × 1033.2373 × 1033.4516 × 1033.5264 × 1033.2276 × 103
Std1.6612 × 1022.4252 × 1019.4288 × 1015.1739 × 1015.1589 × 1013.9701 × 1012.1412 × 1015.7311 × 1019.4884 × 1012.0673 × 101
F29Mean4.1023 × 1033.9791 × 1033.9605 × 1034.3519 × 1034.0882 × 1034.0889 × 1033.8247 × 1034.1490 × 1034.2664 × 1033.7416 × 103
Std2.4564 × 1021.3847 × 1021.8821 × 1022.5497 × 1022.5608 × 1022.2226 × 1022.0112 × 1022.5707 × 1022.5270 × 1022.6492 × 102
F30Mean2.7427 × 1061.5210 × 1061.6119 × 1071.0299 × 1073.3968 × 1058.7718 × 1042.3606 × 1055.1217 × 1054.3965 × 1069.2145 × 103
Std2.6309 × 1061.0115 × 1061.1988 × 1078.3179 × 1063.9662 × 1058.0181 × 1041.8175 × 1056.0392 × 1053.3186 × 1061.9174 × 103
Table 4. Summary of comparative outcomes on CEC2017 benchmark problems (dim = 100).
| Function | Metric | PSO | DE | GWO | SSA | SO | KEO | BBO | HBO | CBSO | MSECBSO |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Mean | 3.7004e+10 | 4.3218e+10 | 5.4789e+10 | 1.4752e+10 | 1.4114e+10 | 8.3588e+09 | 7.6183e+07 | 9.0772e+10 | 1.5605e+11 | 9.2554e+07 |
| | Std | 1.0666e+10 | 4.1831e+09 | 9.5952e+09 | 4.5345e+09 | 2.8569e+09 | 2.1746e+09 | 2.4505e+07 | 1.3003e+10 | 2.3464e+10 | 4.6681e+07 |
| F2 | Mean | 8.6850e+132 | 1.0401e+146 | 4.6294e+131 | 6.7780e+139 | 1.1797e+136 | 1.4025e+131 | 1.6564e+106 | 5.7577e+144 | 5.1814e+154 | 1.0690e+105 |
| | Std | 4.0378e+133 | 3.1958e+146 | 1.9283e+132 | 3.2692e+140 | 3.7045e+136 | 6.1934e+131 | 9.0727e+106 | 3.0474e+145 | 6.5535e+04 | 3.2616e+105 |
| F3 | Mean | 5.7995e+05 | 7.0507e+05 | 5.4084e+05 | 6.9128e+05 | 3.8329e+05 | 4.1588e+05 | 4.4527e+05 | 3.9138e+05 | 8.6246e+05 | 2.0452e+05 |
| | Std | 1.0927e+05 | 3.7739e+04 | 7.7349e+04 | 1.7684e+05 | 3.3670e+04 | 6.9684e+04 | 7.4978e+04 | 7.3967e+04 | 1.3761e+05 | 2.3973e+04 |
| F4 | Mean | 4.1306e+03 | 8.6703e+03 | 5.8527e+03 | 2.3732e+03 | 2.7879e+03 | 2.0382e+03 | 9.1928e+02 | 9.8382e+03 | 2.3503e+04 | 9.5440e+02 |
| | Std | 1.9319e+03 | 1.0682e+03 | 1.2543e+03 | 5.6287e+02 | 5.1909e+02 | 2.8799e+02 | 6.1648e+01 | 1.9429e+03 | 5.7318e+03 | 6.5847e+01 |
| F5 | Mean | 1.7205e+03 | 1.7510e+03 | 1.2485e+03 | 1.5364e+03 | 1.1711e+03 | 1.3485e+03 | 1.1208e+03 | 1.7071e+03 | 2.0784e+03 | 9.7810e+02 |
| | Std | 9.2201e+01 | 4.0575e+01 | 7.9921e+01 | 1.1169e+02 | 6.1076e+01 | 1.0552e+02 | 6.3048e+01 | 5.9912e+01 | 1.1326e+02 | 2.2452e+02 |
| F6 | Mean | 6.7400e+02 | 6.4909e+02 | 6.4683e+02 | 6.7784e+02 | 6.4604e+02 | 6.6199e+02 | 6.5008e+02 | 6.7579e+02 | 7.0552e+02 | 6.1890e+02 |
| | Std | 1.1366e+01 | 2.3381e+00 | 5.2436e+00 | 4.7322e+00 | 4.5578e+00 | 5.6264e+00 | 5.3556e+00 | 5.9350e+00 | 7.5030e+00 | 4.7060e+00 |
| F7 | Mean | 2.4115e+03 | 2.8785e+03 | 2.1869e+03 | 2.8858e+03 | 2.2528e+03 | 3.1706e+03 | 2.0032e+03 | 3.1469e+03 | 4.9880e+03 | 1.5928e+03 |
| | Std | 1.2647e+02 | 8.2600e+01 | 1.5124e+02 | 4.2939e+02 | 1.0243e+02 | 4.1394e+02 | 1.6559e+02 | 1.4200e+02 | 4.2751e+02 | 2.2725e+02 |
| F8 | Mean | 2.0113e+03 | 2.0353e+03 | 1.5623e+03 | 1.9051e+03 | 1.4981e+03 | 1.7143e+03 | 1.4544e+03 | 2.1127e+03 | 2.3199e+03 | 1.2500e+03 |
| | Std | 1.2708e+02 | 3.8579e+01 | 5.4400e+01 | 1.4335e+02 | 6.5081e+01 | 1.1522e+02 | 9.3735e+01 | 9.3675e+01 | 1.1235e+02 | 1.6874e+02 |
| F9 | Mean | 6.7244e+04 | 8.1942e+04 | 4.8508e+04 | 4.1399e+04 | 2.7101e+04 | 2.8307e+04 | 2.6658e+04 | 5.3474e+04 | 1.0066e+05 | 1.9636e+04 |
| | Std | 1.5251e+04 | 6.7525e+03 | 1.0724e+04 | 4.0277e+03 | 7.4804e+03 | 4.0961e+03 | 5.1493e+03 | 5.3581e+03 | 2.0308e+04 | 5.1949e+03 |
| F10 | Mean | 2.9444e+04 | 2.9953e+04 | 1.8563e+04 | 1.9214e+04 | 3.1508e+04 | 1.8031e+04 | 1.5411e+04 | 2.4059e+04 | 2.8079e+04 | 2.9343e+04 |
| | Std | 1.7628e+03 | 7.6048e+02 | 1.1451e+03 | 1.2292e+03 | 1.2625e+03 | 1.7758e+03 | 1.6401e+03 | 1.0998e+03 | 1.2116e+03 | 7.4672e+02 |
| F11 | Mean | 7.8492e+04 | 1.2669e+05 | 9.5831e+04 | 1.3614e+05 | 1.3295e+05 | 7.1108e+04 | 1.9826e+04 | 1.3111e+05 | 2.2039e+05 | 1.5650e+04 |
| | Std | 3.3758e+04 | 2.1208e+04 | 1.6402e+04 | 4.0641e+04 | 2.3923e+04 | 2.9331e+04 | 6.7515e+03 | 3.1651e+04 | 4.3157e+04 | 3.9991e+03 |
| F12 | Mean | 1.2940e+10 | 8.1142e+09 | 1.2684e+10 | 1.3634e+09 | 1.8960e+09 | 6.8111e+08 | 2.7561e+08 | 1.3369e+10 | 2.7327e+10 | 5.9548e+07 |
| | Std | 7.9448e+09 | 1.2365e+09 | 6.9437e+09 | 4.9215e+08 | 1.0869e+09 | 2.6920e+08 | 1.2961e+08 | 3.4980e+09 | 6.6683e+09 | 2.0222e+07 |
| F13 | Mean | 1.5362e+09 | 8.4150e+07 | 9.9307e+08 | 8.3179e+04 | 3.8962e+06 | 1.0522e+05 | 1.2597e+05 | 8.8617e+08 | 2.3723e+09 | 4.8009e+03 |
| | Std | 1.4728e+09 | 3.4085e+07 | 6.7686e+08 | 3.4327e+04 | 4.1483e+06 | 4.6059e+04 | 4.6297e+04 | 4.4390e+08 | 7.1319e+08 | 2.4337e+03 |
| F14 | Mean | 1.2253e+07 | 2.6805e+07 | 9.7989e+06 | 8.9347e+06 | 6.5453e+06 | 3.9043e+06 | 4.1384e+06 | 1.5296e+07 | 2.7701e+07 | 7.2394e+05 |
| | Std | 4.5925e+06 | 7.1433e+06 | 5.9165e+06 | 5.4701e+06 | 3.9666e+06 | 1.7950e+06 | 1.7509e+06 | 5.9818e+06 | 1.5467e+07 | 4.3053e+05 |
| F15 | Mean | 2.4709e+08 | 1.8470e+07 | 3.0509e+08 | 7.7600e+04 | 4.0652e+05 | 6.4926e+04 | 5.7410e+04 | 1.0096e+08 | 3.0565e+08 | 4.4740e+03 |
| | Std | 2.8366e+08 | 6.5567e+06 | 2.6880e+08 | 3.1648e+04 | 5.0111e+05 | 2.8517e+04 | 2.3933e+04 | 6.4857e+07 | 1.5330e+08 | 4.5686e+03 |
| F16 | Mean | 1.0113e+04 | 1.1216e+04 | 6.4537e+03 | 7.8564e+03 | 7.2810e+03 | 6.2651e+03 | 5.9618e+03 | 9.3801e+03 | 1.1332e+04 | 7.6207e+03 |
| | Std | 8.3420e+02 | 4.8871e+02 | 8.7563e+02 | 1.0012e+03 | 1.3780e+03 | 7.6391e+02 | 7.5082e+02 | 1.0066e+03 | 1.3423e+03 | 1.8417e+03 |
| F17 | Mean | 8.8494e+03 | 7.8203e+03 | 5.5393e+03 | 6.1349e+03 | 5.6128e+03 | 5.6035e+03 | 4.9881e+03 | 8.5740e+03 | 9.5808e+03 | 5.8244e+03 |
| | Std | 1.8118e+03 | 2.9554e+02 | 7.9385e+02 | 6.0330e+02 | 4.5100e+02 | 5.3570e+02 | 5.8215e+02 | 1.4840e+03 | 1.6909e+03 | 7.1702e+02 |
| F18 | Mean | 1.9010e+07 | 4.2571e+07 | 9.7655e+06 | 1.1770e+07 | 1.1723e+07 | 5.5247e+06 | 3.7518e+06 | 1.9956e+07 | 4.7705e+07 | 1.1019e+06 |
| | Std | 8.3759e+06 | 1.2186e+07 | 6.6599e+06 | 7.0369e+06 | 4.7914e+06 | 4.2177e+06 | 1.6338e+06 | 9.3385e+06 | 2.9124e+07 | 7.1793e+05 |
| F19 | Mean | 2.5588e+08 | 2.8778e+07 | 3.6693e+08 | 2.8981e+07 | 2.5560e+06 | 9.7305e+05 | 1.2873e+06 | 1.0640e+08 | 3.7978e+08 | 4.4622e+03 |
| | Std | 1.9894e+08 | 1.3400e+07 | 5.2984e+08 | 2.1214e+07 | 2.8085e+06 | 7.7851e+05 | 8.3342e+05 | 3.7502e+07 | 1.4387e+08 | 2.7163e+03 |
| F20 | Mean | 7.0737e+03 | 6.8284e+03 | 5.6462e+03 | 5.5919e+03 | 7.2883e+03 | 5.7120e+03 | 4.9817e+03 | 5.9047e+03 | 7.0819e+03 | 6.4657e+03 |
| | Std | 4.6382e+02 | 2.6061e+02 | 1.2110e+03 | 5.5682e+02 | 3.4639e+02 | 5.4753e+02 | 5.4824e+02 | 5.8174e+02 | 4.2451e+02 | 5.0953e+02 |
| F21 | Mean | 3.7420e+03 | 3.5755e+03 | 3.0621e+03 | 3.4445e+03 | 3.0974e+03 | 3.2036e+03 | 2.9568e+03 | 3.7268e+03 | 3.9463e+03 | 2.7363e+03 |
| | Std | 1.1891e+02 | 4.3281e+01 | 8.5975e+01 | 1.5785e+02 | 9.4338e+01 | 1.4731e+02 | 1.2081e+02 | 1.2349e+02 | 1.1749e+02 | 1.6531e+02 |
| F22 | Mean | 3.2063e+04 | 3.2738e+04 | 2.2346e+04 | 2.2254e+04 | 3.2920e+04 | 2.1161e+04 | 1.8597e+04 | 2.6910e+04 | 3.0270e+04 | 3.1221e+04 |
| | Std | 1.3910e+03 | 4.5949e+02 | 4.2895e+03 | 1.7016e+03 | 1.9914e+03 | 1.8110e+03 | 1.5382e+03 | 1.0628e+03 | 9.1354e+02 | 2.2700e+03 |
| F23 | Mean | 4.8886e+03 | 3.9784e+03 | 3.7283e+03 | 3.8182e+03 | 3.7320e+03 | 3.7105e+03 | 3.5090e+03 | 4.0455e+03 | 4.5130e+03 | 3.2273e+03 |
| | Std | 2.9224e+02 | 4.0387e+01 | 9.5152e+01 | 1.8019e+02 | 6.4296e+01 | 1.1875e+02 | 8.8479e+01 | 1.1540e+02 | 1.7261e+02 | 7.2809e+01 |
| F24 | Mean | 5.8889e+03 | 4.6677e+03 | 4.4861e+03 | 4.5216e+03 | 4.7146e+03 | 4.4381e+03 | 3.9864e+03 | 4.8001e+03 | 5.2598e+03 | 3.8306e+03 |
| | Std | 4.2104e+02 | 7.3228e+01 | 1.8577e+02 | 2.3497e+02 | 1.5597e+02 | 1.7747e+02 | 1.1627e+02 | 1.3858e+02 | 2.3814e+02 | 1.1652e+02 |
| F25 | Mean | 5.7569e+03 | 1.0721e+04 | 7.3121e+03 | 4.9959e+03 | 5.4714e+03 | 4.8145e+03 | 3.5636e+03 | 9.4540e+03 | 1.8661e+04 | 3.6444e+03 |
| | Std | 8.2588e+02 | 7.3238e+02 | 1.2029e+03 | 4.2574e+02 | 4.3865e+02 | 3.9939e+02 | 6.0018e+01 | 9.1585e+02 | 3.8933e+03 | 6.3430e+01 |
| F26 | Mean | 1.9775e+04 | 1.9904e+04 | 1.7394e+04 | 1.9933e+04 | 1.9339e+04 | 1.9837e+04 | 1.5995e+04 | 2.5175e+04 | 2.6273e+04 | 1.1536e+04 |
| | Std | 4.3166e+03 | 5.5096e+02 | 1.4168e+03 | 3.7215e+03 | 1.9224e+03 | 2.8843e+03 | 2.4765e+03 | 3.5008e+03 | 2.0675e+03 | 1.2219e+03 |
| F27 | Mean | 4.0397e+03 | 4.9914e+03 | 4.2502e+03 | 4.1715e+03 | 4.3360e+03 | 4.0765e+03 | 3.7404e+03 | 4.2291e+03 | 4.3670e+03 | 3.7658e+03 |
| | Std | 2.7600e+02 | 1.2548e+02 | 1.8019e+02 | 2.1360e+02 | 1.7280e+02 | 2.0138e+02 | 1.2534e+02 | 1.8408e+02 | 3.3307e+02 | 1.1093e+02 |
| F28 | Mean | 7.6480e+03 | 1.4143e+04 | 9.8175e+03 | 6.6734e+03 | 9.7328e+03 | 6.5200e+03 | 3.6553e+03 | 1.2217e+04 | 2.0146e+04 | 3.8832e+03 |
| | Std | 2.0454e+03 | 1.0199e+03 | 1.4360e+03 | 1.2307e+03 | 1.7390e+03 | 7.0503e+02 | 5.0651e+01 | 1.2928e+03 | 2.9822e+03 | 1.2180e+02 |
| F29 | Mean | 1.0914e+04 | 1.0951e+04 | 9.4951e+03 | 1.1681e+04 | 9.2808e+03 | 8.7372e+03 | 7.9569e+03 | 1.0837e+04 | 1.2765e+04 | 7.1796e+03 |
| | Std | 7.8080e+02 | 4.3897e+02 | 9.2679e+02 | 1.3468e+03 | 9.5184e+02 | 8.1131e+02 | 6.9104e+02 | 1.2918e+03 | 1.1922e+03 | 1.2241e+03 |
| F30 | Mean | 1.0479e+09 | 6.4778e+07 | 1.1534e+09 | 4.3025e+08 | 1.9590e+07 | 2.6395e+07 | 2.0578e+07 | 4.5126e+08 | 1.3456e+09 | 6.6981e+04 |
| | Std | 8.0762e+08 | 1.4919e+07 | 7.2015e+08 | 2.1177e+08 | 1.4621e+07 | 1.4282e+07 | 1.0126e+07 | 1.3851e+08 | 5.0816e+08 | 4.2445e+04 |
Table 5. Summary of comparative outcomes on CEC2022 benchmark problems (dim = 10).
| Function | Metric | PSO | DE | GWO | SSA | SO | KEO | BBO | HBO | CBSO | MSECBSO |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Mean | 4.3753e+02 | 6.3240e+03 | 3.5543e+03 | 3.0000e+02 | 7.5058e+02 | 3.0070e+02 | 3.0000e+02 | 1.4273e+03 | 1.3978e+03 | 3.0000e+02 |
| | Std | 5.9355e+01 | 2.7134e+03 | 2.1292e+03 | 3.0361e-09 | 5.4383e+02 | 1.7601e+00 | 8.0075e-03 | 8.8751e+02 | 1.3953e+03 | 3.2873e-05 |
| F2 | Mean | 4.2374e+02 | 4.0922e+02 | 4.2644e+02 | 4.1166e+02 | 4.0730e+02 | 4.1686e+02 | 4.0597e+02 | 4.2046e+02 | 4.1775e+02 | 4.0358e+02 |
| | Std | 5.0589e+01 | 4.0249e+00 | 2.0673e+01 | 1.7659e+01 | 1.2203e+01 | 2.5997e+01 | 1.7312e+01 | 2.7035e+01 | 2.3119e+01 | 2.8993e+00 |
| F3 | Mean | 6.0253e+02 | 6.0000e+02 | 6.0114e+02 | 6.1769e+02 | 6.0212e+02 | 6.0244e+02 | 6.0024e+02 | 6.0418e+02 | 6.0835e+02 | 6.0000e+02 |
| | Std | 1.4332e+00 | 1.8979e-03 | 1.3967e+00 | 9.0201e+00 | 4.8794e+00 | 3.5602e+00 | 4.1707e-01 | 5.1289e+00 | 6.3767e+00 | 1.7219e-04 |
| F4 | Mean | 8.2489e+02 | 8.2881e+02 | 8.1556e+02 | 8.2705e+02 | 8.1526e+02 | 8.1960e+02 | 8.1284e+02 | 8.2668e+02 | 8.2983e+02 | 8.0609e+02 |
| | Std | 6.4105e+00 | 4.4930e+00 | 6.5539e+00 | 1.1235e+01 | 6.0342e+00 | 9.0553e+00 | 6.3054e+00 | 1.1639e+01 | 1.2623e+01 | 2.9505e+00 |
| F5 | Mean | 9.0520e+02 | 9.6266e+02 | 9.2021e+02 | 1.0066e+03 | 9.4534e+02 | 9.4789e+02 | 9.0107e+02 | 1.1088e+03 | 9.8937e+02 | 9.0006e+02 |
| | Std | 3.5081e+00 | 2.8101e+01 | 3.9884e+01 | 1.9959e+02 | 5.1415e+01 | 8.1038e+01 | 2.0065e+00 | 2.3994e+02 | 6.9727e+01 | 1.5708e-01 |
| F6 | Mean | 1.3687e+04 | 7.0803e+03 | 6.1221e+03 | 3.6552e+03 | 3.4934e+03 | 2.8125e+03 | 2.6439e+03 | 3.8587e+03 | 4.8604e+03 | 1.8005e+03 |
| | Std | 2.0988e+04 | 7.6929e+03 | 2.3975e+03 | 1.9428e+03 | 1.7092e+03 | 1.6363e+03 | 9.0933e+02 | 1.8015e+03 | 2.1678e+03 | 4.5815e-01 |
| F7 | Mean | 2.0306e+03 | 2.0071e+03 | 2.0356e+03 | 2.0465e+03 | 2.0291e+03 | 2.0351e+03 | 2.0218e+03 | 2.0245e+03 | 2.0267e+03 | 2.0004e+03 |
| | Std | 3.2117e+01 | 2.3496e+00 | 1.7425e+01 | 1.6787e+01 | 1.4437e+01 | 1.6349e+01 | 1.0155e+01 | 8.6247e+00 | 9.8534e+00 | 5.4641e-01 |
| F8 | Mean | 2.2534e+03 | 2.2196e+03 | 2.2340e+03 | 2.2268e+03 | 2.2222e+03 | 2.2212e+03 | 2.2209e+03 | 2.2217e+03 | 2.2265e+03 | 2.2026e+03 |
| | Std | 5.0278e+01 | 4.2794e+00 | 3.1181e+01 | 5.4088e+00 | 3.4754e+00 | 4.0200e+00 | 5.4697e+00 | 1.2683e+00 | 5.0393e+00 | 2.5662e+00 |
| F9 | Mean | 2.5303e+03 | 2.5313e+03 | 2.5809e+03 | 2.5656e+03 | 2.5310e+03 | 2.5342e+03 | 2.5293e+03 | 2.5430e+03 | 2.5314e+03 | 2.5293e+03 |
| | Std | 3.7185e+00 | 9.5775e-01 | 3.7338e+01 | 4.8170e+01 | 5.0704e+00 | 2.6826e+01 | 3.5273e-05 | 3.8119e+01 | 1.1033e+01 | 5.9111e-13 |
| F10 | Mean | 2.5706e+03 | 2.4818e+03 | 2.5871e+03 | 2.5356e+03 | 2.5224e+03 | 2.5516e+03 | 2.5623e+03 | 2.5576e+03 | 2.5008e+03 | 2.5003e+03 |
| | Std | 6.3118e+01 | 2.2077e+01 | 1.5035e+02 | 9.2691e+01 | 6.5138e+01 | 1.4055e+02 | 5.8971e+01 | 6.2326e+01 | 2.5672e-01 | 3.8225e-02 |
| F11 | Mean | 2.7927e+03 | 2.7128e+03 | 2.7926e+03 | 2.7390e+03 | 2.6853e+03 | 2.7288e+03 | 2.6784e+03 | 2.7535e+03 | 2.6805e+03 | 2.6000e+03 |
| | Std | 1.4895e+02 | 2.5839e+01 | 1.6038e+02 | 1.6676e+02 | 1.2280e+02 | 1.0488e+02 | 1.3046e+02 | 1.1134e+02 | 5.4149e+01 | 6.0989e-09 |
| F12 | Mean | 2.8778e+03 | 2.8663e+03 | 2.8688e+03 | 2.8643e+03 | 2.8715e+03 | 2.8648e+03 | 2.8674e+03 | 2.8677e+03 | 2.8640e+03 | 2.8622e+03 |
| | Std | 2.5108e+01 | 1.1117e+00 | 8.5327e+00 | 1.3632e+00 | 5.8333e+00 | 2.0529e+00 | 2.4944e+00 | 6.2116e+00 | 1.4858e+00 | 1.6119e+00 |
Table 6. Experimental results of CEC2022 (dim = 20).
| Function | Metric | PSO | DE | GWO | SSA | SO | KEO | BBO | HBO | CBSO | MSECBSO |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Mean | 6.4860e+03 | 3.3959e+04 | 1.7396e+04 | 9.6066e+03 | 1.9986e+04 | 4.1168e+03 | 4.3117e+02 | 2.1558e+04 | 2.6423e+04 | 4.4326e+02 |
| | Std | 3.4742e+03 | 6.7211e+03 | 7.4894e+03 | 5.3048e+03 | 5.3413e+03 | 1.8576e+03 | 1.1533e+02 | 8.6603e+03 | 7.9662e+03 | 1.4102e+02 |
| F2 | Mean | 4.7541e+02 | 4.7818e+02 | 5.1206e+02 | 4.6695e+02 | 4.6405e+02 | 4.7077e+02 | 4.5265e+02 | 5.1726e+02 | 5.2564e+02 | 4.5014e+02 |
| | Std | 2.0929e+01 | 1.0967e+01 | 4.8121e+01 | 3.3089e+01 | 2.2849e+01 | 3.0766e+01 | 1.6854e+01 | 4.9994e+01 | 4.7687e+01 | 6.1342e+00 |
| F3 | Mean | 6.1060e+02 | 6.0067e+02 | 6.0766e+02 | 6.3838e+02 | 6.1077e+02 | 6.1491e+02 | 6.0604e+02 | 6.1535e+02 | 6.3362e+02 | 6.0004e+02 |
| | Std | 4.4421e+00 | 1.4688e-01 | 4.7058e+00 | 1.2089e+01 | 6.6094e+00 | 8.5996e+00 | 6.1049e+00 | 9.2002e+00 | 1.3715e+01 | 1.1234e-01 |
| F4 | Mean | 9.1365e+02 | 9.2630e+02 | 8.6306e+02 | 8.7897e+02 | 8.4205e+02 | 8.6496e+02 | 8.4113e+02 | 8.8484e+02 | 9.0598e+02 | 8.4709e+02 |
| | Std | 2.1676e+01 | 1.0594e+01 | 2.8620e+01 | 2.4941e+01 | 1.1318e+01 | 2.2559e+01 | 1.4234e+01 | 2.1376e+01 | 2.4510e+01 | 2.3775e+01 |
| F5 | Mean | 1.0161e+03 | 2.1000e+03 | 1.1806e+03 | 2.3845e+03 | 1.3287e+03 | 1.3699e+03 | 1.0442e+03 | 2.3944e+03 | 3.4295e+03 | 9.0444e+02 |
| | Std | 7.4299e+01 | 2.8930e+02 | 2.1873e+02 | 6.4239e+02 | 2.8299e+02 | 2.6453e+02 | 1.9643e+02 | 5.7577e+02 | 1.5651e+03 | 5.4791e+00 |
| F6 | Mean | 2.0453e+06 | 3.7247e+06 | 3.5370e+06 | 9.4928e+03 | 6.4452e+03 | 6.4145e+03 | 3.5115e+03 | 1.8116e+04 | 1.9556e+05 | 1.9209e+03 |
| | Std | 1.6185e+06 | 1.5387e+06 | 6.8903e+06 | 5.8189e+03 | 6.0702e+03 | 5.4035e+03 | 1.6569e+03 | 2.5033e+04 | 2.0129e+05 | 7.4691e+01 |
| F7 | Mean | 2.1126e+03 | 2.0607e+03 | 2.0900e+03 | 2.1295e+03 | 2.0788e+03 | 2.1194e+03 | 2.0706e+03 | 2.0915e+03 | 2.1040e+03 | 2.0253e+03 |
| | Std | 5.4453e+01 | 9.8339e+00 | 4.5503e+01 | 5.4905e+01 | 2.6857e+01 | 5.8048e+01 | 2.8019e+01 | 3.7710e+01 | 4.0368e+01 | 4.3009e+00 |
| F8 | Mean | 2.2885e+03 | 2.2294e+03 | 2.2680e+03 | 2.2943e+03 | 2.2412e+03 | 2.2643e+03 | 2.2590e+03 | 2.2487e+03 | 2.2602e+03 | 2.2273e+03 |
| | Std | 6.5600e+01 | 1.7759e+00 | 5.3670e+01 | 6.0183e+01 | 2.6346e+01 | 5.7761e+01 | 5.5152e+01 | 3.7525e+01 | 4.0816e+01 | 1.3728e+00 |
| F9 | Mean | 2.4963e+03 | 2.4823e+03 | 2.5298e+03 | 2.5447e+03 | 2.4810e+03 | 2.4808e+03 | 2.4812e+03 | 2.4942e+03 | 2.4956e+03 | 2.4808e+03 |
| | Std | 2.8794e+01 | 8.9365e-01 | 2.8384e+01 | 6.2918e+01 | 2.1126e-01 | 9.4290e-02 | 3.4433e-01 | 9.4243e+00 | 1.3772e+01 | 1.6657e-02 |
| F10 | Mean | 3.9861e+03 | 2.5469e+03 | 3.6040e+03 | 4.0928e+03 | 3.1062e+03 | 3.7993e+03 | 2.9886e+03 | 3.2761e+03 | 2.6479e+03 | 2.5095e+03 |
| | Std | 1.0764e+03 | 8.4511e+01 | 7.4595e+02 | 1.0381e+03 | 4.3290e+02 | 7.5241e+02 | 5.6227e+02 | 4.6115e+02 | 5.3274e+02 | 3.4184e+01 |
| F11 | Mean | 3.5433e+03 | 3.0828e+03 | 3.4779e+03 | 2.9793e+03 | 2.9518e+03 | 2.9324e+03 | 2.9092e+03 | 3.3013e+03 | 3.4555e+03 | 2.9435e+03 |
| | Std | 3.7876e+02 | 1.4509e+02 | 3.0728e+02 | 1.4337e+02 | 1.1468e+02 | 5.0187e+01 | 6.8883e+01 | 1.2876e+02 | 1.3748e+02 | 5.0563e+01 |
| F12 | Mean | 3.0067e+03 | 2.9749e+03 | 2.9836e+03 | 2.9765e+03 | 3.0129e+03 | 2.9681e+03 | 2.9696e+03 | 2.9783e+03 | 2.9726e+03 | 2.9403e+03 |
| | Std | 6.1075e+01 | 5.5298e+00 | 3.2064e+01 | 2.2123e+01 | 3.5936e+01 | 2.1821e+01 | 1.7285e+01 | 2.3114e+01 | 1.5258e+01 | 5.3479e+00 |
Table 7. Effectiveness of the MSECBSO and other competitor algorithms.
| Comparison | CEC2017, dim = 30 (W/T/L) | CEC2017, dim = 100 (W/T/L) | CEC2022, dim = 10 (W/T/L) | CEC2022, dim = 20 (W/T/L) |
|---|---|---|---|---|
| MSECBSO vs. PSO | (29/0/1) | (28/0/2) | (12/0/0) | (12/0/0) |
| MSECBSO vs. DE | (29/0/1) | (30/0/0) | (11/0/1) | (12/0/0) |
| MSECBSO vs. GWO | (28/0/2) | (29/0/1) | (12/0/0) | (11/0/1) |
| MSECBSO vs. SSA | (29/0/1) | (28/0/2) | (6/0/6) | (12/0/0) |
| MSECBSO vs. SO | (28/0/2) | (28/0/2) | (9/0/3) | (10/0/2) |
| MSECBSO vs. KEO | (29/0/1) | (29/0/1) | (12/0/0) | (12/0/0) |
| MSECBSO vs. BBO | (25/0/5) | (27/0/3) | (12/0/0) | (8/0/4) |
| MSECBSO vs. HBO | (30/0/0) | (30/0/0) | (12/0/0) | (12/0/0) |
| MSECBSO vs. CBSO | (30/0/0) | (30/0/0) | (12/0/0) | (12/0/0) |
| Total (W/T/L) | (257/0/13) | (259/0/11) | (105/0/3) | (101/0/7) |
| OE (%) | 95.19% | 95.53% | 97.22% | 93.52% |
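The W/T/L entries in Table 7 count, function by function, whether MSECBSO wins, ties, or loses against one rival, and OE (%) is the share of wins among all comparisons. A hedged sketch of that tally follows; it uses a plain comparison of means as a stand-in for whatever per-function significance test the paper applies, and the five-function data are hypothetical:

```python
def wtl_and_oe(our_means, rival_means, tol=1e-12):
    """Tally wins/ties/losses of one algorithm against a rival
    (smaller mean is better), then compute OE% = W / (W + T + L)."""
    w = t = l = 0
    for ours, theirs in zip(our_means, rival_means):
        if abs(ours - theirs) <= tol:
            t += 1          # tie within tolerance
        elif ours < theirs:
            w += 1          # win: strictly smaller error
        else:
            l += 1          # loss
    oe = 100.0 * w / (w + t + l)
    return (w, t, l), oe

# Hypothetical per-function means on a 5-function suite.
(w, t, l), oe = wtl_and_oe([1.0, 2.0, 3.0, 4.0, 5.0],
                           [1.5, 2.0, 2.5, 6.0, 7.0])
print((w, t, l), f"{oe:.2f}%")  # 3 wins, 1 tie, 1 loss -> 60.00%
```

The column totals in Table 7 simply sum these per-rival tallies over all nine competitors before computing OE (%).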
Table 8. Comparative mean ranking of algorithms under the Friedman statistical test.
| Algorithm | CEC2017, dim = 30 M.R | T.R | CEC2017, dim = 100 M.R | T.R | CEC2022, dim = 10 M.R | T.R | CEC2022, dim = 20 M.R | T.R |
|---|---|---|---|---|---|---|---|---|
| PSO | 7.13 | 8 | 7.50 | 8 | 7.33 | 10 | 7.17 | 8 |
| DE | 7.27 | 9 | 7.60 | 9 | 5.92 | 5 | 6.00 | 5 |
| GWO | 5.97 | 5 | 5.60 | 5 | 7.25 | 9 | 6.50 | 6 |
| SSA | 6.33 | 6 | 5.77 | 6 | 6.00 | 6 | 7.00 | 7 |
| SO | 4.37 | 4 | 4.27 | 4 | 5.25 | 4 | 4.75 | 3 |
| KEO | 4.27 | 3 | 4.07 | 3 | 4.92 | 3 | 4.83 | 4 |
| BBO | 2.63 | 2 | 2.10 | 2 | 3.58 | 2 | 2.67 | 2 |
| HBO | 6.67 | 7 | 7.03 | 7 | 7.08 | 8 | 7.17 | 8 |
| CBSO | 8.93 | 10 | 9.17 | 10 | 6.58 | 7 | 7.67 | 10 |
| MSECBSO | 1.43 | 1 | 1.90 | 1 | 1.08 | 1 | 1.25 | 1 |
Table 9. Parameter search space of the forecasting model.
| Parameter Name | Parameter Search Space |
|---|---|
| Number of neighbors K | [1, 50] |
| Distance order p | [1, 5] |
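The parameter names in Table 9 (number of neighbors K, distance order p) suggest a k-nearest-neighbour model with a Minkowski distance, though that is an assumption here. The sketch below tunes both parameters over the listed ranges with a simple random search on hypothetical toy data; all names and data are illustrative, and the paper presumably uses MSECBSO itself as the tuner:

```python
import random
from collections import Counter

def minkowski(a, b, p):
    """Minkowski distance of order p (p = 1 Manhattan, p = 2 Euclidean)."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def knn_predict(train, query, k, p):
    """Majority vote among the k nearest training points."""
    nearest = sorted(train, key=lambda xy: minkowski(xy[0], query, p))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def tune(train, val, trials=50, seed=0):
    """Random search over K in [1, 50] and p in [1, 5] (Table 9's ranges)."""
    rng = random.Random(seed)
    best_params, best_acc = None, -1.0
    for _ in range(trials):
        k = rng.randint(1, min(50, len(train)))
        p = rng.randint(1, 5)
        acc = sum(knn_predict(train, x, k, p) == y for x, y in val) / len(val)
        if acc > best_acc:
            best_params, best_acc = (k, p), acc
    return best_params, best_acc

# Tiny separable toy set (hypothetical): class 0 near the origin, class 1 near (5, 5).
train = [((0.1 * i, 0.1 * i), 0) for i in range(10)] + \
        [((5 + 0.1 * i, 5 + 0.1 * i), 1) for i in range(10)]
val = [((0.5, 0.4), 0), ((5.2, 5.3), 1)]
(k, p), acc = tune(train, val)
print(f"best K={k}, p={p}, accuracy={acc:.2f}")
```

In the paper's setting, the same search space would be explored by the optimizer under study rather than by random search; the objective being maximized would be a validation metric on the bankruptcy data.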
Zhang, Y.; Yang, X. Multi-Strategy Enhanced Connected Banking System Optimizer for Global Optimization and Corporate Bankruptcy Forecasting. Mathematics 2026, 14, 618. https://doi.org/10.3390/math14040618
