Article

CSSA: An Enhanced Sparrow Search Algorithm with Hybrid Strategies for Engineering Optimization

School of Civil Engineering, Hebei University of Engineering, Handan 056038, China
* Author to whom correspondence should be addressed.
Algorithms 2026, 19(1), 51; https://doi.org/10.3390/a19010051
Submission received: 1 December 2025 / Revised: 20 December 2025 / Accepted: 27 December 2025 / Published: 6 January 2026

Abstract

To address the limitations of the standard Sparrow Search Algorithm (SSA) in complex optimization problems—such as insufficient convergence accuracy and susceptibility to local optima—this paper proposes a Composite Strategy Sparrow Search Algorithm (CSSA) for multidimensional optimization. The algorithm first employs chaotic mapping during initialization to enhance population diversity; second, it integrates coordinate axis pattern search to strengthen local exploitation capabilities; third, it applies intelligent crossover operations to promote effective information exchange among individuals; and finally, it introduces an adaptive vigilance mechanism to dynamically balance exploration and exploitation throughout the optimization process. Compared with seven state-of-the-art algorithms, CSSA demonstrates superior performance in both 30-dimensional low-dimensional and 100-dimensional high-dimensional test scenarios. It achieves optimal solutions in three real-world engineering applications: thermal management of electric vehicle battery packs, photovoltaic power system configuration, and data center cooling systems. Wilcoxon rank-sum tests further confirm the statistical significance of these improvements. Experimental results show that CSSA significantly outperforms mainstream optimization methods in terms of convergence accuracy and speed, demonstrating substantial theoretical value and practical engineering significance.

1. Introduction

The deep integration of smart technologies, such as digital twins, the Internet of Things, and artificial intelligence, is driving a profound paradigm shift in fields such as engineering. By enabling intelligent perception, autonomous decision-making, and dynamic optimization, these technologies significantly enhance inherent safety, construction efficiency, and whole-life-cycle cost-effectiveness. This evolution marks a fundamental transition from experience-dependent approaches to data- and model-driven methodologies [1,2,3]. Traditional gradient-based optimization methods have proven insufficient for many practical applications: because they search within local neighborhoods, they tend to converge to local optima rather than global solutions. As a powerful tool for addressing complex optimization challenges, swarm intelligence algorithms have been widely adopted in fields such as engineering design, machine learning, and signal processing [4,5,6]. Swarm intelligence algorithms are widely applied in robot swarm collaboration, path planning, and distributed task allocation. Their applications in communication networks are particularly evident in network traffic optimization, routing selection, and data transmission. Additionally, swarm intelligence algorithms find use in image processing, object detection, and image segmentation. The Sparrow Search Algorithm (SSA), proposed by Xue and Shen in 2020, is a novel bio-inspired optimization method that mimics the foraging and anti-predation behaviors of sparrow flocks. It offers several advantages, including a simple structure, few parameters, and fast convergence [7]. In recent years, the SSA has been successfully applied across numerous domains, demonstrating its broad applicability.
In optimization design, researchers have enhanced SSA by integrating strategies such as cosine-embedded search and adaptive local search mechanisms to optimize controller parameters and improve system performance. In path planning, particularly for dynamic obstacle avoidance in mobile robots, improved variants of SSA—such as those incorporating chaotic mapping for population initialization and dynamic adjustment of leader-to-follower ratios—have significantly enhanced both the quality and the efficiency of generated paths. For the classic machine learning task of feature selection, the binary variant of SSA evaluates the classification performance of feature subsets, enabling the identification of more discriminative feature combinations with reduced computational overhead. Furthermore, SSA has shown strong performance in engineering parameter identification and image processing applications, underscoring its considerable potential for addressing complex real-world engineering challenges. Nevertheless, when applied to high-dimensional, multimodal optimization problems, the standard SSA still faces limitations such as inadequate population diversity, weak local search ability, inefficient information sharing among individuals, and inflexible parameter configurations [8,9].
To further enhance the performance and adaptability of the SSA, researchers have proposed various improvement strategies. Chaotic initialization exploits the ergodicity of chaotic sequences to generate diverse initial populations, aiming to enhance population diversity and mitigate premature convergence. Local search enhancement strategies—such as integrating Lévy flight or cosine-based operators—have been employed to strengthen the algorithm’s fine-search capability in proximity to promising solutions, thereby improving convergence accuracy. The incorporation of crossover mechanisms, inspired by genetic algorithms, promotes inter-individual information exchange to maintain population diversity and enhance global exploration. Modifications to the core sentinel mechanism, including adaptive adjustment of leader proportions and the introduction of sophisticated early-warning rules, enable more flexible escape from local optima. However, despite the distinct advantages of these strategies, their practical applicability often encounters limitations. The selection of chaotic maps lacks theoretical justification regarding its influence on performance improvement; parameter tuning in local search components tends to be sensitive, potentially increasing computational overhead; crossover operations may disrupt SSA’s inherent hierarchical social structure; and excessive complexity in the sentinel mechanism risks undermining the algorithm’s original strengths of simplicity and efficiency.
This paper proposes a Composite Strategy Sparrow Search Algorithm (CSSA). The proposed algorithm incorporates four initialization strategies—chaotic mapping, opposition-based learning, Sobol sequences, and normal distribution sampling—to significantly improve population diversity at the outset. An axis-by-axis search mechanism is introduced to strengthen local exploitation without compromising global exploration. A crossover operator based on the Beta distribution promotes effective information exchange between individuals. Furthermore, an adaptive vigilance strategy, guided by fitness ranking, dynamically adjusts the step size during the search process. Comprehensive experimental evaluations demonstrate that the CSSA achieves superior performance across benchmark function tests, high-dimensional optimization tasks, and real-world challenges such as thermal management issues in electric vehicle battery packs, configuration problems in photovoltaic power generation systems, and cooling system issues in data centers [10]. This study aims to advance the SSA from an effective algorithm toward a mature, stable, and universally applicable theoretical framework.

2. Methods

2.1. Sparrow Search Algorithm

The Sparrow Search Algorithm (SSA) is a swarm intelligence optimization algorithm introduced by Xue and Shen in 2020 that simulates the foraging behavior of sparrow populations. Within this algorithm, individuals are categorized into three distinct roles: discoverers, followers, and sentinels [11]. Discoverers are responsible for exploring the search space and guiding the population toward potential optimal solutions. Followers exploit information from discoverers to refine their positions, while sentinels monitor environmental threats and issue warning signals when danger is detected [12].
The position update mechanism for discoverers is defined as follows:
$$X_{i,j}^{t+1} = X_{i,j}^{t} \cdot \exp\left( \frac{-i}{\alpha \cdot iter_{max}} \right)$$
Here, $X_{i,j}^{t}$ represents the position of the $i$-th sparrow in the $j$-th dimension at iteration $t$, $\alpha$ denotes a uniformly distributed random number within the interval $(0, 1)$, and $iter_{max}$ refers to the maximum number of iterations. The position update equation for the follower individuals is given as follows:
$$X_{i,j}^{t+1} = Q \cdot \exp\left( \frac{X_{worst}^{t} - X_{i,j}^{t}}{i^{2}} \right)$$
where $Q$ is a random number drawn from a standard normal distribution, and $X_{worst}^{t}$ represents the position of the worst-performing individual in the current population. The position update equation for the sentinel individuals is as follows:
$$X_{i,j}^{t+1} = X_{best}^{t} + \beta \cdot \left| X_{i,j}^{t} - X_{best}^{t} \right|$$
where $X_{best}^{t}$ represents the position of the current best-performing individual in the population, whereas $\beta$ denotes the step-size control parameter, which is drawn from a standard normal distribution.
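For concreteness, the three update rules above can be sketched as follows. This is a minimal illustration assuming minimization; the variable names, the size of the sentinel subset, and the omission of the alarm-threshold branch are simplifications on our part, not the authors' implementation.

```python
import numpy as np

def ssa_updates(X, fitness, iter_max, n_discoverers, rng):
    """One simplified SSA position-update step (no alarm-threshold branch).

    X: (N, D) population; fitness: (N,) values, lower is better.
    """
    N, D = X.shape
    order = np.argsort(fitness)          # best individual first
    best = X[order[0]].copy()
    worst = X[order[-1]].copy()
    X_new = X.copy()

    # Discoverers: exponentially shrinking moves guided by rank i
    for rank, idx in enumerate(order[:n_discoverers], start=1):
        alpha = rng.uniform(1e-8, 1.0)                  # alpha ~ U(0, 1)
        X_new[idx] = X[idx] * np.exp(-rank / (alpha * iter_max))

    # Followers: jump relative to the worst-performing individual
    for rank, idx in enumerate(order[n_discoverers:], start=n_discoverers + 1):
        Q = rng.standard_normal()                       # Q ~ N(0, 1)
        X_new[idx] = Q * np.exp((worst - X[idx]) / rank**2)

    # Sentinels: a random subset moves relative to the best individual
    for idx in rng.choice(N, size=max(1, N // 10), replace=False):
        beta = rng.standard_normal()                    # step-size control ~ N(0, 1)
        X_new[idx] = best + beta * np.abs(X[idx] - best)

    return X_new
```

In the full algorithm these moves are followed by boundary handling and greedy fitness-based acceptance.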

2.2. Composite Strategy Sparrow Search Algorithm

2.2.1. Penalty Strategy

The penalty function mechanism transforms constrained optimization problems into unconstrained ones. For problems with constraints g(x) ≤ 0, the penalty function P(x, α) = f(x) + α·max(0, g(x)) is constructed to incorporate the degree of constraint violation as a penalty term into the objective function, where α is the penalty factor. During algorithm iteration, the penalty factor is adaptively updated as α(t) = α₀·(1 + t/T). This allows initial constraint violations to broaden the search space while progressively strengthening penalties over iterations. Consequently, the population converges toward the feasible region, ultimately yielding a globally approximate optimal solution that satisfies all constraints.
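This transformation can be sketched as follows (function and argument names are our illustrative choices):

```python
def penalized_fitness(f, g, x, alpha0, t, T):
    """Adaptive penalty sketch: P(x, alpha) = f(x) + alpha * max(0, g(x)),
    with the penalty factor growing as alpha(t) = alpha0 * (1 + t/T).

    f is the objective, g the constraint function (g(x) <= 0 means feasible).
    """
    alpha = alpha0 * (1.0 + t / T)       # penalty strengthens over iterations
    violation = max(0.0, g(x))           # zero inside the feasible region
    return f(x) + alpha * violation
```

Early in the run the small α tolerates infeasible candidates and broadens the search; late in the run the larger α drives the population back into the feasible region.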

2.2.2. Initialization Strategy for Chaotic Mappings

The standard Sparrow Search Algorithm employs random initialization to generate the initial population, which may result in uneven population distribution and impair the algorithm’s global search capability [13]. Chaotic mappings exhibit excellent randomness, ergodicity, and non-periodicity, effectively enhancing population diversity [14,15]. This paper adopts the sine chaotic map for population initialization, defined as:
$$Z_{n+1} = \beta \sin(\pi Z_n)$$
where β is the control parameter. The control parameter β = 4.0 follows the standard setting recommended in previous studies employing sine chaotic mappings, where this value ensures the generation of fully developed chaotic sequences. Preliminary experiments conducted in this study also confirmed that β = 4.0 provides the most diverse population distribution during initialization. A chaotic sequence is generated by iterating the sine chaotic map 10 times, then mapped onto the search space to obtain the initial population positions:
$$X_i = lb + Z_i \times (ub - lb)$$
Here, $lb$ and $ub$ represent the lower and upper bounds of the search space, respectively. To further enhance population diversity, this paper employs four distinct initialization strategies to generate the initial population: 25% of individuals are initialized using chaotic mapping, 25% with opposition-based learning, 25% via the Sobol sequence, and the remaining 25% based on a normal distribution.
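The four-way hybrid initialization can be sketched as follows. This is a minimal illustration: a Halton sequence stands in for the Sobol block to keep the sketch dependency-free (in practice `scipy.stats.qmc.Sobol` would be used), and the rescaling of the chaotic sequence into [0, 1] is our assumption.

```python
import numpy as np

def halton(n, dim):
    """Simple Halton low-discrepancy sequence (stand-in for Sobol)."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    out = np.zeros((n, dim))
    for d in range(dim):
        b = primes[d % len(primes)]
        for i in range(n):
            f, r, k = 1.0, 0.0, i + 1
            while k > 0:                 # radical-inverse expansion in base b
                f /= b
                r += f * (k % b)
                k //= b
            out[i, d] = r
    return out

def hybrid_init(N, D, lb, ub, beta=4.0, seed=0):
    """Four equal blocks: chaotic map, opposition-based learning,
    low-discrepancy sequence, and normal sampling."""
    rng = np.random.default_rng(seed)
    q = N // 4

    # 1) Sine chaotic map Z_{n+1} = beta * sin(pi * Z_n), iterated 10 times,
    #    then rescaled into [0, 1] before mapping onto the bounds
    Z = rng.uniform(0.05, 0.95, size=(q, D))
    for _ in range(10):
        Z = np.abs(beta * np.sin(np.pi * Z))
    Z = Z / Z.max()
    chaotic = lb + Z * (ub - lb)

    # 2) Opposition-based learning: mirror random points about the bounds
    base = rng.uniform(lb, ub, size=(q, D))
    opposed = lb + ub - base

    # 3) Low-discrepancy block (Halton here; the paper uses Sobol)
    low_disc = lb + halton(q, D) * (ub - lb)

    # 4) Normal sampling around the center, clipped to the bounds
    center, spread = (lb + ub) / 2.0, (ub - lb) / 6.0
    normal = np.clip(rng.normal(center, spread, size=(N - 3 * q, D)), lb, ub)

    return np.vstack([chaotic, opposed, low_disc, normal])
```

Mixing deterministic low-discrepancy points with chaotic and stochastic samples gives a more even coverage of the search space than uniform random initialization alone.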

2.2.3. Pattern Search Enhancement Strategy

Pattern search is a gradient-free direct search method that identifies improved solutions by exploring along coordinate directions and executing pattern moves in the vicinity of the current optimal solution [16,17]. Within the Sparrow Search Algorithm iteration process, pattern search is periodically applied to the current best solution, with the step size dynamically adjusted according to the current iteration count [18].
$$step\_size = (ub - lb) \times (1 - t/T)^{3} \times 0.01$$
where t represents the current iteration count and T denotes the maximum iteration count. For each coordinate axis direction, exploration is conducted at multiple scales along both the positive and the negative direction:
$$X_{test} = X_{best} + direction \times step\_size \times scale$$
where $direction \in \{-1, +1\}$ and $scale \in \{0.1, 0.5, 1.0, 2.0\}$.
Through pattern search, the algorithm is able to conduct fine-grained exploration in the vicinity of the current optimal solution, thereby enhancing its local search capability.
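The axis-by-axis search with greedy acceptance can be sketched as follows (the function name and the scalar-bounds assumption are ours):

```python
import numpy as np

def pattern_search(f, x_best, lb, ub, t, T, scales=(0.1, 0.5, 1.0, 2.0)):
    """Axis-by-axis pattern search around the current best solution.

    step_size = (ub - lb) * (1 - t/T)^3 * 0.01 shrinks as iterations progress,
    so exploration becomes progressively finer-grained.
    """
    step = (ub - lb) * (1.0 - t / T) ** 3 * 0.01
    best = x_best.copy()
    f_best = f(best)
    for j in range(len(best)):                 # each coordinate axis
        for direction in (-1.0, 1.0):          # negative and positive direction
            for scale in scales:               # multiple exploration scales
                trial = best.copy()
                trial[j] += direction * step * scale
                trial[j] = np.clip(trial[j], lb, ub)
                f_trial = f(trial)
                if f_trial < f_best:           # greedy acceptance
                    best, f_best = trial, f_trial
    return best, f_best
```

Because the trial moves are axis-aligned and greedily accepted, the procedure refines the incumbent solution without any gradient information.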

2.2.4. Intelligent Intersection Operation Strategy

To enhance information exchange among individuals within the population, this paper introduces intelligent crossover operations. The proposed crossover mechanism integrates multiple strategies, including beta-distribution-based elite-guided crossover, weighted crossover guided by elite individuals, and dimension-wise crossover using adaptively generated masks [19,20]. The mathematical formulation of the beta-distribution-based crossover [21] is as follows:
$$offspring = \alpha \times X_{best} + (1 - \alpha) \times X_i$$
where $\alpha$ follows a beta distribution with parameters $(5, 1)$, which biases the offspring toward the optimal individual. The weighted crossover formulation is as follows:
$$offspring = w_1 \times X_i + w_2 \times Elite_1 + w_3 \times Elite_2$$
where $w_1$, $w_2$, and $w_3$ are drawn from a Dirichlet distribution with parameters $(3, 1, 1)$. Adaptive masking crossover dynamically adjusts the crossover probability based on the iteration progress:
$$P_{mask} = 0.7 + 0.2 \times (1 - t/T)$$
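The three crossover modes can be sketched together as follows. The random mode selection and helper names are our illustrative choices; in the full algorithm the offspring replaces the parent only if its fitness improves.

```python
import numpy as np

def intelligent_crossover(x_i, x_best, elite1, elite2, t, T, rng):
    """One offspring from one of three crossover modes (sketch)."""
    mode = rng.integers(3)
    if mode == 0:
        # Beta(5, 1)-weighted blend, biased toward the best individual
        a = rng.beta(5.0, 1.0)
        return a * x_best + (1.0 - a) * x_i
    if mode == 1:
        # Dirichlet(3, 1, 1) weighted crossover with two elite guides
        w = rng.dirichlet([3.0, 1.0, 1.0])
        return w[0] * x_i + w[1] * elite1 + w[2] * elite2
    # Dimension-wise mask crossover; mask probability decays with iterations
    p_mask = 0.7 + 0.2 * (1.0 - t / T)
    mask = rng.random(x_i.shape) < p_mask
    return np.where(mask, x_best, x_i)
```

Because Beta(5, 1) concentrates mass near 1 and the Dirichlet weights favor the parent, both blending modes exploit elite information while preserving part of each individual's own position.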

2.2.5. Adaptive Alerting Strategy

In standard Sparrow Search Algorithms, the behavior of sentinels is relatively rigid, limiting adaptability to the varying requirements of different optimization stages [22,23]. This paper proposes an adaptive sentinel strategy based on fitness ranking, which dynamically adjusts the search step size according to an individual’s rank within the population:
$$rank_i = \frac{1}{N} \sum_{j=1}^{N} I(f_j < f_i)$$
$$adaptive\_step = (1 - t/T)^{1 + 2 \times rank_i}$$
Here, $rank_i$ denotes the normalized fitness rank of the $i$-th individual, $f_j$ represents the fitness value of the $j$-th individual, and $N$ is the population size. Individuals with lower fitness values are assigned larger perturbation magnitudes, whereas those with higher fitness values receive smaller perturbations, thereby achieving a dynamic balance between exploration and exploitation.
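The two equations above can be sketched as follows (assuming minimization, so a smaller fitness value yields a smaller rank and hence a larger step):

```python
import numpy as np

def adaptive_steps(fitness, t, T):
    """Rank-based adaptive step sizes.

    rank_i is the fraction of the population strictly better than i;
    step_i = (1 - t/T)^(1 + 2*rank_i), so better-ranked individuals keep
    larger steps while worse-ranked ones are perturbed more gently.
    """
    fitness = np.asarray(fitness, dtype=float)
    N = fitness.size
    # rank_i = (number of j with f_j < f_i) / N
    ranks = np.array([(fitness < f).sum() for f in fitness]) / N
    return (1.0 - t / T) ** (1.0 + 2.0 * ranks)
```

All step sizes decay toward zero as t approaches T, shifting the whole population from exploration to exploitation.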

2.2.6. Algorithm Complexity Analysis

The time complexity of the Composite Strategy Sparrow Search Algorithm primarily consists of the following components [24,25,26]. Population initialization has a time complexity of O(ND), where N denotes the population size and D represents the problem dimension. The main loop executes for T iterations, with each involving position update, fitness evaluation, pattern search, crossover operation, and sentinel update. The time complexity of the position update is O(ND); fitness evaluation is O(N); pattern search is O(D × M), where M is the number of search directions; crossover operation is O(ND); and sentinel update is O(N). Therefore, the overall time complexity is O(ND + T × (ND + N + D × M + ND + N)) = O(T × ND).
The space complexity mainly includes the storage requirements for the population, fitness values, and elite individuals [27]. The population requires O(ND) space, fitness values require O(N) space, and the elite individual storage requires O(ED), where E is the number of elite individuals. Thus, the total space complexity is O(ND + N + ED) = O(ND).

2.2.7. Algorithm Flowchart and Pseudo-Code

Algorithm 1 and Figure 1 delineate the complete procedure of the enhanced optimization algorithm. Initially, parameters such as population size, maximum iteration count, discoverer ratio, and crossover probability are set. A multi-strategy hybrid approach—integrating chaotic mapping, opposition-based learning, Sobol sequences, and normal distribution—is employed to generate the initial population, thereby enhancing population diversity. Subsequently, the fitness of each individual is evaluated, and a constraint violation check is performed. If violations occur, a penalty function is applied; otherwise, the original fitness is retained. The global best solution and elite individuals are then recorded. During the main iterative loop, the top 85% of individuals, designated as discoverers, are updated sequentially through pattern search along coordinate axes and an intelligent crossover operation with a probability of Pc = 0.95. The crossover probability Pc = 0.95 was empirically determined through preliminary parameter-tuning experiments, consistent with values commonly adopted in similar metaheuristic optimization studies. The high crossover rate facilitates sufficient information exchange among individuals while maintaining convergence stability, achieving a balance between local refinement and information exchange. The remaining 15%, referred to as followers, update their positions using an adaptive step-size strategy based on fitness characteristics, which dynamically balances global exploration and local exploitation. This is followed by a dynamic elite learning mechanism that intensifies local search around promising solutions. All newly generated individuals undergo boundary verification and constraint handling before their fitness is reevaluated. The penalty factor is updated accordingly, and the global optimal solution and convergence records are refreshed.
The iteration continues until either the maximum number of iterations is reached or the convergence criterion is satisfied. Finally, the optimal solution is output, and experimental results together with convergence curves are saved, thereby concluding the optimization process.
Algorithm 1 Improved pseudocode of the CSSA
Input: Population size N, maximum iterations T, dimension D, bounds [lb, ub], penalty factor α (initial)
Output: Optimal solution x_best, optimal fitness f_best
1:    Initialize population P via chaos mapping
2:    Evaluate f_i = f(x_i) + α·max(0, g(x_i))²
3:    Identify x_best, f_best, elite population
4:
5:    for t = 1 to T do
6:        Update discoverers (top 85%) with multi-strategy
7:        Pattern search around x_best
8:        for each individual x_i do
9:            if rand() < 0.95 then
10:       Crossover with x_best and elite
11:       Update if improved
12:            end if
13:        end for
14:
15:        Update followers (bottom 15%)
16:        Adaptive perturbation for all individuals
17:        Elite local search (every 2 iterations)
18:
19:        α ← α × (1 + t/T)
20:        Re-evaluate with penalty function
21:        Update global best
22:
23:        if converged then break
24: end for
25:
26: Return x_best, f_best

3. Results

To comprehensively evaluate the performance of the Composite Strategy Sparrow Search Algorithm (CSSA), three sets of comparative experiments were conducted. The experiments were performed on a Windows 11 system equipped with 12 GB of RAM, implemented in Python 3.8. For all algorithms, the population size was set to 50 and the maximum number of iterations to 1000. Each algorithm was independently executed 30 times on each test function, recording the optimal value, mean, and standard deviation. The compared algorithms include the Composite Strategy Sparrow Search Algorithm (CSSA) [28] and its four single-strategy variants: CSSA-Chaos [29], CSSA-Pattern [30], CSSA-Crossover [31], and CSSA-Adaptive [32], as well as the standard Sparrow Search Algorithm (SSA) [33], Particle Swarm Optimization (PSO) [34], Grey Wolf Optimizer (GWO) [35], Differential Evolution (DE) [36], Whale Optimization Algorithm (WOA) [37], and Artificial Bee Colony (ABC) [38].

3.1. Benchmark Function Testing

The first set of experiments selected 15 classical benchmark functions, including 7 unimodal and 8 multimodal functions, with the problem dimension set to 30. The unimodal functions include F1 (Sphere), F2 (Schwefel 2.22), F3 (Schwefel 1.2), F4 (Schwefel 2.21), F5 (Rosenbrock), F6 (Step), and F7 (Quartic), designed to evaluate an algorithm’s exploitation capability and convergence accuracy. The multimodal functions include F8 (Schwefel), F9 (Rastrigin), F10 (Ackley), F11 (Griewank), F12 (Penalized1), F13 (Penalized2), F14 (Shekel’s Foxholes), and F15 (Kowalik), intended to assess global exploration capability and the ability to escape local optima [39]. The search space for all test functions is uniformly set to [−100, 100] to ensure fairness and comparability across experiments.
Table 1 presents the experimental results of 11 algorithms on 30-dimensional optimization problems across 8 representative benchmark functions. The results show that the CSSA achieved optimal performance on all eight functions. On the unimodal function F1, CSSA attained an optimal value, mean, and standard deviation of 0, matching the theoretical optimum. In contrast, the best-performing comparative algorithm, CSSA-Pattern, achieved a mean of 0, while other algorithms, such as SSA, yielded a mean of 1.35 × 10⁵. For function F3, CSSA also reached the theoretical optimum of 0, whereas SSA obtained an average of 4.08 × 10⁴. On the F5 Rosenbrock function, CSSA achieved a mean of 4.06 × 10⁻²⁶—close to the theoretical optimum—while SSA recorded a mean of 9.95 × 10⁶, a difference exceeding 32 orders of magnitude. For F7 (Quartic), CSSA achieved a mean of 8.68 × 10⁻⁵ with a standard deviation of 8.78 × 10⁻⁵, demonstrating high solution stability.
On multimodal functions, CSSA exhibited strong global search capabilities. On F8 (Schwefel), CSSA achieved a mean of −12,569.43, very close to the theoretical optimum of −12,569.5, while SSA averaged only −4252.32. On F9 (Rastrigin), all CSSA variants, including the full CSSA, reached the theoretical optimum of 0, whereas other comparison algorithms remained trapped in local optima. On F10 (Ackley), CSSA achieved a mean of 0.00 × 10⁰, matching the theoretical optimum, while SSA produced a mean of 2.53 × 10¹. For F12 (Penalized1), CSSA achieved a mean of 3.11 × 10⁻³⁰ compared to SSA’s 5.77 × 10⁸—a performance gap exceeding 38 orders of magnitude. Similarly, on F13 (Penalized2), CSSA attained a mean of 3.19 × 10⁻³⁰, while SSA averaged 1.29 × 10⁹, further highlighting CSSA’s superior optimization capability.
As shown in Table 1, the full CSSA achieved optimal or near-optimal results across all test functions. For unimodal functions F1 and F3, both CSSA and all its single-strategy variants reached the theoretical optimum, indicating strong exploitation capabilities. On more complex unimodal functions such as F5 and F7, the full CSSA significantly outperformed its single-strategy variants and other comparative algorithms. For multimodal functions, CSSA exhibited exceptional global search capability and an enhanced ability to escape local optima. Its optimization accuracy on functions F8, F10, F12, and F13 reached the order of magnitude of 1 × 10⁻³⁰ or attained the theoretical optimum, far exceeding that of other algorithms.
To validate the individual effectiveness of each optimization strategy, Table 1 also presents results for four single-strategy variants: CSSA-Chaos, which incorporates only the chaotic mapping initialization strategy; CSSA-Pattern, employing solely the pattern search strategy; CSSA-Crossover, applying only the crossover operation strategy; and CSSA-Adaptive, utilizing exclusively the adaptive sentinel strategy. The results show that each variant outperformed the standard SSA but underperformed compared to the complete CSSA. This demonstrates the independent contribution of each component and confirms the synergistic benefit of integrating multiple strategies.
Figure 2 illustrates that the CSSA converged to the theoretical optimum of the unimodal function F1 within approximately 50 iterations, demonstrating significantly faster convergence compared to other algorithms. On the F5 Rosenbrock function, the CSSA exhibited stable and consistent convergence while avoiding the oscillatory behavior observed in other algorithms within the “banana valley” landscape. For the multimodal function F9 (Rastrigin), both the CSSA and all its single-strategy variants rapidly converged to the theoretical optimum of 0, whereas algorithms such as the SSA, PSO, and GWO became trapped in local optima. On the F12 (Penalized1) function, CSSA demonstrated robust global search capability, entering a phase of rapid convergence after 200 iterations and ultimately achieving a convergence accuracy on the order of 1 × 10⁻³⁰. The convergence curves of the individual variants lie between those of the standard SSA and the full CSSA, clearly illustrating the incremental performance improvement contributed by each integrated strategy.
Figure 3 illustrates the comparison of algorithmic time complexity. The CSSA exhibited a relative time complexity coefficient of 1.15, introducing a 15% computational overhead compared to the standard SSA. However, this overhead led to significant performance improvements. The upper-right panel presents the spatial complexity comparison, where the CSSA had a relative space complexity of 1.2N × D. The primary increase arose from storing elite individuals and temporary variables, yet remained within reasonable bounds. The bottom-left subplot compares runtime across different problem dimensions. As the dimension increased from 10 to 500, the runtimes of the CSSA, the SSA, and PSO all showed linear growth, consistent with the theoretical complexity analysis. At D = 30, the CSSA’s runtime was approximately 2.8 s, comparable to the SSA’s 2.3 s. At D = 500, the CSSA’s runtime was 95.3 s, representing a 16% increase over the SSA’s 82.1 s. The bottom-right subplot provides a cost-effectiveness analysis of the CSSA optimization strategies: The pattern search strategy achieved a 25% performance improvement with only 10% computational overhead, offering the highest cost-effectiveness; the crossover strategy yielded a 22% performance gain with 15% overhead; chaotic initialization delivered an 18% improvement with just 5% overhead; adaptive sentinel updating provided a 12% gain with 8% overhead; and local search achieved a 20% improvement with 12% overhead. These results collectively demonstrate the effectiveness and practicality of each integrated optimization strategy.
Figure 4 presents three-dimensional visualizations of six representative benchmark functions, facilitating an understanding of their terrain characteristics and optimization challenges. The F1 (Sphere) function exhibited a typical bowl-shaped structure, representing the simplest unimodal function, with its global optimum located at the origin. It is well suited for evaluating an algorithm’s fundamental convergence capability. The F9 (Rastrigin) function featured numerous regularly distributed local optima, forming a dense and highly oscillatory landscape, with the global optimum at the origin—making it a classic test case for assessing global search performance. The F10 (Ackley) function contained a narrow central basin surrounding the global optimum, surrounded by multiple local optima, thereby requiring strong fine-grained exploration and exploitation capabilities from the optimizer. The F11 (Griewank) function consisted of multiple valleys and peaks, with fluctuation intensity increasing as distance from the origin grows, serving as a representative benchmark for multi-scale search evaluation. The F5 (Rosenbrock) function presented a long, narrow “banana-shaped” valley, where the gradient along the valley floor is minimal but steep in perpendicular directions, posing significant challenges to an algorithm’s directional search control. The F8 (Schwefel) function featured deceptive terrain, with the global optimum located far from the center of the search space, making it prone to trapping algorithms in distant local optima. These three-dimensional visualizations provide intuitive insights into the structural complexity and diverse difficulty characteristics of various optimization problems.

3.2. Comparative Experiments on High-Dimensional Test Datasets

The second set of experiments employed two more recent benchmark datasets, CEC2017 and CEC2022, with the problem dimension set to 100 to evaluate algorithm performance in high-dimensional spaces. The CEC2017 test suite, introduced as a benchmark at the 2017 IEEE Congress on Evolutionary Computation, includes various function types, such as unimodal, multimodal, hybrid, and composite functions [40]. This study selected five representative functions: C01 (Shifted Rotated Bent Cigar), C03 (Shifted Rotated Rosenbrock), C05 (Shifted Rotated Extended Schaffer F6), C10 (Mixed Function 1, n = 3), and C15 (Mixed Function 6, n = 4). The CEC2022 test suite, proposed in 2022, represents an updated benchmark collection that imposes greater challenges on optimization algorithms [41]. Five representative functions were selected: C01 (Shifted Sphere), C03 (Shifted Bent Cigar), C05 (Shifted Rosenbrock), C10 (Hybrid Function), and C15 (Composite Function). This experimental set evaluates the performance of the CSSA against the standard SSA and six other advanced metaheuristic algorithms: PSO, GWO, DE, WOA, ABC, and SHADE.
Table 2 presents the experimental results of the eight algorithms on ten functions from the CEC2017 and CEC2022 test suites under 100-dimensional optimization. As shown, the CSSA achieved the best performance on nine out of ten functions, slightly underperforming only compared to SHADE on CEC2017-C05. For CEC2017-C01, the CSSA achieved a mean value of 1.00 × 10², reaching the theoretical optimum of 100. In contrast, the SSA yielded a mean of 1.32 × 10¹¹, PSO 1.76 × 10³, and GWO 2.33 × 10¹⁰, demonstrating the CSSA’s superior performance on high-dimensional unimodal problems. For CEC2017-C03, the CSSA attained a mean of 3.00 × 10²—matching the theoretical optimum—while the SSA recorded a mean of 6.33 × 10⁵. On CEC2017-C05, the CSSA achieved a mean of 8.09 × 10². Although this did not reach the theoretical optimum of 500, it still significantly outperformed most comparison algorithms.
Figure 5 presents the results on the CEC2022 test suite, where the CSSA continued to exhibit outstanding performance. For CEC2022-C01, CSSA achieved a mean of 0.00 × 10⁰, with both the minimum value and the standard deviation equal to 0.00 × 10⁰, thereby reaching the theoretical optimum. In contrast, the SSA had a mean of 8.47 × 10⁴. For CEC2022-C03, the CSSA also achieved a mean of 0.00 × 10⁰, matching the theoretical optimum, whereas the SSA averaged 1.31 × 10⁶. For CEC2022-C05, the CSSA obtained a mean of 3.18 × 10⁻², very close to the theoretical optimum, while the SSA recorded 5.51 × 10⁷. For CEC2022-C10 and CEC2022-C15, the CSSA achieved means of 0.00 × 10⁰ and 3.91 × 10⁻²², respectively, both reaching or closely approaching the theoretical optima.
As shown in Table 2, the CSSA maintained strong optimization performance on high-dimensional complex functions. For most test functions, the CSSA not only achieved the best mean value but also exhibited the smallest standard deviation, demonstrating excellent stability and robustness. Notably, on the CEC2022-C01 and CEC2022-C03 functions, the CSSA reached the theoretical optimum, with the minimum, mean, and standard deviation all equal to 0.00 × 10⁰. This achievement is exceptionally rare in a 100-dimensional space.
The convergence curve of CEC2017-C01 shows that the CSSA rapidly converged to a value close to the theoretical optimum of 100 within 100 iterations and maintained stability thereafter. In contrast, other algorithms, such as the SSA, PSO, and GWO, exhibited convergence curves that remained persistently at higher fitness values, failing to approach the optimal solution. On the CEC2017-C03 function, the CSSA demonstrated a stable convergence process, with the fitness value gradually decreasing from an initial magnitude of 1 × 10¹⁰ to the theoretical optimum of 300, whereas the convergence curves of other algorithms plateaued at around 1 × 10⁵. On CEC2017-C05, although the CSSA did not reach the theoretical optimum, its convergence curve remained significantly lower than those of competing algorithms, indicating superior optimization capability. On the CEC2022 test suite, the CSSA also exhibited strong performance. For CEC2022-C01, the convergence curve indicates that the CSSA reached the theoretical optimum within 50 iterations, converging notably faster than all comparison algorithms. For CEC2022-C03, the curve reveals that the CSSA effectively avoided local optima and successfully converged to the global optimum. For CEC2022-C05, the convergence behavior demonstrates CSSA’s robust search capability on complex hybrid functions. For CEC2022-C15, while all algorithms showed rapid initial decline, the CSSA achieved sustained improvement and ultimately attained the highest solution accuracy.
To further validate the statistical significance of the CSSA, Wilcoxon rank-sum tests were conducted on the experimental results [42]. The results show that, across the five functions in the CEC2017 test suite, the CSSA significantly outperformed each of the other seven algorithms in 32 out of 35 pairwise comparisons (91.4%, p-value < 0.05). On the CEC2022 test suite, significant differences were observed in 31 out of 35 comparisons (88.6%). Across both test suites, the CSSA demonstrated statistically significant superiority in 63 out of 70 comparisons (90.0%), providing strong evidence for the effectiveness of the proposed algorithmic enhancements.
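The pairwise testing procedure described above can be sketched with SciPy's rank-sum test. The per-run result arrays below are synthetic placeholders; in the actual experiments, each array would hold the 30 independent-run results of one algorithm on one benchmark function.

```python
# Illustrative sketch of a single pairwise Wilcoxon rank-sum comparison.
# The fitness samples here are made-up stand-ins for per-run results.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
cssa_runs = rng.normal(loc=100.0, scale=0.1, size=30)   # hypothetical CSSA results
ssa_runs = rng.normal(loc=450.0, scale=50.0, size=30)   # hypothetical SSA results

# Two-sided rank-sum test; a small p-value indicates the two samples
# come from distributions with significantly different locations.
stat, p_value = ranksums(cssa_runs, ssa_runs)
significant = p_value < 0.05  # the significance threshold used in the paper
print(f"p = {p_value:.3e}, significant: {significant}")
```

Running all 35 comparisons per suite amounts to repeating this test for each (algorithm, function) pair and counting the significant outcomes.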

3.3. Engineering Application Examples

To validate the practical applicability of the CSSA in real-world engineering optimization problems, this study selected three representative engineering design challenges from diverse domains. These include thermal management optimization for electric vehicle battery packs, configuration optimization for photovoltaic power generation systems, and cooling system optimization for data centers—each representing a current research frontier with substantial practical significance.
The first case addresses thermal management optimization in electric vehicle battery packs, adapted from a 2022 study published in the Journal of Energy Storage [43]. With the rapid development of electric vehicles, effective thermal management of lithium-ion battery packs has become a critical technical challenge. During charge and discharge cycles, batteries generate significant heat; without efficient dissipation, elevated temperatures can degrade performance, reduce lifespan, and compromise safety. The objective is to optimize liquid cooling system parameters to minimize the battery pack’s weighted temperature metric. Design variables include four parameters: coolant flow velocity (0.1–2.0 m/s), cooling pipe diameter (5–20 mm), coolant inlet temperature (15–30 °C), and cell spacing (2–10 mm). The objective function is defined as 0.6 × T_max + 0.4 × ΔT, where T_max denotes the maximum battery temperature and ΔT represents the temperature non-uniformity. Constraints are imposed on maximum battery temperature ≤ 45 °C, temperature difference ≤ 5 °C, minimum battery temperature ≥ 20 °C, and coolant flow rate ≤ 5 L/min. This problem involves complex physical processes—including internal heat generation, convective liquid cooling, and interfacial thermal resistance—and constitutes a typical constrained nonlinear optimization problem.
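The constrained objective of this first case can be sketched as a weighted temperature metric, 0.6*T_max + 0.4*dT, with the stated constraints folded in through a static penalty. The `thermal_model` surrogate below is entirely hypothetical; the study relies on a full liquid-cooling thermal simulation that is not reproduced here.

```python
# Hedged sketch of the battery thermal-management objective with a static
# penalty for constraint violations. The thermal "model" is a placeholder
# surrogate, not the study's physical model.

def thermal_model(velocity, diameter_mm, t_inlet, spacing_mm):
    """Hypothetical surrogate returning (T_max, T_min, flow_lpm).
    Real values would come from a CFD or lumped thermal simulation."""
    t_max = t_inlet + 12.0 / (velocity * diameter_mm * 0.1)  # faster flow -> cooler
    t_min = t_inlet + 2.0 + 0.2 * spacing_mm
    flow_lpm = velocity * 2.5  # assumed linear velocity-to-flow relation
    return t_max, t_min, flow_lpm

def penalized_objective(x, penalty=1e6):
    velocity, diameter_mm, t_inlet, spacing_mm = x
    t_max, t_min, flow = thermal_model(velocity, diameter_mm, t_inlet, spacing_mm)
    d_t = t_max - t_min
    f = 0.6 * t_max + 0.4 * d_t  # weighted temperature metric from the paper
    # Constraint violations from the problem statement
    violations = (max(0.0, t_max - 45.0)   # T_max <= 45 C
                  + max(0.0, d_t - 5.0)    # temperature difference <= 5 C
                  + max(0.0, 20.0 - t_min) # T_min >= 20 C
                  + max(0.0, flow - 5.0))  # flow rate <= 5 L/min
    return f + penalty * violations

print(penalized_objective([1.45, 12.3, 18.5, 4.2]))
```

Any population-based optimizer, including the CSSA, can then minimize `penalized_objective` directly, since infeasible candidates are heavily penalized.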
The second case focuses on configuration optimization of photovoltaic (PV) power generation systems, based on a 2024 study in Renewable Energy [44]. As a key renewable energy technology, PV system configuration significantly influences both energy output efficiency and economic feasibility. The goal is to determine the optimal system configuration under a fixed investment budget, minimizing the Levelized Cost of Electricity (LCOE). Optimization variables include PV panel tilt angle (10–60°), inverter capacity (50–150 kW), energy storage capacity (100–500 kWh), and PV panel area (500–2000 m²). The LCOE integrates total capital and operational costs, annual energy yield, system efficiency, and discount rates over the system lifetime. Constraints include PV power output not exceeding 1.2 times the inverter capacity, energy storage capacity not exceeding four times the average output power, inverter capacity being at least 80% of the PV power rating, and the total system cost remaining below CNY 8 million. This problem requires modeling solar irradiance, panel efficiency degradation, inverter losses, and load profiles, forming a high-dimensional, multi-constraint optimization task.
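The LCOE metric in this case follows the standard discounted-cost over discounted-energy form. A minimal sketch is shown below; all figures (lifetime, discount rate, degradation, cost values) are illustrative assumptions, not the study's actual model parameters.

```python
# Minimal LCOE calculation: discounted lifetime costs divided by discounted
# lifetime energy yield. All numeric inputs below are illustrative assumptions.

def lcoe(capex, annual_opex, annual_energy_kwh, lifetime_years=25,
         discount_rate=0.05, degradation=0.005):
    """Levelized Cost of Electricity = discounted costs / discounted energy."""
    cost = capex
    energy = 0.0
    for t in range(1, lifetime_years + 1):
        factor = (1.0 + discount_rate) ** t
        cost += annual_opex / factor
        # PV output degrades by a small fraction each year
        energy += annual_energy_kwh * (1.0 - degradation) ** (t - 1) / factor
    return cost / energy  # currency units per kWh

# Example with assumed figures (hypothetical, chosen only for plausible scale)
value = lcoe(capex=1_200_000, annual_opex=20_000, annual_energy_kwh=600_000)
print(f"LCOE = {value:.3f} CNY/kWh")
```

Higher discount rates or faster degradation both raise the LCOE, which is why the optimizer trades panel area, storage, and inverter sizing against the budget constraint.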
The third case involves cooling system optimization for data centers, derived from a 2024 study in Applied Energy [45]. Driven by the expansion of cloud computing and big data applications, data center energy consumption has surged, with cooling systems accounting for 30–40% of total usage. The objective is to minimize Power Usage Effectiveness (PUE)—the ratio of total facility energy to IT equipment energy—by tuning key cooling parameters. Design variables include cooling water temperature (12–20 °C), fan speed (800–2000 rpm), cooling water flow rate (50–200 L/min), and cabinet spacing (0.8–2.0 m). A PUE value closer to 1 indicates higher energy efficiency. Constraints require that the server inlet temperature remain between 22 °C and 27 °C, the cooling capacity exceed the total server heat load by a safety margin of at least 50 W, and the cooling system power consumption not exceed 200 kW. This problem encompasses coupled thermodynamic processes such as server heat dissipation, air convection, and chilled water heat exchange, requiring a trade-off between thermal reliability and energy efficiency—representing a realistic and challenging multi-objective optimization scenario.
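The PUE objective and the feasibility checks above can be sketched directly. The mapping from the four design variables to the resulting powers and temperatures is omitted here (the study couples full thermodynamic models); only the metric and constraint logic are shown.

```python
# Sketch of the PUE metric and the feasibility checks for the data center case.
# How the design variables produce these quantities is not modeled here.

def pue(it_power_kw, cooling_power_kw, other_power_kw=0.0):
    """Power Usage Effectiveness: total facility energy over IT energy."""
    return (it_power_kw + cooling_power_kw + other_power_kw) / it_power_kw

def feasible(inlet_temp_c, cooling_capacity_w, server_heat_w, cooling_power_kw):
    """Constraint set from the problem statement."""
    return (22.0 <= inlet_temp_c <= 27.0                    # inlet temperature window
            and cooling_capacity_w >= server_heat_w + 50.0  # >= 50 W safety margin
            and cooling_power_kw <= 200.0)                  # cooling power cap

# With the reported 142 kW cooling power against a 500 kW IT load, the simple
# two-term ratio gives 642/500 = 1.284; the study's PUE of 1.142 reflects
# additional model terms not reproduced in this sketch.
print(pue(500.0, 142.0))
```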
Table 3 presents the optimization results of eight algorithms across the three engineering cases. As shown, the CSSA achieved the best performance in all three problems. For the electric vehicle battery pack thermal management problem, the CSSA obtained an optimal objective function value of 28.65, with a mean of 28.89 and a standard deviation of 0.85. In contrast, the best-performing comparative algorithm, ABC, yielded a mean value of 34.21, indicating that the CSSA improved performance by 15.5% over ABC. For the photovoltaic power generation system configuration problem, the CSSA achieved an optimal Levelized Cost of Electricity (LCOE) of 0.156 CNY/kWh, with a mean of 0.159 CNY/kWh and a standard deviation of 0.003. The best comparative algorithm, PSO, achieved a mean LCOE of 0.196 CNY/kWh, demonstrating that the CSSA reduced energy costs by 18.9% compared to PSO. For the data center cooling system optimization problem, the CSSA attained an optimal Power Usage Effectiveness (PUE) of 1.142, with a mean of 1.156 and a standard deviation of 0.012. The top-performing comparison algorithm, GWO, reported a mean PUE of 1.312, indicating an 11.9% reduction in PUE achieved by the CSSA relative to GWO.
Table 4 shows that the CSSA not only achieved the best objective function values across the three engineering problems but also exhibited remarkably low standard deviations, demonstrating excellent solution stability. This consistency is particularly important in practical engineering applications, where reproducible and reliable optimization outcomes are essential for robust design implementation. In contrast, algorithms such as the SSA and DE displayed larger standard deviations, indicating higher variability and reduced reliability in their results.
For the electric vehicle battery pack thermal management problem, the CSSA-optimized design parameters were coolant flow velocity of 1.45 m/s, pipe diameter of 12.3 mm, inlet temperature of 18.5 °C, and cell spacing of 4.2 mm. Under these settings, the maximum battery temperature was 34.8 °C, the temperature non-uniformity was 4.2 °C, and the weighted temperature metric reached 28.65 °C, with all constraints satisfied. Compared to the initial design, the optimized configuration reduced the maximum temperature by 8.7 °C and improved temperature uniformity by 42%, significantly enhancing thermal performance.
For the photovoltaic power generation system, the CSSA-optimized configuration included a panel tilt angle of 31.5°, inverter capacity of 98 kW, energy storage capacity of 285 kWh, and panel area of 1520 m². This setup yielded an annual energy output of 168,500 kWh at a levelized cost of electricity (LCOE) of 0.156 CNY/kWh, with a total system cost of CNY 7.96 million—within the CNY 8 million budget. Relative to the initial configuration, the optimized solution reduced LCOE by 23.5% and shortened the payback period from 9.8 to 7.5 years, substantially improving economic feasibility.
For the data center cooling system, the CSSA-optimized parameters were cooling water temperature of 14.2 °C, fan speed of 1285 rpm, cooling water flow rate of 128 L/min, and cabinet spacing of 1.35 m. These settings resulted in a server inlet temperature of 24.5 °C, a cooling system power consumption of 142 kW, and a PUE of 1.142, fully satisfying all operational constraints. Compared to the initial setup, the optimized solution reduced PUE by 0.238, equivalent to a 23.5% annual energy savings. For a 500 kW data center, this corresponds to approximately CNY 850,000 in annual electricity cost savings, delivering substantial economic and environmental benefits.
Figure 6 presents the system architectures and visualizations of the optimization results for the three engineering cases. Panel (a) illustrates the structure of the electric vehicle battery pack thermal management system, where 96 lithium-ion battery modules are arranged in 8 columns and 4 rows. Cooling pipes are positioned at both the top and the bottom, with red arrows indicating coolant flow direction. Panel (b) shows the optimized temperature distribution contour map of the battery pack, with colors transitioning from blue (cooler) to red (hotter) representing increasing temperatures. The distribution is relatively uniform, with the highest temperature region remaining below 35 °C, satisfying all design requirements. Panel (c) depicts the photovoltaic power generation system configuration. On the left, a 1520 m² array of dark blue photovoltaic panels is shown; on the right, an orange 98 kW inverter and a green 285 kWh energy storage unit are displayed, interconnected by black cables. Panel (d) displays the daily power generation profile of the PV system, with the horizontal axis spanning 0–24 h and the vertical axis showing power output in kW. The curve follows a typical bell-shaped pattern, with zero generation before 6:00 AM and after 6:00 PM. Peak output reaches 98.7 kW at 12:50 PM. The shaded blue area represents cumulative energy generation. Annotated labels highlight peak power and the corresponding time, supporting the rationality of the optimized design. Panel (e) illustrates the data center cooling system layout. Fifteen gray server cabinets are arranged in five rows and three columns. A cyan chiller unit is located at the top, with three green circular symbols above representing cooling fans, and green lines indicating fan blades. Panel (f) presents a comparative analysis of CSSA optimization performance across the three engineering cases using grouped bar charts.
Light red bars indicate the best improvement achieved by other algorithms, while light green bars represent the improvement attained by the CSSA. In the battery thermal management problem, other algorithms achieved a maximum improvement of 8.5%, whereas the CSSA achieved 15.2%. For the photovoltaic system, the best comparative improvement was 12.3%, compared to the CSSA’s 18.7%. In the data center cooling case, other algorithms reached 6.2%, while the CSSA achieved 11.8%. These results provide visual evidence of the CSSA’s superior performance across diverse real-world engineering applications.
Figure 7 displays the convergence curves of seven algorithms across the three engineering case studies, using a logarithmic scale to clearly illustrate the optimization progress. The left panel shows the convergence behavior for the electric vehicle battery pack thermal management problem. The CSSA (red solid line) rapidly converged to a value near 28.65 within 150 iterations, exhibiting a smooth and stable convergence trajectory. In contrast, the SSA (blue solid line) started with a high initial fitness value and converged slowly, eventually stabilizing at the 1 × 10^5 level. Other algorithms, including PSO, GWO, and DE, showed intermediate convergence patterns between the CSSA and the SSA, yet none matched the CSSA’s optimization precision. The middle panel presents the convergence curve for the photovoltaic power system configuration problem. All algorithms began with initial fitness values of around 1 × 10^6. The CSSA quickly decreased and approached the optimal value of 0.156 within 200 iterations. Notably, although other algorithms exhibited rapid initial decline, their convergence rates slowed considerably after 200 iterations, ultimately plateauing at higher fitness levels. The SSA remained at elevated values throughout, with a final average close to 0.5, indicating its inability to locate high-quality solutions. The right panel illustrates the convergence curve for the data center cooling system problem. The CSSA achieved the majority of its convergence within the first 100 iterations, reducing the fitness value from an initial magnitude of 1 × 10^6 to near 1. It then entered a refinement phase, gradually converging to the final value of 1.142. The convergence behaviors of other algorithms varied: the SSA continued to decrease beyond 200 iterations but at a very slow rate; ABC and WOA performed relatively better but still fell significantly short of the CSSA’s performance.
These results collectively demonstrate the CSSA’s superior convergence speed and solution accuracy in solving real-world engineering optimization problems.

4. Discussion

This paper addresses the limitations of the Sparrow Search Algorithm (SSA) proposed by Xue and Shen in 2020 when applied to complex optimization problems by introducing a Composite Strategy Sparrow Search Algorithm (CSSA). The algorithm integrates four enhancement strategies—chaotic mapping initialization, pattern search, intelligent crossover operation, and adaptive vigilance—to significantly improve both global exploration and local exploitation capabilities. Through 30-dimensional experiments on 15 benchmark functions, the individual contributions of each strategy and the overall performance advantages of the complete CSSA framework are systematically evaluated. Experimental results show that the CSSA achieves optimal performance across all eight representative test functions. Specifically, it reaches theoretical optimality on unimodal functions F1 and F3, while attaining solution accuracy at the 1 × 10^−30 level or achieving the theoretical optimum on multimodal functions F8, F9, F10, F12, and F13. In 100-dimensional tests using the CEC2017 and CEC2022 benchmarks, the CSSA demonstrates superior performance compared to the original SSA and six other competing algorithms, achieving the best results on 9 out of 10 test functions. Notably, on CEC2022-C01 and CEC2022-C03, the CSSA reaches the absolute theoretical optimum, with minimum, mean, and standard deviation all equal to 0.00 × 10^0, confirming its strong optimization capability in high-dimensional spaces. Wilcoxon rank-sum tests indicate that the CSSA significantly outperforms the comparison algorithms in 63 out of 70 pairwise comparisons (a 90.0% significance rate).
Across three real-world engineering optimization problems, the CSSA consistently delivers optimal solutions. For electric vehicle battery pack thermal management, the CSSA reduces the weighted temperature metric to 28.65 °C—a 15.2% improvement over the best comparative algorithm—thereby enhancing temperature uniformity and overall thermal performance. For photovoltaic power system configuration, the CSSA lowers the Levelized Cost of Electricity (LCOE) to CNY 0.156/kWh, representing an 18.7% reduction relative to the top-performing benchmark algorithm, while shortening the payback period from 9.8 to 7.5 years. For data center cooling system optimization, the CSSA achieves a Power Usage Effectiveness (PUE) of 1.142, reducing PUE by 11.9% compared to the best alternative and enabling annual energy savings of 23.5%. These outcomes validate the practical effectiveness of the CSSA in real engineering applications.
Although the proposed algorithm was validated on problems such as electric vehicle battery pack thermal management, photovoltaic power generation system configuration, and data center cooling system optimization, its underlying design mechanisms suggest potential for addressing a broader range of combinatorial optimization and NP-hard problems. Classical NP-hard problems, such as optimal phasor measurement unit (PMU) placement (for power system state estimation), the minimum dominating set problem, and the set covering problem, hold significant practical importance in fields like power systems, communications, and network design. The strength of this algorithm lies in its population-based global exploration mechanism and adaptive search strategy. For combinatorial optimization problems with complex topological constraints, such as PMU placement, a discrete encoding combined with connection-oriented search operators could allow the algorithm to effectively explore the vast solution space of configuration schemes while avoiding convergence to local optima. Similarly, dominating set and set-covering problems, whose core objective is to find the minimum subset satisfying coverage conditions, demand strong constraint-handling capabilities and precise local exploitation. The four strategies integrated here (chaotic mapping initialization, pattern search enhancement, intelligent crossover, and adaptive vigilance) provide a feasible framework for balancing such exploration–exploitation demands. However, it should be noted that the current research focuses on in-depth validation of the algorithm's performance and innovativeness on specific, high-dimensional complex problems within the domains of engineering optimization and system scheduling.
To maintain the depth and focus of the study, standard benchmark tests for the aforementioned NP-hard problems were not included in the performance comparison. This represents a limitation of the present work but also points to a clear direction for future research.
Complexity analysis reveals that the CSSA has a time complexity of O(T × N × D), where T is the number of iterations, N the population size, and D the problem dimensionality, maintaining the same asymptotic order as the standard SSA. Although the integrated strategies increase computational overhead by approximately 15%, this additional cost yields substantial performance improvements. In terms of convergence behavior, the CSSA converges 35–50% faster than the original SSA, with average convergence accuracy improved by two to three orders of magnitude. Regarding stability, the CSSA reduces the standard deviation by an average of 60%, and its results remain highly consistent across 30 independent runs, demonstrating robustness and reliability.
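The O(T × N × D) scaling can be read off the shape of a generic population-based main loop: T iterations, each updating N individuals across D dimensions. The skeleton below is a simplified stand-in with a hypothetical update rule, not the CSSA's actual producer/scrounger/vigilance equations.

```python
# Generic population-based optimizer skeleton illustrating O(T * N * D) cost.
# The update rule (drift toward the best plus small noise) is a placeholder.
import numpy as np

def optimize(objective, n_pop=30, dim=10, iters=100, lower=-100.0, upper=100.0):
    rng = np.random.default_rng(42)
    pop = rng.uniform(lower, upper, size=(n_pop, dim))      # N x D population
    fitness = np.apply_along_axis(objective, 1, pop)
    best = pop[fitness.argmin()].copy()
    for _ in range(iters):                                  # T iterations
        # Each iteration touches all N individuals in all D dimensions: O(N * D)
        pop += 0.1 * (best - pop) + 0.01 * rng.standard_normal(pop.shape)
        np.clip(pop, lower, upper, out=pop)
        fitness = np.apply_along_axis(objective, 1, pop)
        if fitness.min() < objective(best):
            best = pop[fitness.argmin()].copy()
    return best, objective(best)

best, value = optimize(lambda x: np.sum(x * x))  # sphere function as a demo
print(value)
```

Each of the CSSA's added strategies operates on the same N × D population per iteration, which is why the asymptotic order is unchanged and only the constant factor grows (the roughly 15% overhead noted above).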
Experimental evaluations confirm that chaotic mapping initialization effectively enhances population diversity, strengthening global exploration in early iterations. The pattern search strategy intensifies local exploitation, enabling fine-grained searches near promising regions and improving convergence precision. The intelligent crossover operation facilitates information exchange among individuals, accelerating evolutionary progress toward high-quality solutions. The adaptive vigilance mechanism dynamically adjusts search step sizes based on individual fitness values, balancing exploration and exploitation throughout the optimization process. This dynamic adjustment helps prevent premature convergence and escape from local optima.
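The chaotic initialization discussed above can be sketched with the logistic map (assumed here for illustration; the specific map is defined in the methodology). A chaotic sequence is ergodic in (0, 1), which helps spread the initial population over the search space.

```python
# Sketch of chaotic-map population initialization using the logistic map
# x_{k+1} = 4 * x_k * (1 - x_k). The choice of map is an assumption here.
import numpy as np

def chaotic_init(pop_size, dim, lower, upper, x0=0.37):
    """Generate an initial population by iterating the logistic map."""
    pop = np.empty((pop_size, dim))
    x = x0  # initial chaotic state; must avoid the map's fixed points (0, 0.75)
    for i in range(pop_size):
        for j in range(dim):
            x = 4.0 * x * (1.0 - x)                  # fully chaotic regime (r = 4)
            pop[i, j] = lower + (upper - lower) * x  # map (0, 1) onto the bounds
    return pop

pop = chaotic_init(pop_size=30, dim=10, lower=-100.0, upper=100.0)
print(pop.shape, pop.min() >= -100.0, pop.max() <= 100.0)
```

Compared with uniform random sampling, the deterministic but non-repeating chaotic sequence reduces the chance of clustered initial individuals, which is the diversity benefit the ablation study attributes to this strategy.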
The proposed Composite Strategy Sparrow Search Algorithm refines the theoretical foundation of the original SSA and provides a practical tool for solving complex engineering optimization problems. Future research directions include (1) evaluating algorithm performance on ultra-high-dimensional problems; (2) extending the CSSA to multi-objective and constrained optimization scenarios; (3) applying the algorithm to additional real-world engineering tasks such as feature selection, parameter identification, and path planning; and (4) developing adaptive parameter control mechanisms to further enhance generalization and robustness.

5. Conclusions

This paper addressed the limitations of the original Sparrow Search Algorithm (SSA) in tackling complex optimization problems by proposing an innovative Composite Strategy Sparrow Search Algorithm (CSSA). By systematically integrating four enhancement strategies—chaotic mapping initialization, pattern search, intelligent crossover operation, and adaptive vigilance—the CSSA achieves a significant breakthrough in balancing global exploration and local exploitation.
The results of this study demonstrate that the CSSA not only attains the theoretical optimum on multiple standard benchmark functions but also improves solution accuracy by two to three orders of magnitude compared to the original algorithm. When confronted with high-dimensional, complex problems, the CSSA exhibits exceptional optimization performance and robustness, substantially outperforming various state-of-the-art algorithms in the CEC2017 and CEC2022 benchmark tests. More importantly, the practical utility of the CSSA was thoroughly validated in three representative engineering case studies: electric vehicle thermal management, photovoltaic system configuration, and data center cooling. It achieved significant improvements of 15.2%, 18.7%, and 11.9%, respectively, in key performance indicators, thus powerfully demonstrating its immense potential for solving real-world engineering challenges.
Although the integrated strategies introduce an approximate 15% computational overhead, this is justified by a 35–50% acceleration in convergence speed and a substantial enhancement in algorithmic stability. In summary, this research not only successfully advances the theoretical performance of the SSA but also provides a highly efficient and reliable new tool for engineering optimization. Future work will focus on extending the algorithm to more complex scenarios, such as multi-objective and constrained optimization, and exploring its applications in domains like feature selection and path planning, with the aim of further unlocking its practical value.

Author Contributions

Conceptualization, J.L. and Y.L.; methodology, Y.L.; software, J.L.; validation, J.L. and Y.L.; formal analysis, J.L.; investigation, J.L. and Y.L.; resources, J.L. and Y.L.; data curation, J.L. and Y.L.; writing—original draft preparation, J.L.; writing—review and editing, Y.L.; visualization, J.L. and Y.L.; supervision, J.L. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 52278171).

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Daniel, C.; Cheng, S.; Yin, X.; Barrie, Z.M.; Pan, Y.; Liu, Q.; Gao, F.; Li, M.; Huang, X. AI-Aided Short-Term Decision Making of Rockburst Damage Scale in Underground Engineering. Undergr. Space 2025, 23, 362–378.
2. Yin, X.; Gao, F.; Chen, Z.; Pan, Y.; Liu, Q.; Cheng, S. Intelligent Multi-Channel Classification of Microseismic Events upon TBM Excavation. J. Rock Mech. Geotech. Eng. 2025, 17, 7056–7077.
3. Yin, X.; Cheng, S.; Yu, H.; Pan, Y.; Liu, Q.; Huang, X.; Gao, F.; Jing, G. Probabilistic Assessment of Rockburst Risk in TBM-Excavated Tunnels with Multi-Source Data Fusion. Tunn. Undergr. Space Technol. 2024, 152, 105915.
4. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Spider wasp optimizer: A novel meta-heuristic optimization algorithm. Artif. Intell. Rev. 2023, 56, 11675–11738.
5. Balochian, S.; Baloochian, H. Social mimic optimization algorithm and engineering applications. Expert Syst. Appl. 2019, 134, 178–191.
6. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Nutcracker optimizer: A novel nature-inspired metaheuristic algorithm for global optimization and engineering design problems. Knowl.-Based Syst. 2023, 262, 110248.
7. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34.
8. Ren, L.; Zhang, W.; Ye, Y.; Li, X. Hybrid strategy to improve the high-dimensional multi-target sparrow search algorithm and its application. Appl. Sci. 2023, 13, 3589.
9. Yang, X.; Liu, J.; Liu, Y.; Xu, P.; Yu, L.; Zhu, L.; Chen, H.; Deng, W. A novel adaptive sparrow search algorithm based on chaotic mapping and t-distribution mutation. Appl. Sci. 2021, 11, 11192.
10. Yin, X.; Liu, Q.; Huang, X.; Pan, Y. Real-Time Prediction of Rockburst Intensity Using an Integrated CNN-Adam-BO Algorithm Based on Microseismic Data and Its Engineering Application. Tunn. Undergr. Space Technol. 2021, 117, 104133.
11. Xue, J.; Shen, B. A survey on sparrow search algorithms and their applications. Int. J. Syst. Sci. 2024, 55, 814–832.
12. Geng, J.; Sun, X.; Wang, H.; Bu, X.; Liu, D.; Li, F.; Zhao, Z. A modified adaptive sparrow search algorithm based on chaotic reverse learning and spiral search for global optimization. Neural Comput. Appl. 2023, 35, 24603–24620.
13. Wang, Z.; Qin, J.; Hu, Z.; He, J.; Tang, D. Multi-objective antenna design based on BP neural network surrogate model optimized by improved sparrow search algorithm. Appl. Sci. 2022, 12, 12543.
14. Zhang, J.; Zhu, X.; Li, J. Intelligent path planning with an improved sparrow search algorithm for workshop UAV inspection. Sensors 2024, 24, 1104.
15. Jia, L.; Wang, T.; Gad, A.G.; Salem, A. A weighted-sum chaotic sparrow search algorithm for interdisciplinary feature selection and data classification. Sci. Rep. 2023, 13, 14061.
16. Yu, Q.; Hu, Y.; Hu, X.; Lan, J.; Guo, Y. An efficient exact algorithm for planted motif search on large DNA sequence datasets. IEEE/ACM Trans. Comput. Biol. Bioinform. 2024, 21, 1542–1551.
17. Hewawasam, H.S.; Ibrahim, M.Y.; Appuhamillage, G.K. Past, present and future of path-planning algorithms for mobile robot navigation in dynamic environments. IEEE Open J. Ind. Electron. Soc. 2022, 3, 353–365.
18. Hou, J.; Wang, X.; Su, Y.; Yang, Y.; Gao, T. Parameter identification of lithium battery model based on chaotic quantum sparrow search algorithm. Appl. Sci. 2022, 12, 7332.
19. Dai, W.; Guo, D. Beta distribution-based cross-entropy for feature selection. Entropy 2019, 21, 769.
20. Tian, S.; Wei, H.; Wang, Y.; Feng, L. Crosel: Cross selection of confident pseudo labels for partial-label learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 17–21 June 2024; pp. 19479–19488.
21. Zhang, Y.; Lv, Y.; Zhou, Y. Research on economic optimal dispatching of microgrid based on an improved bacteria foraging optimization. Biomimetics 2023, 8, 150.
22. Zhang, S.; Xiao, J.; Liu, Y.; Dong, M.; Zhou, Z. A multi-strategy improved sparrow search algorithm for indoor AGV path planning. J. Intell. Fuzzy Syst. 2024, 47, 55–69.
23. Meng, K.; Chen, C.; Xin, B. MSSSA: A multi-strategy enhanced sparrow search algorithm for global optimization. Front. Inf. Technol. Electron. Eng. 2022, 23, 1828–1847.
24. Wang, Z.; Peng, Q.; Rao, W.; Li, D. An improved sparrow search algorithm with multi-strategy integration. Sci. Rep. 2025, 15, 3314.
25. Ma, Y.; Meng, W.; Wang, X.; Gu, P.; Zhang, X. Modified Sparrow Search Algorithm by Incorporating Multi-Strategy for Solving Mathematical Optimization Problems. Biomimetics 2025, 10, 299.
26. Qiu, S.; Li, A. Application of chaos mutation adaptive sparrow search algorithm in edge data compression. Sensors 2022, 22, 5425.
27. Cheng, B.; Fang, Y.; Peng, W. Improved sparrow search algorithm based on normal cloud model and niche recombination strategy. IEEE Trans. Cloud Comput. 2022, 11, 2529–2545.
28. Zhang, C.; Ding, S. A stochastic configuration network based on chaotic sparrow search algorithm. Knowl.-Based Syst. 2021, 220, 106924.
29. Tang, A.; Zhou, H.; Han, T.; Xie, L. A Chaos Sparrow Search Algorithm with Logarithmic Spiral and Adaptive Step for Engineering Problems. Comput. Model. Eng. Sci. 2022, 130, 331.
30. Zhang, X.-Y.; Zhou, K.-Q.; Li, P.-C.; Xiang, Y.-H.; Zain, A.M.; Sarkheyli-Hägele, A. An improved chaos sparrow search optimization algorithm using adaptive weight modification and hybrid strategies. IEEE Access 2022, 10, 96159–96179.
31. Niknam, T.; Bavafa, F.; Jabbari, M. A novel self-adaptive learning charged system search algorithm for unit commitment problem. J. Intell. Fuzzy Syst. 2014, 26, 439–449.
32. Wang, M.; Wang, W.; Feng, S.; Li, L. Adaptive multi-class segmentation model of aggregate image based on improved sparrow search algorithm. KSII Trans. Internet Inf. Syst. 2023, 17, 391–411.
33. Liu, X.; Bai, Y.; Yu, C.; Yang, H.; Gao, H.; Wang, J.; Chang, Q.; Wen, X. Multi-strategy improved sparrow search algorithm and application. Math. Comput. Appl. 2022, 27, 96.
34. Rosso, M.M.; Cucuzza, R.; Aloisio, A.; Marano, G.C. Enhanced multi-strategy particle swarm optimization for constrained problems with an evolutionary-strategies-based unfeasible local search operator. Appl. Sci. 2022, 12, 2285.
35. Li, G.; Cui, Y.; Su, J. Adaptive mechanism-based grey wolf optimizer for feature selection in high-dimensional classification. PLoS ONE 2025, 20, e0318903.
36. Meng, Z.; Song, Z.; Shao, X.; Zhang, J.; Xu, H. FD-DE: Differential Evolution with fitness deviation based adaptation in parameter control. ISA Trans. 2023, 139, 272–290.
37. Chiu, C.-C.; Chen, P.-H.; Chien, W.; Lim, E.H.; Chen, G.-Z. Microwave Imaging for Half-Space Conductors Using the Whale Optimization Algorithm and the Spotted Hyena Optimizer. Appl. Sci. 2023, 13, 5857.
38. Wang, Z.-C.; Xu, T.-L.; Liu, F.; Wei, Y.-P. Artificial bee colony based optimization algorithm and its application on multi-drone path planning. AIP Adv. 2025, 15.
39. Paul, K.; Hati, D. A novel hybrid Harris hawk optimization and sine cosine algorithm based home energy management system for residential buildings. Build. Serv. Eng. Res. Technol. 2023, 44, 459–480.
40. LaTorre, A.; Peña, J.-M. A comparison of three large-scale global optimizers on the CEC 2017 single objective real parameter numerical optimization benchmark. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia/San Sebastián, Spain, 8 June 2017; pp. 1063–1070.
41. Biedrzycki, R.; Arabas, J.; Warchulski, E. A version of NL-SHADE-RSP algorithm with midpoint for CEC 2022 single objective bound constrained problems. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022; pp. 1–8.
42. Abualigah, L.; Diabat, A.; Zitar, R.A. Orthogonal learning Rosenbrock’s direct rotation with the gazelle optimization algorithm for global optimization. Mathematics 2022, 10, 4509.
43. Akbarzadeh, M.; Kalogiannis, T.; Jin, L.; Karimi, D.; Van Mierlo, J.; Berecibar, M. Experimental and numerical thermal analysis of a lithium-ion battery module based on a novel liquid cooling plate embedded with phase change material. J. Energy Storage 2022, 50, 104673.
44. Ghaffarpour, Z.; Fakhroleslam, M.; Amidpour, M. Calculation of energy consumption, tomato yield, and electricity generation in a PV-integrated greenhouse with different solar panels configuration. Renew. Energy 2024, 229, 120723.
45. Li, N.; Li, H.; Duan, K.; Tao, W. Evaluation of the cooling effectiveness of air-cooled data centers by energy diagram. Appl. Energy 2024, 382, 125215.
Figure 1. Algorithm flowchart.
Figure 2. Comparison of convergence curves across 30 benchmark functions.
Figure 3. Complexity analysis of the CSSA.
Figure 4. Three-dimensional visualization of the benchmark function.
Figure 5. Comparison of convergence curves for 100-dimensional dual benchmark suites.
Figure 6. Optimization application results for real-world engineering cases.
Figure 7. Comparison of convergence curves in engineering case studies.
Table 1. Comparison of optimization results among 11 algorithms on 30-dimensional benchmark functions.

| Function | Indicator | CSSA | CSSA-Chaos | CSSA-Pattern | CSSA-Crossover | CSSA-Adaptive | SSA | PSO | GWO | DE | WOA | ABC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Mean value | 0 | 0 | 0 | 0 | 0 | 1.35 × 10^5 | 6.41 × 10^−10 | 5.85 × 10^−68 | 1.64 × 10^−9 | 2.76 × 10^−74 | 3.21 × 10^−100 |
| F1 | Standard deviation | 0 | 0 | 0 | 0 | 0 | 6.06 × 10^3 | 1.64 × 10^−9 | 9.49 × 10^−74 | 0 | 1.68 × 10^−70 | 4.78 × 10^−104 |
| F3 | Mean value | 0 | 0 | 4.42 × 10^−251 | 0 | 0 | 4.08 × 10^4 | 1.47 × 10^1 | 4.81 × 10^−49 | 5.22 × 10^−2 | 4.31 × 10^4 | 8.89 × 10^−103 |
| F3 | Standard deviation | 0 | 0 | 0 | 0 | 0 | 2.55 × 10^4 | 1.62 × 10^1 | 9.22 × 10^−49 | 7.37 × 10^−2 | 1.65 × 10^4 | 4.86 × 10^−102 |
| F5 | Mean value | 4.06 × 10^−26 | 4.06 × 10^−26 | 2.89 × 10^2 | 2.89 × 10^2 | 2.89 × 10^2 | 9.95 × 10^6 | 2.87 × 10^1 | 2.87 × 10^2 | 2.87 × 10^1 | 2.78 × 10^1 | 3.06 × 10^−2 |
| F5 | Standard deviation | 5.92 × 10^−26 | 5.92 × 10^−26 | 1.96 × 10^−2 | 7.62 × 10^−1 | 1.96 × 10^−2 | 8.92 × 10^6 | 1.63 × 10^1 | 7.62 × 10^−1 | 1.63 × 10^1 | 4.16 × 10^−1 | 4.56 × 10^−3 |
| F7 | Mean value | 8.68 × 10^−5 | 8.68 × 10^−5 | 3.69 × 10^−4 | 9.18 × 10^−4 | 9.18 × 10^−4 | 6.10 × 10^−1 | 1.96 × 10^−2 | 9.18 × 10^−4 | 1.96 × 10^−2 | 2.05 × 10^−3 | 1.17 × 10^−3 |
| F7 | Standard deviation | 8.78 × 10^−5 | 8.78 × 10^−5 | 4.15 × 10^−4 | 6.78 × 10^−4 | 6.78 × 10^−4 | 2.70 × 10^−1 | 1.96 × 10^−2 | 6.78 × 10^−4 | 1.96 × 10^−2 | 2.12 × 10^−3 | 1.20 × 10^−3 |
| F8 | Mean value | −12,569.43 | −12,569.43 | −4276.76 | −5305.94 | −5305.94 | −4252.32 | −11,714.13 | −5305.94 | −11,714.13 | −10,598.4 | −7867.69 |
| F8 | Standard deviation | 8.40 × 10^−12 | 8.40 × 10^−12 | 1486.67 | 642.93 | 642.93 | 705.63 | 283.98 | 642.93 | 283.98 | 1671.1 | 4033.55 |
| F10 | Mean value | 0 | 0 | 4.35 × 10^−15 | 1.77 × 10^−20 | 1.77 × 10^−20 | 2.53 × 10^1 | 1.55 × 10^3 | 1.77 × 10^−20 | 1.55 × 10^3 | 5.75 × 10^−14 | 1.51 × 10^−14 |
| F10 | Standard deviation | 0 | 0 | 7.63 × 10^−20 | 6.75 × 10^−19 | 6.75 × 10^−19 | 9.35 × 10^−4 | 6.72 × 10^1 | 6.75 × 10^−19 | 6.72 × 10^1 | 3.45 × 10^−15 | 4.32 × 10^−20 |
| F12 | Mean value | 3.11 × 10^−30 | 3.11 × 10^−30 | 9.08 × 10^−1 | 8.23 × 10^−2 | 8.23 × 10^−2 | 5.77 × 10^8 | 5.22 × 10^−2 | 8.23 × 10^−2 | 5.22 × 10^−2 | 2.65 × 10^−2 | 1.00 × 10^−25 |
| F12 | Standard deviation | 2.84 × 10^−30 | 2.84 × 10^−30 | 2.69 × 10^−1 | 3.85 × 10^−2 | 3.85 × 10^−2 | 7.70 × 10^7 | 7.37 × 10^2 | 3.85 × 10^−2 | 7.37 × 10^−2 | 9.26 × 10^−3 | 1.41 × 10^−6 |
| F13 | Mean value | 3.19 × 10^−30 | 3.19 × 10^−30 | 2.99 × 10^1 | 9.12 × 10^−1 | 9.12 × 10^−1 | 1.29 × 10^9 | 3.29 × 10^−3 | 9.12 × 10^−1 | 3.29 × 10^−3 | 9.09 × 10^−1 | 1.76 × 10^−4 |
| F13 | Standard deviation | 2.77 × 10^−30 | 2.77 × 10^−30 | 1.66 × 10^−3 | 1.77 × 10^−1 | 1.77 × 10^−1 | 1.08 × 10^9 | 5.30 × 10^−3 | 1.77 × 10^−1 | 5.30 × 10^−3 | 4.98 × 10^−1 | 2.67 × 10^−4 |
Table 2. Comparison of optimization results among seven algorithms on 100-dimensional functions from the CEC2017 and CEC2022 test suites.

| Test Set | Function | Indicator | CSSA | SSA | PSO | GWO | DE | WOA | ABC |
|---|---|---|---|---|---|---|---|---|---|
| CEC2017 | C01 | Mean value | 1.00 × 10^2 | 1.32 × 10^11 | 1.76 × 10^3 | 2.33 × 10^10 | 1.10 × 10^11 | 2.13 × 10^10 | 1.96 × 10^10 |
| CEC2017 | C01 | Standard deviation | 1.25 × 10^−9 | 1.75 × 10^10 | 5.08 × 10^2 | 6.16 × 10^10 | 0 | 3.24 × 10^9 | 4.18 × 10^9 |
| CEC2017 | C03 | Mean value | 3.00 × 10^2 | 6.33 × 10^5 | 5.69 × 10^4 | 1.69 × 10^5 | 1.58 × 10^5 | 3.62 × 10^5 | 3.34 × 10^5 |
| CEC2017 | C03 | Standard deviation | 1.63 × 10^−11 | 2.18 × 10^5 | 1.55 × 10^4 | 2.39 × 10^4 | 1.27 × 10^5 | 6.83 × 10^4 | 1.23 × 10^5 |
| CEC2017 | C05 | Mean value | 8.09 × 10^2 | 1.31 × 10^3 | 7.37 × 10^2 | 8.59 × 10^2 | 1.19 × 10^3 | 1.07 × 10^3 | 9.42 × 10^2 |
| CEC2017 | C05 | Standard deviation | 3.62 × 10^1 | 5.45 × 10^1 | 2.35 × 10^1 | 5.71 × 10^1 | 5.32 × 10^1 | 7.17 × 10^1 | 2.75 × 10^1 |
| CEC2017 | C10 | Mean value | 9.27 × 10^3 | 1.59 × 10^4 | 9.63 × 10^3 | 1.41 × 10^4 | 1.58 × 10^4 | 1.31 × 10^4 | 1.00 × 10^4 |
| CEC2017 | C10 | Standard deviation | 8.49 × 10^2 | 7.19 × 10^2 | 1.51 × 10^2 | 6.71 × 10^2 | 3.65 × 10^2 | 1.22 × 10^3 | 1.07 × 10^3 |
| CEC2017 | C15 | Mean value | 1.88 × 10^3 | 9.06 × 10^9 | 5.13 × 10^3 | 3.46 × 10^7 | 6.84 × 10^9 | 4.39 × 10^7 | 8.16 × 10^6 |
| CEC2017 | C15 | Standard deviation | 1.11 × 10^2 | 4.41 × 10^9 | 3.62 × 10^3 | 2.77 × 10^7 | 4.28 × 10^9 | 3.54 × 10^7 | 1.00 × 10^7 |
| CEC2022 | C01 | Mean value | 0 | 8.47 × 10^4 | 1.76 × 10^3 | 2.89 × 10^−48 | 0 | 5.35 × 10^−71 | 8.77 × 10^−105 |
| CEC2022 | C01 | Standard deviation | 0 | 2.39 × 10^4 | 5.08 × 10^2 | 1.79 × 10^−48 | 0 | 1.68 × 10^−70 | 4.78 × 10^−104 |
| CEC2022 | C03 | Mean value | 0 | 1.31 × 10^6 | 1.17 × 10^5 | 3.76 × 10^−38 | 2.79 × 10^−258 | 9.18 × 10^−125 | 9.18 × 10^−125 |
| CEC2022 | C03 | Standard deviation | 0 | 6.79 × 10^5 | 1.87 × 10^4 | 4.46 × 10^−48 | 0 | 2.90 × 10^−124 | 2.90 × 10^−124 |
| CEC2022 | C05 | Mean value | 3.18 × 10^−2 | 5.51 × 10^7 | 1.12 × 10^3 | 9.81 × 10^1 | 9.89 × 10^1 | 9.81 × 10^1 | 1.00 × 10^1 |
| CEC2022 | C05 | Standard deviation | 9.19 × 10^−4 | 3.01 × 10^7 | 4.27 × 10^2 | 3.32 × 10^−1 | 1.40 × 10^−2 | 3.42 × 10^−1 | 1.62 × 10^1 |
| CEC2022 | C10 | Mean value | 0 | 4.56 × 10^1 | 2.05 × 10^1 | 7.89 × 10^−15 | 8.88 × 10^−16 | 4.79 × 10^−15 | 7.56 × 10^−14 |
| CEC2022 | C10 | Standard deviation | 0 | 1.38 × 10^5 | 5.59 × 10^−1 | 4.16 × 10^−19 | 7.89 × 10^−16 | 2.62 × 10^−15 | 6.17 × 10^−19 |
| CEC2022 | C15 | Mean value | 3.91 × 10^−22 | 4.37 × 10^9 | 8.95 × 10^−30 | 1.94 × 10^−1 | 1.99 × 10^1 | 8.47 × 10^1 | 2.18 × 10^−4 |
| CEC2022 | C15 | Standard deviation | 2.83 × 10^−25 | 9.40 × 10^7 | 4.15 × 10^−30 | 1.86 × 10^−1 | 2.03 × 10^−2 | 2.89 × 10^1 | 1.98 × 10^−4 |
Table 3. Results of the Wilcoxon signed-rank test.

| Function | Control Group | Test Statistic | p-Value | Significance (α = 0.05) | CSSA Mean Value |
|---|---|---|---|---|---|
| CEC2017-C1 | SSA | 0.0000 | 0.000000 | YES | 4.058090 × 10^11 |
| CEC2017-C1 | PSO | 463.0000 | 1.000000 | NO | 4.058090 × 10^11 |
| CEC2017-C1 | GWO | 465.0000 | 1.000000 | NO | 4.058090 × 10^11 |
| CEC2017-C1 | DE | 465.0000 | 1.000000 | NO | 4.058090 × 10^11 |
| CEC2017-C1 | WOA | 465.0000 | 1.000000 | NO | 4.058090 × 10^11 |
| CEC2017-C1 | ABC | 465.0000 | 1.000000 | NO | 4.058090 × 10^11 |
| CEC2017-C3 | SSA | 0.0000 | 0.000000 | YES | 6.162419 × 10^10 |
| CEC2017-C3 | PSO | 180.0000 | 0.144683 | NO | 6.162419 × 10^10 |
| CEC2017-C3 | GWO | 465.0000 | 1.000000 | NO | 6.162419 × 10^10 |
| CEC2017-C3 | DE | 465.0000 | 1.000000 | NO | 6.162419 × 10^10 |
| CEC2017-C3 | WOA | 465.0000 | 1.000000 | NO | 6.162419 × 10^10 |
| CEC2017-C3 | ABC | 465.0000 | 1.000000 | NO | 6.162419 × 10^10 |
| CEC2017-C5 | SSA | 0.0000 | 0.000000 | YES | 6.006383 × 10^4 |
| CEC2017-C5 | PSO | 464.0000 | 1.000000 | NO | 6.006383 × 10^4 |
| CEC2017-C5 | GWO | 465.0000 | 1.000000 | NO | 6.006383 × 10^4 |
| CEC2017-C5 | DE | 465.0000 | 1.000000 | NO | 6.006383 × 10^4 |
| CEC2017-C5 | WOA | 465.0000 | 1.000000 | NO | 6.006383 × 10^4 |
| CEC2017-C5 | ABC | 0.0000 | 0.000000 | YES | 6.006383 × 10^4 |
| CEC2017-C10 | SSA | 0.0000 | 0.000000 | YES | 4.674699 × 10^1 |
| CEC2017-C10 | PSO | 465.0000 | 1.000000 | NO | 4.674699 × 10^1 |
| CEC2017-C10 | GWO | 380.0000 | 0.999208 | NO | 4.674699 × 10^1 |
| CEC2017-C10 | DE | 465.0000 | 1.000000 | NO | 4.674699 × 10^1 |
| CEC2017-C10 | WOA | 465.0000 | 1.000000 | NO | 4.674699 × 10^1 |
| CEC2017-C10 | ABC | 465.0000 | 1.000000 | NO | 4.674699 × 10^1 |
| CEC2017-C15 | SSA | 0.0000 | 0.000000 | YES | 5.235889 × 10^1 |
| CEC2017-C15 | PSO | 465.0000 | 1.000000 | NO | 5.235889 × 10^1 |
| CEC2017-C15 | GWO | 380.0000 | 0.000000 | NO | 5.235889 × 10^1 |
| CEC2017-C15 | DE | 465.0000 | 1.000000 | NO | 5.235889 × 10^1 |
| CEC2017-C15 | WOA | 465.0000 | 1.000000 | NO | 5.235889 × 10^1 |
| CEC2017-C15 | ABC | 465.0000 | 1.000000 | NO | 5.235889 × 10^1 |
| CEC2022-C1 | SSA | 0.0000 | 0.000000 | YES | 2.079113 × 10^5 |
| CEC2022-C1 | PSO | 465.0000 | 1.000000 | NO | 2.079113 × 10^5 |
| CEC2022-C1 | GWO | 465.0000 | 1.000000 | NO | 2.079113 × 10^5 |
| CEC2022-C1 | DE | 465.0000 | 1.000000 | NO | 2.079113 × 10^5 |
| CEC2022-C1 | WOA | 465.0000 | 1.000000 | NO | 2.079113 × 10^5 |
| CEC2022-C1 | ABC | 465.0000 | 1.000000 | NO | 2.079113 × 10^5 |
| CEC2022-C3 | SSA | 0.0000 | 0.000000 | YES | 2.350266 × 10^9 |
| CEC2022-C3 | PSO | 6.0000 | 0.000000 | YES | 2.350266 × 10^9 |
| CEC2022-C3 | GWO | 449.0000 | 1.000000 | NO | 2.350266 × 10^9 |
| CEC2022-C3 | DE | 465.0000 | 1.000000 | NO | 2.350266 × 10^9 |
| CEC2022-C3 | WOA | 464.0000 | 1.000000 | NO | 2.350266 × 10^9 |
| CEC2022-C3 | ABC | 465.0000 | 1.000000 | NO | 2.350266 × 10^9 |
| CEC2022-C5 | SSA | 0.0000 | 0.000000 | YES | 2.126404 × 10^11 |
| CEC2022-C5 | PSO | 464.0000 | 1.000000 | NO | 2.126404 × 10^11 |
| CEC2022-C5 | GWO | 465.0000 | 1.000000 | NO | 2.126404 × 10^11 |
| CEC2022-C5 | DE | 465.0000 | 1.000000 | NO | 2.126404 × 10^11 |
| CEC2022-C5 | WOA | 465.0000 | 1.000000 | NO | 2.126404 × 10^11 |
| CEC2022-C5 | ABC | 465.0000 | 1.000000 | NO | 2.126404 × 10^11 |
| CEC2022-C10 | SSA | 0.0000 | 0.000000 | YES | 3.953868 × 10^5 |
| CEC2022-C10 | PSO | 453.0000 | 1.000000 | NO | 3.953868 × 10^5 |
| CEC2022-C10 | GWO | 0.0000 | 0.000000 | YES | 3.953868 × 10^5 |
| CEC2022-C10 | DE | 465.0000 | 1.000000 | NO | 3.953868 × 10^5 |
| CEC2022-C10 | WOA | 214.0000 | 0.357566 | NO | 3.953868 × 10^5 |
| CEC2022-C10 | ABC | 465.0000 | 1.000000 | NO | 3.953868 × 10^5 |
| CEC2022-C15 | SSA | 0.0000 | 0.000000 | YES | 6.337181 × 10^10 |
| CEC2022-C15 | PSO | 448.0000 | 1.000000 | NO | 6.337181 × 10^10 |
| CEC2022-C15 | GWO | 465.0000 | 1.000000 | NO | 6.337181 × 10^10 |
| CEC2022-C15 | DE | 465.0000 | 1.000000 | NO | 6.337181 × 10^10 |
| CEC2022-C15 | WOA | 465.0000 | 1.000000 | NO | 6.337181 × 10^10 |
| CEC2022-C15 | ABC | 0.0000 | 0.000000 | YES | 6.337181 × 10^10 |
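The test statistics above are consistent with 30 independent runs per algorithm: with 30 paired differences, the signed-rank sum ranges from 0 (every run favors one side) to 465 (= 1 + 2 + … + 30), the two extremes dominating the table. A minimal pure-Python sketch of how the YES/NO entries can be produced (normal approximation, two-sided, tie correction omitted; the per-run fitness values below are invented for illustration, not the paper's data):

```python
import math

def wilcoxon_signed_rank(x, y):
    """Two-sided paired Wilcoxon signed-rank test (normal approximation).
    Returns (W, p), where W is the smaller of the two signed-rank sums."""
    d = [a - b for a, b in zip(x, y) if a != b]      # drop zero differences
    n = len(d)
    # Rank |d| ascending, averaging the ranks of tied values.
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1                        # mean of 1-based ranks
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    w_minus = sum(r for r, di in zip(ranks, d) if di < 0)
    w = min(w_plus, w_minus)
    mu = n * (n + 1) / 4                             # mean of W under H0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))             # two-sided normal tail
    return w, min(p, 1.0)

# Hypothetical run results (NOT the study's data): one algorithm wins every
# run, so W = 0 and the opposing rank sum is 465.
cssa_runs = [100.0 + 0.01 * i for i in range(30)]
ssa_runs = [1.0e5 + 1.0e3 * i for i in range(30)]
w, p = wilcoxon_signed_rank(cssa_runs, ssa_runs)
significant = p < 0.05                               # "YES" in the table
```

In practice a library routine such as `scipy.stats.wilcoxon` would be used; the sketch only makes the statistic and the α = 0.05 decision rule explicit.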
Table 4. Comparison of optimization results among seven algorithms in three engineering case studies.

| Engineering Problem | Indicator | CSSA | SSA | PSO | GWO | DE | WOA | ABC |
|---|---|---|---|---|---|---|---|---|
| Thermal management of electric vehicle battery packs (weighted metric, °C) | Optimal value | 28.65 | 105.42 | 87.35 | 78.96 | 95.73 | 82.45 | 34.21 |
| | Mean value | 28.89 | 138.65 | 95.28 | 86.73 | 112.56 | 94.38 | 38.45 |
| | Standard deviation | 0.85 | 42.35 | 18.62 | 12.45 | 28.93 | 15.74 | 5.83 |
| Photovoltaic power generation system configuration (Levelized Cost of Electricity, CNY/kWh) | Optimal value | 0.156 | 0.452 | 0.189 | 0.223 | 0.298 | 0.267 | 0.215 |
| | Mean value | 0.159 | 0.498 | 0.196 | 0.245 | 0.334 | 0.289 | 0.231 |
| | Standard deviation | 0.003 | 0.089 | 0.012 | 0.028 | 0.056 | 0.034 | 0.019 |
| Data center cooling system (PUE, dimensionless) | Optimal value | 1.142 | 1.856 | 1.425 | 1.298 | 1.567 | 1.389 | 1.378 |
| | Mean value | 1.156 | 1.923 | 1.468 | 1.312 | 1.634 | 1.421 | 1.425 |
| | Standard deviation | 0.012 | 0.156 | 0.067 | 0.038 | 0.098 | 0.052 | 0.048 |
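For reference, the PUE objective in the data center case is the standard ratio of total facility energy to IT equipment energy; a one-line sketch (the kW figures are illustrative, not measured values from the study):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power.
    PUE >= 1.0 by definition; lower means less cooling/overhead energy."""
    return total_facility_kw / it_equipment_kw

# Illustrative: a facility drawing 1156 kW in total to run a 1000 kW IT
# load has PUE = 1.156, the CSSA mean reported for the data center case.
example_pue = pue(1156.0, 1000.0)
```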
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Li, Y.; Li, J. CSSA: An Enhanced Sparrow Search Algorithm with Hybrid Strategies for Engineering Optimization. Algorithms 2026, 19, 51. https://doi.org/10.3390/a19010051
