Article

ADVCSO: Adaptive Dynamically Enhanced Variant of Chicken Swarm Optimization for Combinatorial Optimization Problems

1 Mechanical and Electrical Engineering College, Hainan University, Haikou 570228, China
2 Research Institute of Information Technology, Tsinghua University, Beijing 100101, China
3 School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(5), 303; https://doi.org/10.3390/biomimetics10050303
Submission received: 29 March 2025 / Revised: 2 May 2025 / Accepted: 8 May 2025 / Published: 9 May 2025
(This article belongs to the Special Issue Exploration of Bio-Inspired Computing)

Abstract:
High-dimensional complex optimization problems are pervasive in engineering and scientific computing, yet conventional algorithms struggle to meet collaborative optimization requirements due to computational complexity. While Chicken Swarm Optimization (CSO) offers an intuitive structure and straightforward implementation for low-dimensional problems, it suffers from limitations including low convergence precision, an uneven initial solution distribution, and premature convergence. This study proposes an Adaptive Dynamically Enhanced Variant of Chicken Swarm Optimization (ADVCSO) algorithm. First, to address the uneven initial solution distribution in the original algorithm, we design an elite perturbation initialization strategy based on good point sets, combining low-discrepancy sequences with Gaussian perturbations to significantly improve search space coverage. Second, targeting the exploration–exploitation imbalance caused by fixed role proportions, a dynamic role allocation mechanism is developed, integrating cosine annealing strategies to adaptively regulate flock proportions and update cycles, thereby enhancing exploration efficiency. Finally, to mitigate the premature convergence induced by single update rules, hybrid mutation strategies are introduced through phased mutation operators and elite dimension inheritance mechanisms, effectively reducing premature convergence risks. Experiments demonstrate that the ADVCSO significantly outperforms state-of-the-art algorithms on 27 of 29 CEC2017 benchmark functions, achieving a 2–3 orders of magnitude improvement in convergence precision over the basic CSO. In complex composite scenarios, its convergence accuracy approaches that of the championship algorithm JADE, differing by only about 10^-2 in order of magnitude. For collaborative multi-subproblem optimization, the ADVCSO exhibits superior performance in both Multiple Traveling Salesman Problems (MTSPs) and Multiple Knapsack Problems (MKPs), reducing the maximum path length in MTSPs by 6.0% to 358.27 units while enhancing the MKP optimal-solution success rate by 62.5%. The proposed algorithm demonstrates exceptional performance in combinatorial optimization and holds significant engineering application value.

1. Introduction

With the rapid advancement of intelligent manufacturing and IoT technologies, multi-subproblem collaborative optimization scenarios have become prevalent in engineering applications, such as path planning and warehouse scheduling. However, traditional mathematical programming methods often encounter challenges of a high computational complexity and susceptibility to local optima when handling high-dimensional nonlinear and multimodal optimization problems [1]. In recent years, swarm intelligence algorithms inspired by biological collective behaviors have emerged as powerful tools for addressing complex optimization problems due to their self-organizing and parallel search characteristics [2]. Notable examples, including the Snake Optimizer (SO) [3], Divine Religions Algorithm (DRA) [4], Artificial Protozoa Optimizer (APO) [5], Red-Billed Blue Magpie Optimizer (RBMO) [6], Hiking Optimization Algorithms (HOAs) [7], and Sled Dog-Inspired Optimizer (SDO) [8], have demonstrated significant success in engineering optimization and parameter tuning [9]. Consequently, developing efficient and robust novel intelligent optimization algorithms holds substantial theoretical significance and practical value.
The field of swarm intelligence optimization has witnessed a proliferation of innovative bio-inspired algorithms. Representative developments include the Golden Jackal Optimization (GJO) inspired by cooperative hunting behaviors [10], the Butterfly Optimization Algorithm (BOA) based on pheromone diffusion mechanisms [11], and the Sparrow Search Algorithm (SSA) mimicking sparrows’ exploration-following patterns [12]. While these algorithms exhibit distinct advantages across various optimization problems, selecting appropriate algorithms according to specific problem characteristics remains crucial for enhancing solution efficiency.
Despite significant progress in swarm intelligence optimization algorithms for addressing complex optimization challenges, several inherent limitations persist. To tackle these issues, researchers have proposed various innovative improvements. For instance, to address the limitations of the Sand Cat Swarm Optimization (SCSO) algorithm, which suffers from an excessive reliance on the current best solution leading to search stagnation and local optima entrapment, Adegboye et al. integrated Dynamic Pinhole Imaging and the Golden Sine Algorithm, significantly enhancing the algorithm’s global search capability [13]. To overcome the challenge of the insufficient precision in parameter extraction for photovoltaic models using the Hippopotamus Optimization (HO) algorithm, Wang et al. introduced Lévy flight strategies, quadratic interpolation mechanisms, and a swarm elite learning mechanism, markedly improving the algorithm’s solution accuracy and robustness, thereby providing high-quality parameter configurations for photovoltaic systems [14]. For the Dung Beetle Optimizer (DBO), whose performance is unstable due to parameter sensitivity, Xia et al. proposed adaptive dynamic parameter adjustment strategies and a linear scaling method to fine-tune individual positions within dynamic boundaries, effectively maintaining the population diversity while boosting algorithmic stability [15]. For the Parrot Optimizer (PO) algorithm, which struggles with imbalanced exploration-exploitation dynamics, Adegboye et al. incorporated Competitive Swarm Optimization (CSO) and the Salp Swarm Algorithm (SSA) into the PO framework, thereby strengthening the solution diversity and exploratory behavior [16]. These cutting-edge advancements provide novel methodologies for solving real-world complex optimization problems.
The Chicken Swarm Optimization (CSO) algorithm, proposed by Meng et al. in 2014 [17], has gained widespread adoption in resource scheduling and path optimization due to its clear structure and ease of implementation. However, no single algorithm is universally applicable to all optimization problems, prompting researchers to develop CSO enhancements. To address CSO’s premature convergence issue, Liang et al. introduced replication and elimination–dispersal operations from the bacterial foraging algorithm (BFA), replacing weak chicks with poor optimization capabilities, and integrated the collective collaboration mechanism of particle swarm optimization (PSO) to enhance the global search efficiency through subgroup division and information exchange [18]. However, the algorithm parameters require manual configuration, and its significantly increased computational complexity renders it unsuitable for large-scale optimization problems. To improve CSO’s ability to escape local optima in charger placement problems and enhance population utilization, Sanchari et al. incorporated the random walk mechanism of the Ant Lion Optimization (ALO) algorithm, boosting global exploration via a multi-strategy collaboration approach [19]. Nevertheless, limitations remain, including initialization strategies disconnected from solution space characteristics and rigid role allocation mechanisms. To overcome CSO’s inherent weaknesses in solution precision and convergence speed, Li et al. proposed an X-best-guided strategy combined with Lévy flight step sizes for rooster position updates, alongside a dynamic constraint mechanism to maintain population diversity [20]. However, the oversimplified diversity preservation strategy still leads to population homogenization over prolonged iterations due to the elite individual dominance. Addressing CSO’s inefficiency in locating faulty nodes within wireless sensor networks and its inability to distinguish between software and hardware failure types, Nagarajan et al. proposed dynamically adjusting subgroup numbers using fuzzy rules and integrated a Poisson Hidden Markov Model for probabilistic modeling [21]. While this approach optimizes search efficiency and enhances detection accuracy, its parameter settings rely heavily on prior experience, lacking adaptive adjustment capabilities. To tackle CSO’s slow convergence and poor adaptability to dynamic environments, which hinder its application in battery state-of-charge estimations, Afzal et al. incorporated open-ended learning with a dynamic knowledge update mechanism [22]. This enables real-time population updates and historical best-solution retention, effectively improving the algorithm robustness. However, the significantly increased complexity limits its practical deployment. Aiming to resolve CSO’s susceptibility to local optima and the trade-off between convergence speed and precision in complex problems, Gamal et al. developed a hybrid Chicken Swarm Genetic Algorithm (GSOGA) [23]. While CSO accelerates the rapid search and convergence, GA enhances the population diversity through crossover and mutation operations, mitigating local stagnation. Though this hybrid method improves the overall performance, it incurs substantial computational overhead when handling large-scale problems. In contrast, Wang et al. proposed an Adaptive Fuzzy Chicken Swarm Optimization (FCSO) [24], embedding fuzzy logic to monitor optimization velocity and population density during iterations. 
This self-adjusting mechanism balances exploration and exploitation, while a cosine function integration into the rooster position updates boosts the convergence speed and local refinement. However, the algorithm exhibits limited adaptability in high-dimensional complex optimization scenarios. For multi-objective optimization challenges where CSO struggles with search efficiency and solution quality, Huang et al. introduced a Non-Dominated Sorting Chicken Swarm Algorithm (NSCSO) [25]. By incorporating fast non-dominated sorting, elite opposition-based learning, and a crowding distance strategy, the method enhances the solution diversity and distribution. Despite these advancements, the NSCSO underperforms on highly nonlinear problems and remains overly dependent on the quality of initial randomized population positions. Table 1 summarizes these CSO improvement efforts.
In conclusion, while CSO outperforms many swarm intelligence algorithms and has inspired numerous enhanced variants, it still grapples with inherent limitations such as low solution precision, sluggish convergence rates, and premature stagnation. This study proposes an Adaptive Dynamically Enhanced Variant of Chicken Swarm Optimization (ADVCSO) through three key innovations. First, an elite perturbation initialization strategy based on good point sets generates uniformly distributed initial populations using low-discrepancy sequences, enhanced by the Gaussian perturbation of elite individuals for local exploration. Second, a population diversity-based dynamic role allocation mechanism adaptively adjusts subgroup proportions and update cycles through cosine annealing scheduling, effectively balancing exploration and exploitation. Third, a hybrid mutation strategy incorporating Cauchy–Gaussian phased mutation operators and an elite dimension inheritance enables a dynamic equilibrium between global exploration and local refinement. These synergistic enhancements significantly improve the population diversity maintenance, adaptive role allocation, and local optima avoidance, establishing a novel methodological framework for multi-subproblem collaborative optimization.
The principal contributions of this work include the following:
  • The development of an elite perturbation initialization strategy based on good point sets, which not only significantly enhances the quality of initial solutions but also improves the population’s overall search capability;
  • The design of a dynamic role allocation mechanism based on population diversity, which effectively maintains population diversity and strengthens the algorithm’s self-adaptive capabilities;
  • The implementation of a hybrid mutation strategy for population position updates, incorporating Cauchy–Gaussian phased mutation operators and an elite dimension inheritance strategy to bolster the algorithm’s ability to escape local optima and effectively mitigate premature convergence risks;
  • Validation through ablation studies, the CEC2017 benchmark functions, Multi-Traveling Salesman Problems, and Multi-Knapsack Problems, demonstrating that the ADVCSO possesses an exceptional global search capability and stability and offers a novel paradigm for practical combinatorial optimization challenges.
The remainder of this paper is organized as follows: Section 2 outlines the fundamental principles of CSO. Section 3 details the ADVCSO methodology and enhancement strategies. Section 4 presents ablation studies on the ADVCSO, followed by experiments on the CEC2017 test function suite and applications to two real-world combinatorial optimization problems. Finally, Section 5 concludes with discussions on algorithmic innovations, experimental findings, limitations, and future research directions.

2. Chicken Swarm Optimization Algorithm

The Chicken Swarm Optimization (CSO) algorithm simulates the hierarchical structure and foraging behaviors of chicken societies and abstracts them into an optimization model. In this framework, each individual in the population represents a potential solution, classified into roles—roosters, hens, and chicks—based on their fitness values, and the flock is divided into several subgroups corresponding to the number of roosters. Specifically, roosters are the optimal individuals leading each subgroup, and chicks are those with the poorest fitness and lowest status, while the remaining individuals are designated as hens, with some randomly selected as mother hens with assigned chicks. During foraging, roosters lead their subgroups in search of food, hens follow roosters to search for or compete for high-quality food, while chicks can only forage by following their mother hens. Once roles are assigned, they remain fixed until re-evaluated and re-established after G generations of iterations.
In the algorithm, let $N$ denote the total population size, with $N_R$, $N_H$, and $N_C$ representing the numbers of roosters, hens, and chicks, respectively. The position of the $i$-th individual in the $j$-th dimension at iteration $t$ is denoted as $x_{i,j}^{t}$. The initial population is generated randomly within the search space using Equation (1):
$$x_i = lb + Rand \cdot (ub - lb)$$
where $ub$ and $lb$ represent the upper and lower bounds of the search space, and $Rand$ is a random number uniformly distributed in $[0, 1]$.
The individuals with the lowest fitness value are selected as the roosters. As leaders of subgroups, roosters autonomously determine foraging directions. Their position is updated, as shown in Equations (2) and (3):
$$x_{i,j}^{t+1} = x_{i,j}^{t} \cdot \left(1 + N(0, \sigma^2)\right)$$
$$\sigma^2 = \begin{cases} 1, & \text{if } f_i < f_k \\ \exp\left(\dfrac{f_k - f_i}{\left|f_i\right| + \varepsilon}\right), & \text{otherwise} \end{cases}, \quad k \in [1, N_R],\ k \ne i$$
where $N(0, \sigma^2)$ is a Gaussian-distributed random number, $f_i$ and $f_k$ denote the fitness values of the current rooster and a randomly selected rooster, respectively, and $\varepsilon$ is a small constant to prevent division by zero.
Individuals with better fitness are selected as hens, which move following their subgroup’s rooster and may scavenge better food resources. The hen’s position is updated, as shown in Equations (4) and (5):
$$x_{i,j}^{t+1} = x_{i,j}^{t} + S_1 \cdot Rand \cdot \left(x_{r_1,j}^{t} - x_{i,j}^{t}\right) + S_2 \cdot Rand \cdot \left(x_{r_2,j}^{t} - x_{i,j}^{t}\right)$$
$$S_1 = \exp\left(\frac{f_i - f_{r_1}}{\left|f_i\right| + \varepsilon}\right), \qquad S_2 = \exp\left(f_{r_2} - f_i\right)$$
where $x_{r_1,j}^{t}$ is the position of the hen’s mate rooster, i.e., the rooster of the subgroup to which the hen belongs, and $x_{r_2,j}^{t}$ is a randomly selected rooster or hen ($r_1 \ne r_2$).
Except for the roosters and hens, other individuals are defined as chicks. The chicks strictly follow their mother hens during foraging, and the chick’s position is updated, as shown in Equation (6):
$$x_{i,j}^{t+1} = x_{i,j}^{t} + FL \cdot \left(x_{m,j}^{t} - x_{i,j}^{t}\right)$$
where $FL \in [0, 2]$ is the following coefficient and $x_{m,j}^{t}$ is the position of the chick’s mother hen.
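To make these update rules concrete, the following minimal Python sketch implements Equations (2)–(6) for single individuals; NumPy, scalar fitness values, and a minimization convention are assumed, and the function names are illustrative rather than taken from the original implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def rooster_update(x_i, f_i, f_k, eps=1e-30):
    # Eqs. (2)-(3): Gaussian self-perturbation; the variance grows when the
    # randomly chosen rival rooster k has better (lower) fitness than rooster i.
    sigma2 = 1.0 if f_i < f_k else np.exp((f_k - f_i) / (abs(f_i) + eps))
    return x_i * (1.0 + rng.normal(0.0, np.sqrt(sigma2), size=x_i.shape))

def hen_update(x_i, x_r1, x_r2, f_i, f_r1, f_r2, eps=1e-30):
    # Eqs. (4)-(5): follow the own-subgroup rooster r1 and compete with a
    # randomly chosen individual r2, both weighted by fitness differences.
    s1 = np.exp((f_i - f_r1) / (abs(f_i) + eps))
    s2 = np.exp(f_r2 - f_i)
    return (x_i + s1 * rng.random(x_i.shape) * (x_r1 - x_i)
                + s2 * rng.random(x_i.shape) * (x_r2 - x_i))

def chick_update(x_i, x_mother):
    # Eq. (6): follow the mother hen with a step factor FL drawn from [0, 2].
    fl = rng.uniform(0.0, 2.0)
    return x_i + fl * (x_mother - x_i)

# toy usage in a 5-dimensional search space (minimization assumed)
x = rng.uniform(-100.0, 100.0, size=(3, 5))
print(chick_update(x[2], x_mother=x[1]))
```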

3. ADVCSO Algorithm

The original CSO algorithm demonstrates strengths in low-dimensional function optimization through its role-based cooperative mechanism. However, its application to multi-subproblem collaborative optimization faces three critical challenges: (1) an uneven initial solution distribution caused by random initialization, leading to an inadequate search space coverage; (2) rigid role proportions and update cycles that fail to adapt to dynamic requirements across optimization phases, resulting in a significant exploration–exploitation imbalance; and (3) the limited ability of single mutation strategies to escape local optima in complex multimodal problems. To address these challenges, this study proposes the ADVCSO algorithm, which integrates good-point-set-based elite perturbation initialization, a population diversity-driven dynamic role mechanism, and hybrid mutation strategies. This section elaborates on the ADVCSO algorithm, its enhancement strategies, and establishes the ADVCSO mathematical model.

3.1. Good-Point-Set-Based Elite Perturbation Initialization Strategy

In natural environments, chicken swarms often exhibit spatial regularity in their distribution rather than complete randomness. However, the original CSO employs random initialization, leading to sparse solutions in certain dimensions and low-quality initial populations. Inspired by biological principles, this study designs a good-point-set-based elite perturbation initialization strategy to replace random initialization, mimicking the natural distribution patterns of chicken swarms.
The good point set can be mathematically described as follows: Let $V_D$ denote the $D$-dimensional unit hypercube in the search space. A point $r \in V_D$ is termed a good point, and the associated good point set is defined by Equation (7):
$$P_n(k) = \left\{ \left( \left\{ r_1^{(n)} k \right\}, \left\{ r_2^{(n)} k \right\}, \ldots, \left\{ r_D^{(n)} k \right\} \right),\ 1 \le k \le n \right\}$$
where $r = (r_1^{(n)}, \ldots, r_D^{(n)})$ is the good point, $\{\cdot\}$ denotes the fractional (decimal) part, and $n$ is the number of good points. A key characteristic of good point sets is their extremely small discrepancy, representing a distribution that approaches complete uniformity. The discrepancy $\varphi(n)$ satisfies Equation (8):
$$\varphi(n) = C(r, \varepsilon)\, n^{-1+\varepsilon}$$
where $C(r, \varepsilon)$ is a constant depending only on $r$ and $\varepsilon$ ($\varepsilon$ is an arbitrarily small positive number). Equation (8) demonstrates that the discrepancy decreases as the number of points increases [26].
To generate the good point set, this study adopts the cosine sequence method to determine the parameter r . The calculation is defined by Equation (9):
$$r = \left\{\, 2\cos\left(2\pi k / p\right),\ 1 \le k \le D \,\right\}$$
where $p$ is the smallest prime number satisfying $(p - 3)/2 \ge D$.
Furthermore, to simulate the localized exploration behavior of leading roosters in chicken swarms, a small-scale Gaussian perturbation is applied to the top 10% of elite individuals (by fitness). The perturbation is formulated in Equation (10):
$$x_{i,j}^{t+1} = x_{i,j}^{t} + \eta \cdot (ub - lb) \cdot N(0, I)$$
where $\eta$ is the perturbation intensity coefficient (set to 0.05 in this study). The perturbation range is dynamically scaled by $(ub - lb)$ to align with the dimensions of the search space.
Figure 1 compares population initialization results using the good point set versus random initialization. It clearly demonstrates that the good point set method generates an initial population that more uniformly covers the search space, while the Gaussian perturbation prevents clustering in localized regions.
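A minimal sketch of this initialization, assuming a minimization problem and box bounds lb and ub given as NumPy arrays (the helper names and the prime-search routine are illustrative, not from the authors' code):

```python
import numpy as np

def smallest_prime_at_least(n):
    # trial-division prime search; adequate for the small primes needed here
    def is_prime(m):
        return m >= 2 and all(m % q for q in range(2, int(m ** 0.5) + 1))
    p = max(2, n)
    while not is_prime(p):
        p += 1
    return p

def good_point_set(n, dim, lb, ub):
    # Eqs. (7)-(9): cosine-sequence good point r_d = 2*cos(2*pi*d/p), with p the
    # smallest prime satisfying (p - 3) / 2 >= dim; fractional parts {k * r_d}
    # give a quasi-uniform set of n points, which is then mapped to [lb, ub].
    p = smallest_prime_at_least(2 * dim + 3)
    r = 2.0 * np.cos(2.0 * np.pi * np.arange(1, dim + 1) / p)
    k = np.arange(1, n + 1).reshape(-1, 1)
    frac = np.mod(k * r, 1.0)
    return lb + frac * (ub - lb)

def elite_perturbation(pop, fitness, lb, ub, eta=0.05, elite_frac=0.1, rng=None):
    # Eq. (10): small Gaussian perturbation of the top-10% individuals (by
    # fitness, minimization assumed), scaled by the width of the search range.
    rng = rng or np.random.default_rng()
    n_elite = max(1, int(elite_frac * len(pop)))
    elite_idx = np.argsort(fitness)[:n_elite]
    noise = eta * (ub - lb) * rng.standard_normal((n_elite, pop.shape[1]))
    pop = pop.copy()
    pop[elite_idx] = np.clip(pop[elite_idx] + noise, lb, ub)
    return pop

# toy usage: 20 individuals in a 4-dimensional box
lb, ub = np.full(4, -10.0), np.full(4, 10.0)
pop = good_point_set(20, 4, lb, ub)
pop = elite_perturbation(pop, fitness=np.sum(pop ** 2, axis=1), lb=lb, ub=ub)
```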

3.2. Population Diversity-Driven Dynamic Role Allocation Mechanism

In natural environments, the role proportions within chicken swarms dynamically adapt to resource availability: during resource abundance, the ratios of hens and chicks increase to promote population reproduction, whereas under resource scarcity or threats, rooster proportions rise to enhance the collective exploration and defense capabilities. However, the original CSO employs fixed role proportions, failing to adapt to varying demands across optimization phases. Inspired by this biological behavior, this study proposes a population diversity-driven dynamic role allocation mechanism. By the real-time monitoring of population diversity states, the mechanism dynamically adjusts role proportions: increasing hen and chick ratios to drive local refinement when diversity is high and expanding rooster proportions to strengthen global exploration when diversity declines significantly. This adaptive mechanism shifts the search focus based on the optimization progress, significantly enhancing the algorithm robustness in complex multimodal problems. The specific calculation process is as follows:
The population diversity metric is defined as the standard deviation of fitness values, calculated via Equation (11):
$$\sigma_t = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left(f_i - \bar{f}\right)^2}$$
where $\sigma_t$ represents the population diversity at iteration $t$ and $\bar{f} = \frac{1}{N}\sum_{i=1}^{N} f_i$ denotes the average population fitness.
A diversity decay ratio ρ is introduced to dynamically regulate role proportions. The updated role allocation formulas are defined in Equations (12)–(15):
$$\rho = \frac{\sigma_0}{\sigma_t}$$
$$N_R = \max\left(1,\ \min\left(N \cdot (0.3 + 0.2\rho),\ \frac{N}{2}\right)\right)$$
$$N_H = \max\left(1,\ \min\left(N \cdot (0.3 - 0.1\rho),\ N - N_R - 1\right)\right)$$
$$N_C = \max\left(0,\ N - N_R - N_H\right)$$
As shown in Figure 2, the ADVCSO algorithm exhibits significant dynamic changes in role proportions throughout the iteration process. In the early iterations, the rooster proportion fluctuates frequently and remains generally high, with peaks exceeding 80%, indicating that the algorithm is in the exploration phase, strengthening the global search by increasing the proportion of roosters; correspondingly, the hen proportion decreases, reducing the resource allocation for local exploitation. As iterations progress, fluctuations in role proportions gradually diminish, with the rooster proportion decreasing to approximately 30%, while hen and chick proportions stabilize at around 28% and 42%, respectively, reflecting a balance between exploration and exploitation. In the later stages of the algorithm iteration, the rooster proportion continues to exhibit small fluctuations, demonstrating that the algorithm maintains certain exploration capabilities to prevent premature convergence. Notably, the rooster proportion shows sudden increases near iterations 300, 390, and 750, which aligns with the algorithm’s need to continuously escape from local optima traps. This diversity-aware dynamic role allocation mechanism adjusts the search focus according to the optimization progress, significantly enhancing the algorithm’s robustness in complex multimodal problems.
Additionally, ADVCSO incorporates a cosine annealing strategy to dynamically adjust the role update cycle G . A nonlinear decay function progressively compresses G , maintaining a larger G in early iterations for a thorough exploration and reducing G in later stages to accelerate local convergence. This approach mitigates the oscillation caused by frequent role switching while balancing the convergence speed and precision. The update rules are formulated in Equations (16) and (17):
$$decay_t = 0.5\left(1 + \cos\left(\frac{\pi t}{T}\right)\right)$$
$$G_{current} = \max\left(5,\ G \cdot decay_t\right)$$
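The per-generation bookkeeping of Equations (11)–(17) could be computed along the following lines, assuming minimization, NumPy, and that σ0 is the diversity recorded at the first generation (all names are illustrative):

```python
import numpy as np

def role_counts(fitness, sigma0, N):
    # Eqs. (11)-(15): diversity = std of fitness values; the decay ratio rho
    # grows as diversity shrinks, shifting individuals from hens/chicks to roosters.
    sigma_t = float(np.std(fitness))
    rho = sigma0 / max(sigma_t, 1e-12)
    n_r = int(max(1, min(N * (0.3 + 0.2 * rho), N / 2)))
    n_h = int(max(1, min(N * (0.3 - 0.1 * rho), N - n_r - 1)))
    n_c = max(0, N - n_r - n_h)
    return n_r, n_h, n_c

def annealed_cycle(G, t, T):
    # Eqs. (16)-(17): cosine-annealed role-update cycle, floored at 5 iterations.
    decay = 0.5 * (1.0 + np.cos(np.pi * t / T))
    return max(5, int(G * decay))

# toy usage
f = np.random.default_rng(0).random(50) * 100
print(role_counts(f, sigma0=np.std(f), N=50), annealed_cycle(G=20, t=600, T=1000))
```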

3.3. Hybrid Mutation Strategy

Chicken swarms exhibit complex social learning mechanisms, where individuals of different hierarchies demonstrate distinct behavioral traits. However, the original CSO employs a single position update strategy, rendering it prone to local optima, particularly in complex multimodal problems. Inspired by biological principles, this study proposes a hybrid mutation strategy to address these limitations.

3.3.1. Rooster Mutation Strategy Based on Behavioral Differences

In nature, roosters’ foraging behaviors exhibit distinct phases: large-scale exploratory leaps when exploring new territories and small-scale refined searches within familiar areas. This behavioral dichotomy motivates the design of a Cauchy–Gaussian phased mutation strategy. The Cauchy distribution, with its heavy-tailed properties, facilitates long-range jumps in the search space to enhance the global exploration, while the Gaussian distribution introduces minor stochastic variations for localized refinement. To optimize the ADVCSO’s search efficiency, the mutation mode is adaptively switched based on optimization phases: the Cauchy mutation is applied when population diversity is low, and the Gaussian mutation is activated as diversity improves. The mutation-enhanced position update is governed by Equations (18)–(20):
$$\tau_t = 0.3 + 0.4\left(1 - \rho\right)$$
$$\xi \sim \begin{cases} \text{Cauchy}(0, 1), & t < \tau_t \cdot T \\ 0.5 \cdot N(0, 1), & \text{otherwise} \end{cases}$$
$$x_{i,j}^{t+1} = x_{i,j}^{t} + \alpha \cdot (ub - lb) \cdot \xi$$
where $\tau_t$ is the mutation switching threshold, controlling the transition between the Cauchy and Gaussian mutations, and $\alpha$ is the mutation intensity coefficient (empirically set to 0.05 to avoid the loss of elite individuals).
Additionally, inspired by roosters’ age-dependent behavioral shifts—younger roosters prioritize exploration, while experienced ones focus on territorial defense—an adaptive learning factor α t is introduced to modulate position updates. This adaptive learning factor α t simulates how roosters adjust their behavioral patterns as they gain experience. In the early iterations, when t is small, the learning factor α t is also small, representing young roosters leading their subgroups in a global search; as the number of iterations increases, roosters gain experience, their behavior shifts toward refined exploitation, and the learning factor α t increases accordingly, further regulating the roosters’ position updates. The learning factor evolves nonlinearly with iterations, as formulated in Equations (21) and (22):
$$\alpha_t = 0.5\left(1 - \cos\left(\frac{\pi t}{T}\right)\right)$$
$$x_{i,j}^{t+1} = x_{i,j}^{t} \cdot \left(1 + \alpha_t \cdot N(0, \sigma^2)\right)$$
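A compact sketch of the phased mutation and the adaptive learning factor in Equations (18)–(22), assuming the diversity ratio ρ and the rooster's σ² from Equation (3) are computed elsewhere; the function names and the clipping to the bounds are illustrative choices:

```python
import numpy as np

def rooster_mutation(x, t, T, rho, lb, ub, alpha=0.05, rng=None):
    # Eqs. (18)-(20): before the switching point tau_t * T, heavy-tailed Cauchy
    # jumps drive global exploration; afterwards, damped Gaussian noise refines
    # the search locally. Clipping to [lb, ub] is an illustrative choice.
    rng = rng or np.random.default_rng()
    tau_t = 0.3 + 0.4 * (1.0 - rho)
    if t < tau_t * T:
        xi = rng.standard_cauchy(size=x.shape)
    else:
        xi = 0.5 * rng.standard_normal(size=x.shape)
    return np.clip(x + alpha * (ub - lb) * xi, lb, ub)

def adaptive_rooster_update(x, sigma2, t, T, rng=None):
    # Eqs. (21)-(22): the learning factor alpha_t grows from 0 to 1 over the run,
    # scaling the Gaussian self-perturbation of Eq. (2) as roosters "mature".
    rng = rng or np.random.default_rng()
    alpha_t = 0.5 * (1.0 - np.cos(np.pi * t / T))
    return x * (1.0 + alpha_t * rng.normal(0.0, np.sqrt(sigma2), size=x.shape))
```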

3.3.2. Dimension Updating and Elite Inheritance Mechanism Based on Social Learning

In chicken societies, hens and chicks learn successful behaviors from roosters through observation and imitation, particularly in foraging and risk avoidance. However, the original CSO suffers from inefficiency: hens randomly learn from any individual, while chicks rigidly follow their mother hens, risking local optima entrapment. To address this, a dimension learning and elite inheritance mechanism is proposed. This mechanism mimics selective learning from leader roosters while preserving individuality through minor perturbations, effectively balancing exploitation efficiency and diversity maintenance. The specific calculation process is as follows:
During position updates, hens probabilistically learn specific dimensions from the global best solution (leader rooster). The update is defined by Equations (23) and (24):
$$\mathcal{D} \subseteq \{1, 2, \ldots, D\}, \qquad |\mathcal{D}| = \lceil 0.2 D \rceil$$
$$x_{i,d}^{t+1} = \begin{cases} x_{g,d}^{t} + \beta \cdot (ub_d - lb_d) \cdot N(0, 1), & d \in \mathcal{D} \\ x_{i,d}^{t}, & \text{otherwise} \end{cases}$$
where $\mathcal{D}$ is a randomly selected subset of dimensions, $x_{g,d}^{t}$ is the global best solution in dimension $d$, $\beta$ is the learning intensity coefficient (set to 0.05), and $(ub_d - lb_d)$ scales the perturbation to the range of each dimension.
Chicks probabilistically inherit critical dimensions from the global best solution. The update follows Equations (25) and (26):
$$\mathcal{D}_{elite} \subseteq \{1, 2, \ldots, D\}, \qquad |\mathcal{D}_{elite}| = 2$$
$$x_{i,d}^{t+1} = \begin{cases} x_{g,d}^{t}, & \text{if } Rand < \omega \text{ and } d \in \mathcal{D}_{elite} \\ x_{i,d}^{t}, & \text{otherwise} \end{cases}$$
where $\mathcal{D}_{elite}$ denotes two randomly selected elite dimensions and $\omega$ is the inheritance probability (set to 0.2).
Additionally, in chicken swarms, the following behavior of hens is influenced by their social status within the group. High-ranking hens tend to forage independently, while low-ranking hens rely heavily on following roosters. However, the original CSO neglects individual status differences, resulting in an over-reliance on random exploration by low-fitness individuals. To address this, this study introduces a rank-adaptive strategy to modulate hens’ following behaviors, with specific update rules defined in Equations (27)–(29):
$$\gamma_i = 1 - \frac{rank_i}{N}, \qquad rank_i \in \{1, 2, \ldots, N\}$$
$$S_1 = \exp\left(\frac{f_i - f_{r_1}}{\left|f_i\right| + \varepsilon}\right), \qquad S_2 = \gamma_i \cdot \exp\left(f_{r_2} - f_i\right)$$
$$x_{i,j}^{t+1} = x_{i,j}^{t} + S_1 \cdot Rand \cdot \left(x_{r_1,j}^{t} - x_{i,j}^{t}\right) + S_2 \cdot Rand \cdot \left(x_{r_2,j}^{t} - x_{i,j}^{t}\right)$$
where $rank_i$ is the fitness-based rank of individual $i$ within the population (rank 1 is assigned to the best individual) and $\gamma_i$ is the resulting status weight that modulates the hen’s following behavior.
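The three social-learning rules of Equations (23)–(29) can be sketched as follows, assuming NumPy arrays for positions and per-dimension bounds, rank 1 for the best individual, and minimization (helper names are illustrative):

```python
import numpy as np

def hen_dimension_learning(x_i, x_best, lb, ub, beta=0.05, frac=0.2, rng=None):
    # Eqs. (23)-(24): copy roughly 20% of the dimensions from the global best,
    # with a small Gaussian perturbation scaled per dimension; keep the rest.
    rng = rng or np.random.default_rng()
    n_dims = max(1, int(np.ceil(frac * x_i.size)))
    idx = rng.choice(x_i.size, size=n_dims, replace=False)
    x_new = x_i.copy()
    x_new[idx] = x_best[idx] + beta * (ub[idx] - lb[idx]) * rng.standard_normal(n_dims)
    return x_new

def chick_elite_inheritance(x_i, x_best, omega=0.2, n_elite_dims=2, rng=None):
    # Eqs. (25)-(26): with probability omega per chosen dimension, inherit that
    # elite dimension directly from the global best solution.
    rng = rng or np.random.default_rng()
    idx = rng.choice(x_i.size, size=n_elite_dims, replace=False)
    x_new = x_i.copy()
    inherit = rng.random(n_elite_dims) < omega
    x_new[idx[inherit]] = x_best[idx[inherit]]
    return x_new

def rank_adaptive_hen_update(x_i, x_r1, x_r2, f_i, f_r1, f_r2, rank_i, N,
                             eps=1e-30, rng=None):
    # Eqs. (27)-(29): gamma_i down-weights the "stealing" term S2 for low-ranked
    # hens (large rank_i), so they rely more on following their own rooster r1.
    rng = rng or np.random.default_rng()
    gamma_i = 1.0 - rank_i / N
    s1 = np.exp((f_i - f_r1) / (abs(f_i) + eps))
    s2 = gamma_i * np.exp(f_r2 - f_i)
    return (x_i + s1 * rng.random(x_i.shape) * (x_r1 - x_i)
                + s2 * rng.random(x_i.shape) * (x_r2 - x_i))
```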

3.4. Architecture of ADVCSO

To address the premature convergence issue encountered by the basic CSO algorithm in solving high-dimensional optimization problems, the ADVCSO algorithm is proposed. The algorithm integrates three key enhancement strategies into a cohesive framework: the good point set initialization, population diversity-driven dynamic role allocation, and hybrid mutation strategy. Algorithm 1 provides the pseudocode implementation, detailing the specific steps and processes. The overall architecture of the ADVCSO is illustrated in the flowchart of Figure 3.
Algorithm 1: Pseudocode of ADVCSO algorithm
Initialize population using good point set and apply perturbation to top 10% of elites;
Define parameters: epoch, Gmax, pop_size, N, G, and jixiaoliang;
1:While t < Gmax:
2:    If (t % Gcurrent == 0):
3:        Sort population by fitness and dynamically assign roles:
4:           -Roosters (NR): top-ranked individuals
5:           -Hens (NH): middle-ranked individuals
6:           -Chicks (NC): remaining individuals
7:        Update mating pairs and mother–chick relationships;
8:        Adaptively adjust Gcurrent via cosine annealing;
9:     End if
10:    For i = 1 to NR:
11:        Update roosters’ positions with adaptive learning factor;
12:        Apply hybrid mutation (Cauchy/Gaussian based on diversity threshold);
13:     End for
14:    For i = 1 to NH:
15:        Update hens’ positions using rank-based following strategy;
16:        Replace partial dimensions with global best values;
17:     End for
18:    For i = 1 to NC:
19:        Update chicks’ positions via spiral learning with FL factor;
20:        Inherit elite dimensions from global best;
21:     End for
22:    Evaluate new solutions and update local/global best;
23:    Adjust mutation threshold based on population diversity;
24:End while
25:Return global best solution;
  • Initialization Module: The algorithm initializes the population using a good point set based on low-discrepancy sequences, which generates uniformly distributed initial solutions across the search space. This strategy achieves the following:
    (a) Creates an initial population where individuals follow a quasi-random distribution rather than complete randomness.
    (b) Applies perturbation to the top 10% elite individuals to avoid early stagnation.
    (c) Enhances the quality and diversity of initial solutions, significantly improving the search space coverage.
  • Role Allocation Manager: based on population diversity metrics, this module dynamically assigns roles (roosters, hens, and chicks) using the following process:
    (a) Calculation of the population diversity according to Equation (11).
    (b) Dynamic adjustment of role proportions following Equations (12)–(15).
    (c) Sorting of individuals by fitness and assignment of roles based on the calculated proportions.
    (d) Establishment of hierarchical relationships, including mate pairs and mother–chick relationships.
  • Hybrid Mutation Strategy: this module implements an adaptive mutation and dimension learning mechanism that adheres to the following process:
    (a) Switches between the Cauchy distribution for global exploration and the Gaussian distribution for local refinement based on the search stage and the diversity threshold defined in Equation (18).
    (b) Incorporates an adaptive learning factor that evolves nonlinearly with iterations according to Equations (21) and (22).
    (c) Applies a selective mutation based on the individual status and population diversity state.
    (d) Enables hens to learn specific dimensions from the global best solution with a probability of 0.2.
    (e) Allows chicks to inherit elite dimensions from global best solutions with a controlled probability.
As shown in Algorithm 1, the ADVCSO implementation begins with the initialization of key parameters, followed by population generation using the good point set method. The dynamic role allocation mechanism is executed at lines 3–7 of the algorithm, where the population diversity is calculated and role proportions are adjusted accordingly. Lines 10–21 detail the position update mechanism for different roles, incorporating the hybrid mutation strategy described in Section 3.3.
The interactions between these architectural components follow the precise workflow illustrated in Figure 3:
  • The population diversity is measured;
  • Role proportions are adjusted according to the diversity ratio;
  • Roles are assigned based on fitness rankings;
  • Position updating occurs using role-specific rules enhanced with the hybrid mutation;
  • The mutation mode switches adaptively based on the population state and iteration progress;
  • Dimension learning enables effective knowledge transfer between solutions.
This modular design effectively balances exploration and exploitation capabilities, significantly enhancing the algorithm robustness in complex multimodal problems. The comprehensive architecture not only addresses the limitations of the original CSO—including the initial distribution quality, rigid role proportions, and single mutation strategy—but also creates a synergistic effect through the systematic integration of these enhancement strategies, as evidenced by the performance improvements demonstrated in the experimental results section.

3.5. The Complexity Analysis of the ADVCSO Algorithm

The computational efficiency of metaheuristic algorithms directly impacts their practical applicability to complex optimization problems. This section analyzes the time and space complexity of the proposed ADVCSO algorithm.

3.5.1. Time Complexity

For the time complexity analysis, let N denote the population size, d the problem dimension, T the total number of iterations, and G the update frequency of the role assignment. The time complexity of the ADVCSO consists of initialization and iteration phases.
The initialization phase includes parameter setting, population generation using good point sets, and an initial fitness evaluation. The good point set initialization requires generating appropriate prime numbers ($O(d \cdot \log d)$) and creating low-discrepancy sequences for each individual ($O(N \cdot d)$). Combined with the initial fitness evaluation ($O(N \cdot f_e)$, where $f_e$ is the cost of a single fitness evaluation), the total initialization complexity is $O(N \cdot d + N \cdot f_e)$.
During the iteration phase, each iteration involves several key operations. The population diversity calculation and role assignment require $O(N \cdot \log N)$ operations but are only performed every $G$ iterations, contributing $O(T \cdot N \cdot \frac{\log N}{G})$ to the total complexity. The position updating differs for each role (roosters, hens, and chicks) but collectively requires $O(N \cdot d)$ operations per iteration. The hybrid mutation strategy, including the dimension learning mechanisms, adds operations that scale as $O(N \cdot d)$. Each iteration also requires a fitness evaluation of the entire population ($O(N \cdot f_e)$). Considering all these components, the time complexity of the iteration phase is $O(T \cdot N \cdot (d + f_e + \frac{\log N}{G}))$. The overall complexity of the ADVCSO is shown in Equation (30):
$$O(ADVCSO) = O(N \cdot d + N \cdot f_e) + O\left(T \cdot N \cdot \left(d + f_e + \frac{\log N}{G}\right)\right)$$
Since the initialization is performed only once, the overall time complexity of the ADVCSO is dominated by the iteration phase: $O(T \cdot N \cdot (d + f_e + \frac{\log N}{G}))$. For high-dimensional problems where $d \gg \frac{\log N}{G}$, and considering that $f_e$ is often proportional to $d$, $O(ADVCSO)$ can be approximated by Equation (31):
$$O(ADVCSO) = O(T \cdot N \cdot d)$$
In the original CSO, the initialization phase requires $O(N \cdot d)$ operations for random population generation and $O(N \cdot f_e)$ for the fitness evaluation. The iteration phase involves role assignments every $G$ iterations (an amortized $O(\frac{N \cdot \log N}{G})$ per iteration) and position updating for all individuals ($O(N \cdot d)$). Thus, the total computational complexity of the standard CSO algorithm can be expressed as $O(CSO) = O(T \cdot N \cdot d)$.
This analysis reveals that despite its enhanced features, the ADVCSO maintains the same asymptotic time complexity as the original CSO algorithm.
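As a rough order-of-magnitude illustration under the experimental settings used in Section 4 (N = 50, d = 30, T = 1000), and assuming that a single fitness evaluation costs on the order of d operations (as noted above), the dominant iteration-phase term amounts to
$$T \cdot N \cdot (d + f_e) \approx 1000 \times 50 \times (30 + 30) = 3 \times 10^{6}$$
elementary operations, whereas the periodic sorting term $T \cdot N \cdot \frac{\log N}{G}$ contributes only on the order of $10^4$ operations for typical values of the update cycle $G$, which is consistent with the comparable per-iteration runtimes reported in Section 3.5.2.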

3.5.2. Space Complexity

The space complexity of the ADVCSO is primarily determined by the storage requirements for the population ( O ( N · d ) ), fitness values ( O ( N ) ), role assignments ( O ( N ) ), and best solutions ( O ( N · d ) ). The dominant term is O ( N · d ) , which represents the overall space complexity of the algorithm. Similarly, O ( N · d ) is the dominant term in CSO’s space complexity. Therefore, both algorithms have the same asymptotic space complexity.
Experimental measurements confirm this theoretical analysis. When tested on CEC2017 benchmark functions with d = 30 and N = 50 , the ADVCSO requires approximately 69.6 ms per iteration, which is comparable to CSO’s 57.7 ms. This slight difference in execution time is negligible considering the significant performance improvements achieved.
This analysis demonstrates that the ADVCSO achieves an enhanced optimization performance without incurring significant additional computational costs, making it suitable for practical applications in high-dimensional and complex optimization scenarios.

4. Experimental Design and Analysis

To evaluate the performance of the ADVCSO, ablation studies were first designed to validate the effectiveness of each enhancement strategy, followed by comparative experiments conducted against other swarm intelligence algorithms on the CEC2017 benchmark functions, the Multi-Traveling Salesman Problem (MTSP), and the Multi-Knapsack Problem (MKP). The experimental environment consisted of Windows 11, an Intel i7-12700H processor, 16.0 GB of RAM, and Python 3.12.

4.1. Ablation Studies

To verify the effectiveness of each enhancement strategy in the ADVCSO algorithm, this study designed a series of ablation experiments by incrementally adding improvement strategies to the original CSO algorithm to analyze each strategy’s contribution to the algorithm performance. The specific algorithm variants are shown in Table 2.
The experiments were conducted on four typical functions from the CEC2017 test suite: the unimodal function F2, the multimodal function F4, the hybrid function F11, and the composition function F25. Each algorithm variant was run for 1000 iterations on each function, resulting in the convergence curve comparison shown in Figure 4.

4.1.1. Analysis of Individual Enhancement Strategies

The good point set initialization strategy significantly improved the early convergence speed. For example, on F2, CSO_GPS demonstrated a noticeably faster convergence than the original CSO throughout the iteration process, ultimately achieving a fitness value of 2.87 × 10^4, significantly outperforming the original CSO’s 5.09 × 10^4. This result indicates that the good point set initialization effectively improved the initial distribution quality of the population, providing better initial solutions and accelerating the early convergence. The improvement was particularly significant on F25, confirming that the good point set initialization enhanced the search space coverage and provided a more uniform initial solution distribution.
The dynamic role allocation mechanism balanced exploration and exploitation capabilities. CSO_DRA exhibited a better performance than the original CSO on most test functions. On function F25, CSO_DRA continued to show an optimization capability after approximately 400 iterations, while the original CSO tended to stabilize, indicating that the dynamic role allocation strategy effectively delayed premature convergence. Notably, on function F4, CSO_DRA’s convergence curve exhibited periodic declining characteristics, directly related to its mechanism of dynamically adjusting role proportions and update cycles. This demonstrates that the strategy can adaptively balance global exploration and local exploitation capabilities at different optimization stages, effectively breaking through search plateaus.
Among all four types of test functions, CSO_MUT showed the best single-strategy improvement effect. On function F2, CSO_MUT achieved a fitness value of 1.40 × 10^3, more than an order of magnitude lower than the original CSO. On function F11, CSO_MUT decreased almost synchronously with the ADVCSO, reaching a final fitness value of 7.91 × 10^6, far lower than the other single-strategy variants. Especially on highly nonlinear functions like F25, CSO_MUT’s convergence curve showed multiple breakthrough declines, which were closely related to its phased mutation operator design and elite dimension inheritance mechanism, indicating that this strategy can effectively overcome premature convergence problems caused by single update rules.

4.1.2. Analysis of Synergistic Effects of Combined Strategies

CSO_GPS_DRA showed a better performance than either individual strategy on F2 and F11, indicating a positive synergistic effect between these two strategies. Particularly on F11, CSO_GPS_DRA’s final fitness value was 1.03 × 10^8, which was significantly lower than those of CSO_GPS (1.64 × 10^9) and CSO_DRA (4.40 × 10^8). This synergistic effect stems from the good point set initialization providing higher quality initial solution distributions, while the dynamic role allocation mechanism more effectively allocates computational resources based on these good initial solutions, further improving search efficiency. However, on F4, CSO_GPS_DRA’s performance improvement was not significant, and it was even slightly inferior to CSO_DRA at certain iteration stages, suggesting that the combination of these two strategies may have complex interactions on different problem types and does not always produce additive gains.
CSO_GPS_MUT demonstrated an outstanding performance on all test functions. On F4, CSO_GPS_MUT’s convergence curve approached that of the complete ADVCSO, with a final fitness value of approximately 1.10 × 10^3. On F25, CSO_GPS_MUT performed similarly to CSO_MUT but converged faster in early iteration stages, proving that the good point set initialization and hybrid mutation strategy have significant complementarity: the good point set initialization provides high-quality initial solutions, offering a better mutation foundation for the hybrid mutation strategy, thereby improving the algorithm’s search efficiency.
CSO_DRA_MUT exhibited the best overall performance among all dual-strategy combinations. On F2, CSO_DRA_MUT achieved a final fitness value of 8.53 × 10^3, performing the best among all dual-strategy combinations. Its periodic breakthrough convergence characteristics were particularly evident, indicating that the dynamic role allocation ensures the efficient allocation of computational resources, while the hybrid mutation strategy provides the ability to escape local optima, creating a powerful synergistic effect when combined.

4.1.3. The Performance Analysis of the Complete ADVCSO

The ADVCSO, integrating all three improvement strategies, demonstrated the best performance on all four function types. Taking F2 as an example, the ADVCSO achieved a fitness value of 4.35 × 10^2, which is an order of magnitude lower than the best dual-strategy combination CSO_DRA_MUT (8.53 × 10^3) and nearly two orders of magnitude lower than the original CSO (5.09 × 10^4). During the convergence process, the ADVCSO’s convergence curve exhibited typical “staircase” descent characteristics, indicating the algorithm’s ability to repeatedly break through local optima and continuously find better solutions. This characteristic stems from the synergistic effect of the three improvement strategies: the good point set initialization provides high-quality initial solutions, the dynamic role allocation ensures efficient resource allocation, and the hybrid mutation strategy provides the ability to escape local optima.
The results of the ablation experiments indicate that among the three improvement strategies, the hybrid mutation strategy contributed most significantly to the algorithm performance, followed by the dynamic role allocation strategy, while the good point set initialization strategy showed notable effects in the early convergence stage. Different strategy combinations typically produced synergistic effects, with the combination of the dynamic role allocation and hybrid mutation being particularly outstanding, complementing each other’s advantages and jointly improving the algorithm’s exploration–exploitation balance. After integrating all improvement strategies, the complete ADVCSO algorithm formed a more powerful synergistic effect among strategies, enabling the algorithm to demonstrate an excellent performance when addressing different types of optimization problems, thus validating the effectiveness of the improvement methods proposed in this study.

4.2. CEC2017 Benchmark Experiments

In this study, Table 3 is utilized to evaluate the performance of the ADVCSO, which includes 29 test functions from the CEC2017 benchmark suite. These functions exhibit diverse characteristics, covering unimodal, multimodal, hybrid, and composite types, designed to assess algorithmic capabilities in local exploitation, global exploration, and other critical performance metrics [27] (note that the original F2 function in the CEC2017 test suite was officially removed due to defects). In addition to comparisons with the original CSO, this study selects several state-of-the-art algorithms that have demonstrated outstanding performances in recent IEEE CEC competitions for benchmarking: the Giant Trevally Optimizer (GTO) [28], Coyote Optimization Algorithm (COA) [29], Hunger Games Search (HGS) [30], and Sea Lion Optimization Algorithm (SLO) [31]. To ensure the reliability of the test results, all algorithms are uniformly configured with a problem dimensionality of 30, a population size of 50, and 1000 iterations. Table 4 lists the key initial parameter settings for each algorithm. These parameter selections are informed by a comprehensive understanding of the problem domain and insights from prior research, aiming to enhance the algorithms’ effectiveness and generalizability across diverse datasets and scenarios.
Under the aforementioned experimental configuration and algorithmic parameter settings, the ADVCSO and other swarm intelligence algorithms were executed on the CEC2017 test functions, yielding the convergence curves illustrated in Figure 5. Additionally, each algorithm was independently run 50 times, with results summarized in Table 5, including metrics such as mean values and standard deviations.
Figure 5 presents the convergence curve comparison between the ADVCSO and CSO, GTO, COA, HGS, and SLO on the CEC2017 test functions. The results demonstrate that the ADVCSO algorithm achieved the best convergence precision and speed on the vast majority of test functions, ranking first in all cases except F7 and F9. The ADVCSO’s excellent performance on different types of functions stems from the match between its three key improvement strategies and various function characteristics. On the unimodal functions F1 and F2, the ADVCSO’s convergence curves descend rapidly, which is attributable to the high-quality initial solution distribution provided by the good point set initialization strategy, enabling the algorithm to quickly lock onto promising search regions. Furthermore, in complex multimodal environments (such as F11), the ADVCSO exhibited the ability to continuously break through local optima, with convergence curves showing multiple distinct descending plateaus, indicating that the algorithm successfully escaped local optima traps by dynamically adjusting role proportions to balance exploration and exploitation capabilities, thereby avoiding premature convergence and achieving excellent convergence results. In composition functions (such as F22 and F27), which are highly nonlinear functions, the ADVCSO could still effectively avoid local optima when other algorithms stagnated, benefiting from the phased mutation mechanism in the hybrid mutation strategy. After reaching preset iteration stages, the algorithm switches from the Cauchy mutation to the Gaussian mutation, enhancing its refined search capability while maintaining the search intensity.
Table 5 further validates the above analysis results, showing that the ADVCSO outperforms the comparison algorithms in terms of mean values and standard deviations for most functions. Specifically, the ADVCSO achieves the lowest mean values across all functions except F7, F9, and F17, underscoring its enhanced convergence precision and stability. Notably, on function F27, the ADVCSO attains a mean value of 2.95 × 10^3, significantly lower than its competitors and approaching the theoretical optimum. Moreover, the ADVCSO exhibits the smallest standard deviations in nearly all test functions, validating its exceptional robustness and consistency. On functions F1, F4, F11, F12, F14, F16, F18, F28, and F29, its standard deviations are lower than those of the other comparative algorithms by more than two orders of magnitude.
These results collectively illustrate how the ADVCSO’s enhancement strategies work synergistically under different optimization stages and problem characteristics, enabling the algorithm to intelligently adjust its search behavior, solidifying its efficacy in addressing complex multimodal optimization problems.
To further validate the improvement effects of the ADVCSO, this section also selected recently published state-of-the-art optimization algorithms from high-level journals for comparison, including the Artificial Lemming Algorithm (ALA) [32], Chinese Pangolin Optimizer (CPO) [33], and the CEC champion algorithm JADE [34]. All algorithms were configured with uniform parameters: a problem dimension of 30, a population size of 50, and 1000 iterations. The convergence curves obtained by running the ADVCSO and the aforementioned algorithms on the CEC2017 test function suite are shown in Figure 6. Additionally, each algorithm was independently run 50 times, with results presented in Table 6, which shows metrics such as mean values and standard deviations for each algorithm.
Figure 6 displays the convergence curves of the various algorithms on functions F1–F29 from the CEC2017 test function suite. The blue curve representing the ADVCSO in the unimodal function F2 shows a convergence speed slightly inferior to JADE but still significantly superior to the ALA and CPO, ultimately converging to a better fitness value. In the simple multimodal functions F6 and F8, the ADVCSO exhibited a notably faster initial convergence than JADE, demonstrating the powerful advantage of its elite perturbation initialization strategy based on good point sets. Notably, in the composition function F28, all algorithms started with very high initial fitness values, reaching 10^13–10^15. The ADVCSO performed similarly to the CPO during the first 400 iterations but showed leap-like improvements between iterations 400 and 600, ultimately converging to 9.83 × 10^4, significantly outperforming the CPO (5.86 × 10^6) and ALA (1.39 × 10^7), demonstrating the effectiveness of the dynamic role allocation mechanism and hybrid mutation strategy improvements.
Table 6 presents the mean values and standard deviations of each algorithm on all 29 functions in the CEC2017 test function suite. The results show that the ADVCSO outperformed the ALA on 23 functions and the CPO on 25 functions. Particularly on the hybrid complex functions F14–F16, the ADVCSO’s average values were 1–3 orders of magnitude lower than those of the ALA and CPO, indicating that the ADVCSO can effectively reduce the probability of premature convergence and escape local optima traps. Although JADE demonstrated a powerful performance in the experiments, the ADVCSO also showed strong competitiveness. For example, in most composition functions, such as F21, F24, F25, F26, and F27, the ADVCSO’s performance was very close to that of this champion algorithm, differing by only about 10^-2 in order of magnitude, demonstrating its strong adaptability to complex multimodal optimization problems.
The comprehensive experimental results indicate that the ADVCSO algorithm effectively overcomes the deficiencies of the traditional CSO algorithm by introducing an elite perturbation initialization strategy based on good point sets, a dynamic role allocation mechanism, and a hybrid mutation strategy. It significantly outperforms classic algorithms on most test functions and demonstrates high-performance advantages compared to recently published high-level algorithms like the ALA and CPO. Although there remains a certain gap compared to the champion algorithm JADE, the ADVCSO has demonstrated a comparable or even better performance on some complex functions.

4.3. Multi-Traveling Salesman Problem Solving

The Multiple Traveling Salesman Problem (MTSP) extends the classical Traveling Salesman Problem (TSP) by scaling the number of salesmen from one to multiple, optimizing the coordinated path planning across cities. In the MTSP, multiple salesmen depart from a central starting city, collaboratively visit all cities, and return to the origin. The objective is to identify a set of routes that minimizes the total distance (or cost) traversed by all salesmen, with the constraint that each city (except the origin) must be visited exactly once [35]. The MTSP finds broad applications in logistics distribution, task allocation, and related fields. The mathematical model of the MTSP is established as follows:
Let $m$ denote the number of salesmen. The decision variable $x_{ijk} \in \{0, 1\}$ indicates whether salesman $k$ travels from city $i$ to city $j$. The objective function minimizes the maximum path length among all salesmen, formulated as Equation (32):
$$\min \max_{1 \le k \le m} \sum_{i=0}^{n} \sum_{j=0}^{n} d_{ij}\, x_{ijk}$$
where $d_{ij}$ represents the Euclidean distance between cities $i$ and $j$.
The constraints of the MTSP model are as follows:
  • City Visit Uniqueness (Equation (33)): every city except the origin is visited exactly once by exactly one salesman.
$$\sum_{k=1}^{m} \sum_{j=0}^{n} x_{ijk} = 1, \quad \forall i \ne 0$$
  • Route Continuity (Equation (34)): every salesman departs from the origin (city 0) and returns to it.
$$\sum_{j=1}^{n} x_{0jk} = 1, \qquad \sum_{i=1}^{n} x_{i0k} = 1, \quad \forall k$$
A probability-based encoding scheme is adopted to compute path lengths in the MTSP model. For each city $i$ ($i \ne 0$), a probability vector $p_i = (p_{i1}, p_{i2}, \ldots, p_{im}) \in [0, 1]^m$ defines the likelihood of assigning $i$ to the different salesmen, ensuring $\sum_{k=1}^{m} p_{ik} = 1$. The decoding process follows Equations (35)–(38):
$$k_i = \arg\max_{1 \le k \le m} p_{ik}$$
$$P_k = \left(0,\ \{\, i \mid k_i = k \,\},\ 0\right)$$
$$L_k = \sum_{l=0}^{|P_k| - 1} d_{P_k(l),\, P_k(l+1)}$$
$$f(x) = \max_{1 \le k \le m} L_k + 0.1\, \sigma_L$$
where $k_i$ is the salesman assigned to city $i$ (the one with the highest probability), $P_k$ is the complete route of salesman $k$ (starting and ending at the depot), $L_k$ is the total path length of salesman $k$, and $f(x)$ is the fitness function incorporating a path-balancing penalty, in which $\sigma_L$ is the standard deviation of the path lengths and 0.1 is its weighting factor.
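A minimal Python sketch of this probability-based decoding and fitness evaluation (Equations (35)–(38)); for illustration only, each salesman visits his assigned cities in index order, since the excerpt above does not prescribe the within-route ordering, and all names are assumed rather than taken from the original code:

```python
import numpy as np

def mtsp_fitness(prob_matrix, coords, depot=0, balance_w=0.1):
    # Eqs. (35)-(38): each non-depot city goes to the salesman with the highest
    # assignment probability; routes run depot -> assigned cities -> depot; the
    # fitness is the longest route plus a path-balancing penalty.
    n_cities, m = prob_matrix.shape
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    lengths = np.zeros(m)
    for k in range(m):
        cities = [i for i in range(n_cities)
                  if i != depot and np.argmax(prob_matrix[i]) == k]
        route = [depot] + cities + [depot]
        lengths[k] = sum(dist[a, b] for a, b in zip(route[:-1], route[1:]))
    return lengths.max() + balance_w * lengths.std(), lengths

# toy usage: 10 random cities, 3 salesmen, row-stochastic assignment probabilities
rng = np.random.default_rng(1)
coords = rng.random((10, 2)) * 100.0
probs = rng.dirichlet(np.ones(3), size=10)
fitness_value, route_lengths = mtsp_fitness(probs, coords)
print(fitness_value, route_lengths)
```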
In the experiments, 50 randomly generated cities were used, with the distribution center fixed at the first city and 10 salesmen deployed. The proposed ADVCSO was compared against the GTO, African Vultures Optimization Algorithm (AVOA) [36], and HGS. All algorithms were configured with a population size of 50 and 1000 iterations. The optimal path plans for the MTSP obtained by the ADVCSO and other algorithms are illustrated in Figure 7, while statistical results from 20 independent runs are summarized in Table 7.
Figure 7 reveals that the ADVCSO generates paths with a balanced spatial distribution, in which each salesman’s assigned region is logically partitioned and path overlaps and revisits are avoided, underscoring the algorithm’s collaborative optimization capability. Table 7 demonstrates the ADVCSO’s superiority across all key metrics: it achieves the best maximum path length (358.27), total path length (1744.09), and fitness value, outperforming the second-best algorithm by 6.0% and 5.6% in the maximum and total lengths, respectively. Furthermore, the ADVCSO exhibits the smallest or near-smallest standard deviations among all compared algorithms, confirming its robustness in combinatorial optimization across repeated runs.

4.4. Multi-Knapsack Problem Solving

The Multiple Knapsack Problem (MKP) extends the classical Knapsack Problem. In the MKP, given a set of items and multiple capacity-constrained knapsacks, the objective is to allocate items to knapsacks to maximize the total value of all items while satisfying each knapsack’s capacity limit [37]. The MKP has broad applications in resource allocation, portfolio optimization, and related fields.
This study employs integer encoding to establish the MKP model. Because individual position updates during the iterative process may produce infeasible solutions that violate the constraints, the following repair strategies are adopted to keep all solutions within the feasible domain. When the single-knapsack constraint is violated, i.e., an item $i$ is effectively allocated to more than one knapsack, only the allocation with the highest value-to-weight ratio ($v_i / w_i$) is retained and the other allocations are cleared. When a capacity constraint is violated, i.e., the total weight of the items in a knapsack $j$ exceeds its capacity $c_j$, the items in that knapsack are sorted in descending order of the value-to-weight ratio ($v_i / w_i$), and the items with the lowest ratios are removed one by one until the capacity constraint is satisfied; removed items are set to the unassigned state ($z_i = 0$). In addition, penalty terms introduced into the objective function guide the algorithm toward the feasible domain. The mathematical model of the MKP is established as follows:
Let $n$ denote the number of items, with weights $w_i$ and values $v_i$ ($i \in \{1, 2, \ldots, n\}$), and let there be $m$ knapsacks with capacities $c_j$ ($j \in \{1, 2, \ldots, m\}$). The decision variable $x_{ij} \in \{0, 1\}$ indicates whether item $i$ is placed in knapsack $j$. The objective function maximizes the total value of the assigned items without exceeding the capacities, as formulated in Equation (39):
$$\max \sum_{i=1}^{n} \sum_{j=1}^{m} v_i\, x_{ij} \qquad (39)$$
The constraints of the MKP model are as follows:
1. Single-Knapsack Constraint (Equation (40)):
$$\sum_{j=1}^{m} x_{ij} \le 1, \quad \forall i \qquad (40)$$
2. Capacity Constraint (Equation (41)):
$$\sum_{i=1}^{n} w_i\, x_{ij} \le c_j, \quad \forall j \qquad (41)$$
An integer encoding scheme is employed to model the MKP. Each individual is encoded as a vector $z = (z_1, z_2, \ldots, z_n)$, where $z_i \in \{0, 1, 2, \ldots, m\}$; $z_i = j$ indicates that item $i$ is assigned to knapsack $j$, and $z_i = 0$ that it remains unassigned. The decoding process follows Equations (42)–(46):
$$S_j = \{\, i \mid z_i = j \,\} \qquad (42)$$
$$p_i = \frac{v_i}{w_i}, \quad i \in S_j \qquad (43)$$
$$\sum_{i \in S_j} w_i \le c_j \qquad (44)$$
$$j^{*} = \arg\max_{1 \le j \le m} \left\{\, v_i \;\middle|\; \sum_{k \in S_j} w_k + w_i \le c_j \,\right\}, \quad z_i = 0 \qquad (45)$$
$$f(z) = \sum_{i=1}^{n} \sum_{j=1}^{m} v_i\, \delta(z_i, j) - 1000 \cdot \sum_{j=1}^{m} \max\!\left(0, \; \sum_{i=1}^{n} w_i\, \delta(z_i, j) - c_j\right) \qquad (46)$$
where $S_j$ is the set of items initially assigned to knapsack $j$; $p_i$ is the value density used to sort $S_j$ in descending order when removing items until the capacity constraint (44) is met; $j^{*}$ selects a feasible knapsack for an unassigned item $i$ ($z_i = 0$); and $f(z)$ is the fitness function of the MKP, in which $\delta(z_i, j) = 1$ if $z_i = j$ and 0 otherwise, so that unassigned items contribute no value, and the second term imposes an overcapacity penalty that enforces strict adherence to the knapsack capacities.
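The decoding and repair procedure of Equations (42)–(46) can be sketched in Python as follows. This is a minimal illustration under stated assumptions, not the authors’ code: weights, values, and capacities are assumed to be NumPy arrays, the function names repair and mkp_fitness are hypothetical, and the feasible-knapsack selection of Equation (45) is simplified here to a first-fit scan.

```python
import numpy as np

def repair(z, weights, values, capacities):
    """Greedy repair of an integer-encoded MKP solution (Eqs. (42)-(45)).

    z[i] = j assigns item i to knapsack j (1-based); z[i] = 0 leaves it out.
    Overfull knapsacks drop their lowest value-density items until Eq. (44)
    holds; unassigned items are then re-inserted wherever they still fit.
    """
    z = z.copy()
    m = len(capacities)
    for j in range(1, m + 1):
        items = np.where(z == j)[0]                        # Eq. (42): S_j
        # Eq. (43): order the knapsack's items by value density v_i / w_i.
        order = items[np.argsort(values[items] / weights[items])]
        # Eq. (44): remove the lowest-density item until the capacity fits.
        while weights[z == j].sum() > capacities[j - 1] and len(order) > 0:
            z[order[0]] = 0
            order = order[1:]
    # Eq. (45), simplified: place each unassigned item into the first
    # knapsack that can still accommodate it.
    for i in np.where(z == 0)[0]:
        for j in range(1, m + 1):
            if weights[z == j].sum() + weights[i] <= capacities[j - 1]:
                z[i] = j
                break
    return z

def mkp_fitness(z, weights, values, capacities, penalty=1000.0):
    """Eq. (46): total value of assigned items minus an overcapacity penalty."""
    total_value = values[z > 0].sum()
    overflow = sum(max(0.0, weights[z == j].sum() - capacities[j - 1])
                   for j in range(1, len(capacities) + 1))
    return total_value - penalty * overflow
```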
In the experiments, a multi-knapsack instance with 10 items and 3 knapsacks (capacities: 5.3, 4.5, and 6.2) was selected. The weights and values of the items are listed in Table 8, with the maximum total value of the instance being 21.4. The ADVCSO was compared against the GTO, AVOA, and HGS, with all algorithms configured to a population size of 50 and 30 iterations. Each algorithm was executed 20 times, with results summarized in Table 9.
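As a usage example, and assuming the repair/fitness sketch above is in scope, the Table 8 instance can be evaluated directly. The assignment below is the global optimum identified by the compared algorithms (see the discussion of Table 9 below: items 4 and 9 in knapsack 1, item 6 in knapsack 2, and items 1, 5, and 8 in knapsack 3) and reproduces the stated maximum total value of 21.4.

```python
import numpy as np

values = np.array([2.3, 1.5, 3.4, 1.6, 5.2, 4.3, 2.8, 3.9, 4.1, 2.5])
weights = np.array([1.2, 3.4, 2.5, 1.6, 1.9, 4.3, 5.1, 2.8, 3.5, 4.2])
capacities = np.array([5.3, 4.5, 6.2])

# Optimal assignment (1-based knapsack indices, 0 = unassigned):
# items 4 and 9 -> knapsack 1, item 6 -> knapsack 2, items 1, 5, 8 -> knapsack 3.
z = np.array([3, 0, 0, 1, 3, 2, 0, 3, 1, 0])

print(round(mkp_fitness(z, weights, values, capacities), 2))   # 21.4
```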
Table 9 shows that the ADVCSO and the other algorithms all identified the global optimal solution (items 4 and 9 in knapsack 1, item 6 in knapsack 2, and items 1, 5, and 8 in knapsack 3), validating their fundamental capability to discover feasible solutions. However, the ADVCSO demonstrates a clear superiority on the critical metrics: the worst fitness, mean fitness, standard deviation, and number of optimal solutions found. Notably, the ADVCSO reached the optimal solution in 13 of 20 runs, compared with 8 of 20 for the HGS, a 62.5% improvement in success rate, confirming its enhanced global search capability and stability in combinatorial optimization.
In summary, the ADVCSO demonstrates significant advantages across these collaborative optimization problems. Comparisons against state-of-the-art algorithms show that the ADVCSO consistently achieves high convergence precision with fast convergence rates, strengthening exploration in the early stages while avoiding entrapment in local optima and improving exploitation in the later phases. In both the MTSP and MKP experiments, the ADVCSO maintains superior performance and consistency throughout the optimization process, validating its efficacy for complex combinatorial optimization challenges.

5. Conclusions

To address the universal challenge of balancing global convergence and solution quality in complex combinatorial optimization, this study proposes the Adaptive Dynamically Enhanced Variant of Chicken Swarm Optimization (ADVCSO) for solving multi-subproblem collaborative optimization tasks. Inspired by the social behaviors of chicken swarms, we introduce a population diversity-driven dynamic role allocation mechanism and develop a hybrid mutation strategy, resolving the critical limitations of the original CSO (low flexibility and susceptibility to local optima) and significantly enhancing performance on complex multimodal problems. Additionally, inspired by the natural distribution patterns of chicken swarms, a good point set initialization strategy is implemented to improve the uniformity and diversity of the initial population, addressing the poor initial solution quality of CSO and accelerating convergence.
Experimental analyses demonstrate that the ADVCSO not only consistently approaches optimal values on benchmark tests but also exhibits an exceptional robustness and stability in real-world combinatorial optimization applications. For the Multi-Traveling Salesman Problem (MTSP), the ADVCSO achieves optimal maximum and total path lengths of 358.27 and 1744.09, respectively. In the Multi-Knapsack Problem (MKP), it attains the global optimum 13 times out of 20 runs, outperforming all competitors across metrics and validating its effectiveness and collaborative optimization capability in practical scenarios.
The ADVCSO represents a paradigm breakthrough in complex combinatorial optimization, particularly in addressing the long-standing technical bottleneck of multi-subproblem coupled optimization, marking an innovative advancement in intelligent optimization methodologies. Future research may focus on enhancing the ADVCSO’s scalability and generalization for diverse dynamic scenarios. Furthermore, integrating deep reinforcement learning into intelligent optimization frameworks could extend its applicability to ultra-large-scale distributed optimization tasks, ensuring efficiency and reliability in real-world applications, such as network optimization and energy scheduling, thereby fostering the co-evolution of theoretical innovation and engineering implementation.

Author Contributions

Conceptualization, K.W.; methodology, K.W.; software, K.W.; validation, K.W.; data curation, K.W.; resources, K.W.; writing—original draft preparation, K.W., L.W. and M.L.; writing—review and editing, K.W., L.W. and M.L.; supervision, M.L.; project administration, L.W.; funding acquisition, L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This project was supported by the Hainan Provincial Natural Science Foundation of China (Grant No. 621RC512) and National Defense Science and Technology 173 Program of China (Grant No. 173LA13007102).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are included within the article and are also available from the corresponding authors upon request. All code is available at https://github.com/du5-05/ADVCSO.git (accessed on 24 April 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADVCSO: Adaptive Dynamically Enhanced Variant of Chicken Swarm Optimization
CSO: Chicken Swarm Optimization
SO: Snake Optimizer
DRA: Divine Religions Algorithm
APO: Artificial Protozoa Optimizer
RBMO: Red-Billed Blue Magpie Optimizer
HOA: Hiking Optimization Algorithm
SDO: Sled Dog-Inspired Optimizer
GJO: Golden Jackal Optimization
BOA: Butterfly Optimization Algorithm
SSA: Sparrow Search Algorithm
SCSO: Sand Cat Swarm Optimization
HBA: Honey Badger Algorithm
BES: Bald Eagle Search Algorithm
PECSO: Improved Enhanced Chicken Swarm Algorithm
AFSA: Artificial Fish Swarm Algorithm
GA: Genetic Algorithm
FCSO: Adaptive Fuzzy Chicken Swarm Optimization
NSCSO: Non-Dominated Sorting Chicken Swarm Algorithm
MTSP: Multi-Traveling Salesman Problem
TSP: Traveling Salesman Problem
MKP: Multi-Knapsack Problem
GTO: Giant Trevally Optimizer
COA: Coyote Optimization Algorithm
HGS: Hunger Games Search
SLO: Sea Lion Optimization Algorithm
AVOA: African Vultures Optimization Algorithm
ALA: Artificial Lemming Algorithm
CPO: Chinese Pangolin Optimizer
JADE: Adaptive Differential Evolution with Optional External Archive

References

1. Yang, X.-S. Nature-inspired optimization algorithms: Challenges and open problems. J. Comput. Sci. 2020, 46, 101104.
2. Tang, W.; Cao, L.; Chen, Y.; Chen, B.; Yue, Y. Solving engineering optimization problems based on multi-strategy particle swarm optimization hybrid dandelion optimization algorithm. Biomimetics 2024, 9, 298.
3. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320.
4. Mozhdehi, A.T.; Khodadadi, N.; Aboutalebi, M.; El-Kenawy, E.-S.M.; Hussien, A.G.; Zhao, W.; Nadimi-Shahraki, M.H.; Mirjalili, S. Divine Religions Algorithm: A novel social-inspired metaheuristic algorithm for engineering and continuous optimization problems. Clust. Comput. 2025, 28, 253.
5. Wang, X.; Snášel, V.; Mirjalili, S.; Pan, J.-S.; Kong, L.; Shehadeh, H.A. Artificial Protozoa Optimizer (APO): A novel bio-inspired metaheuristic algorithm for engineering optimization. Knowl.-Based Syst. 2024, 295, 111737.
6. Fu, S.; Li, K.; Huang, H.; Ma, C.; Fan, Q.; Zhu, Y. Red-billed blue magpie optimizer: A novel metaheuristic algorithm for 2D/3D UAV path planning and engineering design problems. Artif. Intell. Rev. 2024, 57, 134.
7. Oladejo, S.O.; Ekwe, S.O.; Mirjalili, S. The Hiking Optimization Algorithm: A novel human-based metaheuristic approach. Knowl.-Based Syst. 2024, 296, 111880.
8. Hu, G.; Cheng, M.; Houssein, E.H.; Hussien, A.G.; Abualigah, L. SDO: A novel sled dog-inspired optimizer for solving engineering problems. Adv. Eng. Inform. 2024, 62, 102783.
9. Tang, J.; Liu, G.; Pan, Q. A review on representative swarm intelligence algorithms for solving optimization problems: Applications and trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643.
10. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924.
11. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734.
12. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control. Eng. 2020, 8, 22–34.
13. Adegboye, O.R.; Feda, A.K.; Ojekemi, O.R.; Agyekum, E.B.; Khan, B.; Kamel, S. DGS-SCSO: Enhancing sand cat swarm optimization with dynamic pinhole imaging and golden sine algorithm for improved numerical optimization performance. Sci. Rep. 2024, 14, 1491.
14. Wang, W.; Tian, J. An Improved Hippopotamus Optimization Algorithm Utilizing Swarm-Elite Learning Mechanism’s Levy Flight and Quadratic Interpolation Operators for Optimal Parameters Extraction in Photovoltaic Models. In International Conference on Intelligent Computing; Springer Nature Singapore: Singapore, 2024; pp. 264–277.
15. Xia, H.; Chen, L.; Xu, H. Multi-strategy dung beetle optimizer for global optimization and feature selection. Int. J. Mach. Learn. Cybern. 2025, 16, 189–231.
16. Adegboye, O.R.; Feda, A.K.; Tejani, G.G.; Smerat, A.; Kumar, P.; Agyekum, E.B. Salp Navigation and Competitive based Parrot Optimizer (SNCPO) for efficient extreme learning machine training and global numerical optimization. Sci. Rep. 2025, 15, 13704.
17. Meng, X.; Liu, Y.; Gao, X.; Zhang, H. A new bio-inspired algorithm: Chicken swarm optimization. In Proceedings of the Advances in Swarm Intelligence: 5th International Conference, ICSI 2014, Hefei, China, 17–20 October 2014; Proceedings, Part I 5. Springer International Publishing: Berlin, Germany, 2014; pp. 86–94.
18. Liang, J.; Wang, L.; Ma, M. An Improved Chicken Swarm Optimization Algorithm for Solving Multimodal Optimization Problems. Comput. Intell. Neurosci. 2022, 2022, 5359732.
19. Deb, S.; Gao, X.-Z. A hybrid ant lion optimization chicken swarm optimization algorithm for charger placement problem. Complex Intell. Syst. 2021, 8, 2791–2808.
20. Li, Y.; Lu, Y.; Li, D.; Zhou, M.; Xu, C.; Gao, X.; Liu, Y. Trajectory optimization of high-speed robotic positioning with suppressed motion jerk via improved chicken swarm algorithm. Appl. Sci. 2023, 13, 4439.
21. Nagarajan, B.; Svn, S.K.; Selvi, M.; Thangaramya, K. A fuzzy based chicken swarm optimization algorithm for efficient fault node detection in Wireless Sensor Networks. Sci. Rep. 2024, 14, 27532.
22. Afzal, M.Z.; Wen, F.; Saeed, N.; Aurangzeb, M. Enhanced state of charge estimation in electric vehicle batteries using chicken swarm optimization with open ended learning. Sci. Rep. 2025, 15, 10833.
23. Gamal, M.; El-Sawy, A.; AbuEl-Atta, A. Hybrid Algorithm Based on Chicken Swarm Optimization and Genetic Algorithm for Text Summarization. Int. J. Intell. Eng. Syst. 2021, 14, 319–331.
24. Wang, Z.; Qin, C.; Wan, B.; Song, W.W.; Yang, G. An adaptive fuzzy chicken swarm optimization algorithm. Math. Probl. Eng. 2021, 2021, 8896794.
25. Huang, H.; Zheng, B.; Wei, X.; Zhou, Y.; Zhang, Y. NSCSO: A novel multi-objective non-dominated sorting chicken swarm optimization algorithm. Sci. Rep. 2024, 14, 4310.
26. Liu, S.; Jin, Z.; Lin, H.; Lu, H. An improved crested porcupine algorithm for UAV delivery path planning in challenging environments. Sci. Rep. 2024, 14, 20445.
27. Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia-San Sebastián, Spain, 5–8 June 2017; pp. 372–379.
28. Sadeeq, H.T.; Abdulazeez, A.M. Giant trevally optimizer (GTO): A novel metaheuristic algorithm for global optimization and challenging engineering problems. IEEE Access 2022, 10, 121615–121640.
29. Pierezan, J.; Coelho, L.D.S. Coyote optimization algorithm: A new metaheuristic for global optimization problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018.
30. Yang, Y.; Chen, H.; Heidari, A.A.; Gandomi, A.H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 2021, 177, 114864.
31. Masadeh, R.; Mahafzah, B.A.; Sharieh, A. Sea lion optimization algorithm. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 388–395.
32. Xiao, Y.; Cui, H.; Khurma, R.A.; Castillo, P.A. Artificial lemming algorithm: A novel bionic meta-heuristic technique for solving real-world engineering optimization problems. Artif. Intell. Rev. 2025, 58, 84.
33. Guo, Z.; Liu, G.; Jiang, F. Chinese Pangolin Optimizer: A novel bio-inspired metaheuristic for solving optimization problems. J. Supercomput. 2025, 81, 517.
34. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958.
35. Bektas, T. The multiple traveling salesman problem: An overview of formulations and solution procedures. Omega 2006, 34, 209–219.
36. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408.
37. Awasthi, A.; Bär, F.; Doetsch, J.; Ehm, H.; Erdmann, M.; Hess, M.; Klepsch, J.; Limacher, P.A.; Luckow, A.; Niedermeier, C.; et al. Quantum computing techniques for multi-knapsack problems. In Science and Information Conference; Springer Nature Switzerland: Cham, Switzerland, 2023; pp. 264–284.
Figure 1. Initialization performance comparison. (a) Good-point-set-based elite perturbation initialization. (b) Random initialization.
Figure 2. The effect of the dynamic role allocation mechanism.
Figure 3. The flowchart of the ADVCSO algorithm.
Figure 4. Convergence curve comparison of CSO algorithm variants.
Figure 5. Comparison of convergence curves of various algorithms (1).
Figure 6. Comparison of convergence curves of various algorithms (2).
Figure 7. Various algorithms solve the optimal path planning for the MTSP.
Table 1. Summary of CSO enhancement efforts.
Researcher(s) | Year | Initialization Enhancements | Position Update Enhancements | Mutation Strategy Enhancements
Internal | External | Pre-Update | Post-Update
Liang et al. [18] | 2022
Deb et al. [19] | 2021
Li et al. [20] | 2023
Nagarajan et al. [21] | 2024
Afzal et al. [22] | 2025
Gamal et al. [23] | 2021
Wang et al. [24] | 2021
Huang et al. [25] | 2024
Table 2. Algorithm variants for ablation studies.
Algorithm Variant | Description
ADVCSO | Complete algorithm combining all three improvement strategies
Original CSO | Original Chicken Swarm Optimization algorithm
CSO_GPS | CSO algorithm using only good point set initialization strategy
CSO_DRA | CSO algorithm using only dynamic role allocation mechanism
CSO_MUT | CSO algorithm using only hybrid mutation strategy
CSO_GPS_DRA | CSO algorithm combining good point set initialization and dynamic role allocation mechanism
CSO_GPS_MUT | CSO algorithm combining good point set initialization and hybrid mutation strategy
CSO_DRA_MUT | CSO algorithm combining dynamic role allocation mechanism and hybrid mutation strategy
Table 3. Specific situation of the CEC2017 test set.
Category | No. | Function | F_i^* = F_i(x^*)
Unimodal Functions | 1 | Shifted and Rotated Bent Cigar Function | 100
Unimodal Functions | 2 ¹ | Shifted and Rotated Zakharov Function | 200
Simple Multimodal Functions | 3 | Shifted and Rotated Rosenbrock’s Function | 300
Simple Multimodal Functions | 4 | Shifted and Rotated Rastrigin’s Function | 400
Simple Multimodal Functions | 5 | Shifted and Rotated Expanded Scaffer’s F6 Function | 500
Simple Multimodal Functions | 6 | Shifted and Rotated Lunacek Bi Rastrigin Function | 600
Simple Multimodal Functions | 7 | Shifted and Rotated Non-Continuous Rastrigin’s Function | 700
Simple Multimodal Functions | 8 | Shifted and Rotated Levy Function | 800
Simple Multimodal Functions | 9 | Shifted and Rotated Schwefel’s Function | 900
Hybrid Functions | 10 | Hybrid Function 1 (N = 3) | 1000
Hybrid Functions | 11 | Hybrid Function 2 (N = 3) | 1100
Hybrid Functions | 12 | Hybrid Function 3 (N = 3) | 1200
Hybrid Functions | 13 | Hybrid Function 4 (N = 4) | 1300
Hybrid Functions | 14 | Hybrid Function 5 (N = 4) | 1400
Hybrid Functions | 15 | Hybrid Function 6 (N = 4) | 1500
Hybrid Functions | 16 | Hybrid Function 6 (N = 5) | 1600
Hybrid Functions | 17 | Hybrid Function 6 (N = 5) | 1700
Hybrid Functions | 18 | Hybrid Function 6 (N = 5) | 1800
Hybrid Functions | 19 | Hybrid Function 6 (N = 6) | 1900
Composition Functions | 20 | Composition Function 1 (N = 3) | 2000
Composition Functions | 21 | Composition Function 2 (N = 3) | 2100
Composition Functions | 22 | Composition Function 3 (N = 4) | 2200
Composition Functions | 23 | Composition Function 4 (N = 4) | 2300
Composition Functions | 24 | Composition Function 5 (N = 5) | 2400
Composition Functions | 25 | Composition Function 6 (N = 5) | 2500
Composition Functions | 26 | Composition Function 7 (N = 6) | 2600
Composition Functions | 27 | Composition Function 8 (N = 6) | 2700
Composition Functions | 28 | Composition Function 9 (N = 3) | 2800
Composition Functions | 29 | Composition Function 10 (N = 3) | 2900
Search Range: [−100, 100]^D ²
1 The original F2 function in the CEC2017 test suite was officially removed due to defects. 2 D denotes the problem dimensionality.
Table 4. Algorithm parameters.
Algorithm Name | Parameter | Initial Setting
CSO | Update cycle | 5
CSO | The proportion of roosters, hens, and chicks | 20%, 30%, and 50%
ADVCSO | Update cycle | Vary with iterations
ADVCSO | The proportion of roosters, hens, and chicks | Vary with iterations
GTO | Position-change-controlling parameter | 0.4
GTO | Initial value of jump slope | Random
COA | Number of packs | 10
COA | Number of coyotes per pack | 5
COA | Probability of leaving a pack | 0.125
HGS | The probability of updating position | 0.08
HGS | Largest hunger/threshold | 10,000
SLO | The speed of sound of sea lion leader | Random
AVOA | Probability of status transition in the exploration phase | 0.6
AVOA | Probability of status transition in phase 1 | 0.4
AVOA | Probability of status transition in phase 2 | 0.6
AVOA | Probability of selecting the 1st best | 0.8
Table 5. Comparison results of algorithms on the CEC2017 test functions. Significant values are the best results in the comparison of the algorithm performance (1).
Func. | ADVCSO Mean | ADVCSO Std | CSO Mean | CSO Std | GTO Mean | GTO Std | COA Mean | COA Std | HGS Mean | HGS Std | SLO Mean | SLO Std
F1 | 1.63E+08 | 3.58E+07 | 1.70E+10 | 3.99E+09 | 6.83E+09 | 1.54E+09 | 9.77E+09 | 3.34E+09 | 7.60E+09 | 3.43E+09 | 1.14E+10 | 5.06E+09
F2 | 6.39E+02 | 1.00E+02 | 3.83E+04 | 8.01E+03 | 3.33E+04 | 3.28E+03 | 3.29E+04 | 6.61E+03 | 3.10E+04 | 8.46E+03 | 4.17E+04 | 1.17E+04
F3 | 5.21E+02 | 1.17E+02 | 1.67E+03 | 5.81E+02 | 8.35E+02 | 2.00E+02 | 1.18E+03 | 2.24E+02 | 1.18E+03 | 4.16E+02 | 3.03E+03 | 1.74E+03
F4 | 8.98E+02 | 7.24E+01 | 1.47E+04 | 4.33E+03 | 1.21E+04 | 3.30E+03 | 1.68E+04 | 3.84E+03 | 2.48E+04 | 6.52E+03 | 3.07E+04 | 7.86E+03
F5 | 500.0010 | 7.23E−04 | 500.0020 | 1.34E−03 | 500.0027 | 2.01E−03 | 500.0095 | 2.80E−03 | 500.0044 | 3.54E−03 | 500.0085 | 6.38E−03
F6 | 1.44E+04 | 5.64E+03 | 1.44E+04 | 4.55E+03 | 2.01E+04 | 1.04E+04 | 2.80E+04 | 9.36E+03 | 1.04E+04 | 6.80E+03 | 1.20E+04 | 6.87E+03
F7 | 700.4271 | 2.32E−01 | 700.1991 | 1.43E−01 | 700.3934 | 3.00E−01 | 700.6601 | 2.75E−01 | 700.4914 | 3.43E−01 | 700.6061 | 4.53E−01
F8 | 8.05E+02 | 2.36E+00 | 8.11E+02 | 3.68E+00 | 8.09E+02 | 2.79E+00 | 8.09E+02 | 2.65E+00 | 8.15E+02 | 5.78E+00 | 8.17E+02 | 5.94E+00
F9 | 6.56E+03 | 5.20E+02 | 6.70E+03 | 9.70E+02 | 5.54E+03 | 7.24E+02 | 6.13E+03 | 3.19E+02 | 6.36E+03 | 1.08E+03 | 6.49E+03 | 9.12E+02
F10 | 3.33E+04 | 1.69E+04 | 1.99E+05 | 2.34E+05 | 1.20E+05 | 4.67E+04 | 5.07E+05 | 2.13E+05 | 4.21E+06 | 2.82E+07 | 4.12E+05 | 5.77E+05
F11 | 5.30E+06 | 2.08E+06 | 1.82E+09 | 8.00E+08 | 6.28E+08 | 2.67E+08 | 7.24E+08 | 2.59E+08 | 1.06E+08 | 1.74E+08 | 9.19E+08 | 1.36E+09
F12 | 3.02E+05 | 1.90E+05 | 1.56E+09 | 1.04E+09 | 4.55E+08 | 2.71E+08 | 3.14E+08 | 1.29E+08 | 6.02E+07 | 7.37E+07 | 6.37E+08 | 1.01E+09
F13 | 5.29E+05 | 3.90E+05 | 2.58E+06 | 1.84E+06 | 3.59E+06 | 1.06E+06 | 6.70E+05 | 3.16E+05 | 2.17E+06 | 2.81E+06 | 3.69E+06 | 2.95E+06
F14 | 1.22E+05 | 5.98E+04 | 2.91E+08 | 2.22E+08 | 5.58E+07 | 4.19E+07 | 5.88E+07 | 3.13E+07 | 1.30E+07 | 2.99E+07 | 3.19E+08 | 5.69E+08
F15 | 4.75E+05 | 6.41E+05 | 4.67E+06 | 5.31E+06 | 9.85E+05 | 4.96E+05 | 2.38E+06 | 1.50E+06 | 1.27E+06 | 1.77E+06 | 2.97E+06 | 4.88E+06
F16 | 3.67E+03 | 8.20E+02 | 8.04E+04 | 1.03E+05 | 2.39E+07 | 6.70E+07 | 2.82E+07 | 7.16E+07 | 1.60E+07 | 1.11E+08 | 1.84E+11 | 7.59E+11
F17 | 2.62E+05 | 4.33E+05 | 8.27E+05 | 1.54E+06 | 3.10E+06 | 2.66E+06 | 3.08E+05 | 1.06E+05 | 1.42E+05 | 8.65E+04 | 6.43E+05 | 9.66E+05
F18 | 2.22E+05 | 2.67E+05 | 1.68E+10 | 1.61E+10 | 5.02E+09 | 6.25E+09 | 3.04E+09 | 1.99E+09 | 8.29E+08 | 1.60E+09 | 7.44E+09 | 1.99E+10
F19 | 2.33E+03 | 2.70E+02 | 3.85E+03 | 5.51E+02 | 6.12E+03 | 1.34E+03 | 4.04E+03 | 4.09E+02 | 1.04E+04 | 3.07E+03 | 7.83E+03 | 2.31E+03
F20 | 2.56E+03 | 1.23E+02 | 8.42E+03 | 3.21E+03 | 5.98E+03 | 2.41E+03 | 4.38E+03 | 1.12E+03 | 1.35E+04 | 9.46E+03 | 2.05E+04 | 8.42E+03
F21 | 2.28E+03 | 7.69E+00 | 2.49E+03 | 7.71E+01 | 2.63E+03 | 1.94E+02 | 2.42E+03 | 2.62E+01 | 3.83E+03 | 7.47E+02 | 3.44E+03 | 5.94E+02
F22 | 3.95E+03 | 2.30E+02 | 2.03E+04 | 3.35E+03 | 1.37E+04 | 4.26E+03 | 1.48E+04 | 2.35E+03 | 2.12E+04 | 1.03E+04 | 3.18E+04 | 8.87E+03
F23 | 3.09E+03 | 2.10E+02 | 1.33E+04 | 2.87E+03 | 8.05E+03 | 3.01E+03 | 1.01E+04 | 1.63E+03 | 1.60E+04 | 6.75E+03 | 1.79E+04 | 5.46E+03
F24 | 2.87E+03 | 2.40E+01 | 3.38E+03 | 2.11E+02 | 3.17E+03 | 7.87E+01 | 3.21E+03 | 1.49E+02 | 3.31E+03 | 2.08E+02 | 4.23E+03 | 8.09E+02
F25 | 3.34E+03 | 1.01E+01 | 3.67E+03 | 1.68E+02 | 5.54E+03 | 1.46E+03 | 3.45E+03 | 7.98E+01 | 4.99E+03 | 1.67E+03 | 7.39E+03 | 2.58E+03
F26 | 3.18E+03 | 3.33E+01 | 3.30E+03 | 6.24E+01 | 3.72E+03 | 3.33E+02 | 3.23E+03 | 2.89E+01 | 3.64E+03 | 2.51E+02 | 3.97E+03 | 3.14E+02
F27 | 2.95E+03 | 1.82E+02 | 3.55E+03 | 1.61E+02 | 3.45E+03 | 4.01E+02 | 3.50E+03 | 6.71E+01 | 3.57E+03 | 5.45E+02 | 4.58E+03 | 9.38E+02
F28 | 7.32E+05 | 1.60E+06 | 1.04E+09 | 9.44E+08 | 3.20E+08 | 1.94E+08 | 1.09E+09 | 7.76E+08 | 2.56E+08 | 3.97E+08 | 1.11E+10 | 3.63E+10
F29 | 3.87E+06 | 6.82E+06 | 2.04E+09 | 1.28E+09 | 4.45E+08 | 2.83E+08 | 9.03E+08 | 5.23E+08 | 2.69E+08 | 2.98E+08 | 2.98E+09 | 3.28E+09
Table 6. Comparison results of algorithms on the CEC2017 test functions. Significant values are the best results in the comparison of the algorithm performance (2).
Func. | ADVCSO Mean | ADVCSO Std | ALA Mean | ALA Std | CPO Mean | CPO Std | JADE Mean | JADE Std
F1 | 1.61E+08 | 5.22E+07 | 1.90E+08 | 7.50E+07 | 4.99E+08 | 2.42E+08 | 1.00E+02 | 7.89E−06
F2 | 6.58E+02 | 1.08E+02 | 3.90E+03 | 1.53E+03 | 1.56E+04 | 5.37E+03 | 2.00E+02 | 2.41E−14
F3 | 5.10E+02 | 1.19E+02 | 5.12E+02 | 6.90E+01 | 5.49E+02 | 7.66E+01 | 3.33E+02 | 1.02E+01
F4 | 8.85E+02 | 7.57E+01 | 1.22E+03 | 1.72E+02 | 2.17E+03 | 9.48E+02 | 4.54E+02 | 7.96E+00
F5 | 500.0011 | 7.67E−04 | 500.0043 | 3.64E−03 | 500.0043 | 1.13E−02 | 500.0007 | 4.75E−05
F6 | 1.39E+04 | 4.15E+03 | 2.40E+04 | 1.20E+04 | 4.53E+04 | 1.07E+04 | 1.91E+03 | 1.14E+03
F7 | 700.4619 | 2.16E−01 | 700.6509 | 5.15E−01 | 700.3131 | 1.62E−01 | 700.0269 | 2.15E−02
F8 | 8.04E+02 | 2.03E+00 | 8.13E+02 | 5.00E+00 | 8.12E+02 | 6.24E+00 | 8.00E+02 | 4.69E−01
F9 | 6.63E+03 | 4.93E+02 | 6.06E+03 | 7.78E+02 | 5.31E+03 | 3.71E+02 | 4.47E+03 | 3.64E+02
F10 | 3.25E+04 | 2.27E+04 | 2.20E+05 | 9.80E+04 | 1.98E+05 | 4.22E+05 | 1.82E+03 | 2.32E+03
F11 | 6.13E+06 | 3.12E+06 | 9.59E+07 | 8.24E+07 | 6.16E+07 | 4.91E+07 | 8.22E+03 | 6.28E+03
F12 | 3.38E+05 | 1.49E+05 | 1.11E+07 | 4.84E+06 | 1.28E+07 | 8.73E+06 | 5.12E+03 | 4.31E+03
F13 | 6.93E+05 | 5.71E+05 | 1.78E+06 | 1.24E+06 | 2.68E+06 | 1.90E+06 | 1.21E+04 | 1.84E+04
F14 | 1.34E+05 | 9.26E+04 | 3.99E+06 | 2.31E+06 | 4.56E+06 | 2.69E+06 | 7.63E+03 | 5.04E+03
F15 | 5.76E+05 | 6.00E+05 | 6.97E+05 | 8.49E+05 | 4.14E+05 | 5.49E+05 | 1.81E+03 | 2.85E+02
F16 | 3.73E+03 | 1.12E+03 | 1.02E+05 | 4.80E+04 | 3.81E+06 | 2.61E+07 | 2.07E+03 | 3.80E+02
F17 | 3.39E+05 | 4.69E+05 | 5.26E+05 | 7.55E+05 | 1.05E+06 | 1.09E+06 | 9.47E+03 | 1.26E+04
F18 | 2.09E+05 | 2.50E+05 | 1.40E+07 | 8.52E+07 | 7.58E+08 | 2.43E+09 | 4.24E+03 | 8.20E+03
F19 | 2.36E+03 | 3.13E+02 | 4.56E+03 | 1.04E+03 | 9.08E+03 | 2.21E+03 | 2.08E+03 | 1.59E+02
F20 | 2.57E+03 | 9.96E+01 | 2.72E+03 | 3.29E+02 | 3.47E+03 | 6.59E+02 | 2.21E+03 | 7.70E+01
F21 | 2.28E+03 | 8.38E+00 | 2.38E+03 | 3.57E+01 | 4.08E+03 | 8.69E+02 | 2.27E+03 | 5.25E+00
F22 | 3.86E+03 | 2.73E+02 | 3.76E+03 | 3.31E+02 | 8.46E+03 | 2.50E+03 | 2.42E+03 | 5.97E+01
F23 | 3.13E+03 | 2.27E+02 | 3.39E+03 | 1.50E+02 | 4.27E+03 | 5.45E+02 | 2.50E+03 | 2.19E+01
F24 | 2.86E+03 | 3.26E+01 | 2.97E+03 | 6.82E+01 | 3.04E+03 | 7.51E+01 | 2.82E+03 | 1.09E−01
F25 | 3.34E+03 | 1.39E+01 | 3.41E+03 | 5.98E+00 | 5.96E+03 | 1.68E+03 | 3.33E+03 | 6.95E−01
F26 | 3.19E+03 | 3.90E+01 | 3.21E+03 | 5.18E+01 | 3.82E+03 | 4.40E+02 | 3.11E+03 | 1.20E+01
F27 | 2.91E+03 | 1.63E+02 | 3.20E+03 | 6.08E+01 | 3.42E+03 | 5.12E+02 | 2.75E+03 | 7.48E+01
F28 | 4.42E+05 | 1.23E+06 | 1.37E+08 | 3.46E+08 | 1.26E+08 | 4.22E+08 | 6.69E+03 | 1.80E+03
F29 | 2.96E+06 | 4.84E+06 | 5.90E+07 | 7.78E+07 | 1.03E+08 | 1.72E+08 | 1.08E+04 | 5.20E+03
Table 7. Comparison results of algorithms for solving the MTSP. Significant values are the best results in the comparison of the algorithm performance.
Metric | ADVCSO Mean | ADVCSO Std | GTO Mean | GTO Std | AVOA Mean | AVOA Std | HGS Mean | HGS Std
Max Distance | 358.27 | 21.02 | 381.28 | 21.13 | 388.96 | 28.60 | 454.45 | 24.25
Total Distance | 1744.09 | 89.33 | 1847.04 | 112.36 | 1898.86 | 136.45 | 2124.22 | 137.56
Fitness | 359.10 | 21.37 | 382.24 | 21.12 | 389.74 | 28.65 | 456.96 | 24.46
Table 8. Value and weight of items.
Item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Value | 2.3 | 1.5 | 3.4 | 1.6 | 5.2 | 4.3 | 2.8 | 3.9 | 4.1 | 2.5
Weight | 1.2 | 3.4 | 2.5 | 1.6 | 1.9 | 4.3 | 5.1 | 2.8 | 3.5 | 4.2
Search Range: [0, 21.4]
Table 9. Comparison results of algorithms for solving the MKP. Significant values are the best results in the comparison of the algorithm performance.
Metric | ADVCSO | GTO | AVOA | HGS
Best Fitness | 21.40 | 21.40 | 21.40 | 21.40
Worst Fitness | 20.90 | 20.70 | 20.70 | 20.70
Mean Fitness | 21.22 | 20.92 | 21.04 | 21.07
Standard Deviation | 0.24 | 0.29 | 0.27 | 0.28
Number of Optimal Solutions Found | 13 | 5 | 7 | 8